GenAI-Powered Intelligent Agents as Collaborative Workforces: The Power of Fine-Tuned LLMs
Dr. Ahmed S. ELSHEIKH - EDBAs, MBA/MSc
R&D Manager @ ITIDA ★ AI/Data/Analytics & Digital Platforms Strategist | DX/FinTech/Blockchain & Emerging Tech Monetization Advisor | Business/Enterprise Architect | Governance/BSC/OKR/Agile Expert | Executive Coach
In the 55th edition of this newsletter, entitled “Role, Context, and Action Awareness: The Simplest Yet Effective Prompt Engineering Tactic,” it was concluded that reducing the overall cost of reaching the optimum response from “General-Purpose Large Language Models” requires designing effective “Prompt Engineering Tactics” that keep the process as efficient as possible and avoid the cost of a long “Sequence of Attention-Tuning Prompts.” It was also argued that the simplest definition of intelligence points to the simplest yet effective prompt engineering tactic: tell the general-purpose large language model its expected “Role to Perform,” then provide as much information as possible about its “Surrounding Environment” and the required “Actions to Accomplish.” By giving the model sufficient awareness of these three pieces of information, in sequence, the generated response is more likely to be on point, reducing the overall prompting cost.
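The role/context/action tactic described above can be sketched as a simple prompt-composition helper. This is a minimal illustration, not an official template; the function name, the example role, and the wording of the sections are all hypothetical.

```python
def build_prompt(role: str, context: str, action: str) -> str:
    """Compose a prompt using the role/context/action tactic:
    first state the model's expected role, then its surrounding
    environment, then the required action, in that order."""
    return (
        f"Role: You are {role}.\n"
        f"Context: {context}\n"
        f"Action: {action}"
    )

# Example usage (illustrative values only)
prompt = build_prompt(
    role="a meticulous document proofreader",
    context="You are reviewing a quarterly industrial report for errors.",
    action="List every spelling or numerical inconsistency in the text below.",
)
```

The fixed Role → Context → Action ordering mirrors the sequence recommended in the 55th edition, so each prompt carries the same three pieces of awareness without ad-hoc rewording.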
However, a logical question arises when certain tasks are performed repetitively within a specific human job. For example, a particular job may require you to proofread documents for errors across several tasks, while the same job also requires you to verify the accuracy of data in spreadsheets and analyze that data to derive informed decisions. The same person may then routinely need to summarize industrial reports. In this situation, repeatedly telling a general-purpose large language model about its role, context, and action may be a waste of time, because you are asking the model to switch between different roles many times. This switching can reduce the model's effectiveness, since it works against the “Attention-Tuning Strategies” discussed in the 54th edition of this newsletter.
Hence, this edition of the newsletter focuses on the situation described above. Instead of asking a general-purpose large language model to switch between different roles many times, a better strategy is to use several “Fine-Tuned Large Language Models,” each trained to perform a specific role more effectively and efficiently. For example, if a job requires proofreading documents for errors across several tasks, one fine-tuned large language model can handle the proofreading. A second fine-tuned model, within the same job, can verify the accuracy of data in spreadsheets and analyze that data to derive informed decisions. The same person can then use a third fine-tuned model to summarize the industrial reports. In other words, several “Generative AI-Powered Intelligent Agents,” each powered by a different fine-tuned large language model, can automate or augment multiple tasks within complex daily workflows.
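The dispatch pattern above can be sketched as a small task-to-agent registry. Everything here is hypothetical: the model names are placeholders for fine-tuned models, and the routing function stands in for whatever orchestration layer an agent framework would provide.

```python
# Hypothetical registry: each repetitive task within the job maps to a
# dedicated fine-tuned model, so no single model has to switch roles.
AGENT_REGISTRY = {
    "proofreading": "ft-llm-proofreader",     # placeholder model name
    "spreadsheet_qa": "ft-llm-data-checker",  # placeholder model name
    "summarization": "ft-llm-summarizer",     # placeholder model name
}

def route_task(task_type: str) -> str:
    """Dispatch a task to its specialized agent instead of re-prompting
    one general-purpose model with a new role each time."""
    if task_type not in AGENT_REGISTRY:
        raise ValueError(f"No fine-tuned agent registered for {task_type!r}")
    return AGENT_REGISTRY[task_type]
```

Because each entry stays locked to one role, the attention of every underlying model remains tuned to a single task, which is the core of the collaborative-workforce idea.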
To conclude: instead of asking a general-purpose large language model to switch between different roles many times, which wastes time and resources and works against the previously explained “Attention-Tuning Strategies,” a better strategy is to use several “Fine-Tuned Large Language Models” to automate or augment multiple tasks within complex daily workflows. Several “Generative AI-Powered Intelligent Agents,” each powered by a different fine-tuned large language model and working together as a “Collaborative Workforce,” can fully automate or augment the complex workflows of a specific job. This will ultimately increase efficiency and productivity in the workplace.