GenAI, ML or DL?

In collaboration with my friend Jesús Serrano, PhD in Machine Learning, we hope this article continues to provide a better understanding of how you can design and build your own tools using Artificial Intelligence.

As AI technology advances, the impact of practical large language models (LLMs) on society continues to grow. These tools, including well-known examples like ChatGPT, Llama 2 and 3, and Gemini (formerly Bard), offer near out-of-the-box solutions that can be customized and fine-tuned to meet specific needs. Pretrained on vast datasets using substantial cloud infrastructure, these models alleviate major challenges such as data collection (a significant bottleneck in machine learning, deep learning, and AI in general) and the need for costly infrastructure (e.g., NVIDIA GPUs that can exceed $10,000 USD).

However, not every problem can be addressed with pretrained models, which primarily respond to user prompts. What if you need a GenAI tool that can generate specific datasets using regular expressions? Or decrypt messages? Or handle sensitive information that cannot be transferred over the web? In such cases, developing a custom model might be necessary.

This can lead us to several questions:

  • How can we determine the best approach: using a pre-trained model or creating a new tool?
  • What are the technical, legal, and cost considerations involved?

Understanding these questions is crucial, as each challenge is unique and requires a tailored approach.

When facing a new project, the choice between utilizing machine learning (ML), deep learning (DL), or GenAI tools hinges on several key factors, described below:

1. Technical Infrastructure

The initial consideration is the deployment environment of your solution. Whether hosted in the cloud or on-premises, each setting offers distinct tools and resources. Cloud solutions like AWS, Azure, or Google Cloud provide integrated services such as AWS SageMaker or Bedrock, streamlining the build, test, and deployment of models, including large language models (LLMs). Conversely, an on-premises solution demands a robust setup with GPUs and other infrastructure to support intensive computational tasks.
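
As an illustration of how little glue code a managed service like Bedrock can require, here is a minimal sketch that calls a hosted foundation model through the Bedrock runtime API with boto3. The region, model ID, and request payload are assumptions for illustration only: the payload schema is model-specific, and your AWS account must have access to the chosen model.

```python
# Minimal sketch: invoking a hosted foundation model via Amazon Bedrock.
# Assumes AWS credentials are configured and the chosen model is enabled
# in your account; the model ID and payload schema below are illustrative.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Payload format is model-specific; this shape follows the Anthropic
# messages format used by Claude models on Bedrock (an assumption here).
payload = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "Summarize the benefits of managed ML services."}],
        }
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical choice
    body=json.dumps(payload),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result)
```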


2. Stakeholder Expectations

Understanding stakeholder expectations is key. Stakeholders often look beyond the algorithmic output; they're interested in how the model serves broader business goals. It’s crucial to align the model’s output with these objectives, bearing in mind that even sophisticated solutions like LLMs require fine-tuning to meet specific needs effectively.


3. Data Availability

The foundation of any AI model is data. A model's effectiveness is tied to the quality and quantity of the data available. For instance, models like ChatGPT rely on extensive pretraining on diverse datasets. Starting with a pre-trained model can be advantageous if you want to build something similar; if your needs are unique, however, developing a custom model may be the better option, provided you have the data required to train it with techniques such as Transformers or Recurrent Neural Networks (RNNs), depending on the task.
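
To make the custom-model path concrete, here is a minimal PyTorch sketch of an RNN-style sequence classifier. Everything in it is illustrative: the LSTM architecture, vocabulary size, and dimensions are assumptions, and a real project would add a tokenizer, a training loop, and your own dataset.

```python
# Minimal sketch of a custom recurrent model in PyTorch (assumption:
# an LSTM-based sequence classifier; sizes below are placeholders).
import torch
import torch.nn as nn


class SequenceClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return self.head(hidden[-1])              # (batch, num_classes)


# Smoke test with random token ids standing in for real, tokenized data.
model = SequenceClassifier()
dummy_batch = torch.randint(0, 10_000, (4, 32))   # 4 sequences of 32 tokens
logits = model(dummy_batch)
print(logits.shape)                               # torch.Size([4, 2])
```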


4. Capabilities and Skills

The landscape of AI demands specialized roles such as Data Scientists, ML Engineers, and AI Architects. While building an in-house solution like ChatGPT is ambitious, beginning with a smaller team and leveraging pre-trained models and cloud services can kickstart a functional prototype. As the project matures, additional expertise can be integrated to enhance and scale the solution.


5. Timelines and Deadlines

The complexity and scope of building a custom model can extend timelines and require significant resources. Projects must adhere to predefined deadlines, which shapes the project management strategy. Opting for preexisting models can accelerate development to meet tight deadlines. Conversely, if time and resources permit, crafting a bespoke solution might be feasible without compromising the delivery schedule.


6. Privacy and Security Considerations

In the realm of AI, safeguarding data isn’t just a technical necessity but a legal and ethical imperative. The security of the data used and generated by AI systems must be a top priority. When deploying AI solutions, consider the following points (a short code sketch illustrating the encryption and anonymization items follows the list):

  • Data Encryption: Ensuring data, at rest and in transit, is encrypted to prevent unauthorized access.
  • Access Controls: Implementing robust access controls to ensure that only authorized personnel have access to sensitive data.
  • Compliance with Regulations: Adhering to global and local data protection regulations such as GDPR, HIPAA, or CCPA, which guide how data should be handled and protect the privacy rights of individuals.
  • Anonymization Techniques: Utilizing data anonymization techniques, when possible, to minimize the risks associated with data breaches.
  • Regular Audits: Conducting regular security audits and vulnerability assessments to identify and mitigate potential threats.
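
As a way to ground the encryption and anonymization points above, the following Python sketch shows symmetric encryption at rest with the cryptography package and a simple salted-hash pseudonymization helper. It is a minimal illustration, not a substitute for proper key management or a full security review; the salt and sample record are placeholders.

```python
# Minimal sketch: encrypting a record at rest and pseudonymizing an
# identifier. Key handling is simplified for illustration; in practice
# keys should live in a secrets manager or KMS, never in the code.
import hashlib

from cryptography.fernet import Fernet

# --- Encryption at rest (symmetric) ---
key = Fernet.generate_key()            # store securely, e.g. in a KMS
fernet = Fernet(key)
record = b'{"name": "Jane Doe", "diagnosis": "confidential"}'
encrypted = fernet.encrypt(record)     # safe to persist
decrypted = fernet.decrypt(encrypted)  # only possible with the key


# --- Simple pseudonymization of an identifier ---
def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()


print(pseudonymize("jane.doe@example.com", salt="project-specific-salt"))
```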


After conducting a comprehensive internal analysis and deciding to build the new capability on top of an existing GenAI tool, we have several options to choose from. The table below illustrates a selection of the many available GenAI tools and the types of problems they address.

With new updates and tools being released daily, it’s crucial to stay informed by following leading companies such as OpenAI, Meta, Microsoft, and Nvidia, and subscribing to top AI newsletters. GenAI tools for generating music, audio, and gaming media are rapidly advancing, and soon, we may be able to effortlessly create content in these domains.

If we decide to develop a custom GenAI tool, we should begin by selecting the machine learning algorithm that best addresses our specific problem. The following image and table provide a taxonomy of machine learning algorithms and the tasks they are most commonly used for.


Sometimes a single ML algorithm is not enough to achieve the outcomes we expect, or the problem is too large or requires a deeper level of analysis. Even a combination of multiple ML algorithms may fall short, leaving us with a complex tool that is expensive to maintain and run.

Deep learning takes a step further along the supervised learning path: the extensive use of neural networks and more intensive training allows models to make even more accurate predictions and forecasts, to the point of generating new objects based on the expected output. Some of the most popular algorithms and their applications are listed below:

It's important to remember that no algorithm is better than every other for every problem out there, as shown by the famous No Free Lunch theorem (Wolpert & Macready, 1997). It is always good practice to choose at least three different algorithms from the same branch and compare their results in terms of predictive accuracy and computational performance, as sketched below. We also have to consider the dimensionality of the data, the objective of the model we want to develop for the problem we are trying to solve, and the amount of data available.
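
As a sketch of that practice, the snippet below compares three supervised classifiers on the same data using cross-validated accuracy, with wall-clock time as a rough proxy for computational cost. The synthetic dataset and the particular models are assumptions chosen purely for illustration.

```python
# Minimal sketch: comparing three algorithms from the same branch on the
# same (synthetic) data by cross-validated accuracy and total CV time.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1_000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "svm_rbf": SVC(kernel="rbf"),
}

for name, model in candidates.items():
    start = time.perf_counter()
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={scores.mean():.3f} (+/- {scores.std():.3f}), time={elapsed:.1f}s")
```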

The scope of an ML project can be vast and seemingly endless, but if we clearly identify the problem we want to solve and, from the available data, select the features that best describe the result we expect, then choosing the ML algorithm or GenAI tool should be as straightforward as deploying the solution to a cloud or on-premises infrastructure.
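
For the feature-selection step, a univariate filter is often a reasonable first pass. The sketch below uses scikit-learn's SelectKBest on synthetic data; the number of features to keep (k) is an assumption you would tune for your own problem and validate against held-out data.

```python
# Minimal sketch: keeping the k features most associated with the target
# via a univariate ANOVA F-test (synthetic data; k is a placeholder).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(
    n_samples=1_000, n_features=30, n_informative=8, random_state=0
)

selector = SelectKBest(score_func=f_classif, k=8)
X_selected = selector.fit_transform(X, y)

print(X_selected.shape)                    # (1000, 8)
print(selector.get_support(indices=True))  # indices of the kept features
```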


In conclusion, the integration of GenAI tools, machine learning, and deep learning algorithms is revolutionizing industries by enabling unprecedented levels of automation, personalization, and innovation. As we navigate this dynamic landscape, it is essential to remain agile and informed, analyzing, evaluating, and leveraging the latest advancements to maintain a competitive edge. But remember: not every solution needs an AI approach. That is why, as highlighted in the previous article, you will need to investigate and build your own criteria to decide what best solves the problem, area of improvement, or efficiency gap you have previously identified.


The journey of exploring and implementing these cutting-edge solutions is just beginning, and the opportunities are boundless. Stay curious, stay updated, and continue to innovate, but be conscious of doing the groundwork first so you better understand the scope, drawbacks, and benefits of this type of technology.


--Jose Serrano, PhD in Machine Learning

--Luis Escalante, Master in Artificial Intelligence
