Unleashing the Power of Generative AI: Embracing its Potential and Navigating its Constraints

Summary

Generative AI has emerged as a topic of both fascination and apprehension, with opinions spanning from unwavering confidence to scepticism. However, these varied reactions often stem from a lack of understanding. As AI advances rapidly, it is crucial to gain a genuine sense of its true nature and inherent limitations. We must recognise that AI is a tool to augment human intelligence, not replace it. By harnessing the collective power of human and artificial intelligence, we can achieve responsible, insightful outcomes that benefit society. Together, let's embrace the opportunities that AI presents and shape a future where human ingenuity flourishes alongside AI advancements, enhancing the quality of human life.

Introduction

Opinions on AI vary greatly, ranging from unwavering confidence in its abilities to scepticism and even fear. A recent incident at a university in the United States (Ref [1], [2]) made headlines when an instructor used ChatGPT to check whether students had relied on the same AI system to generate their written assignments. Some students received an incomplete grade after ChatGPT falsely claimed authorship of their papers. This eye-opening event shed light on the alarming extent to which individuals mistakenly place their trust in AI as a reliable source of intelligence.

Conversely, some approach AI with scepticism or apprehension. A prime example is the decision by New York City public schools in January 2023 to ban the use of ChatGPT across all school computers and Wi-Fi networks (Ref [3]). The ban was driven by concerns about the potential negative impact of ChatGPT on student learning. According to Jenna Lyle, a spokesperson for the department, the tool fails to foster the critical thinking and problem-solving skills that are vital for academic and lifelong success. However, it is important to note that the ban on ChatGPT has since been lifted (Ref [4]).

These reactions often arise from a lack of understanding of AI. Artificial intelligence has captivated people worldwide, offering immense potential to enhance our capabilities and gain a competitive edge in various domains. As AI continues to advance at an impressive pace, it becomes crucial to develop a genuine understanding of what AI truly is and the inherent limitations it carries. By dispelling common myths surrounding AI, we can position ourselves to engage with this technology more effectively and embrace it in a manner that truly enhances the quality of human life.

Digital Representation of Human Intelligence

The digitisation of human intelligence through artificial intelligence (AI) brings forth various limitations, challenges, and constraints that need to be carefully addressed and understood.

Accuracy

Our collective human intelligence encompasses a vast range of knowledge and experiences, including visible manifestations such as music, books, and art, as well as intangible aspects passed down through generations. This comprehensive representation is depicted by the encompassing green circle in Figure 1, symbolising the entirety of our intelligence. Throughout history, we have captured and preserved our intelligence through various mediums like books, drawings, paintings, and music, which form a subset of our collective human intelligence represented by the yellow circle in Figure 1, referred to as externalised intelligence.

In the modern era, the advent of digital media has revolutionised how we store and retain our intellectual achievements. However, it is important to acknowledge that not all externalised intelligence has been fully captured and preserved as digital media, as illustrated by the remaining portion within the dark blue circle. This highlights the inherent limitations in our ability to digitise and represent the entirety of our intelligence accurately.


Figure 1 Digitising human intelligence

Let us now consider the possibility of training an AI model with all available digital media in the world. When we train AI models, we aim to create a digitised representation of our intelligence, represented by the dark blue dotted circle in Figure 1. However, it is crucial to recognise that the digitisation process inherently involves some loss of quality. Just as the digitisation of audio and video can result in a loss of fidelity, the digitised intelligence produced by AI models may appear somewhat blurred and lack the pristine quality of the original trained media.
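The loss of fidelity described above can be illustrated with a toy quantisation sketch: mapping a continuous signal onto a coarse grid of levels and then reconstructing it never recovers the original exactly. The signal values and the number of levels below are arbitrary choices for illustration, not drawn from any real system.

```python
# Digitisation is lossy: quantising a continuous signal to a few
# discrete levels loses information that cannot be recovered.
levels = 4  # coarse quantisation grid (illustrative)

signal = [0.13, 0.55, 0.72, 0.31, 0.98]

# Snap each value to the nearest of the `levels` grid points in [0, 1].
quantised = [round(x * (levels - 1)) / (levels - 1) for x in signal]

# The reconstruction error is the price of digitisation.
error = max(abs(a - b) for a, b in zip(signal, quantised))
print(f"max reconstruction error: {error:.3f}")
```

Increasing `levels` shrinks the error but never eliminates it, just as larger AI models produce sharper, but still imperfect, digitised intelligence.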

The quality of digitised intelligence heavily depends on the number of parameters in the AI model. For instance, the GPT-3 model has approximately 175 billion parameters. These parameters are the learnable weights and biases of the artificial neural network, allowing the model to learn intricate patterns and relationships from the training data. They play a critical role in capturing the knowledge and representations within the model, enabling it to generate coherent and contextually relevant responses.
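To make the notion of parameters concrete, the sketch below counts the learnable weights and biases of a small feed-forward network, layer by layer. The layer sizes are illustrative only; models such as GPT-3 are vastly larger.

```python
# Count the learnable parameters (weights and biases) of a small
# fully connected feed-forward network. Layer sizes are illustrative.
layer_sizes = [512, 256, 128, 10]  # input, two hidden layers, output

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out  # one weight per input-output connection
    biases = n_out          # one bias per output neuron
    total += weights + biases

print(total)  # 165514 parameters for this tiny network
```

Even this toy network has over 165,000 parameters; scaling the same arithmetic to billions of parameters explains why training and serving large models is so resource-intensive.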

There is no doubt that we will continue to improve the accuracy of our AI models. However, increasing the number of parameters in a neural network presents notable challenges. It significantly impacts the computational requirements of training and inference, demanding more memory and processing power. Training larger models can be time-consuming and resource-intensive, while inadequate training data for increased parameters can result in overfitting, where the model memorises the training examples instead of learning generalisable patterns. Acquiring diverse and high-quality datasets at scale can also pose challenges, particularly in certain domains where such data may be limited or challenging to obtain.
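Overfitting, as described above, can be caricatured in a few lines: a "model" that simply memorises its training examples scores perfectly on the training set yet fails on anything unseen. The data and the underlying rule are invented purely for illustration.

```python
# A toy illustration of overfitting: memorising training examples
# verbatim instead of learning the generalisable pattern (y = 2x).
train = {1: 2, 2: 4, 3: 6}

memorised = dict(train)  # "training" = storing every example as-is

def predict(x):
    # Perfect on the training set, useless on unseen inputs.
    return memorised.get(x, 0)

train_accuracy = sum(predict(x) == y for x, y in train.items()) / len(train)
print(train_accuracy)  # 1.0 on the training data
print(predict(4))      # 0 on an unseen input, though the true answer is 8
```

A model that had learned the pattern rather than the examples would answer 8 for the unseen input; the gap between training and unseen performance is the hallmark of overfitting.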

Thus, while digitised intelligence through AI models has the potential to offer valuable insights and outputs, it is important to understand and address the inherent limitations and challenges associated with the digitisation process. By striving for a comprehensive understanding of these limitations, we can refine our approaches, improve the quality of digitised intelligence, and make informed decisions regarding its applications in various domains.

Hallucination

The digitised intelligence generated by AI models may not perfectly align with the original training data. While there is typically a large overlap between the generated output and the training data, some knowledge from the training data may be lost, as depicted by the non-overlap dark blue area in Figure 2.

On the other hand, within the light blue dotted non-overlap area in Figure 2, intriguing patterns emerge. Some of this non-overlap can be attributed to the desired generalisation capabilities of AI models. Generalisation allows AI to handle scenarios that may not have been explicitly present in the training data, enabling them to generate responses and content that extend beyond the specific examples encountered during training. This ability to generalise, represented by the green ellipse in Figure 2, allows AI models to provide innovative solutions and explore new perspectives.

Figure 2 Output of generative AI

However, within the non-overlap areas, represented by the red ellipse in Figure 2, there is also the potential for AI models to generate inaccurate or inappropriate content, referred to as hallucinations. Hallucinations can occur when AI models produce responses that deviate significantly from the intended or expected output, resulting in content that may lack factual accuracy or relevance to the given context. It is crucial to assess the quality and appropriateness of the generated content, particularly in tasks that demand precision and accuracy. In such cases, careful evaluation and human judgment are essential to ensure the reliability and relevance of the AI-generated output.

On the other hand, hallucinations can also serve as a valuable tool in creative tasks. The capacity of AI models to produce unexpected or unconventional responses can spark new ideas, fuel creativity, and act as a catalyst for innovation. AI models can contribute to the generation of fresh perspectives and inspire novel approaches to problem-solving.

Navigating the balance between leveraging the generalisation capabilities of AI models for creativity while ensuring the accuracy and relevance of the generated content is a critical consideration. By establishing mechanisms for human oversight, incorporating ethical guidelines, and continuously refining the training process, we can harness the potential of AI models to generate insightful and original content while minimising the risk of inaccuracies or inappropriate outputs.

Traceability

The distributed nature of AI models' "memory" across billions of weights in the artificial neural network introduces challenges when it comes to traceability. As AI models generate content, it becomes difficult to trace the precise sources of data that influenced the generated output. This lack of traceability raises concerns regarding the reliability and credibility of the generated content.

The complex interplay of numerous parameters and the vast amount of training data make it challenging to pinpoint the specific data points that contributed to the AI-generated content. The intricate relationships and patterns learned by AI models during training can obscure the exact origins of the information utilised in generating a particular output. As a result, it becomes necessary to rely on human review and verification processes to ensure the accuracy and trustworthiness of the generated content.

Human involvement is crucial in assessing the context, fact-checking, and confirming the reliability of the information generated by AI models. Human reviewers can draw upon their expertise, critical thinking skills, and knowledge of the subject matter to evaluate the AI-generated output for accuracy, consistency, and adherence to ethical standards. By conducting thorough reviews and verifications, human reviewers can help ensure that the generated content aligns with the desired quality and requirements.

In addition to human review, efforts are being made to enhance the traceability of AI-generated content. Research in explainable AI aims to develop methods that provide insights into the decision-making processes of AI models, shedding light on the factors that influenced the output. Techniques such as attention mechanisms and interpretability approaches can offer glimpses into the internal workings of AI models, facilitating traceability and enhancing transparency.
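As a rough illustration of the glimpses attention mechanisms can offer, the sketch below computes softmax-normalised attention scores over a handful of made-up token vectors; the query, keys, and tokens are entirely hypothetical and not taken from any real model.

```python
import math

# Minimal attention sketch: softmax-normalised scores indicate how
# strongly each input token influenced the output, one window into
# an otherwise opaque model. All vectors here are illustrative.
query = [1.0, 0.0]
keys = {"cat": [0.9, 0.1], "sat": [0.1, 0.8], "mat": [0.7, 0.2]}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Raw similarity of the query to each key, then softmax to weights.
scores = {tok: dot(query, k) for tok, k in keys.items()}
z = sum(math.exp(s) for s in scores.values())
attention = {tok: math.exp(s) / z for tok, s in scores.items()}

for tok, w in sorted(attention.items(), key=lambda kv: -kv[1]):
    print(f"{tok}: {w:.2f}")  # higher weight = stronger influence
```

The weights sum to one, so they can be read as a distribution over the inputs; inspecting which tokens dominate is one practical, if partial, traceability signal.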

Promoting traceability in AI-generated content is essential for accountability, error detection, and maintaining ethical standards. By incorporating human oversight and advancing explainability methods, we can strive for greater traceability and ensure that AI-generated content is reliable, accurate, and aligns with the intended purpose.

Quality of Data

The quality of an AI model is undeniably tied to the quality of its training data. While AI models have the capacity to learn from diverse sources, including the Internet, it is imperative to acknowledge the prevalence of inaccurate information and fabricated content on the Internet. The vast amount of data available online means that the training data can inadvertently include misleading or false information.

Furthermore, as more individuals utilise AI models to generate content and contribute it to the Internet, there is a potential for future AI models to become biased towards this generated content. This can result in a feedback loop where biased content generated by AI models is continually used as training data, potentially amplifying existing biases and affecting the overall quality and reliability of the information generated.

Since AI models learn from existing data, they are susceptible to inheriting the biases present in society. If these biases are not identified and addressed during the training process, AI models may inadvertently perpetuate and amplify these biases in the content they generate. This raises concerns about fairness, inclusivity, and the potential reinforcement of harmful stereotypes.

To ensure that AI models produce output that is trustworthy and aligned with ethical standards, continuous efforts to improve data quality and mitigate bias are necessary. This includes rigorous data curation, validation, and verification processes to filter out inaccurate or biased information from the training data. Additionally, diversifying the training data by incorporating a wide range of perspectives and ensuring representation from marginalised communities can help mitigate bias and promote fairness in the AI-generated output.
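One such curation step, deduplication combined with a crude quality filter, can be sketched as follows. The records and the heuristics are illustrative placeholders; real pipelines rely on far richer signals, such as near-duplicate detection and learned quality classifiers.

```python
# A small sketch of data curation: drop exact duplicates and
# obviously low-quality records before they reach training.
records = [
    "The Earth orbits the Sun.",
    "The Earth orbits the Sun.",          # exact duplicate
    "CLICK HERE to win a FREE prize!!!",  # spam-like, low quality
    "Water boils at 100 degrees Celsius at sea level.",
]

def is_low_quality(text):
    # Crude illustrative heuristics; real filters are far richer.
    return text.isupper() or "CLICK HERE" in text or len(text) < 10

seen = set()
curated = []
for text in records:
    if text in seen or is_low_quality(text):
        continue
    seen.add(text)
    curated.append(text)

print(len(curated))  # 2 records survive curation
```

Even this trivial filter halves the corpus, showing how much raw web data can be unusable; the harder part, detecting subtle bias, cannot be reduced to simple rules and needs human review.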

Transparency and accountability are crucial in addressing data quality issues. Openly documenting the sources of training data, ensuring transparency in the training process, and providing mechanisms for users to report inaccuracies or biases can help identify and rectify data-related problems. Incorporating diverse teams with different backgrounds and perspectives in AI development can also contribute to more robust and responsible systems.

By striving for high-quality data and actively mitigating biases, we can build more reliable, trustworthy, and inclusive AI systems that benefit society as a whole. Continuous evaluation, improvement, and ethical considerations are vital to ensure that AI-generated content aligns with the desired standards of accuracy, fairness, and reliability.

Conclusion

In the rapidly evolving landscape of generative AI, it is evident that blocking the advancement of AI technologies is not a feasible solution. However, it is crucial for our generation to develop AI literacy to understand its limitations and constraints, effectively and ethically use the tools, and focus on developing new skills for the future.

Drawing parallels to calculators, the introduction of AI tools such as generative AI does not eliminate the need to learn foundational skills. Just as calculators reduced the necessity for complex arithmetic calculations but did not replace the need to learn mathematics, generative AI possesses powerful capabilities for generating text and code but is unlikely to replace the need for literacy skills.

As generative AI becomes integrated into our daily tools, such as dictionaries, search engines, text editors, code development environments, and brainstorming platforms, it will reshape the way we learn and direct our attention. However, it is crucial to recognise the limitations of generative AI. Cultivating AI literacy therefore becomes essential: understanding how to leverage these tools to enhance productivity while also developing the ability to critically evaluate and validate their outputs, with human intelligence as a guiding framework.

Similar to the calculator, generative AI will not eliminate the need for literacy skills but may shift the focus towards creativity and analytical skills in conjunction with writing tasks. To adapt to this changing landscape, it is crucial to focus on developing 21st-century skills, such as power skills (Ref [5]). These skills encompass critical and creative thinking, entrepreneurship, ethics, growth mindset, communication, collaboration, empathy, negotiation, self-awareness, accountability, adaptability, professionalism, strategic vision, empowerment of others, and project management. By honing these skills, individuals can remain competitive and navigate the complexities of the modern world.

While AI models provide valuable assistance and augment human intelligence, it is essential to emphasise the importance of human involvement in the decision-making process. AI should be seen as a tool that supports human expertise rather than replacing human judgment. By combining the strengths of AI and human intelligence through collaborative efforts, we can achieve more accurate, insightful, and responsible outcomes that benefit society as a whole. It is through this synergy that we can truly leverage the potential of generative AI while ensuring its alignment with our values and ethical standards. With the right approach and mindset, we can embrace the opportunities presented by generative AI and shape a future where human and artificial intelligence work hand in hand for a better society.

References

  1. Klee, M. (2023). Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers. Rolling Stone. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601
  2. Ede-Osifo, U. (2023). College instructor put on blast for accusing students of using ChatGPT on final assignments. NBC News. https://www.nbcnews.com/tech/chatgpt-texas-college-instructor-backlash-rcna84888
  3. Rosenblatt, K. (2023). ChatGPT banned from New York City public schools' devices and networks. NBC News. https://www.nbcnews.com/tech/tech-news/new-york-city-public-schools-ban-chatgpt-devices-networks-rcna64446
  4. Faguy, A. (2023). New York City Public Schools Reverses ChatGPT Ban. Forbes. https://www.forbes.com/sites/anafaguy/2023/05/18/new-york-city-public-schools-reverses-chatgpt-ban/
  5. Ouellett, K., Clochard-Bossuet, A., Young, S., & Westerman, G. (2020). Human Skills: From Conversations to Convergence. MIT J-WEL. https://jwel.mit.edu/assets/document/human-skills-workshop-report

Author

Beng Tiong Tan is an Associate Partner at IBM, an Executive Architect, and an Open Group Distinguished Architect with deep expertise in digital transformation and customer experience. He is recognised for his extensive experience leading and creating first-of-a-kind solutions, earning numerous awards. He is a consultant and technologist who enjoys the challenges of emerging industries and technologies. He has led complex multi-year programmes across multiple industries, from vision and analysis through implementation and application go-live, accommodating business and technical requirements and their impact. He is passionate about driving a culture of collaborative innovation to build high-performing teams. You can contact him at [email protected] or follow him on LinkedIn and Medium.
