Generative AI
Wikipedia Microsoft

Generative AI

Generative artificial intelligence (also generative AI or GenAI) is artificial intelligence capable of generating text, images, or other media, using generative models. Generative AI models learn the patterns and structure of their input and then generate new data that has similar characteristics.

In the early 2020s, advances in transformer-based deep neural networks enabled a number of generative AI systems notable for accepting natural language prompts as input. These include large language model chatbots such as ChatGPT, Bing Chat, Bard, and LLaMA, and text-to-image artificial intelligence art systems such as Stable Diffusion, Midjourney, and DALL-E.

Generative AI has uses across a wide range of industries, including art, writing, scriptwriting, software development, product design, healthcare, finance, gaming, marketing, and fashion. Investment in generative AI surged during the early 2020s, with large companies such as Microsoft, Google, and Baidu as well as numerous smaller firms developing generative AI models. However, there are also concerns about the potential misuse of generative AI, including cybercrime or the creation of fake news or deepfakes that can be used to deceive or manipulate people.

Generative artificial intelligence (GenAI) is a type of AI that can create new content, such as text, images, videos, code, data, 3D renderings, music, speech, and product designs.

GenAI models learn the patterns and structure of their training data and then generate new data that has similar characteristics. They can learn from existing artifacts to generate new, realistic artifacts that reflect the characteristics of the training data but don't repeat it. GenAI is used in many industries, including financial services, consumer internet, healthcare, and higher education.

According to a forecast from International Data Corporation (IDC), enterprises will invest nearly $16 billion worldwide in GenAI solutions in 2023. IDC forecasts that spending on GenAI solutions will reach $143 billion in 2027.

What is generative AI?

Everything one should know

Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio and synthetic data. The recent buzz around generative AI has been driven by the simplicity of new user interfaces for creating high-quality text, graphics and videos in a matter of seconds.

The technology, it should be noted, is not brand-new. Generative AI was introduced in the 1960s in chatbots. But it was not until 2014, with the introduction of generative adversarial networks, or GANs -- a type of machine learning algorithm -- that generative AI could create convincingly authentic images, videos and audio of real people.

On the one hand, this newfound capability has opened up opportunities that include better movie dubbing and rich educational content. It also unlocked concerns about deepfakes -- digitally forged images or videos -- and harmful cybersecurity attacks on businesses, including nefarious requests that realistically mimic an employee's boss.

Two additional recent advances that will be discussed in more detail below have played a critical part in generative AI going mainstream: transformers and the breakthrough language models they enabled. Transformers are a type of machine learning that made it possible for researchers to train ever-larger models without having to label all of the data in advance. New models could thus be trained on billions of pages of text, resulting in answers with more depth. In addition, transformers unlocked a new notion called attention that enabled models to track the connections between words across pages, chapters and books rather than just in individual sentences. And not just words: Transformers could also use their ability to track connections to analyze code, proteins, chemicals and DNA.

The rapid advances in so-called large language models (LLMs) -- i.e., models with billions or even trillions of parameters -- have opened a new era in which generative AI models can write engaging text, paint photorealistic images and even create somewhat entertaining sitcoms on the fly. Moreover, innovations in multimodal AI enable teams to generate content across multiple types of media, including text, graphics and video. This is the basis for tools like Dall-E that automatically create images from a text description or generate text captions from images.

These breakthroughs notwithstanding, we are still in the early days of using generative AI to create readable text and photorealistic stylized graphics. Early implementations have had issues with accuracy and bias, as well as being prone to hallucinations and spitting back weird answers. Still, progress thus far indicates that the inherent capabilities of generative AI could fundamentally change enterprise technology and how businesses operate. Going forward, this technology could help write code, design new drugs, develop products, redesign business processes and transform supply chains.

How does generative AI work?

Generative AI starts with a prompt that could be in the form of a text, an image, a video, a design, musical notes, or any input that the AI system can process. Various AI algorithms then return new content in response to the prompt. Content can include essays, solutions to problems, or realistic fakes created from pictures or audio of a person.

Early versions of generative AI required submitting data via an API or an otherwise complicated process. Developers had to familiarize themselves with special tools and write applications using languages such as Python.

Now, pioneers in generative AI are developing better user experiences that let you describe a request in plain language. After an initial response, you can also customize the results with feedback about the style, tone and other elements you want the generated content to reflect.

Generative AI models

Generative AI models combine various AI algorithms to represent and process content. For example, to generate text, various natural language processing techniques transform raw characters (e.g., letters, punctuation and words) into sentences, parts of speech, entities and actions, which are represented as vectors using multiple encoding techniques. Similarly, images are transformed into various visual elements, also expressed as vectors. One caution is that these techniques can also encode the biases, racism, deception and puffery contained in the training data.

Once developers settle on a way to represent the world, they apply a particular neural network to generate new content in response to a query or prompt. Techniques such as GANs and variational autoencoders (VAEs) -- neural networks with a decoder and encoder -- are suitable for generating realistic human faces, synthetic data for AI training or even facsimiles of particular humans.

Recent progress in transformers such as Google's Bidirectional Encoder Representations from Transformers (BERT), OpenAI's GPT and Google AlphaFold has also resulted in neural networks that can not only encode language, images and proteins but also generate new content.
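To make the encoder/decoder idea behind VAEs a little more concrete, here is a minimal, untrained forward pass sketched in NumPy. The dimensions, random weights and input are assumptions invented for illustration; they are not taken from any real model.

# Minimal sketch of the encoder/decoder structure behind a variational
# autoencoder (VAE). Weights are random and untrained; dimensions are
# illustrative assumptions, not those of any production model.
import numpy as np

rng = np.random.default_rng(0)
input_dim, latent_dim = 64, 8                      # e.g., a flattened 8x8 image patch

# Encoder weights: map the input to the mean and log-variance of a latent Gaussian.
W_mu = rng.normal(size=(latent_dim, input_dim))
W_logvar = rng.normal(size=(latent_dim, input_dim))
# Decoder weights: map a latent sample back to the input space.
W_dec = rng.normal(size=(input_dim, latent_dim))

def encode(x):
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    # Sample a latent vector; in real training this "reparameterization trick"
    # keeps sampling differentiable so the network can be optimized end to end.
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):
    return W_dec @ z

x = rng.normal(size=input_dim)                     # stand-in for one training example
mu, logvar = encode(x)
reconstruction = decode(reparameterize(mu, logvar))
print(reconstruction.shape)                        # (64,): same shape as the input

With trained weights, decoding a freshly sampled latent vector is what produces new faces, synthetic records or other content in the style of the training data.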

How neural networks are transforming generative AI

Researchers have been creating AI and other tools for programmatically generating content since the early days of AI. The earliest approaches, known as rule-based systems and later as "expert systems," used explicitly crafted rules for generating responses or data sets.

Neural networks, which form the basis of much of the AI and machine learning applications today, flipped the problem around. Designed to mimic how the human brain works, neural networks "learn" the rules from finding patterns in existing data sets. Developed in the 1950s and 1960s, the first neural networks were limited by a lack of computational power and small data sets. It was not until the advent of big data in the mid-2000s and improvements in computer hardware that neural networks became practical for generating content.

The field accelerated when researchers found a way to get neural networks to run in parallel across the graphics processing units (GPUs) that were being used in the computer gaming industry to render video games. New machine learning techniques developed in the past decade, including the aforementioned generative adversarial networks and transformers, have set the stage for the recent remarkable advances in AI-generated content.

What are Dall-E, ChatGPT and Bard?

ChatGPT, Dall-E and Bard are popular generative AI interfaces.

Dall-E. Trained on a large data set of images and their associated text descriptions, Dall-E is an example of a multimodal AI application that identifies connections across multiple media, such as vision, text and audio. In this case, it connects the meaning of words to visual elements. It was built using OpenAI's GPT implementation in 2021. Dall-E 2, a second, more capable version, was released in 2022. It enables users to generate imagery in multiple styles driven by user prompts.

ChatGPT. The AI-powered chatbot that took the world by storm in November 2022 was built on OpenAI's GPT-3.5 implementation. OpenAI has provided a way to interact and fine-tune text responses via a chat interface with interactive feedback. Earlier versions of GPT were only accessible via an API. GPT-4 was released March 14, 2023. ChatGPT incorporates the history of its conversation with a user into its results, simulating a real conversation. After the incredible popularity of the new GPT interface, Microsoft announced a significant new investment into OpenAI and integrated a version of GPT into its Bing search engine.

Bard. Google was another early leader in pioneering transformer AI techniques for processing language, proteins and other types of content. It open sourced some of these models for researchers. However, it never released a public interface for these models. Microsoft's decision to implement GPT into Bing drove Google to rush to market a public-facing chatbot, Google Bard, built on a lightweight version of its LaMDA family of large language models. Google suffered a significant loss in stock price following Bard's rushed debut after the language model incorrectly said the Webb telescope was the first to discover a planet in a foreign solar system.

Meanwhile, Microsoft and ChatGPT implementations also lost face in their early outings due to inaccurate results and erratic behavior. Google has since unveiled a new version of Bard built on its most advanced LLM, PaLM 2, which allows Bard to be more efficient and visual in its response to user queries.

What are use cases for generative AI?

Generative AI can be applied in various use cases to generate virtually any kind of content. The technology is becoming more accessible to users of all kinds thanks to cutting-edge breakthroughs like GPT that can be tuned for different applications. Some of the use cases for generative AI include the following:

Implementing chatbots for customer service and technical support.

Deploying deepfakes for mimicking people or even specific individuals.

Improving dubbing for movies and educational content in different languages.

Writing email responses, dating profiles, resumes and term papers.

Creating photorealistic art in a particular style.

Improving product demonstration videos.

Suggesting new drug compounds to test.

Designing physical products and buildings.

Optimizing new chip designs.

Writing music in a specific style or tone.

What are the benefits of generative AI?

Generative AI can be applied extensively across many areas of the business. It can make it easier to interpret and understand existing content and automatically create new content.

Developers are exploring ways that generative AI can improve existing workflows, with an eye to adapting workflows entirely to take advantage of the technology. Some of the potential benefits of implementing generative AI include the following:

Automating the manual process of writing content.

Reducing the effort of responding to emails.

Improving the response to specific technical queries.

Creating realistic representations of people.

Summarizing complex information into a coherent narrative.

Simplifying the process of creating content in a particular style.

What are the limitations of generative AI?

Early implementations of generative AI vividly illustrate its many limitations. Some of the challenges generative AI presents result from the specific approaches used to implement particular use cases. For example, a summary of a complex topic is easier to read than an explanation that includes various sources supporting key points. The readability of the summary, however, comes at the expense of a user being able to vet where the information comes from. Here are some of the limitations to consider when implementing or using a generative AI app:

It does not always identify the source of content.

It can be challenging to assess the bias of original sources.

Realistic-sounding content makes it harder to identify inaccurate information.

It can be difficult to understand how to tune for new circumstances.

Results can gloss over bias, prejudice and hatred.

Attention is all you need: Transformers bring new capability

In 2017, Google reported on a new type of neural network architecture that brought significant improvements in efficiency and accuracy to tasks like natural language processing. The breakthrough approach, called transformers, was based on the concept of attention.

At a high level, attention refers to the mathematical description of how things (e.g., words) relate to, complement and modify each other. The researchers described the architecture in their seminal paper, "Attention is all you need," showing how a transformer neural network was able to translate between English and French with more accuracy than other neural nets and in only a quarter of the training time. The breakthrough technique could also discover relationships, or hidden orders, between other things buried in the data that humans might have been unaware of because they were too complicated to express or discern.
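For readers who want to see what "attention" actually computes, here is a small NumPy sketch of scaled dot-product attention, the core operation that paper introduced. The sequence length, dimensions and random data are illustrative assumptions, not values from any real model.

# Sketch of scaled dot-product attention: each position scores its relationship
# to every other position, then mixes their value vectors according to those scores.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # how strongly each token attends to each other token
    weights = softmax(scores, axis=-1)     # rows sum to 1
    return weights @ V                     # weighted mix of the value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16                   # five tokens, 16-dimensional representations (toy sizes)
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
print(attention(Q, K, V).shape)            # (5, 16): one updated vector per token

In a real transformer, Q, K and V are learned projections of the token embeddings, and many attention "heads" run in parallel across very long contexts.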

Transformer architecture has evolved rapidly since it was introduced, giving rise to LLMs such as GPT-3 and better pre-training techniques, such as Google's BERT.

What are the concerns surrounding generative AI?

The rise of generative AI is also fueling various concerns. These relate to the quality of results, potential for misuse and abuse, and the potential to disrupt existing business models. Here are some of the specific types of problematic issues posed by the current state of generative AI:

It can provide inaccurate and misleading information.

It is more difficult to trust without knowing the source and provenance of information.

It can promote new kinds of plagiarism that ignore the rights of content creators and artists of original content.

It might disrupt existing business models built around search engine optimization and advertising.

It makes it easier to generate fake news.

It makes it easier to claim that real photographic evidence of a wrongdoing was just an AI-generated fake.

It could impersonate people for more effective social engineering cyber attacks.

Implementing generative AI is not just about technology. Businesses must also consider its impact on people and processes.

What are some examples of generative AI tools?

Generative AI tools exist for various modalities, such as text, imagery, music, code and voices.

Some popular AI content generators to explore include the following:

Text generation tools include GPT, Jasper, AI-Writer and Lex.

Image generation tools include Dall-E 2, Midjourney and Stable Diffusion.

Music generation tools include Amper, Dadabots and MuseNet.

Code generation tools include CodeStarter, Codex, GitHub Copilot and Tabnine.

Voice synthesis tools include Descript, Listnr and Podcast.ai.

AI chip design tool companies include Synopsys, Cadence, Google and Nvidia.

Use cases for generative AI, by industry

New generative AI technologies have sometimes been described as general-purpose technologies akin to steam power, electricity and computing because they can profoundly affect many industries and use cases. It's essential to keep in mind that, like previous general-purpose technologies, it often took decades for people to find the best way to organize workflows to take advantage of the new approach rather than speeding up small portions of existing workflows.

Here are some ways generative AI applications could impact different industries:

Finance can watch transactions in the context of an individual's history to build better fraud detection systems.

Legal firms can use generative AI to design and interpret contracts, analyze evidence and suggest arguments.

Manufacturers can use generative AI to combine data from cameras, X-ray and other metrics to identify defective parts and the root causes more accurately and economically.

Film and media companies can use generative AI to produce content more economically and translate it into other languages with the actors' own voices.

The medical industry can use generative AI to identify promising drug candidates more efficiently.

Architectural firms can use generative AI to design and adapt prototypes more quickly.

Gaming companies can use generative AI to design game content and levels.

GPT joins the pantheon of general-purpose technologies

OpenAI, an AI research and deployment company, took the core ideas behind transformers to train its version, dubbed Generative Pre-trained Transformer, or GPT. Observers have noted that GPT is the same acronym used to describe general-purpose technologies such as the steam engine, electricity and computing. Most would agree that GPT and other transformer implementations are already living up to their name as researchers discover ways to apply them to industry, science, commerce, construction and medicine.

Ethics and bias in generative AI

Despite their promise, the new generative AI tools open a can of worms regarding accuracy, trustworthiness, bias, hallucination and plagiarism -- ethical issues that likely will take years to sort out. None of the issues are particularly new to AI. Microsoft's first foray into chatbots in 2016, called Tay, for example, had to be turned off after it started spewing inflammatory rhetoric on Twitter.

What is new is that the latest crop of generative AI apps sounds more coherent on the surface. But this combination of humanlike language and coherence is not synonymous with human intelligence, and there currently is great debate about whether generative AI models can be trained to have reasoning ability. One Google engineer was even fired after publicly declaring the company's generative AI app, Language Models for Dialog Applications (LaMDA), was sentient.

The convincing realism of generative AI content introduces a new set of AI risks. It makes it harder to detect AI-generated content and, more importantly, makes it more difficult to detect when things are wrong. This can be a big problem when we rely on generative AI results to write code or provide medical advice. Many results of generative AI are not transparent, so it is hard to determine if, for example, they infringe on copyrights or if there is a problem with the original sources from which they draw results. If you don't know how the AI came to a conclusion, you cannot reason about why it might be wrong.

Generative AI vs. AI

Generative AI focuses on creating new and original content, chat responses, designs, synthetic data or even deepfakes. It's particularly valuable in creative fields and for novel problem-solving, as it can autonomously generate many types of new outputs.

Generative AI, as noted above, relies on neural network techniques such as transformers, GANs and VAEs. Other kinds of AI, in distinction, use techniques including convolutional neural networks, recurrent neural networks and reinforcement learning.

Generative AI often starts with a prompt that lets a user or data source submit a starting query or data set to guide content generation. This can be an iterative process to explore content variations. Traditional AI algorithms, on the other hand, often follow a predefined set of rules to process data and produce a result.

Both approaches have their strengths and weaknesses depending on the problem to be solved, with generative AI being well suited for tasks involving NLP and calling for the creation of new content, and traditional algorithms more effective for tasks involving rule-based processing and predetermined outcomes.
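As a toy illustration of this contrast, the sketch below pairs a rule-based responder, which always returns the same answer for the same input, with a trivially "generative" one that samples, so repeated calls explore variations. The rules, phrases and seed are invented for the example and stand in for far more capable real systems.

# Rule-based vs. sampling-based responses: the first is deterministic, the
# second explores variations on each call. Both are illustrative toys.
import random

def rule_based_reply(message):
    rules = {"refund": "Please fill out form R-1.", "hours": "We are open 9-5."}
    for keyword, reply in rules.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I can't help with that."

def generative_reply(message, rng=random.Random(0)):
    openings = ["Thanks for reaching out!", "Happy to help.", "Good question."]
    actions = ["I've noted your request.", "Here's what I'd suggest...", "Let me look into that."]
    return f"{rng.choice(openings)} {rng.choice(actions)}"

print(rule_based_reply("What are your hours?"))   # always the same answer
print(generative_reply("What are your hours?"))   # varies from call to call
print(generative_reply("What are your hours?"))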

Generative AI vs. predictive AI vs. conversational AI

Predictive AI, in distinction to generative AI, uses patterns in historical data to forecast outcomes, classify events and produce actionable insights. Organizations use predictive AI to sharpen decision-making and develop data-driven strategies.

Conversational AI helps AI systems like virtual assistants, chatbots and customer service apps interact and engage with humans in a natural way. It uses techniques from NLP and machine learning to understand language and provide human-like text or speech responses.

Generative AI history

The Eliza chatbot created by Joseph Weizenbaum in the 1960s was one of the earliest examples of generative AI. These early implementations used a rules-based approach that broke easily due to a limited vocabulary, lack of context and overreliance on patterns, among other shortcomings. Early chatbots were also difficult to customize and extend.

The field saw a resurgence in the wake of advances in neural networks and deep learning in 2010 that enabled the technology to automatically learn to parse existing text, classify image elements and transcribe audio.

Ian Goodfellow introduced GANs in 2014. This deep learning technique provided a novel approach for organizing competing neural networks to generate and then rate content variations. These could generate realistic people, voices, music and text. This inspired interest in -- and fear of -- how generative AI could be used to create realistic deepfakes that impersonate voices and people in videos.

Since then, progress in other neural network techniques and architectures has helped expand generative AI capabilities. Techniques include VAEs, long short-term memory, transformers, diffusion models and neural radiance fields.

Best practices for using generative AI

The best practices for using generative AI will vary depending on the modalities, workflow and desired goals. That said, it is important to consider essential factors such as accuracy, transparency and ease of use in working with generative AI. The following practices help achieve these factors:

Clearly label all generative AI content for users and consumers.

Vet the accuracy of generated content using primary sources where applicable.

Consider how bias might get woven into generated AI results.

Double-check the quality of AI-generated code and content using other tools.

Learn the strengths and limitations of each generative AI tool.

Familiarize yourself with common failure modes in results and work around these.

The future of generative AI

The incredible depth and ease of ChatGPT spurred widespread adoption of generative AI. To be sure, the speedy adoption of generative AI applications has also demonstrated some of the difficulties in rolling out this technology safely and responsibly. But these early implementation issues have inspired research into better tools for detecting AI-generated text, images and video.

Indeed, the popularity of generative AI tools such as ChatGPT, Midjourney, Stable Diffusion and Bard has also fueled an endless variety of training courses at all levels of expertise. Many are aimed at helping developers create AI applications. Others focus more on business users looking to apply the new technology across the enterprise. At some point, industry and society will also build better tools for tracking the provenance of information to create more trustworthy AI.

Generative AI will continue to evolve, making advancements in translation, drug discovery, anomaly detection and the generation of new content, from text and video to fashion design and music. As good as these new one-off tools are, the most significant impact of generative AI in the future will come from integrating these capabilities directly into the tools we already use. Grammar checkers, for example, will get better. Design tools will seamlessly embed more useful recommendations directly into our workflows. Training tools will be able to automatically identify best practices in one part of an organization to help train other employees more efficiently.

These are just a fraction of the ways generative AI will change what we do in the near term. What the impact of generative AI will be in the future is hard to say. But as we continue to harness these tools to automate and augment human tasks, we will inevitably find ourselves having to reevaluate the nature and value of human expertise.


Below are some frequently asked questions people have about generative AI.

Who created generative AI?

Joseph Weizenbaum created the first generative AI in the 1960s as part of the Eliza chatbot. Ian Goodfellow demonstrated generative adversarial networks for generating realistic-looking and -sounding people in 2014.

Subsequent research into LLMs from OpenAI and Google ignited the recent enthusiasm that has evolved into tools like ChatGPT, Google Bard and Dall-E.


How could generative AI replace jobs?

Generative AI has the potential to replace a variety of jobs, including the following:

Writing product descriptions.

Creating marketing copy.

Generating basic web content.

Initiating interactive sales outreach.

Answering customer questions.

Making graphics for webpages.

Some companies will look for opportunities to replace humans where possible, while others will use generative AI to augment and enhance their existing workforce.

How do you build a generative AI model?

A generative AI model starts by efficiently encoding a representation of what you want to generate. For example, a generative AI model for text might begin by finding a way to represent the words as vectors that characterize the similarity between words often used in the same sentence or that mean similar things.

Recent progress in LLM research has helped the industry implement the same process to represent patterns found in images, sounds, proteins, DNA, drugs and 3D designs. This generative AI model provides an efficient way of representing the desired type of content and efficiently iterating on useful variations.
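Here is a small sketch of the word-vector idea described above. The three hand-made vectors are assumptions standing in for learned embeddings, and cosine similarity is one common way to measure how close two word vectors are.

# Toy word vectors: similar words sit close together in the vector space.
import numpy as np

embeddings = {
    "doctor": np.array([0.9, 0.1, 0.8]),
    "nurse":  np.array([0.85, 0.15, 0.75]),
    "guitar": np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    # 1.0 means the vectors point the same way; values near 0 mean unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["doctor"], embeddings["nurse"]))    # close to 1: related words
print(cosine(embeddings["doctor"], embeddings["guitar"]))   # much smaller: unrelated words

Real models learn vectors with hundreds or thousands of dimensions from huge corpora, but the geometric intuition is the same.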

How do you train a generative AI model?

The generative AI model needs to be trained for a particular use case. The recent progress in LLMs provides an ideal starting point for customizing applications for different use cases. For example, the popular GPT model developed by OpenAI has been used to write text, generate code and create imagery based on written descriptions.

Training involves tuning the model's parameters for different use cases and then fine-tuning results on a given set of training data. For example, a call center might train a chatbot against the kinds of questions service agents get from various customer types and the responses that service agents give in return. An image-generating app, in contrast to text, might start with labels that describe the content and style of images to train the model to generate new images.
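The following toy sketch illustrates the pretrain-then-fine-tune pattern described above using a simple bigram word-count model instead of a neural network; the two tiny corpora are invented. Real fine-tuning adjusts network weights, but the effect is analogous: further training on domain data shifts what the model predicts next.

# A bigram "language model" built from counts. Training on general text gives
# one prediction for the word after "reset"; further training on (invented)
# call-center transcripts shifts that prediction toward the domain.
from collections import defaultdict, Counter

def train(counts, corpus):
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1

counts = defaultdict(Counter)

general_text = [
    "please reset the clock",
    "reset the board and try again",
]
call_center_text = [
    "reset your password from the login page",
    "to reset your password click forgot password",
    "agents can reset your pin over the phone",
]

train(counts, general_text)                  # "pretraining" on general text
print(counts["reset"].most_common(1))        # [('the', 2)]
train(counts, call_center_text)              # "fine-tuning" on domain transcripts
print(counts["reset"].most_common(1))        # [('your', 3)]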

How is generative AI changing creative work?

Generative AI promises to help creative workers explore variations of ideas. Artists might start with a basic design concept and then explore variations. Industrial designers could explore product variations. Architects could explore different building layouts and visualize them as a starting point for further refinement.

It could also help democratize some aspects of creative work. For example, business users could explore product marketing imagery using text descriptions. They could further refine these results using simple commands or suggestions.

What's next for generative AI?

ChatGPT's ability to generate humanlike text has sparked widespread curiosity about generative AI's potential. It also shined a light on the many problems and challenges ahead.

In the short term, work will focus on improving the user experience and workflows using generative AI tools. It will also be essential to build trust in generative AI results.

Many companies will also customize generative AI on their own data to help improve branding and communication. Programming teams will use generative AI to enforce company-specific best practices for writing and formatting more readable and consistent code.

Vendors will integrate generative AI capabilities into their additional tools to streamline content generation workflows. This will drive innovation in how these new capabilities can increase productivity.

Generative AI could also play a role in various aspects of data processing, transformation, labeling and vetting as part of augmented analytics workflows. Semantic web applications could use generative AI to automatically map internal taxonomies describing job skills to different taxonomies on skills training and recruitment sites. Similarly, business teams will use these models to transform and label third-party data for more sophisticated risk assessments and opportunity analysis capabilities.

In the future, generative AI models will be extended to support 3D modeling, product design, drug development, digital twins, supply chains and business processes. This will make it easier to generate new product ideas, experiment with different organizational models and explore various business ideas.


Latest Generative AI technology defined

AI art (artificial intelligence art)

AI art is any form of digital art created or enhanced with AI tools.

AI prompt

An artificial intelligence (AI) prompt is a mode of interaction between a human and an LLM that lets the model generate the intended output. This interaction can be in the form of a question, text, code snippets or examples.

AI prompt engineer

An artificial intelligence (AI) prompt engineer is an expert in creating text-based prompts or cues that can be interpreted and understood by large language models and generative AI tools.
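As a rough illustration of the kind of artifact a prompt engineer produces, here is a sketch that assembles a structured few-shot prompt from reusable parts. The instruction and examples are invented, and nothing here is tied to a particular model or vendor API.

# Assemble a few-shot prompt: a task instruction, worked examples, and the new
# input. The resulting string is what would be sent to a language model.
INSTRUCTION = "Classify the sentiment of the customer message as positive, negative, or neutral."

EXAMPLES = [
    ("The checkout was quick and painless.", "positive"),
    ("My order arrived two weeks late.", "negative"),
]

def build_prompt(message):
    shots = "\n".join(f"Message: {m}\nSentiment: {s}" for m, s in EXAMPLES)
    return f"{INSTRUCTION}\n\n{shots}\n\nMessage: {message}\nSentiment:"

print(build_prompt("The app keeps crashing when I open it."))

In practice, a prompt engineer iterates on the wording, ordering and examples, measuring how each change affects the model's outputs.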

Amazon Bedrock

Amazon Bedrock -- also known as AWS Bedrock -- is a machine learning platform used to build generative artificial intelligence (AI) applications on the Amazon Web Services cloud computing platform.

Auto-GPT

Auto-GPT is an experimental, open source autonomous AI agent based on the GPT-4 language model that autonomously chains together tasks to achieve a big-picture goal set by the user.

Google Search Generative Experience

Google Search Generative Experience (SGE) is a set of search and interface capabilities that integrates generative AI-powered results into Google search engine query responses.

Google Search Labs (GSE)

GSE is an initiative from Alphabet's Google division to provide new capabilities and experiments for Google Search in a preview format before they become publicly available.

Image-to-image translation

Image-to-image translation is a generative artificial intelligence (AI) technique that translates a source image into a target image while preserving certain visual properties of the original image.

Inception score

The inception score (IS) is a mathematical algorithm used to measure or determine the quality of images created by generative AI through a generative adversarial network (GAN). The word "inception" refers to the spark of creativity or initial beginning of a thought or action traditionally experienced by humans.
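The score itself is straightforward to compute once a classifier has produced class probabilities p(y|x) for each generated image: it is the exponential of the average KL divergence between each p(y|x) and the marginal distribution p(y). The NumPy sketch below uses random stand-ins for the classifier outputs rather than a real Inception network.

# Inception score from a batch of class-probability vectors (toy data).
import numpy as np

def inception_score(p_yx, eps=1e-12):
    p_y = p_yx.mean(axis=0, keepdims=True)                  # marginal class distribution
    kl = (p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 10))                          # 100 fake "images", 10 classes
p_yx = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(inception_score(p_yx))                                 # higher = sharper and more diverse images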

LangChain

LangChain is an open source framework that lets software developers working with artificial intelligence (AI) and its machine learning subset combine large language models with other external components to develop LLM-powered applications.

Q-learning

Q-learning is a machine learning approach that enables a model to iteratively learn and improve over time by taking the correct action.
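A minimal tabular sketch of the standard Q-learning update, Q(s,a) += alpha * (r + gamma * max over a' of Q(s',a') - Q(s,a)), on a made-up five-state corridor where the agent is rewarded for reaching the right-hand end. The environment, hyperparameters and random exploration policy are assumptions chosen for the toy.

# Tabular Q-learning on a 5-state corridor: action 1 moves right, action 0 moves left.
import random

n_states, actions = 5, [0, 1]
Q = [[0.0, 0.0] for _ in range(n_states)]        # one value per (state, action)
alpha, gamma = 0.5, 0.9
rng = random.Random(0)

def step(state, action):
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for _ in range(500):                              # episodes
    s, done = 0, False
    while not done:
        a = rng.choice(actions)                   # random exploration; Q-learning is off-policy
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q[:-1]])         # values grow as states get closer to the reward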


Reinforcement learning from human feedback (RLHF)

RLHF is a machine learning approach that combines reinforcement learning techniques, such as rewards and comparisons, with human guidance to train an AI agent.

Retrieval-augmented generation

Retrieval-augmented generation (RAG) is an artificial intelligence (AI) framework that retrieves data from external sources of knowledge to improve the quality of responses.
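A toy sketch of the RAG pattern: score a small in-memory document set against the question, then paste the best matches into the prompt that would be sent to a language model. The documents and the crude word-overlap scoring are invented stand-ins for a real embedding model and vector database.

# Retrieve the most relevant passages and build a grounded prompt from them.
def score(question, passage):
    q_words, p_words = set(question.lower().split()), set(passage.lower().split())
    return len(q_words & p_words)                 # crude overlap; real systems use embeddings

documents = [
    "The warranty on the X100 blender lasts 24 months.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "The X100 blender ships with a 1.5 litre jug.",
]

def build_prompt(question, k=2):
    top = sorted(documents, key=lambda d: score(question, d), reverse=True)[:k]
    context = "\n".join(top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("How long is the warranty on the X100 blender?"))
# The assembled prompt (retrieved context plus question) is then passed to an LLM,
# which lets the model answer from sources it was never trained on.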

Variational autoencoder (VAE)

A variational autoencoder is a generative AI algorithm that uses deep learning to generate new content, detect anomalies and remove noise.

What are some generative models for natural language processing?

Some generative models for natural language processing include the following:

Carnegie Mellon University's XLNet

OpenAI's GPT (Generative Pre-trained Transformer)

Google's ALBERT ("A Lite" BERT)

Google BERT

Google LaMDA

Will AI ever gain consciousness?

Some AI proponents believe that generative AI is an essential step toward general-purpose AI and even consciousness. One early tester of Google's LaMDA chatbot even created a stir when he publicly declared it was sentient. Then he was let go from the company.

In 1993, the American science fiction writer and computer scientist Vernor Vinge posited that in 30 years, we would have the technological ability to create a "superhuman intelligence" -- an AI that is more intelligent than humans -- after which the human era would end. AI pioneer Ray Kurzweil predicted such a "singularity" by 2045.

Many other AI experts think it could be much further off. Robot pioneer Rodney Brooks predicted that AI will not gain the sentience of a 6-year-old in his lifetime but could seem as intelligent and attentive as a dog by 2048.

AI existential risk: Is AI a threat to humanity?

What should enterprises make of the recent warnings about AI's threat to humanity? AI experts and ethicists offer opinions and practical advice for managing AI risk.

Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used?to create new content, including audio, code, images, text, simulations, and videos. Recent?breakthroughs in the field have the potential to drastically change the way we approach content?creation.?Digital illustration of a wireframe of an apple. Digital illustration of a wireframe of an apple.?

Generative AI systems fall under the broad category of machine learning, and here’s how one?such system—ChatGPT—describes what it can do:?


That’s why ChatGPT—the GPT stands for generative pretrained transformer—is receiving so?

much attention right now. It’s a free chatbot that can generate an answer to almost any question?it’s asked. Developed by OpenAI, and released for testing to the general public in November?2022, it’s already considered the best AI chatbot ever. And it’s popular too: over a million people?signed up to use it in just five days. Starry-eyed fans posted examples of the chatbot producing?computer code, college-level essays, poems, and even halfway-decent jokes. Others, among?the wide range of people who earn their living by creating content, from advertising copywriters?to tenured professors, are quaking in their boots.?

While many have reacted to ChatGPT (and AI and machine learning more broadly) with fear, machine learning clearly has the potential for good. In the years since its wide deployment, machine learning has demonstrated impact in a number of industries, accomplishing things like medical imaging analysis and high-resolution weather forecasts. A 2022 McKinsey survey shows that AI adoption has more than doubled over the past five years, and investment in AI is increasing apace. It’s clear that generative AI tools like ChatGPT and DALL-E (a tool for AI-generated art) have the potential to change how a range of jobs are performed. The full scope of that impact, though, is still unknown—as are the risks.

But there are some questions we can answer—like how generative AI models are built, what kinds of problems they are best suited to solve, and how they fit into the broader category of machine learning.


What’s the difference between machine learning and artificial intelligence?


Artificial intelligence is pretty much just what it sounds like—the practice of getting machines to mimic human intelligence to perform tasks. You’ve probably interacted with AI even if you don’t realize it—voice assistants like Siri and Alexa are founded on AI technology, as are customer service chatbots that pop up to help you navigate websites.

Machine learning is a type of artificial intelligence. Through machine learning, practitioners develop artificial intelligence through models that can “learn” from data patterns without human direction. The unmanageably huge volume and complexity of data (unmanageable by humans, anyway) that is now being generated has increased the potential of machine learning, as well as the need for it.


What are the main types of machine learning models?

Machine learning is founded on a number of building blocks, starting with classical statistical techniques developed between the 18th and 20th centuries for small data sets. In the 1930s and 1940s, the pioneers of computing—including theoretical mathematician Alan Turing—began working on the basic techniques for machine learning. But these techniques were limited to laboratories until the late 1970s, when scientists first developed computers powerful enough to run them.

Until recently, machine learning was largely limited to predictive models, used to observe and classify patterns in content. For example, a classic machine learning problem is to start with an image or several images of, say, adorable cats. The program would then identify patterns among the images and scrutinize random images for ones that match the adorable-cat pattern. Generative AI was a breakthrough: rather than simply perceiving and classifying a photo of a cat, machine learning is now able to create an image or text description of a cat on demand.

ChatGPT may be getting all the headlines now, but it’s not the first text-based machine learning model to make a splash. OpenAI’s GPT-3 and Google’s BERT both launched in recent years to some fanfare. But before ChatGPT, which by most accounts works pretty well most of the time (though it’s still being evaluated), AI chatbots didn’t always get the best reviews. GPT-3 is “by turns super impressive and super disappointing,” said New York Times tech reporter Cade Metz in a video where he and food writer Priya Krishna asked GPT-3 to write recipes for a (rather disastrous) Thanksgiving dinner.

The first machine learning models to work with text were trained by humans to classify various inputs according to labels set by researchers. One example would be a model trained to label social media posts as either positive or negative. This type of training is known as supervised learning because a human is in charge of “teaching” the model what to do.

The next generation of text-based machine learning models rely on what’s known as self-supervised learning. This type of training involves feeding a model a massive amount of text so it becomes able to generate predictions. For example, some models can predict, based on a few words, how a sentence will end. With the right amount of sample text—say, a broad swath of the internet—these text models become quite accurate. We’re seeing just how accurate with the success of tools like ChatGPT.
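To make the self-supervised idea concrete, here is a toy sketch in which the training "labels" are simply the next words in the text itself; real large language models do the same thing with neural networks at vastly larger scale.

```python
from collections import Counter, defaultdict

# Toy illustration of self-supervised next-word prediction: no human labels are
# needed, because the "target" for each word is just the word that follows it.

corpus = ("generative ai models learn patterns in text . "
          "generative ai models can then generate new text .").split()

nxt = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):     # build bigram counts from raw text
    nxt[w1][w2] += 1

def predict_next(word: str) -> str:
    """Most likely next word seen during 'training'."""
    return nxt[word].most_common(1)[0][0] if nxt[word] else "<unk>"

print(predict_next("generative"))   # -> "ai"
print(predict_next("new"))          # -> "text"
```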


What does it take to build a generative AI model?


Building a generative AI model has for the most part been a major undertaking, to the extent that only a few well-resourced tech heavyweights have made an attempt. OpenAI, the company behind ChatGPT, former GPT models, and DALL-E, has billions in funding from boldface-name donors. DeepMind is a subsidiary of Alphabet, the parent company of Google, and Meta has released its Make-A-Video product based on generative AI. These companies employ some of the world’s best computer scientists and engineers.

But it’s not just talent. When you’re asking a model to train using nearly the entire internet, it’s going to cost you. OpenAI hasn’t released exact costs, but estimates indicate that GPT-3 was trained on around 45 terabytes of text data—that’s about one million feet of bookshelf space, or a quarter of the entire Library of Congress—at an estimated cost of several million dollars.

These aren’t resources your garden-variety start-up can access.

What kinds of output can a generative AI model produce?


As you may have noticed above, outputs from generative AI models can be indistinguishable from human-generated content, or they can seem a little uncanny. The results depend on the quality of the model—as we’ve seen, ChatGPT’s outputs so far appear superior to those of its predecessors—and the match between the model and the use case, or input.

ChatGPT can produce what one commentator called a “solid A-” essay comparing theories of nationalism from Benedict Anderson and Ernest Gellner—in ten seconds. It also produced an already famous passage describing how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. AI-generated art models like DALL-E (its name a mash-up of the surrealist artist Salvador Dalí and the lovable Pixar robot WALL-E) can create strange, beautiful images on demand, like a Raphael painting of a Madonna and child, eating pizza. Other generative AI models can produce code, video, audio, or business simulations.

But the outputs aren’t always accurate—or appropriate. When Priya Krishna asked DALL-E 2 to come up with an image for Thanksgiving dinner, it produced a scene where the turkey was garnished with whole limes, set next to a bowl of what appeared to be guacamole. For its part, ChatGPT seems to have trouble counting, or solving basic algebra problems—or, indeed, overcoming the sexist and racist bias that lurks in the undercurrents of the internet and society more broadly.

Generative AI outputs are carefully calibrated combinations of the data used to train the algorithms. Because the amount of data used to train these algorithms is so incredibly massive—as noted, GPT-3 was trained on 45 terabytes of text data—the models can appear to be “creative” when producing outputs. What’s more, the models usually have random elements, which means they can produce a variety of outputs from one input request—making them seem even more lifelike.
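That last point, the random element, can be illustrated with a small sampling sketch: the model assigns scores to candidate next words and samples from them, so the same input can yield different outputs. The candidate words and scores below are invented for illustration.

```python
import math
import random

# Why one prompt can yield different outputs: models sample from a probability
# distribution over next tokens rather than always picking the single top one.

def sample(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax sampling; lower temperature makes output more deterministic."""
    words = list(scores)
    exps = [math.exp(scores[w] / temperature) for w in words]
    total = sum(exps)
    return random.choices(words, weights=[e / total for e in exps], k=1)[0]

next_word_scores = {"delicious": 2.1, "festive": 1.8, "unusual": 1.2, "lime-garnished": 0.4}

print([sample(next_word_scores, temperature=1.0) for _ in range(5)])  # varied picks
print([sample(next_word_scores, temperature=0.1) for _ in range(5)])  # nearly always the same
```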


What kinds of problems can a generative AI model solve?


You’ve probably seen that generative AI tools (toys?) like ChatGPT can generate endless hours of entertainment. The opportunity is clear for businesses as well. Generative AI tools can produce a wide variety of credible writing in seconds, then respond to criticism to make the writing more fit for purpose. This has implications for a wide variety of industries, from IT and software organizations that can benefit from the instantaneous, largely correct code generated by AI models to organizations in need of marketing copy. In short, any organization that needs to produce clear written materials potentially stands to benefit. Organizations can also use generative AI to create more technical materials, such as higher-resolution versions of medical images. And with the time and resources saved here, organizations can pursue new business opportunities and the chance to create more value.

We’ve seen that developing a generative AI model is so resource intensive that it is out of the question for all but the biggest and best-resourced companies. Companies looking to put generative AI to work instead have the option to either use a generative AI model out of the box or fine-tune one to perform a specific task. If you need to prepare slides according to a specific style, for example, you could ask the model to “learn” how headlines are normally written based on the data in the slides, then feed it slide data and ask it to write appropriate headlines.
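As a sketch of the out-of-the-box option described above, the snippet below builds a prompt containing a few example headlines and asks a model to write one for new slide data; `call_llm` and the example headlines are hypothetical placeholders, not from the original article.

```python
# Few-shot prompting sketch: show the model example slide/headline pairs and ask
# it to write a headline in the same style. call_llm() is a hypothetical stand-in
# for a real LLM API call.

def call_llm(prompt: str) -> str:
    return "[model-written headline]"

EXAMPLE_HEADLINES = [
    ("Q3 revenue up 12% on new product lines", "Revenue grows 12% in Q3"),
    ("Churn fell from 8% to 5% after support revamp", "Support revamp cuts churn"),
]

def headline_for(slide_text: str) -> str:
    examples = "\n".join(f"Slide: {s}\nHeadline: {h}" for s, h in EXAMPLE_HEADLINES)
    prompt = (f"Write a short headline in the same style.\n\n{examples}\n\n"
              f"Slide: {slide_text}\nHeadline:")
    return call_llm(prompt)

print(headline_for("Cloud migration finished two months early and under budget"))
```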

What are the limitations of AI models? How can these potentially be overcome?

Since they are so new, we have yet to see the long-tail effect of generative AI models. This means there are some inherent risks involved in using them—some known and some unknown.

The outputs generative AI models produce may often sound extremely convincing. This is by design. But sometimes the information they generate is just plain wrong. Worse, sometimes it’s biased (because it’s built on the gender, racial, and myriad other biases of the internet and society more generally) and can be manipulated to enable unethical or criminal activity. For example, ChatGPT won’t give you instructions on how to hotwire a car, but if you say you need to hotwire a car to save a baby, the algorithm is happy to comply. Organizations that rely on generative AI models should reckon with the reputational and legal risks involved in unintentionally publishing biased, offensive, or copyrighted content.

These risks can be mitigated, however, in a few ways. For one, it’s crucial to carefully select the initial data used to train these models to avoid including toxic or biased content. Next, rather than employing an off-the-shelf generative AI model, organizations could consider using smaller, specialized models. Organizations with more resources could also customize a general model based on their own data to fit their needs and minimize biases. Organizations should also keep a human in the loop (that is, make sure a real human checks the output of a generative AI model before it is published or used) and avoid using generative AI models for critical decisions, such as those involving significant resources or human welfare.

It can’t be emphasized enough that this is a new field. The landscape of risks and opportunities is likely to change rapidly in coming weeks, months, and years. New use cases are being tested monthly, and new models are likely to be developed in the coming years. As generative AI becomes increasingly, and seamlessly, incorporated into business, society, and our personal lives, we can also expect a new regulatory climate to take shape. As organizations begin experimenting—and creating value—with these tools, leaders will do well to keep a finger on the pulse of regulation and risk.

What are the potential risks involved in Generative AI?

Data Privacy Concerns

Generative AI operates on the foundation of colossal data realms, often navigating a tightrope between innovation and intrusion. The unfathomable amount of data that supports the function of AI systems poses a significant threat to data privacy, pushing the boundaries of what can be perceived as ethical use of personal information and intellectual property.

OpenAI’s GPT-4, for instance, has its genesis in a reservoir of data that is both a powerhouse and a potential treasure trove for misuse. As we sprint towards an era where AI would perhaps know us better than we know ourselves, the primacy of protecting individual privacy emerges as a paramount concern. Regulatory frameworks must evolve to stonewall any avenue that leads to data exploitation, ensuring that Generative AI serves as a custodian rather than a predator of sensitive information.

Bias and Discrimination

AI doesn't just learn languages and patterns; it inadvertently learns and sometimes amplifies the existing biases in the data it is trained on. This manifestation of pre-existing biases can cultivate a fertile ground for discrimination and prejudice, potentially exacerbating societal divides. To address this, it is incumbent upon the creators to introduce countermeasures that foster impartiality and inclusivity, nurturing AI systems that are reflective of a diverse and multidimensional societal fabric. By steering clear of tunnel vision and advocating for AI that is devoid of prejudiced undertones, we can ensure a digital ecosystem that espouses equity and fairness.


Economic Impact

The onward march of Generative AI could redraw the economic landscape substantially. Industries reliant on creative and decision-making processes find themselves on the cusp of an automation revolution, a phenomenon that brings with it the specter of job displacement on an unprecedented scale. As we teeter on the brink of this transformative phase, it becomes imperative to foster skill development and educational paradigms that are attuned to the demands of an AI-dominated landscape. The solution lies not in resistance but in adaptive strategies that envision a harmonious blend of human ingenuity and AI prowess, carving out a future where man and machine can coexist productively.

Misinformation and Fake Content

The potent ability of Generative AI to craft realistic and coherent text, images, or videos presents a double-edged sword: on the one hand, it can be a tool for creation; on the other, a weapon for deceit. The digital realm is in danger of being inundated with misinformation and fake content constructed with an intricacy that is hard to debunk. A multi-faceted approach involving technological fortification and societal education must be deployed to curtail the spread of AI-engineered misinformation, erecting barriers that preserve the sanctity of truth in a digitalized world.

Loss of Human Control

As we reach the pinnacle of concerns, it is the foreboding loss of human control over these dynamic systems that casts the longest shadow. Generative AI, with its relentless evolution, poses a risk of spiraling beyond the grasp of its creators, initiating a chain of unintended and potentially perilous actions.

Dedicated oversight, coupled with stringent regulatory mechanisms, must be the harbinger of a responsible AI era. It is a road that calls for unwavering vigilance, where the reins of control must remain firmly in human hands, ensuring a trajectory of development that is guided, safe, and aligned with the greater good. As we stand at the crossroads of a revolution orchestrated by Generative AI, the onus falls upon us to navigate the path with responsibility and foresight. The potential that AI harbors is immense, yet it demands a paradigm of vigilance and thoughtful deployment.

What I think, and what I am doing, in terms of Generative AI learning and new trends

I have signed up for the Falcon 180B demo (https://falconllm.tii.ae/) and for Hugging Face (https://huggingface.co/). There is also a lot of new work happening in the Generative AI field which anyone can check out; I am personally intrigued by Artificial Intelligence and related fields and keep exploring more.

https://youtu.be/G2fqAlgmoPo?si=0TGjJBQK7Da_QgPV


https://youtu.be/1fQ1DDMmiqo?si=AUA0i_MyWqEIKbvN

https://youtu.be/_6R7Ym6Vy_I?si=WBKeGXNuADA9RF7j

Latest in AI:

https://venturebeat.com/ai/researchers-unveil-3d-gpt-an-ai-that-can-generate-3d-worlds-from-simple-text-commands/

https://analyticsindiamag.com/6-must-know-autonomous-ai-agents/

Disclosure & Legal Disclaimer: Some of the content has been taken from open internet sources for representation purposes only.


Anjoum Sirohhi
