How 2022 became the year of generative AI

There has been a lot of excitement (and hype) surrounding generative AI (artificial intelligence) in 2022. Social media platforms such as Twitter and Reddit are filled with images created by generative machine learning models such as DALL-E and Stable Diffusion. Startups building products on top of generative models are attracting funding despite the market downturn. And Big Tech companies are integrating generative models into their mainstream products.

Generative AI is not new. With a few notable exceptions, most of the technologies we’re seeing today have existed for several years. However, the convergence of several trends has made it possible to productize generative models and bring them to everyday applications. The field still has many challenges to overcome, but there is little doubt that the market for generative AI is bound to grow in 2023.

Scientific improvements in generative AI

Generative AI became popular in 2014 with the advent of generative adversarial networks (GANs), a type of deep learning architecture that could create realistic images — such as faces — from noise maps. Scientists later created other variants of GANs to perform tasks such as transferring the style of one image to another. GANs and variational autoencoders (VAEs), another deep learning architecture, later ushered in the era of deepfakes, an AI technique that modifies images and videos to swap one person’s face for another.
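
For readers who want to see the moving parts, here is a minimal, hypothetical PyTorch sketch of one GAN training step (toy layer sizes, with random tensors standing in for real data): a generator maps noise vectors to images while a discriminator learns to tell real images from generated ones.

```python
import torch
import torch.nn as nn

# Toy GAN components (hypothetical sizes): the generator maps a noise vector
# to a flattened 28x28 image, the discriminator scores real vs. fake.
noise_dim, img_dim, batch = 64, 28 * 28, 16

generator = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                          nn.Linear(256, img_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
real_images = torch.rand(batch, img_dim) * 2 - 1   # stand-in for a batch of real data
noise = torch.randn(batch, noise_dim)
fake_images = generator(noise)

# Discriminator step: real images should score 1, generated images 0.
d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
         bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))

# Generator step: try to fool the discriminator into scoring fakes as real.
g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
```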

2017 saw the advent of the transformer, a deep learning architecture underlying large language models (LLMs) such as GPT-3, LaMDA and Gopher. The transformer is used to generate text, software code and even protein structures. A variation of the transformer, the “vision transformer,” is also used for visual tasks such as image classification. An earlier version of OpenAI’s DALL-E used the transformer to generate images from text.

Transformers are scalable, which means their performance and accuracy improve as they are made larger and fed more data. But more importantly, transformer models can be trained through unsupervised or self-supervised learning, meaning they require no or very little human-annotated data, which has been one of the main bottlenecks of deep learning.
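
To make the self-supervision idea concrete, here is a minimal, hypothetical PyTorch sketch of next-token prediction: the training targets are simply the input sequence shifted by one position, so the model learns from raw text without human labels. The sizes and model configuration are illustrative, not those of any production LLM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sizes, chosen only for illustration.
vocab_size, d_model, seq_len, batch = 100, 32, 16, 4

embed = nn.Embedding(vocab_size, d_model)
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(d_model, vocab_size)

# Raw token IDs stand in for unlabeled text scraped from the web.
tokens = torch.randint(0, vocab_size, (batch, seq_len + 1))
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets = inputs shifted by one

# Causal mask so each position only attends to earlier tokens.
causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)

hidden = encoder(embed(inputs), mask=causal_mask)
logits = head(hidden)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # one self-supervised training step; no annotations involved
```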

Contrastive Language-Image Pre-training (CLIP), a technique introduced by OpenAI in 2021, became pivotal in text-to-image generators. CLIP learns shared embeddings for images and text from image-caption pairs collected from the internet. CLIP and diffusion (another deep learning technique for generating images from noise) were used in OpenAI’s DALL-E 2 to generate high-resolution images with stunning detail and quality.
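
A rough, hypothetical sketch of the contrastive objective behind CLIP (toy tensor shapes, with the image and text encoders stubbed out as random embeddings): matched image-caption pairs are pulled together in the shared embedding space while mismatched pairs are pushed apart.

```python
import torch
import torch.nn.functional as F

batch, dim = 8, 512
# Stand-ins for the outputs of an image encoder and a text encoder,
# one row per image-caption pair in the batch.
image_emb = F.normalize(torch.randn(batch, dim), dim=-1)
text_emb = F.normalize(torch.randn(batch, dim), dim=-1)

temperature = 0.07
logits = image_emb @ text_emb.t() / temperature  # pairwise similarities
labels = torch.arange(batch)                     # the i-th image matches the i-th caption

# Symmetric cross-entropy: pick the right caption for each image
# and the right image for each caption.
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
```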

As we moved toward 2022, better algorithms, larger models and bigger datasets helped improve the output of generative models, creating better images, writing high-quality software code and generating long stretches of (mostly) coherent text.

Discovering the right applications

Generative models were first presented as systems that could take on big chunks of creative work. GANs became famous for generating complete images with little input. LLMs like GPT-3 made the headlines for writing full articles.

But as the field has evolved, it has become evident that generative models are unreliable when left on their own. Many scientists agree that current deep learning models — no matter how large they are — lack some of the basic components of intelligence, which makes them prone to committing unpredictable mistakes.

Product teams are learning that generative models perform best when they are implemented in ways that give greater control to users.

The past year has seen several products that use generative models in smart, human-centric ways. For example, Copy AI, a tool that uses GPT-3 to generate blog posts, has an interactive interface in which the writer and the LLM write the outline of the article and flesh it out together.

Applications built with DALL-E 2 and Stable Diffusion also highlight user control with features that allow for editing, regenerating or configuring the output of the generative model.

As Douglas Eck, principal scientist at Google Research, said at a recent AI conference, “It’s no longer about a generative model that creates a realistic picture. It’s about making something that you created yourself. Technology should serve our need to have agency and creative control over what we do.”

Creating the right tools and infrastructure

In tandem with the algorithms and applications, the computational infrastructure and platforms for generative models have evolved. This has helped many companies integrate generative AI into their applications without the need for the specialized skills required to set up and run generative models.

Product teams with seasoned machine learning engineers can use open-source generative models such as BLOOM and Stable Diffusion. Meanwhile, teams that don’t have in-house machine learning talent can choose from a wide variety of solutions such as OpenAI API, Microsoft Azure, and HuggingFace Inference Endpoints. These platforms abstract away the complexities of setting up the models and running them at scale.
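
As an illustration of how little plumbing is now required, here is a minimal sketch of running the open-source Stable Diffusion model with Hugging Face’s diffusers library (the model ID, device and prompt are assumptions; a hosted API or managed inference endpoint removes even this setup).

```python
import torch
from diffusers import StableDiffusionPipeline

# Download the open-source Stable Diffusion weights and build the pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# Generate an image from a text prompt and save it to disk.
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```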

Also of note is the evolution of MLOps platforms, which are making it possible to set up complete pipelines for gathering feedback data, versioning datasets and models, and fine-tuning models for specific applications.

What’s next for generative AI?

The generative AI industry still has challenges to overcome, including ethical and copyright complications.

But it is interesting to see the generative AI space develop. For the moment, the main winners are Big Tech companies with data, compute power and an established market and products to deliver the added value of generative models. For example, Microsoft is taking advantage of its cloud infrastructure, its exclusive access to OpenAI’s technology and the huge market for its office and creativity tools to bring the power of generative models to its users.

Adobe is also preparing to integrate generative AI into its video and graphic design tools, and Google has several generative AI products in the works.

Down the road, however, the real power of generative AI might manifest itself in new markets. Who knows, maybe generative AI will usher in a new era of applications we have never thought of before.



How companies are balancing low-code/no-code innovation with security

While the benefits of low-code/no-code are well established, adopting them at scale can be a challenge. One of those challenges is ensuring that citizen developers are educated in security.

The benefits of low-code/no-code platforms are well known, but adopting them at scale across the enterprise has proven to be a challenge for many companies. The “Dos and Don’ts of Upskilling Citizen Developers Across Your Org” panel at VB’s latest Low-Code/No-Code Summit dug into the citizen development movement, from security concerns to scaling challenges and more.

“Organizations are sitting on this enormous pool of talent within the organization,” said Dali Ninkovic, manager (global B2C), PMI Citizen Developer Practice at PMI (Project Management Institute). “Low-code/no-code technology enables this talent to surface and be creative and innovative. There’s nobody better to start these new initiatives within the organizations than people who are not, by definition, professional programmers. This is something that results in enormous value.”

When it comes to tapping into this talent pool, however, companies need to start small. A discovery and experimentation process within a single department helps prove the value of the initiative to the company at large. From there, a gradual, thoughtful roll-out to other departments helps business leaders develop a strategy for growing the citizen development program across the organization.

“In the discovery phase, you’re just trying to get comfortable with the tools,” said Pete Schaefer, director, information security at TrackVia. “Right around where you get into adoption, if you haven’t already, I would really encourage citizen developers to deliberately engage with your IT or your security teams. You want to make sure you’re using the platform in an appropriate manner based on the type of data you need to process and protect.”

Lenka Pincot, head of Agile transformation at Raiffeisenbank Czech Republic, agreed, noting that citizen development is supposed to bring business and IT people closer, not to set them apart.

“Each of these roles has different responsibilities and knowledge,” she explained. “If we make them work together and share their experiences, then we enable business innovation in a fast and easy way — what low-code/no-code platforms are developed for. At the same time we provide enough information and assurance for IT specialists that the intention is not to create a shadow IT, but to help solve business issues faster.”

Ninkovic agreed, and recommended that organizations embrace a security-first approach across the organization to ensure that shadow IT issues never develop.

“Make sure that the platform is validated and vetted by IT,” he said. “Most of the applications of citizen development that I’ve seen are going to be created on top of existing data and core systems like SAP or Oracle Financials, things like that. IT needs to make sure that the correct access for citizen developers is enabled for these APIs. Obviously, before these applications are rolled out, I would definitely recommend that companies implement some form of approval process in line with their IT standards and guidelines, to ensure that all of the CD-created apps meet all of the security standards and have a continuation, a long lifespan and maintenance plans in place and so on.”

Before citizen development initiatives can be fully adopted, organizations also need to put a governance strategy in place, Ninkovic added. It should define clear objectives for the citizen development initiative, including which departments will be involved, how IT will oversee development, how citizen developers should balance their daily job priorities, and how teams determine which business problems will actually benefit from an application. He also recommended putting what he calls a command center in place: a combination of IT people, business people and stakeholders who set policies and guidelines, oversee the program and oversee regular reporting from teams.

Pincot noted that her company uses what they call “centers of expertise,” smaller groups of specialists who each manage one aspect of the citizen development strategy — a global center of expertise that handles global security policies, local specialists who can help business teams implement solutions properly, and so on.

“Citizen development means you give really powerful tools into the hands of people who can use them by themselves,” she said. “It’s critical to balance centralized decision-making or centralized governance versus enough freedom and empowerment to use these tools for innovation.”

These communities of people can exchange ideas, answer questions and develop a collective body of knowledge that keeps the program evolving and continuously unlocks new and innovative ideas, while implementing them safely.

In smaller companies without the same kind of governance structures, employee education is key to ensuring citizen developers understand the basics of security and why it’s needed without shutting down innovation, Schaefer said. With too many restrictions and too many guardrails, citizen developers are discouraged from pursuing new ideas.

It’s about “balancing security risk, innovation, speed of providing a solution,” he explained. “The hard part is, how do you understand what that balance is, and even get agreement on what that balance should be?”

Pincot pointed to Raiffeisenbank’s hackathon event, in which the winning team created an application that allowed employees to exchange unwanted items and donate to charity.

“That was amazing, because in this type of application, they would probably never make a list of IT priorities or business priorities,” she said. “This is where we need to enable innovation, not to use technology only to improve something that we didn’t figure out properly before or fix some data transfers and so on, but really put something in the hands of people who have good ideas — and they can’t wait for the priorities of funding because it’s not business-critical. They have a great idea that can really improve culture and the well-being of people in the organization.”
