How does ChatGPT work?
“Any sufficiently advanced technology is indistinguishable from magic.” - Arthur C. Clarke, “Profiles of the Future: An Inquiry into the Limits of the Possible”, 1962.
The world is in awe of ChatGPT, and deservedly so - it's one of the very few "automagical" products that embody what AI can do, packaged in a way that's easy to consume. What is behind this AI? I tried to dig into it in order to understand what's under the hood.
A disclaimer: I'm by no means an expert in generative AI - just an enthusiast learning the space to the best of my abilities, and a product manager who likes to understand how things work in order to grasp the potential, pitfalls, and likely market changes stemming from the application of a new technology.
The first thing ChatGPT has to do is understand what it is being asked to do.
It does this with the help of the Transformer architecture.
Transformers are a type of artificial neural network architecture used for transduction: transforming input sequences into output sequences in deep learning applications.
The architecture was introduced in the paper "Attention Is All You Need" by Google researchers in 2017 and is commonly used in natural language processing tasks, such as machine translation and text generation.
The main component of the transformer architecture is the self-attention mechanism, which allows the model to weigh the importance of different words in the input when generating its output.
This allows the model to focus on specific parts of the input when generating its output, rather than processing the entire input sequentially.
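To make this concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy. The token count, model width, and random weight matrices are illustrative stand-ins, not anything GPT actually uses:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance of every token to every other
    weights = softmax(scores, axis=-1)        # each row sums to 1: "how much to attend"
    return weights @ V                        # weighted mix of the value vectors

# Toy example: 4 tokens, model width 8 (sizes chosen purely for illustration).
rng = np.random.default_rng(0)
d_model = 8
X = rng.normal(size=(4, d_model))             # stand-in for token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)
```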
The transformer architecture also includes a multi-head self-attention mechanism, which allows the model to attend to different parts of the input simultaneously.
This allows the model to capture different types of relationships between the words in the input.
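Continuing the sketch, multi-head attention splits the model width into several smaller heads that attend independently and then recombines them (again, the sizes and weights are illustrative):

```python
def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads=2):
    """Multi-head self-attention: n_heads independent attention computations in parallel."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    split = lambda M: M.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    Qh, Kh, Vh = split(X @ Wq), split(X @ Wk), split(X @ Wv)  # (n_heads, seq_len, d_head)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)
    heads = softmax(scores, axis=-1) @ Vh                     # each head captures its own relationships
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo                                        # final projection mixes the heads

Wo = rng.normal(size=(d_model, d_model))
print(multi_head_attention(X, Wq, Wk, Wv, Wo).shape)          # still (4, 8)
```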
The transformer architecture also includes feed-forward neural network layers, which are used to process the output from the self-attention mechanism.
These layers allow the model to learn more complex relationships between the words in the input.
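In code, a position-wise feed-forward block is just two linear layers with a nonlinearity between them, applied to each token independently. A sketch, omitting the residual connections and layer normalization that real models wrap around it:

```python
def feed_forward(X, W1, b1, W2, b2):
    """Position-wise feed-forward block: the same two-layer MLP applied to every token."""
    return np.maximum(0, X @ W1 + b1) @ W2 + b2   # ReLU hidden layer, then project back

d_ff = 32                                          # hidden width, typically ~4x d_model
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
print(feed_forward(X, W1, b1, W2, b2).shape)       # (4, 8)
```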
The architecture also includes a positional encoding, which is added to the input before it is processed by the self-attention mechanism. This encoding allows the model to take the position of each word into account, which matters for tasks such as machine translation where word order is important.
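The sinusoidal encoding from the original paper fits in a few lines: each position gets a unique pattern of sines and cosines that is simply added to the token embeddings:

```python
def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding as in "Attention Is All You Need"."""
    pos = np.arange(seq_len)[:, None]          # positions 0..seq_len-1
    i = np.arange(d_model // 2)[None, :]       # dimension-pair indices
    angles = pos / 10000 ** (2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)               # odd dimensions: cosine
    return pe

X = X + positional_encoding(*X.shape)          # order information is now baked in
```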
The attention layer can access all previous states and weight them according to a learned measure of relevance, providing relevant information about far-away tokens.
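In a GPT-style decoder this is enforced with a causal mask: each token can attend to itself and to everything before it, but not to the future. A sketch reusing the pieces above:

```python
seq_len = X.shape[0]
Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d_model)
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)  # True above the diagonal (the future)
scores[mask] = -np.inf                                        # masked positions get zero attention weight
weights = softmax(scores, axis=-1)                            # token t weighs only positions 0..t
context = weights @ V
```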
The Transformer architecture is built on the back of prior solutions: recurrent neural networks (RNNs) and LSTMs (Long Short-Term Memory networks).
RNNs solved the problem of passing information from one step of the network to the next, which is useful for processing sequential data, such as language. RNNs, however, suffered from the vanishing gradients problem, where the gradient signal used to update the model weights during training becomes very small. This makes it difficult for the model to learn and can result in slow convergence, or even failure to converge.
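A toy illustration of why this happens: backpropagating through time multiplies the gradient by the recurrent weight matrix at every step, so if that matrix shrinks vectors even slightly, the signal decays exponentially (the sizes and scaling here are made up purely for illustration):

```python
n = 16
W_rec = np.random.default_rng(1).normal(size=(n, n)) * 0.125  # recurrent weights that mildly shrink vectors
grad = np.ones(n)                   # gradient arriving at the last time step
for t in range(50):                 # backpropagate through 50 time steps
    grad = W_rec.T @ grad           # chain rule through one step (nonlinearity omitted)
print(np.linalg.norm(grad))         # vanishingly small: almost no learning signal is left
```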
LSTMs (Long Short-Term Memory networks) were introduced to address the vanishing gradient problem in RNNs. LSTMs have a gating mechanism that helps regulate the flow of information through the network, allowing them to maintain a stronger gradient signal even for very long sequences.
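A sketch of a single LSTM step showing the gating mechanism (the weight names and shapes are illustrative):

```python
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def lstm_step(x, h, c, Wf, Wi, Wo_gate, Wg, bf, bi, bo, bg):
    """One LSTM time step: gates regulate what is forgotten, written, and exposed."""
    z = np.concatenate([h, x])            # previous hidden state + current input
    f = sigmoid(Wf @ z + bf)              # forget gate: what to erase from the cell
    i = sigmoid(Wi @ z + bi)              # input gate: what to write to the cell
    o = sigmoid(Wo_gate @ z + bo)         # output gate: what to expose as hidden state
    g = np.tanh(Wg @ z + bg)              # candidate values
    c = f * c + i * g                     # additive cell update keeps the gradient path strong
    h = o * np.tanh(c)
    return h, c
```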
Like recurrent neural networks (RNNs), transformers are designed to process sequential input data, such as natural language, with applications towards tasks such as translation and text summarization. However, unlike RNNs, transformers process the entire input all at once.
Finally, the model uses a decoder to generate the answer. It uses the information from the encoder and the attention mechanism to produce coherent, fluent text.
In a GPT model, the decoder is the portion of the network that takes the input token representation and produces the final output sequence.
The decoder typically works by using a self-attention mechanism to calculate attention scores between the input tokens and generating a weighted sum of the input representations to obtain a context representation. The context representation is then passed through a feed-forward network and a softmax activation to produce a probability distribution over the vocabulary for each token in the output sequence.
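Putting it all together, generation is a loop: score the next token, turn the scores into probabilities with a softmax, pick one, append it, and repeat. A sketch in which `decoder_logits` is a hypothetical stand-in for the trained network:

```python
def generate(tokens, decoder_logits, n_new):
    """Autoregressive generation: repeatedly pick the next token from the model."""
    for _ in range(n_new):
        logits = decoder_logits(tokens)        # (vocab_size,) scores for the next token
        probs = softmax(logits)                # probability distribution over the vocabulary
        next_token = int(np.argmax(probs))     # greedy pick; ChatGPT samples with temperature instead
        tokens = tokens + [next_token]
    return tokens

# Tiny fake "model" over a 5-word vocabulary, just to show the loop runs.
fake = lambda toks: rng.normal(size=5)
print(generate([0], fake, n_new=3))            # e.g. [0, 2, 4, 1]
```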