Do-BERT
Yogesh Haribhau Kulkarni
AI Advisor (Helping organizations in their AI journeys) | PhD (Geometric Modeling) | Tech Columnist (Marathi)
BERT (Bidirectional Encoder Representations from Transformers) has taken the world of NLP (Natural Language Processing) by storm.
Language text is essentially a sequence of words, so traditional methods like RNNs (Recurrent Neural Networks) and LSTMs (Long Short-Term Memory networks) used to be ubiquitous in language modeling (predicting the next word; remember typing an SMS?). But they struggled to remember words that appeared much earlier in the sequence. Then came 'Attention Is All You Need' and the architecture it introduced, the Transformer.
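For intuition, here is a minimal NumPy sketch (not from the original sketchnote) of the scaled dot-product attention at the heart of the Transformer; the toy vectors are made up purely for illustration. The key idea is that every word attends to every other word directly, so distant words are no harder to reach than nearby ones.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Illustrative sketch of the attention from 'Attention Is All You Need'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # similarity of each query to every key
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                          # weighted sum of value vectors

# Toy example: 4 "words", each represented by an 8-dimensional vector.
x = np.random.rand(4, 8)
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```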
BERT is a Transformer-based machine learning technique for NLP pre-training, developed in 2018 by Jacob Devlin and his colleagues at Google.
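As a quick taste of what that pre-training buys you, here is a minimal sketch using the Hugging Face `transformers` library (assumed installed) and the publicly available `bert-base-uncased` checkpoint. It exercises BERT's masked-language-modeling objective: predicting the word hidden behind `[MASK]`.

```python
# Minimal sketch: BERT's masked-language-modeling pre-training task in action.
# Assumes `pip install transformers` and the `bert-base-uncased` checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("BERT has taken the world of [MASK] by storm."):
    print(prediction["token_str"], round(prediction["score"], 3))
```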
The following sketchnote gives an overview of BERT.