Notes on Artificial Intelligence, Machine Learning and Deep Learning for curious people
Özgür (Ozzie) Genc
Senior Tech Leadership | Digital Transformation (ex-P&G, ex-Bain) | CIO
Repost of my blog originally published on Medium in Feb 2019.
AI was the most intriguing topic of 2018 according to McKinsey. It is also named as the key enabler in the #1 and #3 spots of Gartner's Top 10 Strategic Technology Trends for 2019.
AI became a catch-all term that refers to any computer program that automatically does something. Many people refer to AI without actually knowing what it really means, and there is public debate on whether it is an evil or a savior for humanity. Thus this is yet another attempt to compile and explain the introductory AI/ML concepts, to go beyond the buzz for non-practitioners and curious people.
Artificial intelligence as an academic discipline was founded in the 1950s. The term "AI" was actually coined by John McCarthy, an American computer scientist, back in 1956 at the Dartmouth Conference. According to McCarthy, AI is "the science and engineering of making intelligent machines, especially intelligent computer programs".
Evolution of AI — Source: https://www.embedded-vision.com/
It was not until recently, though, that AI became part of daily life, thanks to advances in big data availability and affordable high computing power. AI works at its best by combining large data sets with fast, iterative processing and intelligent algorithms, which allows the software to learn automatically from patterns or features in those vast data sets. AI examples now regularly appear in mainstream news. Arguably the popularity milestone for public awareness was AlphaGo, the artificial intelligence program that ended humanity's 2,500 years of supremacy at the ancient board game Go in May 2017, using a machine learning technique called "reinforcement learning". Since then, such AI stories have become part of our daily digests: self-driving cars, the Alexa/Siri-style digital assistant frenzy, real-time face recognition at airports, human genome projects, Amazon/Netflix recommendation algorithms, AI composers and artists, handwriting recognition, email marketing algorithms, and the list goes on. While deep neural networks, the most advanced form of AI, sit at the top of Gartner's 2018 hype cycle (a sign of inflated expectations), self-driving cars have already driven millions of miles with relatively satisfactory safety records.
Source: Gartner
AI technologies will continue to disrupt in 2019 and will become even more widely available due to affordable cloud computing and the big data explosion. I do not recall any other tech domain right now that attracts so many smart people and such vast resources from both the open source/maker community and the largest enterprises at the same time.
What is the difference between Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL)?
While people often use these terms interchangeably, I think the depiction below is a good conceptual way to differentiate these three terms. AI is really a broad term, which is partly why every company claims their product has AI these days. ML is a subset of AI and consists of the more advanced techniques and models that enable computers to figure things out from data and deliver AI applications. ML is the science of getting computers to act without being explicitly programmed (Stanford University).
Source: https://blogs.oracle.com/bigdata/difference-ai-machine-learning-deep-learning
Finally, DL is a newer area of ML that uses multi-layered artificial neural networks to deliver high accuracy in tasks such as object detection, speech recognition, language translation and the other recent breakthroughs you hear about in the news. The beauty and strength of DL is that it can automatically learn/extract/translate features from data sets such as images, video or text, without relying on traditional hand-coded rules. Impressive!
Source: https://www.xenonstack.com/blog/data-science/log-analytics-deep-machine-learning-ai/
Double click on traditional machine learning models:
In machine learning there are different models that generally fall into three categories: (1) Supervised Learning, (2) Unsupervised Learning and (3) Reinforcement Learning.
- Supervised learning: Involves an output label associated with each instance in the dataset. This output can be discrete/categorical (red, dog, panda, Ford Mustang, STOP sign, spam…) or real-valued. Right now, almost all applied machine learning is supervised: your data has known labels as output. It involves a supervisor that is more knowledgeable than the neural network itself; the supervisor feeds in example data for which it already knows the answers and guides the system by tagging the output. For example, consider a supervised machine learning system that learns which emails are 'spam' and which are 'not spam'. The algorithm is first trained on an available input data set (of zillions of emails) that is already tagged with this classification, to help the system learn the characteristics or parameters of 'spam' emails and distinguish them from 'not spam' emails. Just as a three-year-old learns the difference between a 'block' and a 'soft toy', the supervised machine learning system learns which email is 'spam' and which is 'not spam'. Techniques such as linear or logistic regression and decision tree classification fall under this category of learning.
Regression: This is a type of problem where we need to predict and forecast continuous-response values. Some examples: what is the price of a house in a specific city with 3 bedrooms and above 2,000 sqft? Predicting financial results, stock prices or how many total runs will be on the board in a cricket game. You have an existing data set and outputs (supervised learning), and your algorithm predicts the outcome based on a fitting function.
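To make this concrete, here is a minimal regression sketch using scikit-learn; the bedroom/square-footage numbers and prices are invented purely for illustration.

```python
from sklearn.linear_model import LinearRegression

# Toy housing data: [bedrooms, square feet] -> price. Numbers are made up for illustration.
X = [[2, 1200], [3, 1800], [3, 2200], [4, 2600], [5, 3000]]
y = [200_000, 280_000, 330_000, 400_000, 460_000]

model = LinearRegression().fit(X, y)   # fit a linear function to the labeled examples

# Predict the price of a 3-bedroom house with 2,000 sqft
print(model.predict([[3, 2000]]))
```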
Classification: Where you need to categorize a certain observation into a group. In the picture below, if you're given a dot you need to classify it as either a blue dot or a red dot. A few more examples: is a given email spam or not spam? Is a detected particle a Higgs boson or a normal sub-atomic particle? Assigning a news article to a group such as sports, weather or science. Will it rain today or not? Is this picture a cat or not? Detecting fraud or evaluating risk for fraud or insurance underwriting. A minimal classification sketch follows the figure below.
Source: https://medium.freecodecamp.org/using-machine-learning-to-predict-the-quality-of-wines-9e2e13d7480d
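As a small illustration of supervised classification, below is a hedged sketch of a tiny spam filter built with scikit-learn. The example emails and labels are invented, and a real system would of course be trained on a far larger data set.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled training set: 1 = spam, 0 = not spam (a real filter learns from millions of emails)
emails = [
    "win a free prize now", "limited offer click here",
    "meeting rescheduled to monday", "please review the attached report",
]
labels = [1, 1, 0, 0]

# Turn each email into word counts, then fit a logistic regression classifier on the labels
spam_filter = make_pipeline(CountVectorizer(), LogisticRegression())
spam_filter.fit(emails, labels)

# Classify two new, unseen emails; on this toy data the expected output is [1 0]
print(spam_filter.predict(["free prize offer", "see attached meeting notes"]))
```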
2. Unsupervised Learning: This is an 'unaided' type of learning where your data typically has no known output labels or any feedback loop. It is useful when there is no example data set with known answers and you are searching for a hidden pattern. In this case, clustering, i.e. dividing a set of elements into groups according to some unknown pattern, is carried out on the existing data sets. The system has to make sense of the data set we provide on its own. In general, unsupervised learning is a bit more difficult to implement and thus is not used as widely as supervised learning. The most popular types are clustering and association, as below.
Clustering: This is a type of unsupervised learning problem where we group similar things together. Some examples: given news articles or books, cluster them into different themes; given a set of tweets, cluster them based on their content. It could also be used for politics, health care, shopping, real estate, etc. (a small clustering sketch follows the figure below).
Source: https://brilliant.org/wiki/k-means-clustering/
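A minimal clustering sketch along these lines, assuming scikit-learn is available: the documents are toy examples, and k-means is asked to group them into two clusters without ever seeing any labels.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Unlabeled documents: we only ask the algorithm to group similar ones together
docs = [
    "the team won the football game",
    "the player scored a late goal in the game",
    "the new vaccine passed its clinical trial",
    "the clinical study tested the new vaccine on patients",
]

X = TfidfVectorizer().fit_transform(docs)                        # text -> numeric feature vectors
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(kmeans.labels_)   # e.g. [0 0 1 1]: sports stories in one cluster, health stories in the other
```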
Association: An association rule discovers relationships that describe large portions of your data. Example: people who buy X also tend to buy Y. We encounter it when we receive a book or movie recommendation based on previous purchases or searches. These algorithms are also used for market basket analysis on retailers' online or offline shopping (point-of-sale) data. In short, given many baskets, association techniques help us understand which items inside a basket predict another item in the same basket (a small worked example follows the figure below).
Associations between selected items, using a data set of actual grocery transactions over 30 days. Larger circles imply higher support, while red circles imply higher lift; e.g. the most popular transaction was pip and tropical fruits, and relatively many people buy sausage along with sliced cheese. Source: KDnuggets
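To show the idea behind support, confidence and lift, here is a small plain-Python sketch over invented baskets; dedicated libraries exist for real market-basket analysis, but the arithmetic is simple enough to spell out.

```python
from itertools import combinations
from collections import Counter

# Each basket is one point-of-sale transaction (made-up data)
baskets = [
    {"sausage", "sliced cheese", "bread"},
    {"sausage", "sliced cheese"},
    {"tropical fruit", "yogurt"},
    {"sausage", "bread"},
    {"tropical fruit", "sliced cheese"},
]

pair_counts = Counter()
item_counts = Counter()
for basket in baskets:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

n = len(baskets)
for (a, b), count in pair_counts.items():
    support = count / n                          # how often the pair appears together
    confidence = count / item_counts[a]          # P(buying b | bought a)
    lift = confidence / (item_counts[b] / n)     # how much a "boosts" b versus chance
    if lift > 1:
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}, lift={lift:.2f}")
```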
3. Reinforcement Learning (RL): Think of a child sorting toys into boxes. Instead of telling the child which toy goes in which box, you reward the child with a 'big hug' when the child makes the right choice, or you make a 'sad face' when the child makes the wrong one. Very quickly, after a few iterations, the child learns which toys need to go into which box; this is called reinforcement learning. Systems are trained by receiving virtual "rewards" or "punishments", essentially learning by trial and error.
This strategy is built on observation and trial and error to achieve goals or maximize reward. The agent makes a decision by observing its environment; if the observation is negative, the algorithm adjusts its weights to be able to make a different decision the next time. One can also count reinforcement learning as part of deep learning, depending on the number of hidden nodes and the complexity of the algorithms (more on this later). Reinforcement learning algorithms try to find the best ways to earn the greatest reward: rewards can be winning a game, earning more money or beating other opponents. They show state-of-the-art results on very human tasks; for instance, this paper from the University of Toronto shows how a computer can beat humans at old-school Atari video games.
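A minimal sketch of this trial-and-error loop, using tabular Q-learning on a made-up five-state world (nothing as grand as Atari): the agent gradually learns that moving right earns the reward.

```python
import numpy as np

# A tiny 1-D world: states 0..4, with a reward of +1 only for reaching state 4
n_states, n_actions = 5, 2           # actions: 0 = move left, 1 = move right
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    for _ in range(20):
        # Explore sometimes, otherwise pick the action with the highest learned value
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward = step(state, action)
        # Q-learning update: nudge the value toward reward + discounted future value
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state

print(np.argmax(q_table, axis=1))   # the learned policy should be "move right" everywhere
```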
Google DeepMind has used reinforcement learning to develop systems that can play games, including video games and board games such as Go. AlphaGo beat a 9-dan Go master at a game with far more possible board states than chess (about 10 to the power of 170, greater than the number of atoms in the universe). A combination of reinforcement learning and human-supervised learning was used to build "value" and "policy" neural networks that, together with a search tree, execute its game-play strategies. The software learned from 30 million moves played in human-on-human games.
While writing this blog, another RL-driven breakthrough was announced: AI agents developed by Google's DeepMind beat human pros at StarCraft II, a first in the world of artificial intelligence. Games like StarCraft II are harder for computers to play than board games like chess or Go.
Google DeepMind's researchers used reinforcement learning to train these AlphaStar agents. The agents play the game by trial and error while trying to reach certain goals, like winning or simply staying alive. They learn first by copying human players and then by playing one another, where the strongest agents survive and the weakest are discarded. DeepMind estimated that its AlphaStar agents each racked up about 200 years of game time in this way, at an accelerated rate. RL is taking humanity toward the singularity point, at least within the context of games!
It may sound a bit overwhelming if you are coming across the ML types for the first time, so below is a visual summary to wrap up ML.
Source: https://www.newtechdojo.com/list-machine-learning-algorithms/
Deep Learning
It is widely accepted now that deep learning techniques have the potential to create trillion-dollar industries across sectors. Like ML, "Deep Learning" is also a method of statistical learning that extracts features or attributes from raw data sets. The main point of difference is that DL does this by utilizing multi-layer artificial neural networks with many hidden layers stacked one after the other. DL also has somewhat more sophisticated algorithms and requires more powerful computational resources: specially designed computers with high-performance CPUs or GPUs, either on premise ($$) or as workloads in the cloud. You can still use your laptop for prototyping; see my other article for an applied example.
In this article, I would like to introduce 3 popular DL models. They are Convolutional Neural Networks, Recurrent Neural Networks and Generative Adversarial Networks.
Is deep learning inspired by the human brain? What are artificial neural networks?
How does a small child learn to recognize the difference between a school bus and a regular transit bus? How do we subconsciously perform complex pattern-recognition tasks without even noticing? The answer is that we have a biological neural network connected to our nervous system: our brains are very complex networks of about 10 billion neurons, each connected to around 10 thousand other neurons.
Each of these neurons receives electro-chemical signals and passes these messages on to other neurons. In fact, we do not even know well how our brain's neurons work; we do not know enough about neuroscience and the deeper functions of the brain to correctly model how it works. DL is only inspired by the functionality of our brain cells, called neurons, which led to the concept of artificial neural networks (ANN). An ANN is modeled using layers of artificial neurons that receive input and apply an activation function along with a human-set threshold. It may sound sci-fi to non-practitioners, but DL is already in our daily lives: deep learning has achieved near or better-than-human-level image classification, speech and handwriting recognition and, of course, autonomous driving, while complex ad targeting and news feeds are all over the place as we surf the net.
Source: https://www.datacamp.com/community/tutorials/deep-learning-python
In the most basic feed-forward neural network (top right), there are five main components of an artificial neuron. From left to right (a minimal code sketch follows this list), these are:
- Input nodes. Each input node is associated with a numerical value, which can be any real number. Example could be one pixel value of an image.
- Connections. Similarly, each connection that departs from an input node has a weight (w) associated with it, which can be any real number. The ANN runs and propagates millions of times to optimize these "w" values, which is why you need high computational power to do this in a short time.
- Next, all the values of the input nodes and weights of the connections are brought together. They are used as inputs for a weighted sum.
- This result becomes the input for a transfer or activation function. Just as a biological neuron only fires when a certain threshold is exceeded, the artificial neuron will only fire when the sum of the inputs exceeds a threshold. These are parameters set by us (more on ethics later).
- As a result, you have the output node, which is associated with the function of the weighted sum of the input nodes.
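Below is a minimal sketch of this single artificial neuron in plain Python/NumPy; the input values, weights and bias are arbitrary numbers chosen only for illustration.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias, threshold=0.0):
    """One artificial neuron: a weighted sum of the inputs followed by a step activation."""
    weighted_sum = np.dot(inputs, weights) + bias
    return 1.0 if weighted_sum > threshold else 0.0   # the neuron only "fires" above the threshold

# Three input nodes (e.g. three pixel values) and the weights of their connections
inputs = np.array([0.9, 0.1, 0.4])
weights = np.array([0.5, -0.2, 0.8])

print(artificial_neuron(inputs, weights, bias=0.1))   # -> 1.0, i.e. the neuron fires
```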
What is the “Deep” in deep learning?
Deep-learning networks are distinguished from the more general single-hidden-layer neural networks by their depth, i.e. the number of node layers. Having more than one hidden layer means more computation power is needed for the forward/backward optimization passes while training, testing and eventually running these ANNs.
Source: https://verneglobal.com/blog/deep-learning-at-scale
Among the layers, you can distinguish an input layer, hidden layers and an output layer. The layers act like the biological neurons that you have read about above. The outputs of one layer serve as the inputs for the next layer.
Convolutional neural networks (CNN): These are one of the most popular applied DL cases. They are great for image/video processing and computer vision applications. CNNs are deep artificial neural networks used primarily to classify images (e.g. label what they see), cluster them by similarity (photo search) and perform object recognition within scenes. These are algorithms that can identify faces, individuals, street signs, tumors, flowers and many other aspects of visual data. Self-driving cars and drones will increasingly use CNN capabilities. The most popular applied corporate case is probably optical character recognition (OCR) to digitize text and automate data entry.
Source: https://cdn-images-1.medium.com/max/1000/1*eEKb2RxREV6-MtLz2DNWFQ.gif
In the above example, our CNN algorithm sees the image differently than the human brain does. Each image is a 3-dimensional array of numbers, known as pixels, with width, height and depth. Width and height depend on the image resolution. The third dimension (depth) holds the Red-Green-Blue (RGB) values for the color code (unless you are using a black & white image as input).
How our DL algorithm sees an image. — Source: https://cs231n.github.io/classification/
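A quick way to see this layout for yourself; the tiny array below is synthetic, but a real photo loaded with an image library has exactly the same height x width x RGB structure, just larger.

```python
import numpy as np

# A synthetic 4x4 color "image": height x width x depth, where depth holds the R, G, B values (0-255)
img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

print(img.shape)    # (4, 4, 3) -> height, width, depth
print(img[0, 0])    # the red, green and blue values of the top-left pixel
```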
Technically, a deep learning CNN passes these images through a series of convolution layers with filters (basic depiction below). How the CNN layers work is a longer, separate topic, and here is a good article to start with.
Of course, initially these filters don't know where to look for image features like edges or curves, and the previously mentioned weights are random numbers (like a baby with a fresh mind). We typically have a large training data set with thousands of images and pre-identified labels. The model first makes a forward pass with the initial weights, makes a prediction of the output label (i.e. "this is a dog") and compares it with the ground truth, which is the existing training-set label. Because this is a training set, we already know the output labels, so depending on the success of the prediction a loss function is calculated, and the network makes a backward pass while updating its weights. The way the computer adjusts its weights to decrease the loss is a method called backpropagation: the backward pass through the network determines which weights contributed most to the loss and finds ways to fine-tune those weights so that the loss decreases over consecutive passes.
Initially the calculated loss is expected to be very high, and it should decrease to a minimum after many (but a fixed number of) forward/backward passes. At the end, the network should hopefully be trained well enough that the weights of the layers are tuned correctly.
Then we run testing to see whether our CNN model works. We take a different set of images, plus their respective labels, and pass this testing set through the CNN. We compare the outputs to the testing-set labels to see if and how well our network works. Naturally, the more data you have, the better your model can be tuned through training and testing; that is why big data enables deep learning. Once we have a good enough model, it is ready to be used for real-life scenarios, while we continue tuning it. Obviously it is far more complex than this, but this is the very high-level, simplified logic of how most ANNs are trained and tested.
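As a hedged illustration of this train/test cycle, here is a minimal Keras CNN on the MNIST digit images. It is a sketch rather than a tuned model, but the forward passes, loss calculation and backpropagation all happen inside fit(), and the testing step happens in evaluate().

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Labeled training and testing images: 28x28 grayscale handwritten digits (MNIST)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0    # add a channel dimension and scale pixels to [0, 1]
x_test = x_test[..., None] / 255.0

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, (3, 3), activation="relu"),   # convolution filters learn edges/curves
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),         # one output per digit class (0-9)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Training: forward passes, loss and backpropagation happen inside fit()
model.fit(x_train, y_train, epochs=1, batch_size=64, validation_split=0.1)

# Testing: pass the unseen images through the trained CNN and compare with their labels
test_loss, test_acc = model.evaluate(x_test, y_test)
print(test_acc)
```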
Source: unknown (let me know if anyone knows the source). These are some DL terminology examples :)
Another real-life example of computer vision in action is in China: Alibaba launched its City Brain system in its birthplace, Hangzhou, where an AI center optimizes the traffic controls.
https://www.youtube.com/watch?v=v4_2QuS4Xns
CNN-like algorithms already dominate our daily life: Facebook's automatic tagging, Google's photo search, Pinterest's home-feed personalization. I am really looking forward to the day CNNs help deliver treatments for the visually impaired.
Recursive (Recurrent) Neural Networks (RNN): The terms are sometimes used interchangeably; a recursive neural network is just a generalization of a recurrent network, and they share the same acronym. An RNN simply uses previous inputs within its calculations. Say you are analyzing handwriting: you can predict words and future letters much better if you remember the previous letters. Another way to think about RNNs is that they have a "memory" which captures information about what has been calculated so far. RNNs can remember former inputs, which gives them a big edge over other artificial neural networks when it comes to sequential and context-sensitive tasks such as speech recognition.
RNNs are considered by many to be the most powerful model for NLP. RNNs are also used for language translation, composing music, writing novels, Wikipedia articles or Shakespearean poems, and generating AI tweets. You can train one to write machine-generated Obama speeches or compose non-existent "Beatles" songs. Interesting, huh! The blog of Andrej Karpathy, the current head of Tesla AI, has one of the most popular deep learning RNN articles to refer to.
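Below is a minimal, hedged sketch of a character-level RNN in Keras: it is trained on a single toy sentence (a real model like Karpathy's needs megabytes of text), and the LSTM layer provides the "memory" over previous characters.

```python
import numpy as np
from tensorflow.keras import layers, models

# Toy corpus; a real character-level model is trained on a large text corpus
text = "the quick brown fox jumps over the lazy dog "
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Build (sequence of 10 characters) -> (next character) training pairs
seq_len = 10
X, y = [], []
for i in range(len(text) - seq_len):
    X.append([char_to_idx[c] for c in text[i:i + seq_len]])
    y.append(char_to_idx[text[i + seq_len]])
X, y = np.array(X), np.array(y)

model = models.Sequential([
    layers.Embedding(input_dim=len(chars), output_dim=16),
    layers.LSTM(64),                                    # the "memory" over previous characters
    layers.Dense(len(chars), activation="softmax"),     # probability of each possible next character
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=20, verbose=0)

# Predict the character most likely to follow the seed "the quick "
seed = np.array([[char_to_idx[c] for c in "the quick "]])
print(chars[int(np.argmax(model.predict(seed, verbose=0)))])
```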
AI or real Shakespeare?
Source: Karpathy — RNN generated Shakespearean piece
Generative Adversarial Networks (GAN): GANs were invented in 2014 by Ian Goodfellow, who is now a staff research scientist at Google Brain, and his associates from the University of Montreal. Yann LeCun, the director of Facebook AI, said: "Generative adversarial networks are the most interesting idea in the last ten years in machine learning." GANs make neural nets more human-like by allowing them to CREATE rather than just being trained on data sets.
A generative adversarial network is composed of two neural networks: a generative network and a discriminative network. In the starting phase, the generator model takes random noise signals as input and generates a random, noisy (fake) image as output. Gradually, with the help of the discriminator, it starts generating images of a particular class that look real.
The discriminator, which is the adversary of the generator, is fed both the generated images and images of a certain class at the same time, allowing it to tell the generator what the real images look like.
The generator and discriminator are pitted against each other (hence the "adversarial") and compete during training, where their losses push against each other to improve behaviors (via backpropagation). The goal of the generator is to pass without being caught, while the goal of the discriminator is to identify the fakes.
After reaching a certain point, the discriminator is unable to tell whether a generated image is real or fake, and that is when we can see images of a certain class (the class that the discriminator was trained with) being generated by our generator, images that never actually existed before! Experts sometimes describe this as the generative network trying to "fool" the discriminative network, which has to be trained to recognize particular sets of patterns and models.
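A hedged sketch of this generator-versus-discriminator loop in Keras is shown below; the "real" data here is just random noise standing in for a real image set such as MNIST, so treat it as an outline of the training procedure rather than a working image generator.

```python
import numpy as np
from tensorflow.keras import layers, models

latent_dim = 64
image_size = 28 * 28   # images are handled as flat vectors here to keep the sketch short

# Generator: random noise in, a fake "image" out
generator = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(image_size, activation="sigmoid"),
])

# Discriminator: classifies an image as real (1) or fake (0)
discriminator = models.Sequential([
    layers.Input(shape=(image_size,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model used to train the generator; the discriminator is frozen inside it
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

# Stand-in "real" data; replace with e.g. flattened MNIST images scaled to [0, 1]
real_images = np.random.rand(256, image_size).astype("float32")

batch_size = 32
for step in range(100):
    # 1) Train the discriminator on a half-real, half-fake batch
    noise = np.random.normal(size=(batch_size, latent_dim))
    fake = generator.predict(noise, verbose=0)
    real = real_images[np.random.randint(0, len(real_images), batch_size)]
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones((batch_size, 1)), np.zeros((batch_size, 1))])
    discriminator.train_on_batch(x, y)

    # 2) Train the generator so that the (frozen) discriminator labels its fakes as "real"
    noise = np.random.normal(size=(batch_size, latent_dim))
    gan.train_on_batch(noise, np.ones((batch_size, 1)))
```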
Source: https://github.com/junyanz/CycleGAN There are other very interesting examples and the source code at this github account by Jun-Yan Zhu, Researcher at MIT CSAIL.
Source: Another video GAN example by Jun-Yan Zhu
GANs can be used for increasing the resolution of an image, recreating popular images or paintings, generating an image from text, producing photo-realistic depictions of product prototypes, generating realistic speech audio of real people (OMG!), as well as producing fashion/merchandise shots.
Source: Wikipedia. GANs were used to create the 2018 painting Edmond de Belamy which sold for $432,500.
Generative adversarial networks are very popular on social media. Beware of deepfake videos! If you feed a GAN with a large enough data set of faces, it can create completely new fake faces that are super realistic yet do not exist. Below is NVIDIA's AI producing fake human photos using a GAN.
Source: NVIDIA’s AI produces Fake Human Photos with Unbelievable Quality | QPT
Deep Learning for Natural Language Processing (NLP): NLP is actually a broader topic, though it has gained huge popularity recently thanks to machine learning. NLP is the ability of computers to analyze, understand and generate human language, including speech. For example, you can do sentiment analysis on any given text. NLP can make AI recommendations after parsing through movie/book reviews or the web. NLP can run chatbots and digital assistants for front-end tasks using text or audio interactions. Alexa, Siri, Cortana and Google Assistant are the famous digital personas using NLP engines.
The next stage of NLP is natural language interaction, which enables people to communicate with computers using everyday language to complete tasks. I am sure you have watched Google CEO Sundar Pichai showing how the Google Assistant can make a few calls and book a haircut appointment for you. Other well-known use cases are enterprise search and opinion mining (sentiment analysis). There is already a large choice of NLP engines readily available to embed into everyday uses, whether call centers, chatbots, translators, auto-predictors, spam filters or the new, vast domain of digital assistants.
Behavior of the sentiment neuron. Colors show the type of sentiment. Source: https://blog.openai.com/unsupervised-sentiment-neuron/
How to become a practitioner with machine learning?
(1) Learn some Python to get started, (2) experiment with Keras (or one of the other popular DL libraries below) and (3) take a practical, real-world problem and tackle it.
As a coding environment, I prefer to use Jupyter for my experiments, though you have plenty of alternative code editors to choose from. No need to be perfect from the get-go: be agile, fail fast and cheap, change course when needed and eventually you will get there.
Deep learning frameworks are changing rapidly. While my preference is Keras due to its user-friendly API, TensorFlow is the current champion thanks to Google's backing, as the graph below shows. Another heavyweight, PyTorch, has Facebook's backing; CNTK is backed by Microsoft; and Apache's MXNet is an easily scalable framework backed by Amazon. In short, the landscape is very dynamic and will continue evolving. Pick one and experiment!
Source: https://towardsdatascience.com/deep-learning-framework-power-scores-2018-23607ddf297a
You need a properly set up laptop environment to get started. I would advise starting by installing the Anaconda package; this will suffice for initial needs, and you can continue installing other libraries as needed.
Source: https://towardsdatascience.com/data-scientist-is-it-the-sexiest-job-of-the-21st-century-35a5bf409363
For further scale you can leverage one of the big cloud ML platforms like AWS SageMaker, Microsoft Azure AI, Google Cloud Platform ML & TensorFlow and other alternative players. Create an account using free credits to get started.
Finally, there are many online resources and courses available. There is no single all-inclusive place, but you can start with Medium articles, YouTube videos, AI blogs, Stanford courses, online books or the excellent courses on Coursera, Udemy and DataCamp.
Leading AI companies:
Nvidia and Intel produce the special microprocessors that greatly accelerate ML calculations. Google, Amazon, Microsoft and IBM (and many more companies) provide cloud infrastructure and ML services, as well as higher-level frameworks to accelerate the modeling, training and testing work. In 2019, almost every medium or large company will use ML or DL in their business. In other parts of the world, most Chinese AI companies have ties to Baidu, Alibaba and Tencent. The competition is on.
AI at work place - Intelligent process automation for the enterprise
I think this domain will grow exponentially on the back of all the AI capabilities we referenced earlier. We have only scratched the surface so far by replacing repetitive, manual tasks with Robotic Process Automation (RPA). The next frontier is transforming and digitizing end-to-end cognitive processes. Traditional RPA is now being combined with AI and other digital automation tools, e.g. optical character recognition (OCR), workflow (business process management), chatbots (NLP), human-in-the-loop cognitive processing, virtual employees and auto-ML. These will disrupt the workplace of the future and the whole business process outsourcing industry.
Intelligent process automation (IA) is an exponentially growing domain with quite a few players already. Recognized RPA players are Blue Prism, UiPath, Automation Anywhere and WorkFusion, as well as many other up-and-coming start-ups. Larger blue-chip companies (IBM, SAP, Salesforce, Microsoft, Pega, Oracle…) have also started either offering or acquiring similar IA capabilities.
OK, got it. Then what is the difference between Data Science and Machine Learning?
Nisarg Dave's image below does a great job of showing the interdisciplinary nature of data science, which sits at the intersection of all these diverse fields. Data scientists need multi-disciplinary skills to be able to create a data set to test, write the code needed for the algorithms and deliver an innovative business insight. That is maybe why it is the sexiest job of the 21st century! Having said that, you do not need to be a data scientist to do AI work. Not at all.
Source: Nisarg Dave
AI and Ethics
Today AI doesn't have free will or consciousness; smart people do the learning design before the ML models are deployed. While the core objective of AI is to augment humans, there is a lot of discussion around the ethics of AI as well. Below are some key topics to think about and reflect on.
Microsoft CEO Satya Nadella: "We need to take accountability for the AI we create..."
· Will AI create unemployment? This is not a new fear. In the beginning AI will eliminate some human tasks, though if we can find ways to adapt and re-skill ourselves, it has the potential to create more jobs than it eliminates. This is somewhat similar to the transition from horses to cars during the industrial revolution, and a similar story to when ATMs and computers came around in the 70s and 80s.
· Biased robots: Algorithms are programmed and designed by humans, so this is an important topic to raise awareness about, develop policies for and maybe regulate as a force for good. We should ensure our training sets, algorithms or parameters are not "biased" against the goal of the critical applications.
· Security/Privacy: This is maybe the most discussed topic right now. We probably need better regulations and policies like GDPR in Europe.
· Inequality of AI capabilities: There is certainly a digital divide already, with North America, Europe and China dominating the AI world. There is an opportunity to further democratize AI education across the world. I am more optimistic on this one, given the availability of free online information and open-source work.
· Artificial errors and mistakes: Software glitches could easily cause AI mistakes. I think there needs to be clear accountability and regulatory ownership, i.e. who is responsible if a self-driving car or a drone causes a severe accident? Society needs to ensure that our complex AI systems do what we want them to do.
· Human interactions & cognitive skills: This is a real social impact and is already happening. The more we leverage robots, the more human interactions will decline and our dependence on AI will increase. What is the solution?
· Finally, the Singularity: we are probably still far away from a time when robots could overtake humans, though it is worth considering from now on!
Source: www.weforum.org
Conclusion
I think machine and deep learning, like data science in general, is as much art as science. When you start studying the AI field, your head may spin at first with all the models, data sets and methods. I would encourage you to pick a favorite ML domain and go deeper; it is computer vision for me these days. Fluency only comes with practice, like everything else in life.
We will see many more news stories and inventions in the AI domain in 2019. I also expect to see more advances and applied cases of ML at the "edge", available on mobile phones, ear buds, watches and other portable devices rather than only on high-powered computers. So watch out for more!
I would like to finish with one of my favorite quotes from Satya Nadella “I believe in a world that will have an abundance of artificial intelligence, but what will be scarce is real intelligence and human qualities, like empathy. I think great innovation comes from the empathy you have for the problems you want to solve for people.”
Happy deep learning in 2020!
Resources and further reading:
https://www.pyimagesearch.com/
https://blogs.oracle.com/bigdata/difference-ai-machine-learning-deep-learning
https://towardsdatascience.com/machine-learning-vs-deep-learning-62137a1c9842
https://towardsdatascience.com/cousins-of-artificial-intelligence-dda4edc27b55
https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_neural_networks.htm
https://www.newtechdojo.com/list-machine-learning-algorithms/