Machine Learning & Deep Learning
The theory of Machine Learning traces back to 1950, when Alan Turing asked whether machines could think. In 1959, the engineer Arthur Samuel applied this idea to create a checkers-playing algorithm, resulting in a program that improved with each game played.
The initial concept of Machine Learning grew out of pattern recognition: the idea that a program could learn patterns from data without being explicitly reprogrammed. Its capability to process new data and refine its understanding of learned categories brought benefits beyond computing, and today ML is widely adopted across many sectors. In Business Intelligence, for instance, machines collect and analyze data to help analysts build decision-making indicators. In cybersecurity, machines are trained to recognize voice patterns, biometrics, and facial features.
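The core idea above can be made concrete with a minimal sketch: a simple perceptron that learns to separate two groups of points from examples, updating itself on each mistake, instead of being hand-coded with rules. All names and numbers here are illustrative, not from any specific library.

```python
def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # the model only adjusts itself on mistakes
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else -1

# Two linearly separable clusters: label +1 near (2, 2), label -1 near (-2, -2).
samples = [(2, 2), (3, 1), (1, 3), (-2, -2), (-3, -1), (-1, -3)]
labels = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(samples, labels)
print(predict(w, b, (2.5, 2.5)))    # new, unseen point from the +1 cluster
print(predict(w, b, (-2.5, -2.5)))  # new, unseen point from the -1 cluster
```

The program is never told the boundary between the clusters; it discovers one from the examples, which is exactly the shift from explicit programming to learning that the paragraph above describes.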
Deep Learning emerged as a subset of Machine Learning that uses neural networks loosely inspired by the human brain. Neural networks interpret patterns and convert them into data, and adding more layers of neurons deepens the understanding of these patterns, hence the term Deep Learning. It is currently used in diverse fields such as healthcare, where it analyzes exam images to propose diagnostic hypotheses for patients. It is also employed by media platforms to generate personalized recommendations and ads based on user data collected during service use.
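The "layers of neurons" mentioned above can be sketched in a few lines: each layer transforms its input, and later layers build on what earlier ones extracted. This is a hand-written illustration, with weights fixed by hand for clarity; in a real network they would be learned from data.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron weighs the inputs, adds a bias,
    and applies a tanh activation."""
    return [
        math.tanh(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

def forward(x, layers):
    """Pass the input through each layer in turn; depth = number of layers."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# A tiny 3-layer network: 2 inputs -> 3 neurons -> 3 neurons -> 1 output.
# (Weights are arbitrary illustrative values, not trained.)
network = [
    ([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]], [0.0, 0.1, -0.1]),
    ([[0.7, 0.2, -0.5], [0.3, -0.6, 0.1], [0.2, 0.2, 0.2]], [0.05, 0.0, -0.05]),
    ([[1.0, -1.0, 0.5]], [0.0]),
]
output = forward([1.0, -1.0], network)
print(output)  # a single activation in the range (-1, 1)
```

Making the network "deeper" is simply a matter of appending more layers to the list, which is why depth became the defining knob of the technique.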
Machine Learning and Deep Learning offer myriad possibilities, echoing Alan Kay's assertion that "the machine has to work like a human being." Thus, few barriers remain between the imagined and software, only areas that have yet to be studied and applied.
“The best way to predict the future is to invent it” - Alan Kay