Benefits that MNCs are getting from AI/ML
What is Artificial Intelligence?
On seeing the term “Artificial Intelligence”, the most common thought is “making computers intelligent so that they can act intelligently, or in other words, making computers think like humans”.
As technology advances, AI continues to grow, and it will have a great impact on our quality of life.
It can help reduce errors dramatically and produce output with high accuracy.
What is Machine Learning?
The scientific field of machine learning (ML) is a branch of artificial intelligence. Computer scientist and machine learning pioneer Tom M. Mitchell defines it as follows: “Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience.”
In simpler words: “automating and improving the learning process of computers based on their experiences, without them being explicitly programmed, i.e. without human assistance”.
The process starts with feeding in good-quality data and then training our machines (computers) by building machine learning models using that data and different algorithms. The choice of algorithm depends on what type of data we have and what kind of task we are trying to automate.
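The workflow described above can be sketched in a few lines of code. This is a toy illustration, not any company's real pipeline: the data, the feature names, and the choice of a simple nearest-centroid classifier are all invented for demonstration.

```python
# Minimal sketch of the ML workflow described above: feed in labeled data,
# "train" a model (here, a toy nearest-centroid classifier), then predict
# on new, unseen input. All data is made up for illustration.

def train(samples):
    # samples: list of (features, label). The "model" is simply the mean
    # feature vector (centroid) computed per label.
    sums, counts = {}, {}
    for features, label in samples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    # Assign the label whose centroid is closest (squared distance).
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist2(model[label]))

# "Good quality data": (hours studied, hours slept) -> exam outcome.
data = [([8, 7], "pass"), ([9, 6], "pass"), ([2, 4], "fail"), ([1, 5], "fail")]
model = train(data)
print(predict(model, [7, 6]))   # classify a new, unseen student
```

The model "improves through experience" in Mitchell's sense: add more labeled samples and the centroids, and hence the predictions, get better.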
1. Alphabet — Google
Alphabet is Google’s parent company. Waymo, the company’s self-driving technology division, began as a project at Google. Today, Waymo wants to bring self-driving technology to the world, not only to move people around but to reduce the number of crashes. Its autonomous vehicles are currently shuttling riders around California in self-driving taxis. For now, the company cannot charge fares, and a human driver still sits behind the wheel during the pilot program. Google signaled its commitment to deep learning when it acquired DeepMind. Not only did DeepMind’s system learn how to play 49 different Atari games, but its AlphaGo program was also the first to beat a professional player at the game of Go. Another AI innovation from Google is Google Duplex: using natural language processing, an AI voice interface can make phone calls and schedule appointments on your behalf.
DeepMind, Alphabet Inc.’s artificial intelligence research unit, has developed new machine learning technology to make Google Maps more useful. Maps has more than a billion users worldwide who rely on the service to plan their travel routes. One of the service’s most central features is its ability to generate estimated times of arrival, helping drivers see key information such as how soon they need to depart to catch a train.

DeepMind teamed up with sister company Google LLC to reduce inaccuracies in these arrival-time estimates. Their collaboration produced a double-digit percentage reduction in inaccuracies; in one case, prediction errors dropped by as much as 51%.

DeepMind achieved this improvement by implementing a “graph” neural network in Maps to help with arrival-time estimation. A graph is a data structure that stores data points and the relationships between them in the form of interconnected nodes. This structure, DeepMind found, lends itself well to capturing the interconnected nature of road systems. But the process wasn’t simple, because of differences in the way roads are built: an AI trained to estimate the duration of highway trips won’t necessarily be capable of doing the same for urban roads, and much smaller differences can cause accuracy issues as well. DeepMind solved this challenge by taking advantage of its neural network’s graph structure. The unit’s engineers organized the road data that the AI processes into “Supersegments”, themselves based on a graph structure, much like the AI itself. These Supersegments are flexible enough that DeepMind’s neural network managed to overcome the differences in training data.
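The core intuition behind a graph neural network for travel times can be shown with a toy sketch. This is emphatically not DeepMind's model: the road segments, travel times, and the fixed mixing weight below are invented, and real GNNs learn their update functions from data rather than using a hand-set blend.

```python
# Toy sketch of the graph idea from the text: treat road segments as
# nodes in a graph and refine each segment's travel-time estimate using
# its neighbours (one "message passing" step). All values are invented.

# Adjacency: which road segments connect to which.
neighbours = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B"],
}

# Raw per-segment travel-time estimates in minutes (e.g. from speed data).
times = {"A": 4.0, "B": 10.0, "C": 6.0}

def message_pass(times, neighbours, alpha=0.5):
    # Blend each segment's estimate with the average of its neighbours'.
    # In a real GNN this update is learned; alpha here is a fixed weight.
    updated = {}
    for seg, t in times.items():
        nbr_avg = sum(times[n] for n in neighbours[seg]) / len(neighbours[seg])
        updated[seg] = (1 - alpha) * t + alpha * nbr_avg
    return updated

refined = message_pass(times, neighbours)
# ETA for a route spanning A -> B -> C (a "Supersegment"-like grouping):
eta = sum(refined[s] for s in ["A", "B", "C"])
print(round(eta, 1))
```

The point of the sketch: a segment's estimate is not computed in isolation, but in the context of the connected road network, which is what makes the graph structure a natural fit.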
2. Facebook
Here are the different ways Facebook uses deep learning to gain value, helping Facebook achieve its goal of providing greater convenience to users while enabling it to learn more about us.
a). Textual Analysis
A large proportion of the data shared on Facebook is still text. Video may involve larger data volumes in terms of megabytes, but in terms of insights, text can still be just as rich. A picture may paint 1,000 words, but if you just want to answer a simple question, you often don’t need 1,000 words. Every bit of data that isn’t essential to answering your question is just noise, and more importantly, a waste of resources to store and analyze.
Facebook uses a tool it developed itself called DeepText to extract meaning from the words we post by learning to analyze them contextually. Neural networks analyze the relationships between words to understand how their meaning changes depending on the other words around them. Because this is semi-unsupervised learning, the algorithms do not necessarily have reference data (for example, a dictionary) explaining the meaning of every word. Instead, they learn for themselves based on how words are used.
This means that it won’t be tripped up by variations in spelling, slang, or idiosyncrasies of language use. In fact, Facebook says the technology is “language agnostic” — due to the way it assigns labels to words, it can easily switch between working across different human languages and apply what it has learned from one to another.
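A tiny sketch can illustrate how meaning can be inferred from context alone, with no dictionary. This is not DeepText itself (which uses neural networks at far greater scale); it is a simple co-occurrence model over an invented three-sentence corpus, comparing words by the contexts they appear in.

```python
# Illustrative sketch of learning word meaning from context rather than
# a dictionary: build co-occurrence vectors from a tiny invented corpus,
# then compare words by the contexts they share.
from collections import Counter
from math import sqrt

corpus = [
    "i want to buy new shoes",
    "i want to purchase new boots",
    "the weather is sunny today",
]

def context_vectors(sentences, window=2):
    # For each word, count the words appearing within `window` positions.
    vectors = {}
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            ctx = vectors.setdefault(w, Counter())
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    ctx[words[j]] += 1
    return vectors

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = context_vectors(corpus)
# "buy" and "purchase" occur in similar contexts, so they score higher
# than an unrelated pair -- no dictionary required.
print(cosine(vecs["buy"], vecs["purchase"]) > cosine(vecs["buy"], vecs["sunny"]))
```

Because the model keys on usage rather than spelling, the same mechanism is robust to slang and transfers across languages, which is the "language agnostic" property described above.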
At present the tool is used to direct people towards products they may want to purchase based on conversations they are having. This video gives an example of how it decides whether providing a user with a shopping link is appropriate, depending on the context.
b). Facial recognition
Facebook uses a deep learning application called DeepFace to recognize people in photos. It says that its most advanced image recognition tool is more successful than humans at recognizing whether two different images show the same person, with DeepFace scoring a 97% success rate compared to 96% for humans.
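The verification task described here ("are these two images the same person?") is typically framed as comparing embedding vectors produced by a face-recognition model. The sketch below assumes such embeddings already exist; the vectors and the similarity threshold are invented for illustration and are not DeepFace's actual values.

```python
# Hedged sketch of face verification: given embedding vectors for two
# face images (here invented; in practice produced by a deep model),
# decide whether they show the same person by thresholding similarity.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def same_person(emb1, emb2, threshold=0.8):
    # The threshold is a tunable assumption, not a published value.
    return cosine(emb1, emb2) >= threshold

photo_a = [0.9, 0.1, 0.4]     # hypothetical embedding: person X, photo 1
photo_b = [0.88, 0.15, 0.42]  # person X, photo 2 (nearby vector)
photo_c = [0.1, 0.9, 0.3]     # a different person (distant vector)

print(same_person(photo_a, photo_b), same_person(photo_a, photo_c))
```

The model's job is to map photos of the same person to nearby vectors and photos of different people to distant ones; the comparison itself is then trivial.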
It’s fair to say that the use of this technology has proven controversial. Privacy campaigners said it went too far, as it would allow Facebook, given a high-resolution photograph of a crowd, to put names to many of the faces in it, which is clearly an obstacle to our freedom to move in public anonymously. EU legislators agreed and persuaded Facebook to remove the functionality from European citizens’ accounts in 2013. Back then the social media giant was using an earlier version of the facial recognition tool, which did not use deep learning. Facebook has been somewhat quiet about the development of this technology since it first hit the headlines, and can be assumed to be waiting on the outcome of pending privacy cases before saying more about its plans to roll it out.
c). Targeted advertising
Facebook uses deep neural networks, the foundation stones of deep learning, to decide which adverts to show to which users. This has always been the cornerstone of its business, but by tasking machines themselves with finding out as much as they can about us, and with clustering us together in the most insightful ways when serving us ads, it hopes to maintain a competitive edge against other high-tech competitors such as Google who are fighting for supremacy in the same market.
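The "clustering users" idea can be sketched with a plain k-means loop. This is an assumption-laden toy: Facebook's real systems use deep neural networks over far richer signals, and the two-feature "user profiles" below are invented.

```python
# Illustrative sketch of clustering users with similar behaviour so ads
# can be targeted per cluster. Features and the use of plain k-means
# are simplifying assumptions, not Facebook's actual method.

def kmeans(points, k, iters=10):
    # Deterministic init for the demo: first k points as centroids.
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance).
            idx = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        for i, members in enumerate(clusters):
            if members:
                # Move each centroid to the mean of its members.
                centroids[i] = [sum(vals) / len(members)
                                for vals in zip(*members)]
    return centroids, clusters

# Each user: (hours on sports pages, hours on cooking pages) -- invented.
users = [(5.0, 0.5), (0.4, 6.0), (4.5, 1.0), (0.2, 5.5)]
centroids, clusters = kmeans(users, k=2)
print(len(clusters[0]), len(clusters[1]))
```

Once users are grouped, an advertiser can address each cluster with a different creative, which is the "insightful ways" of serving ads the paragraph describes.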
3. Microsoft
AI for Accessibility
According to Bellard, strong customer demand for assistive-technology development platforms catalyzed the launch of AI for Accessibility, Microsoft’s second AI for Good initiative. It followed on the heels of, and was largely modeled after, the company’s AI for Earth program, which provides training and resources to organizations looking to tackle problems relating to climate, water, agriculture, and biodiversity.
“We wondered if a similar [approach] would work for the disability community and also the innovation community that works on the technology for people,” said Bellard.
Proposals are accepted on a rolling basis and are evaluated “on their scientific merit,” Bellard says, in addition to their innovativeness and scalability. Selected applicants receive compute credits for Microsoft’s Azure AI platform in increments of $10,000, $15,000, or $20,000, depending on their project’s scope and needs, plus additional funds to cover costs related to collecting or labeling data, refining models, or other engineering-related work. They also gain access to Microsoft engineers, who work with them to accelerate development and incorporate their innovations into “platform-level” partner services. One grantee, InnerVoice, combines avatars with written text, pictures, and video to create experiences that help learners identify the connections between speech and language. Its videos illustrate abstract concepts, while the avatars label what’s happening using facial expressions and emotional tone of voice; users practice conversations with the avatars (a superhero, drawing, or photograph of a loved one) and learn words by taking pictures that machine learning algorithms detect and label.
4. IBM
IBM is hard at work trying to explain the concepts behind artificial intelligence (AI) to clients, including how the technology makes decisions. Eighty-two percent of the C-suite executives it surveyed said they wanted to use AI but were concerned about unconscious bias and the skills needed. IBM offers AI across a range of services and has implemented it internally in areas such as recruitment, where it is used to make sure there is no bias in how job descriptions are written, according to IBM Senior Vice President and Chief Marketing Officer Michelle Peluso.
“Technology can help to make sure there’s not bias in promotions and the like and so (there is) this grounded belief at IBM that inclusion is part of our ‘brand state’,” she told CNBC’s “Marketing Media Money.”
There are several ways marketers can best use AI, Peluso said. The first is in getting to know customers. “It allows us to understand more about our customers. We can analyze tone. We can listen in on chat bots, we can analyze personality and social (media), so we have the ability to develop a richer understanding of our customers,” she said. AI is also being used in how businesses interact with their customers, allowing chat bots to answer customer service queries, for example. The nature of advertising — where traditionally messages are broadcast to people one-way — could also become more of an interaction. “We can say in a digital ad (for example) what’s in your refrigerator … And (it will) give you a great recipe, or (AI can) tell us why you’re interested in a certain car. And we’ll tailor the content live to make sure you’re getting the answer, so it will change (so the advertising is) actually interacting … with customers,” Peluso said.
5. Amazon
The company that sets the standard for so many aspects of customer experience is breaking down internal silos and proving how other firms can do the same. Amazon, a leader in leveraging customer-centric innovation, has taken its business to the next level by redesigning the company around its AI and machine learning applications. Some of these involved rethinking existing operations, like the company’s robotics space and its huge Amazon Web Services (AWS) business. Others are entirely new businesses, like Amazon Echo, its healthcare division, and more.
For some reason, the company always gets my attention, and Amazon’s artificial intelligence strategy has made me even more curious. I love to watch how the company launches new products and develops go-to-market strategies for its innovations.
How does Amazon leverage Artificial Intelligence?
How AI drives Amazon business growth
The role of AI in Amazon’s recommendation engine is enormous: it generates 35 percent of the company’s revenue. By collecting data on individual customers’ preferences and purchases, the recommendation engine personalizes the list of products suggested to each customer. The massive quantity of data gathered is used to build a “360-degree view” of an individual client. Using that profile, Amazon can find other people who fit the same criteria based on hundreds of touchpoints (data sources) and make recommendations for them as well.
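The "people who fit the same criteria" idea can be sketched as a tiny collaborative-filtering example. This is not Amazon's engine: the users, purchases, and the use of simple Jaccard overlap (instead of hundreds of touchpoints) are all invented for illustration.

```python
# Toy sketch of the recommendation idea described above: find the user
# most similar to you by purchase overlap, then recommend items they
# bought that you haven't. All data is invented.

purchases = {
    "alice": {"laptop", "mouse", "keyboard"},
    "bob":   {"laptop", "mouse", "monitor"},
    "carol": {"novel", "cookbook"},
}

def recommend(user, purchases):
    mine = purchases[user]

    def jaccard(other):
        # Overlap of purchase sets -- a stand-in for the "hundreds of
        # touchpoints" a real customer profile would use.
        theirs = purchases[other]
        return len(mine & theirs) / len(mine | theirs)

    peers = [u for u in purchases if u != user]
    best = max(peers, key=jaccard)          # the most similar customer
    return sorted(purchases[best] - mine)   # what they have that you don't

print(recommend("alice", purchases))
```

Scaled up over millions of profiles, this neighbour-based logic is one classic way a recommendation engine turns a "360-degree view" of one customer into suggestions for another.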
When you visit an Amazon Books store, you can scan a QR code at the register, and the store associate will offer an optional paper receipt. The purchase record is stored in the user’s app account order history, as simple as that.
Amazon Books stores display books face out, without price tags. What a waste of space, and what confusing labeling, you might think! In fact, this is data. Books are displayed face out with placards showing reviews and ratings, and no price tag. Why? The firm wants you to pull up the app and scan the book to buy it there. Can you imagine the data flow? It feeds not only merchandise analysis but also the training of the company’s algorithms and machine learning infrastructure.