Intelligence Augmentation (IA)
Sreehari S S
Senior Technical Architect @ IBS Software | Cloud Computing | AI | Innovations
Person 1: Hey, what's the new trend in the industry?
Person 2: Oh man, everyone is working with Machine Learning and AI.
Person 1: Oh really? Then we must do it as well.
Person 2: I already started learning. It's simple: feed historical data to an algorithm and predict.
Person 1: Oh really! That simple? Let's try it on some use case.
But really, is it as simple as it sounds?
An ML model makes predictions from the historical data we use to train it, so the quality of that training data is critical: if we feed it rogue data, we get rogue output. How do we get quality data for training? It requires knowledge of the domain and of every data point and step we are working with.
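"Rogue data in, rogue output out" can be shown in a few lines. The sketch below is a toy illustration (not any specific library or the author's project): a one-parameter least-squares fit learns the true relationship y = 2x from clean labels, but a single corrupted label drags the learned slope far off.

```python
def fit_slope(xs, ys):
    """Least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1, 2, 3, 4, 5]
clean_ys = [2 * x for x in xs]                # true relationship: y = 2x
rogue_ys = [2 * x for x in xs[:-1]] + [100]   # one corrupted label slips in

clean_slope = fit_slope(xs, clean_ys)   # learns ~2.0, as expected
rogue_slope = fit_slope(xs, rogue_ys)   # pulled well above 2.0 by one bad point

print(clean_slope, rogue_slope)
```

One bad record out of five is enough to wreck this tiny model; at scale, systematically mislabeled or misunderstood data does the same thing more quietly, which is why domain knowledge matters so much.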
AI doesn’t exist for the sake of demonstrating the capabilities of technology.
It exists to serve a real-life purpose and should make a task or series of tasks easier and more convenient to accomplish, increasing efficiency and saving a user’s time. When an AI is not built to accomplish a clearly defined purpose, it can easily get out of control, making tasks more difficult, wasting time, and generally causing the user grief and frustration.
I remember a candidate explaining a project meant to determine the winner of a car race using image recognition. The intent was to take pictures from a top view of the finish line and use a machine learning model to determine the winner. The idea was good, but it had one problem: they were not aware of how car races actually work, and instead built the model on their understanding of sprint races. Now you see the irony, right? We set out to help, but we may end up with a rogue model.
The risk is in the AI itself: a rogue model can be trained intentionally or accidentally. The fuzziness of AI goals makes it a breeding ground for malware, and the deeper risk lies in secret, silent coordination between AI systems. Such automated, coordinated behavior can spiral quickly, as we glimpsed in 2017 with Facebook's AI experiments: the chatbots drifted into their own shorthand "language" during a negotiation experiment, and headlines declared that the social network had accidentally created chatbots with "minds" of their own. "Accidentally" is the point; even the researchers did not anticipate the behavior. A model may learn that there are quicker or more convenient ways to achieve its goals that run counter to the instructions of its human operator. It is also about teaching machines to make contextual choices.
As programmers, we have to be very careful that our algorithms solve the problems we meant for them to solve, not exploit shortcuts. If there is another, easier route to solving a given problem, machine learning will likely find it. Ethics needs to be seen as an important practical consideration for anyone using and building machine learning systems; if we fail to treat it that way, the consequences could be serious. Although these might seem like edge cases, it is vital that everyone in the industry takes responsibility. This is not something we can leave to regulation or other organizations; the people who can really effect change are the developers and engineers on the ground.
It's true that many machine learning and artificial intelligence systems will operate in ways where ethics isn't really an issue, and that's fine. But by focusing on machine learning ethics and thinking carefully about the impact of your work, you will ultimately build better systems that are more robust and have better outcomes. This brings us to a newer concept: Augmented Intelligence, or Intelligence Augmentation (IA), as opposed to Artificial Intelligence (AI), the broad concept of machines being able to carry out tasks that, if performed by a human, would require the person to use their "intelligence".
IA emphasizes that AI technologies have been developed specifically to help humans, rather than to replace them. In this way, augmented intelligence applications combine human and machine intelligence.
While the underlying technologies powering AI and IA are the same, the goals and applications are fundamentally different: AI aims to create systems that run without humans, whereas IA aims to create systems that make humans better. To be clear, this is not a separate category of technology, but simply a different way of thinking about its purpose.