How do I effectively weigh the risks and benefits of decision making using machine learning?
The potential benefits of delegating decisions to machine learning algorithms are manifold. They can help us cope with the complexity and uncertainty of the modern world, where human decision makers often face information overload, cognitive biases and bounded rationality. They can also enhance efficiency and accuracy, and reduce costs and errors. For example, machine learning algorithms can help doctors and patients make better informed and personalized choices based on large and diverse data sources, such as electronic health records and wearable devices. They can also help managers and employees improve productivity and performance by automating routine tasks and providing insights and feedback.
However, the potential risks of delegating decisions to machine learning algorithms are also significant. They can undermine human agency and responsibility, as well as erode trust. They can also generate unintended and undesirable consequences, such as discrimination and bias. Machine learning algorithms may discriminate against certain groups or individuals based on proxies or correlations that are not justified or explainable, such as race, gender or zip code. They can also manipulate human behavior by nudging actions that benefit the algorithm or its owner, but not the user or the society. They are also prone to malfunctioning, being hacked or being misused, such as in autonomous weapons or cyberattacks.
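To make the proxy-discrimination risk concrete, one common audit is to compare favorable-outcome rates across groups, as in the "four-fifths" (80%) rule used in US employment-discrimination analysis. The sketch below is illustrative only: the group labels and loan-approval decisions are made-up data, and the function name is hypothetical.

```python
# Minimal sketch: measuring disparate impact between two groups.
# All data below is illustrative, not from any real system.

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    def rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

# Loan approvals (1 = approved) for applicants from two zip-code groups.
decisions = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # ratios below ~0.8 often flag adverse impact
```

Here group A is approved at 40% versus 80% for group B, a ratio of 0.5 — the kind of disparity that can arise even when a protected attribute is never an explicit input, because proxies such as zip code carry the correlation.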
How can we balance the benefits and risks of delegating decisions to machine learning algorithms, and what are the criteria and principles that should guide us? One possible approach is to adopt a human-centered and value-based perspective that recognizes the role and rights of human stakeholders in the design, deployment and evaluation of algorithms. According to this perspective, machine learning algorithms should serve human values and interests, such as fairness, justice and security, and should respect human norms and laws, such as human rights, ethical codes and legal regulations. Moreover, machine learning should be aligned with human expectations and preferences, and should support human oversight and intervention, such as by providing explanations and feedback.
However, this approach also raises several challenges and questions, such as how to define, measure, and operationalize human values and interests, as well as how to resolve conflicts and trade-offs among different stakeholders and values. Moreover, this approach may not be sufficient or appropriate for some contexts and scenarios, where human values and interests are ambiguous or where human oversight and intervention are impractical or harmful. For example, how should we deal with situations where human interests are incompatible with the goals or constraints of the machine learning algorithm, such as in military or environmental applications? How should we deal with situations where human oversight and intervention are limited or impossible, such as in high-speed or high-risk domains, or where they are biased or irrational, such as in personal or emotional domains?
Delegating decisions to machine learning algorithms is a dynamic process that requires careful and continuous assessment and adjustment. It also requires a multidisciplinary and participatory approach that involves not only technical and domain experts, but also ethical and legal scholars, policy makers and regulators, and most importantly, the users and beneficiaries of the algorithms. Ultimately, the question is not so much how much power we should give to machine learning systems, but how we can co-create and co-evolve with them in a way that enhances human dignity and responsibility.
This article was edited by LinkedIn News Editor Felicia Hou and was curated leveraging the help of AI technology.
Feds Fight Back with AI
2y · We need to ensure we use AI responsibly and ethically, which will come from more education around this topic. Machines help gather data and synthesize information, but the ultimate decision should rest with each individual person.
Program Manager I Business Intelligence I Global Supply Chain I Strategic Planning I SAP I Secret Clearance I Army Veteran
2y · This is a question that can stir up a lot of alternatives when making decisions. Do we use our experience and knowledge base, or put it in the hands of machine learning to make decisions? I think it's a mixture of both: balance our experience against the data ML produces to make the overall best-informed decision.
Chief AI & Data Analytics Officer
2y · I agree this is a very complex question, and one that I have discussed in my travels at the Analytics Hall of Fame with medical doctors, asking how much they are willing to concede decision-making to algorithms. Many of the books on AI frame this best: treat the guidance from algorithms and ML as input and augmentation. For example, a medical practice would want to avoid deferring to AI/ML if, through the physician's training, she or he knew a medication could help a patient's condition even though the ML model said differently. As already outlined in many of the responses, ML may not have all of the data to make a decision with 100% accuracy, so as with any model, the accuracy rate is key. Some decision-making has an element of art to it as well as science. Not to mention that we need to balance the need for privacy in how we train algorithms. So, to recap: a lot depends on the data available, its accuracy, and the permissible purpose of the data (privacy, data protection, and ethics). Thanks for asking. See Positivity Tech/Marcia Tal's work and Tom Davenport's books on related algorithm-development and AI topics.
Writer (Self-employed)
2y · Much of the commentary on this question is from a "professional" perspective. Unfortunately, it's an unregulated profession with no agreed code of ethics. I say that as one who was a practitioner for almost 50 years. Let's change the perspective. As an essentially naive end user, how do I weigh the risks and benefits? I would consider several questions. First, how important is the decision? Second, how much information do I have about how the decision will be made (what factors is the algorithm considering)? Third, do I know and trust the provider? Fourth, how valuable is the speed of an automated decision? Fifth, after the decision is made, can I get an explanation? Sixth (I was going to say finally, but I suspect there should be more), do I have recourse if I disagree with it?
Helping Leaders, Startups, and IT Professionals Automate and Optimize with NEXUS AI Hub
2y · To ensure fair, transparent, and accountable algorithms while using machine learning, we must take several actions. First, select and pre-process representative, bias-free data to train the algorithms; this prevents them from learning and repeating biased patterns. Second, test algorithms often to confirm accurate and fair decisions: audits and benchmarking studies can evaluate their performance and uncover flaws. Transparency about algorithms and their judgments is also crucial; this may include explaining their decision-making processes and letting experts and the public access their training data and code. Finally, algorithmic decisions must be held accountable: independent review boards or ethics committees can verify that algorithms are ethical and lawful. Careful data selection, regular examination and testing, transparency, and accountability are all needed to balance machine learning judgments. These practices can help ensure fair, unbiased algorithms aligned with human values.
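The benchmarking step this comment describes can be sketched very simply: compare a model's accuracy per group to flag uneven performance. This is a minimal illustration, not a full audit; the labels, predictions, and group names below are all made-up data, and `per_group_accuracy` is a hypothetical helper.

```python
# Illustrative per-group audit: does the model perform equally well
# across groups? All data here is invented for the example.
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy of predictions, broken down by group label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth outcomes
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]   # model predictions
groups = ["X", "X", "X", "X", "Y", "Y", "Y", "Y"]

acc = per_group_accuracy(y_true, y_pred, groups)
print(acc)  # {'X': 0.75, 'Y': 0.5}
```

A gap like the 0.75 vs. 0.5 here is exactly what regular audits are meant to surface: aggregate accuracy can look acceptable while one group bears most of the errors.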