The AI Control Problem Must Now Be the Top Priority
Murat Durmus
CEO & Founder @ AISOMA AG | Thought-Provoking Thoughts on AI | Member of the Advisory Board AI Frankfurt | Author of the book "MINDFUL AI" | AI | AI-Strategy | AI-Ethics | XAI | Philosophy
The AI control problem refers to ensuring that artificial intelligence (AI) systems act in ways that are consistent with human values, goals, and interests. It is a complex problem that involves understanding and mitigating the potential risks associated with AI while ensuring that these systems are trustworthy, safe, and ethical.

To address the AI control problem, researchers and practitioners in AI ethics and governance are working to develop best practices, ethical principles, and technical solutions that help ensure AI systems behave in socially responsible ways consistent with human values. This work spans transparency, accountability, fairness, and explainability in AI systems, as well as the development of AI systems that can learn from human feedback and preferences.
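To make the last point concrete, here is a minimal, illustrative sketch of learning from human preferences: a Bradley-Terry-style reward model fitted to pairwise human comparisons. Everything in it (the feature vectors, the synthetic preference labels, the learning rate) is an assumption invented for illustration; the article itself prescribes no particular technique.

```python
# Illustrative sketch: fit a Bradley-Terry-style reward model from
# pairwise human preferences over candidate outputs. All data here is
# synthetic and the setup is an assumption, not the author's method.
import numpy as np

rng = np.random.default_rng(42)

# Each candidate output is summarized by a feature vector; a human
# labels which of each pair (a, b) they prefer (1.0 if a is preferred).
features_a = rng.normal(size=(100, 3))
features_b = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])           # hidden "human values"
prefs = (features_a @ true_w > features_b @ true_w).astype(float)

w = np.zeros(3)                                # learned reward weights
for _ in range(500):
    # Bradley-Terry: P(a preferred over b) = sigmoid(r(a) - r(b))
    diff = (features_a - features_b) @ w
    p = 1.0 / (1.0 + np.exp(-diff))
    # Gradient of the log-likelihood of the observed preferences
    grad = (features_a - features_b).T @ (prefs - p) / len(prefs)
    w += 0.5 * grad                            # gradient ascent step

print("recovered reward direction:", w / np.linalg.norm(w))
```

The recovered weight direction approximates the hidden "human values" vector, which is the core idea behind preference-based reward modeling.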
Murat
More thought-provoking thoughts:
Mindful AI: Reflections on Artificial Intelligence
Thought-Provoking Quotes & Reflections on Artificial Intelligence
New Book Release: Beyond the Algorithm: An Attempt to Honor the Human Mind in the Age of Artificial Intelligence (Wittgenstein Reloaded)
German edition:
Developing a better AI that optimises complex industrial problems by 20-40% in production!
2y · It probably cannot be solved within ML... but with operations research (OR) it can be done...
Storyteller | LinkedIn Top Voice 2024 | Senior Data Engineer @ Globant | LinkedIn Learning Instructor | 2x GCP & AWS Certified | LICAP'2022
2y · Informative and important pick on #AI
HCLS Industry Principal | Senior Solutions Architect
2y · Murat Durmus I've been wondering: could it be useful for safe AI to combine linear/additive models, where errors revert to the mean, with exponential/multiplicative models, where errors compound? Should AI that doesn't keep a human in the loop be required to use composite models, pairing a batch linear model (safer, more explainable) with an online nonlinear model such as reinforcement learning (more accurate, less explainable), so that alarms sound when predictions, recommendations, or actions deviate significantly from the linear model over some period of time? Maybe we could catch the paperclip factory before it turns the world into paperclips.
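One way to read the composite-model proposal in the comment above is as a runtime guardrail: keep an interpretable batch linear model as a reference, and raise an alarm when a more accurate nonlinear model drifts away from it. The sketch below is only an illustration of that idea, assuming scikit-learn's LinearRegression and GradientBoostingRegressor as stand-ins, with made-up window and threshold values; it is not a vetted safety mechanism.

```python
# Illustrative guardrail: compare a complex model's predictions against
# an interpretable linear reference and alert on sustained divergence.
# Models, data, window, and threshold are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([1.5, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=500)

# Batch linear model: the safer, more explainable reference.
linear = LinearRegression().fit(X, y)
# Nonlinear model: stand-in for the "more accurate, less explainable" learner.
complex_model = GradientBoostingRegressor(random_state=0).fit(X, y)

def deviation_alert(X_new, window=50, threshold=1.0):
    """Flag when the complex model drifts from the linear reference.

    Computes the mean absolute gap between the two models' predictions
    over a sliding window and alerts when it exceeds `threshold`.
    """
    gap = np.abs(complex_model.predict(X_new) - linear.predict(X_new))
    rolling = np.convolve(gap, np.ones(window) / window, mode="valid")
    return bool(np.any(rolling > threshold)), float(rolling.max())

alert, worst = deviation_alert(rng.normal(size=(200, 4)))
print(f"alert={alert}, worst rolling mean deviation={worst:.3f}")
```

In practice the threshold and window would be calibrated on historical data, and a triggered alert would hand the decision back to a human rather than to the online model.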