Naive Expectation of Full AI Explainability and Transparency
Aditya Mohan
AI Expert, Philosopher-Scientist, & VC/PE. Strategy, M&A, & Litigation. AGI, Embodied AI, & Aviation.
It was clear from Alan M. Turing's arguments in 1950, and it holds true even now, that a human cannot be reduced to a machine and that a machine, just like a human, can be unpredictable.
Turing in "Computing Machinery and Intelligence," Mind, 1950:
“To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree.”
Any system built with the expectation of full explainability and transparency is neither possible nor reasonable. Deep learning algorithms in particular, and the AI models built on them such as GPT-4, Stable Diffusion, Midjourney, ChatGPT, and Google Bard, have some level of intuition and replicate, to a certain degree, how the human mind thinks and makes decisions. Such advanced AI solutions do not provide clear logic or a path showing how a decision was derived.
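To make the point concrete, here is a minimal illustrative sketch (my own example, not from the article, with hypothetical weights): even a tiny feed-forward network reaches its "decision" by composing numeric weights through nonlinearities, so there is no human-readable if/then rule to extract from it.

```python
import math

# Hypothetical weights, as if produced by training on some task.
W1 = [[0.9, -1.2], [0.4, 0.8]]   # input -> hidden layer
W2 = [1.5, -0.7]                  # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2):
    # Hidden activations: weighted sums passed through a nonlinearity.
    h = [sigmoid(W1[i][0] * x1 + W1[i][1] * x2) for i in range(2)]
    # Output score: another weighted sum -- this is the "decision".
    return sigmoid(W2[0] * h[0] + W2[1] * h[1])

score = forward(1.0, 0.0)
decision = score > 0.5
# The network yields a score, but the "reason" lives only in the
# numeric weights; no explicit rule was ever stored anywhere.
print(decision)
```

Scaled up to billions of weights, as in the models named above, this is why demanding a complete, rule-like trace of every decision is not a realistic requirement.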
Details on the above, in the context of AI policy and governance, can be found in my policy article with Roger Bickerstaff from Bird & Bird Law Firm.
Our AI Policy blog here: https://www.robometricsagi.com/blog/ai-policy
To learn more about my company, check out: https://www.robometricsagi.com
We are hiring. Check out: https://www.robometricsagi.com/careers
Don't hesitate to reach out and connect with me here: https://www.dhirubhai.net/in/aditya621/