Addressing the Alignment Problem
David Creelman
Insights for managers on what to do about AI (among other things!) & 経営人事推進機構
Anyone with a serious interest in AI will be concerned about the alignment problem: the problem that it is somewhere between difficult and impossible to ensure an AI will do what you want it to do rather than go off on some unintended tangent.
This issue is related to more everyday concerns about the hallucinations and biases we find in AI; however, it sits beneath those concerns at a more fundamental level.
If you are concerned about the alignment problem, you may feel stymied because there is no place within your role, or even within the organization, to deal with it. Lots of people will have some interest: anyone in governance, people looking at risk, and certainly some of the folks kicking around IT. But I expect you'll find the time spent on the topic is insufficient to grapple with its complexities.
So if you do feel your organization ought to be building the capability to address the alignment problem, then let me know and we'll scope out a project. It can take you to a more robust foundation than shallow angst about AI and ethics.