A Road to AGI: Sidestepping Cassandra(s) & Navigating the Next Leap in Artificial Intelligence
There are many alarmist conversations about the looming rise of #AGI and its potential to overtake intellectually demanding jobs. But what's the reality? Well, by a number of measures, if you're in your 40s or older (as I am), it's unlikely to happen in (y)our lifetime.
AI models that understand context and reason like humans, capabilities that will be crucial to broad advances in AI, remain in their early stages or confined to narrow applications. Present AI systems still struggle with tasks that require human-like perception, common sense, and context-based reasoning. For example, while AI has made significant strides in computer vision and natural language processing, it remains limited in its ability to fully understand and interpret nuanced human communication or sensory data the way people do. This highlights the significant gap between current AI capabilities and what would be required for AGI.
Surveys of AI experts have shown that while there is optimism about AGI's potential, there is also a consensus that achieving a fully developed AGI could take anywhere from 20 to over 100 years. Additionally, the ethical and societal implications of AGI are being heavily debated, with discussions focusing on how to manage risks and ensure that AI development aligns with human values. While AI might automate a number of tasks in the near future, experts suggest that this will likely lead to new job creation rather than wholesale job elimination. Historical trends show that as simpler tasks get automated, humans often move into more complex roles that require creativity, strategic thinking, and emotional intelligence: areas where AI still lags behind. Anton Korinek argues that we may actually see AI overtake humans in tasks at an accelerating rate (seen below in "CHART 1" from his article, titled "AI may be on a trajectory to surpass human intelligence; we should be prepared"), along with a potential wage collapse. His stance reads less as alarmist and more as cautionary.
There is a significant focus on the geopolitical dimensions of the AI race, with the US and China likely to engage in intense competition for dominance in AGI and superintelligence capabilities. This competition is expected to shape not only the economic landscape but also the strategic and military power balance on a global scale. Alongside raw computational power, innovations in algorithmic efficiency are equally crucial. The documents reviewed in preparation for this post, numbering in the hundreds of pages, detail how enhancements in algorithmic techniques could amplify AI capabilities without a corresponding increase in computational costs, making AGI more feasible.
As implied previously, there is a wide range of predictions about when AGI might be achieved. Some researchers have suggested that AGI could be realized within the next 5 to 20 years. Leopold Aschenbrenner explores the race for AGI to interesting effect in his latest work (titled "Situational Awareness - The Decade Ahead"), going into great detail on the possibility of AGI in its infancy as early as 2027. However, other experts remain skeptical, pointing out that AGI's development could take much longer due to the complex challenges involved in replicating human-like intelligence and reasoning capabilities.
Even if (or when?) AGI does arrive, those who have been learning how to harness AI's power—particularly white-collar professionals who integrate these tools to enhance their work—will be well-positioned to thrive in their fields.
These perspectives suggest that while AI and AGI development are advancing, achieving the kind of general intelligence that would completely replace human intellectual jobs is not imminent. By embracing AI as a collaborator rather than a competitor, professionals can ensure a continued career in their chosen domains.
REFERENCE(S):
Aschenbrenner, L. (2024). Situational awareness - The decade ahead. Situational Awareness. https://situational-awareness.ai
Berruti, F. (2020). An executive primer on artificial general intelligence. McKinsey & Company. https://www.mckinsey.com/capabilities/operations/our-insights/an-executive-primer-on-artificial-general-intelligence
Graham, R. (2022). Discourse analysis of academic debate of ethics for AGI. AI & Society, 37, pp. 1519–1532. https://doi.org/10.1007/s00146-021-01228-7
Korinek, A. (2023). AI may be on a trajectory to surpass human intelligence; we should be prepared. International Monetary Fund. https://www.imf.org/en/Publications/fandd/issues/2023/12/Scenario-Planning-for-an-AGI-future-Anton-korinek
Mahler, T. (2022). Regulating artificial general intelligence (AGI). Law & Artificial Intelligence. Information Technology & Law Series, vol 35. T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-523-2_26