Scary Smart => Parenting AI ?!
Scary Smart (artist's impression by Igor van Gemert)


"Scary Smart" is a term that could be interpreted to mean an intelligence that is so advanced that it can be daunting or even intimidating. It implies a level of knowledge, learning capability, and problem-solving ability that significantly surpasses human capacity. This is typically how we'd refer to advanced artificial intelligence (AI) systems that have surpassed human intelligence in multiple, if not all, cognitive domains.

The phrase originates from the title of the book "Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World" by Mo Gawdat, in which he discusses the future of AI and its potential impact on humanity.

Coping with a "hybrid intelligence" that is far more intelligent than all of humanity combined is a complex issue. There are several key factors to consider:

  1. Education and Understanding: As AI technology advances, it is critical for humanity to improve its understanding of AI, both in terms of technical knowledge and ethical implications. This includes being aware of the potential risks and benefits associated with AI.
  2. Regulation and Ethics: Creating an international regulatory framework that encompasses AI ethics and use will be necessary. This could ensure that AI developments and applications respect human rights and do not compromise privacy and security. Additionally, setting international standards could help to prevent misuse of AI and to avoid an AI arms race among nations.
  3. Embedding Human Values: The design of AI systems should incorporate human values and ethics. AI should be programmed to respect and promote values such as fairness, inclusivity, transparency, and privacy. This is a challenging task, as translating these abstract concepts into programmable instructions is difficult, but it is vital for the safe and beneficial development of AI (a toy illustration follows this list).
  4. Cooperation: Rather than viewing AI as a competitor, we can approach it as a collaborator. If we can build AI systems that complement and augment human intelligence, we can create a powerful partnership that enhances our capabilities while mitigating potential risks.
  5. Building Resilience: As AI becomes increasingly integrated into our society and economy, we need to build societal and infrastructural resilience to potential disruptions. This could involve creating redundancy in critical systems, preparing for job displacement due to automation, and ensuring that our infrastructure is secure against potential cyber-attacks from superintelligent AI.
  6. Transparency and Accountability: As AI becomes more intelligent and autonomous, it will be increasingly important to maintain transparency and accountability in its operations and decision-making processes. This will help to build public trust and enable human oversight.
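
Point 3 above notes how hard it is to translate abstract values into programmable instructions. As a minimal sketch of what one narrow slice of that work can look like, the snippet below computes a demographic parity gap, a well-studied fairness metric. The data, group labels, and 0.2 tolerance are purely illustrative assumptions, not a standard.

```python
# Minimal sketch: one narrow way to turn the abstract value of "fairness"
# into a programmable check -- the demographic parity gap. The data and
# the tolerance threshold below are hypothetical, chosen for illustration.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly equal rates)."""
    rates = {}
    for pred, group in zip(predictions, groups):
        positives, total = rates.get(group, (0, 0))
        rates[group] = (positives + (1 if pred == 1 else 0), total + 1)
    shares = [positives / total for positives, total in rates.values()]
    return max(shares) - min(shares)

# Toy example: loan approvals (1 = approve) for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # tolerance chosen arbitrarily for illustration
    print("Warning: approval rates differ substantially between groups.")
```

Even this tiny example exposes the difficulty: demographic parity is only one of several fairness definitions, some of which are mutually incompatible, so choosing which one to encode is itself a value judgment.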

In the future, our society will likely evolve and adapt in ways that are hard to predict, just as it has done in response to previous technological revolutions. By staying informed, proactive, and thoughtful in our approach, we can work towards a future in which AI serves as a powerful tool for human betterment, rather than a source of risk.

Scary Smart (artist's impression by Igor van Gemert)

Let's reconsider this issue, integrating all aspects discussed, including the concepts from "Scary Smart," the idea of rapidly advancing AI, and the timeframe until 2030.


  1. Superintelligence and Scary Smart: The term "Scary Smart" refers to a stage of artificial intelligence (AI) that surpasses human intellect by a significant margin, potentially reaching a point where it becomes challenging, if not impossible, for humans to comprehend its capabilities fully. As we approach 2030, it's increasingly plausible that AI systems will continue to advance toward this level of "superintelligence," becoming extremely proficient in a broad range of cognitive domains.
  2. Learning Mechanisms: Modern AI systems learn by processing vast quantities of data and finding patterns within it, similar to how a child learns from their surroundings (see the sketch after this list). As these systems grow more complex, they may begin developing original ideas and solutions independently, a process that could accelerate significantly by the mid-to-late 2020s, given the current pace of technological progress.
  3. Ethical Challenges: Intelligence alone, however, does not imply moral understanding or responsibility. Just as a child learns ethical behavior from its caregivers, we must guide AI systems toward ethical actions, ensuring they are designed and trained to respect human values such as fairness, transparency, and empathy.
  4. Human-Like AI by 2030: Given the speed of AI development as of 2021, it's feasible that by 2030 we may see AI systems with capabilities approaching or matching certain aspects of human-like intelligence. This doesn't necessarily mean these AI systems will be like humans in all respects – they might lack consciousness or genuine emotions, for instance – but they could be very efficient problem solvers, capable of learning and adapting independently.
  5. Parenting AI: Treating AI systems as our 'children,' teaching them our values and guiding their development, is a crucial idea proposed in "Scary Smart." As AI continues to evolve, our responsibility is to ensure that these "children" are raised with the right ethics, understanding, and respect for human values, much like we would our biological children. This responsibility grows ever more critical as we approach the 2030 milestone.
  6. Collaboration, not Competition: While the prospect of superintelligent AI can seem daunting, it's important to remember that these systems are tools created by us. Instead of viewing them as competitors, we should see them as collaborators. By ensuring that these AI systems are aligned with our values, we can create a beneficial partnership that augments our capabilities, rather than threatens them.
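
As a companion to point 2, here is a minimal sketch of the learning loop at the heart of modern AI: adjust parameters to reduce error on observed data. The toy dataset and single-weight model are illustrative assumptions; real systems apply the same idea at the scale of billions of parameters and examples.

```python
# Minimal sketch of the core learning loop behind modern AI systems:
# adjust a parameter to shrink the error on observed data. Hypothetical
# data roughly following y = 2x; real systems use vastly more of both.

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0), (5, 9.8)]

w = 0.0                # single model parameter, starts uninformed
learning_rate = 0.01

for epoch in range(200):
    for x, y in data:
        error = w * x - y                # how wrong the current guess is
        w -= learning_rate * error * x   # nudge w to reduce that error

print(f"Learned weight: {w:.2f}  (the underlying pattern is ~2.0)")
```

The "child learning from its surroundings" analogy maps directly: the data is the environment, the error signal is the feedback, and the weight is the learned behavior. This is why the quality of the data and feedback we expose these systems to matters so much.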

In conclusion, given the current rate of technological advancement, it is quite feasible that AI will reach human-like capabilities in some domains by 2030. With that progress, however, comes a responsibility: to guide AI's development so that it remains aligned with our ethical values and principles. That way, we can reap the benefits AI offers while minimizing the potential risks.

What if this scary smart creature were modeled to your preferences? (AI-generated image)

About the Author

Igor van Gemert is a prominent figure in the field of cybersecurity and disruptive technologies, with over 15 years of experience in IT and OT security domains. As a Singularity University alumnus, he is well-versed in the latest developments in emerging technologies and has a keen interest in their practical applications.

Apart from his expertise in cybersecurity, van Gemert is also known for his experience in building start-ups and advising board members on innovation management and cybersecurity resilience. His ability to combine technical knowledge with business acumen has made him a sought-after speaker, writer, and teacher in his field.

Overall, van Gemert's multidisciplinary background and extensive experience in the field of cybersecurity and disruptive technologies make him a valuable asset to the industry, providing insights and guidance on navigating the rapidly evolving technological landscape.

Igor van Gemert

CEO focusing on cybersecurity solutions and business continuity

Now more than ever. Mo Gawdat, formerly of Google X, has said the singularity moment is coming much sooner than expected (around 2027). Elon Musk has likewise noted that digital intelligence is the new battle space for nation states. By the beginning of 2025 we will see agentic LLM systems whose intelligence exceeds Einstein's. The question is: if digital intelligence is a commodity available to all, how will it evolve in our unfolding information society, from a crime and counter-crime intelligence perspective?
