AI Risk

Technology, Information and Internet

Helping organisations realise the benefits and manage the risks of AI development and deployment.

About us

AI Risk (www.ai-risk.co) provides research, strategy and innovation services that help organisations realise the benefits of AI and reduce the risks of development and deployment. Enterprise risks include:

1. Strategic risk: being out-competed in the market
2. Financial risk: making poor investment decisions
3. Operational risk: systems that are vulnerable to malicious attack
4. Regulatory risk: the impact of, and cost of compliance with, new laws

Website
www.ai-risk.co
Industry
Technology, Information and Internet
Company size
2-10 employees
Type
Privately held
Founded
2024

AI Risk employees

Posts

  • View AI Risk's company page

    682 followers

    We are delighted to launch our '9 Types' framework for Organisational Augmentation with GenAI. It helps companies appreciate the potential to use AI to dramatically improve the way they operate... Enjoy!

    View Simon Torrance's profile

    Expert on Strategy & Innovation; Systemic Risks; Technology Adoption | Founder, AI Risk | CEO, Embedded Finance & Insurance Strategies | Guest lecturer, Singularity University | Keynote speaker

    Are you ready? In our latest edition of AI Risk & Strategy, we introduce our new '9 Types of Organisational Augmentation' framework — a strategic vision for building AI-powered enterprises. Stimulated by Jensen Huang's vision of 100 million AI assistants augmenting Nvidia's human workforce, this framework helps traditional companies understand how to fully leverage AI to amplify workforce capabilities.

    The framework highlights how nine distinct types of super-powered digital assistants and workers ("synthetic employees") can help companies win in their markets. It's not about a distant future; it's about what's possible today, and making effective moves now. We share tangible examples and case studies.

    The framework shows how companies can achieve hybrid human-digital workforces to "do much more with less." AI is not just about personal productivity and efficiency — it's about achieving strategic agility and redefining competitive advantage.

    Are you ready to augment your organization? Please read and share the full article to explore the '9 Types' and see how AI could transform your business. And please do get in touch if you'd like to know how to apply it to your company.

    Many thanks again to my colleagues at AI Risk for fantastic input to the framework, including in particular Yannick Even and Stephan Thoma.

    ===========
    For more on AI Risk & Strategy, subscribe to our newsletter: https://lnkd.in/ez2qRx_z

    #AI #OrganisationalAugmentation #FutureOfWork #GenAI

    The '9 Types' Framework for Organisational Augmentation: a strategic vision for the AI-powered enterprise

    Simon Torrance, published on LinkedIn

  • View AI Risk's company page

    682 followers

    The Future of AI: great interviews with AI industry leaders... Worth carefully digesting.

    View Richard Turrin's profile

    Helping you make sense of going Cashless | Best-selling author of "Cashless" and "Innovation Lab Excellence" | Consultant | Speaker | Top media source on China's CBDC, the digital yuan | China AI and tech

    WEEKEND READ: The Future Of AI: Expert Insights From Industry Professionals

    Capgemini with a LONG and THOUGHTFUL read on AI that is a perfect "Weekend Read." This read is epic and covers AI from virtually every angle. It doesn't shy away from AI's problems and is a credit to the author, Capgemini. I salute Capgemini for producing excellent research covering everything from AI to payments. Their reports are simply some of the best in the business. Did you hear that, McKinsey?

    Remember as you read this "tome" that all the people interviewed are in the AI business. They are not disinterested or objective. They want you to see the bright future of AI that they see. That isn't all bad. We can learn a lot from them, and their optimism is contagious. That said, remember that this document might minimize the pitfalls, trials, and difficulties many will face on the road to AI adoption. Perhaps this isn't the place to highlight them as it sells an important vision, but do keep that in mind!

    That's why I recommend not reading this as a "how to" guide, but as an inspirational read to help direct you to the possible better futures we can all have with AI. Plenty of other documents will deal with the risks to avoid along the way! If you keep that in mind, it's a fabulous long read and is inspirational.

    What do you think? Am I too harsh, or is that helpful? Obviously, they all want to sell you on AI. Show me the money!

    Reposters, you are the best! Thanks so much for sharing!
    ----------------------
    My name is Rich, I'm not an AI, I write quality content for human interaction. #Fintech, #AI and #Tech at the speed of #Asia and #China. Onalytica No.4 Global Fintech Influencer with two best-sellers. Like this post? Want to see more? Follow me. Click on "view my blog" for more!

  • View AI Risk's company page

    682 followers

    Important review of AI risks from OECD.AI...

    View OECD.AI's company page

    39,123 followers

    What do we want an AI-driven world to look like, and are we on the right track to get there? "Assessing potential future artificial intelligence risks, benefits and policy imperatives", a new report from the OECD, distils research and expert insights on prospective AI benefits, risks and policy imperatives.

    READ THE REPORT: https://lnkd.in/eAf2PyEZ

    As the swift evolution of AI technologies calls for policymakers to consider and proactively manage AI-driven change, the report identifies ten policy benefits and risks to prioritise. Finally, it enumerates actions that policymakers can take to fill the gaps with a comprehensive set of approaches.

    The report was guided by the OECD's Expert Group on AI Futures, a group of international AI experts established to help understand and anticipate how AI could develop and the potential impacts on our societies. A special thanks to the group's co-chairs, Francesca Rossi, Michael Schönstein and Stuart Russell, for their guidance. Ulrik Vestergaard Knudsen Jerry Sheehan Audrey Plonk Karine Perset Celine Caira Luis Aranda Jamie Berryhill Lucia Russo John Leo Tarver

    #aipolicy #artificialintelligence #trustworthyai

  • View AI Risk's company page

    682 followers

    Significant AI risks to the financial system...

    View Richard Turrin's profile

    Helping you make sense of going Cashless | Best-selling author of "Cashless" and "Innovation Lab Excellence" | Consultant | Speaker | Top media source on China's CBDC, the digital yuan | China AI and tech

    MUST READ: FSB's AI Warning: Amplifies Vulnerabilities Despite its Benefits

    Better late than never, the Financial Stability Board (FSB) finally acknowledges that AI may negatively impact stability and amplify vulnerabilities. If you thought using AI to control nuclear arms was a bad idea, wait until you see the FSB's list of problems with AI in financial markets.

    The truth is that the FSB doesn't know AI's impact. AI is so new and unevenly dispersed within the financial system that no one, including the FSB, can determine where the vulnerabilities lie. That isn't to say that we should fear a crash tomorrow, but we should be mindful of AI's potential impact on stability.

    Remember programmatic trading? This predicament will seem eerily similar for those old enough to remember flash crashes and the impact of programmed trading on markets. Like AI, no one knew how widely dispersed programmed trading was or what would happen if all the programs sold stocks at once until it happened. I suspect our next AI crash will be similar!

    VULNERABILITIES
    • Third-party dependencies and service provider concentration: Greater reliance on and market concentration among AI service providers can increase systemic third-party dependencies in the financial sector.
    • Market correlations: AI-driven correlation vulnerabilities could interact negatively with increasing levels of automation in financial markets and greater speed and accessibility.
    • Cyber threat: LLMs and GenAI could enhance cyber threat actors' capabilities and increase the frequency and impact of cyber-attacks, including those on vendors.
    • Model risk, data quality, and governance: Wider uptake of complex AI approaches could increase model risk for FIs that cannot effectively validate, monitor, and, when necessary, correct AI models.

    STRAIGHT TALK
    I am the first to admit that your broker's chatbot won't likely cause a crash and that many AI uses are benign. That said, many new AIs are quietly being brought online for their predictive power, and we don't know where they are concentrated. We do know that most AIs are black boxes, and their masters can't fully predict their actions, making them less predictable than programmatic trading. For one example, look at Ant International's latest product for corporate treasury management, which predicts currency exchange rates hourly to help reduce transaction costs. While treasury transfers won't tank markets, they're a great example of how AI and pricing are connected and likely already deeply buried in markets. AI is great, but someone better get busy regulating it before we repeat another flash crash or worse.

    Am I too pessimistic, or can you see the next AI flash crash coming, too?

    Reposters, you are the best! Thanks so much for sharing!
    ----------------------
    My name is Rich; I'm not an AI; I like to write! #Fintech, #AI and #Tech at the speed of #Asia and #China. Want to see more? Follow me.

  • View AI Risk's company page

    682 followers

    Wonderful podcast discussion with our CEO Simon Torrance and Rob Price from Futuria on the potential and practicalities of 'Human-Agent Teams'. Access via the usual platforms, and on Spotify here https://lnkd.in/eQjKfN8q

    View Rob Price's profile

    Innovative AgenticAI Founder | Leading Futuria, CDR, DRF | Expert in GenAI | Podcast Host | Former Chief Digital Officer, COO | Transforming ideas into solutions #DigitalResponsibility #AIInnovation #GenAI

    #Futurise Season 2 Episode 2: Further exploring the Art of the #AgenticAI Possible with Simon Torrance, #CEO and #Founder of AI Risk. It is a fascinating conversation about what is now possible with AI teams, or AI team members, in the organisation. Of course, beyond "what is possible", the question is also "what is right?" How do we find the right balance between human and AI team members, and how can we apply the concept of High Performing Teams to hybrid teams? We start the conversation in this episode, and will continue to return to it in future episodes throughout this season. My thanks to Simon for joining me as my guest in this episode and sharing his insights and experience in this fast-accelerating topic of Agentic AI. #AITeamMembers #HighPerformingHybridTeams

  • View AI Risk's company page

    682 followers

    Many thanks to Insurtech Insights for publishing this thought leadership from our Founder. The same message applies to all industries that rely on 'knowledge workers': banking, wealth management, professional services, technology services, healthcare services, creative services, and more. See our latest article that provides a strategic overview: https://lnkd.in/eA28-MnH

    View Insurtech Insights' company page

    123,397 followers

    THOUGHT LEADERSHIP: Simon Torrance, Founder and CEO of AI Risk, describes what he sees as the most advanced deployment of ‘Agentic AI’ anywhere in the insurance industry today, setting it in the context of other forms of AI-enabled augmentation and automation, and proposing a holistic approach to AI strategy that will help companies succeed in the Age of AI. Read the full story here: https://lnkd.in/eQj7SZJd #insurtechinsights #thoughtleadership #ai #innovation #insurance

  • View AI Risk's company page

    682 followers

    We're delighted to be launching the 'AI Visionaries Club (London)' on 5th December with our first invitation-only roundtable: https://lnkd.in/ez8izUTH

    View Simon Torrance's profile

    Expert on Strategy & Innovation; Systemic Risks; Technology Adoption | Founder, AI Risk | CEO, Embedded Finance & Insurance Strategies | Guest lecturer, Singularity University | Keynote speaker

    "Where's the real ROI?" - it's the question on the lips of most leaders today, and the focus of the 'AI Visionaries Club's first roundtable on 5th Dec: https://lnkd.in/ez8izUTH

    Delighted to be joined by two world-class experts - David Shrier, Professor of Practice, AI & Innovation at Imperial College Business School and MD of Visionary Future LLC, and Dr Paul Dongha, Head of Responsible AI & AI Strategy at NatWest Group - who will be stimulating a 'curated discussion' with specially-invited senior execs from across sectors.

    Many thanks in advance to Atomico for hosting the event at their lovely offices in Fitzrovia and to my team at AI Risk for organising it. We will be developing the Club in line with the needs of our community.

    This first event is 'invitation-only'. If you are leading AI strategy at a large enterprise and are interested in joining the roundtable, do get in touch - we have a few spaces available. If you are interested in the Club in general, please reach out to me via a Direct Message.

    ===========
    In the meantime, for more on AI Risk & Strategy, do subscribe to our newsletter: https://lnkd.in/ez2qRx_z

  • View AI Risk's company page

    682 followers

    Enterprise AI is moving from 'experimental' to 'essential', says this in-depth report by The Wharton School. The keys to successful adoption of GenAI will be proper use cases that can scale, measurable ROI, and organisational structures and cultures that can adapt to the new technology. We couldn't agree more. Thanks to Jeremy Korst, Mary Purk, Stefano Puntoni and Brian Smith for great analysis.

    ===========
    For actionable insights on AI Risk & Strategy, do subscribe to our newsletter: https://lnkd.in/eKaSpwFM

  • View AI Risk's company page

    682 followers

    More than 30% of all US workers could see at least 50% of their occupation's tasks disrupted by GenAI, according to the Brookings Institution. Unlike previous automation technologies that primarily affected routine, blue-collar work, GenAI is likely to disrupt a different array of "cognitive" and "nonroutine" tasks, especially in middle- to higher-paid professions. Despite the high stakes for workers, business and society are not prepared for the potential risks and opportunities that generative AI is poised to bring.

    The report emphasizes the importance of developing strategies to proactively shape AI's impact on work and workers. This includes fostering worker engagement in AI design and implementation, enhancing worker voice through unions or other means, and developing public policies that ensure workers benefit from AI while mitigating harms such as job loss and inequality. We strongly agree! https://lnkd.in/d5hK4K2f

    Generative AI, the American worker, and the future of work

    https://www.brookings.edu

  • View AI Risk's company page

    682 followers

    Thanks to Peter Slattery, PhD for sharing his analysis of state-run AI Safety Institutes ('AISIs') across the globe... A useful resource! =========== For more on AI Risk & Strategy, do subscribe to our newsletter: https://lnkd.in/eKaSpwFM

    View Peter Slattery, PhD's profile
    Peter Slattery, PhD is a LinkedIn Influencer

    Lead at the AI Risk Repository | MIT FutureTech

    "Following the Seoul AI Safety Summit, we have seen the announcement of a substantial network of state-run AI Safety Institutes (AISIs) across the globe. What progress has been made? How do their plans and motivations differ? And what can we learn about how to set up AISIs effectively? This brief analyses the development, structure, and goals of the first wave of AISIs.

    Key findings:
    • Diverse Approaches: Countries have adopted varied strategies in establishing their AISIs, ranging from building new institutions (UK, US) to repurposing existing ones (EU, Singapore).
    • Funding Disparities: Significant variations in funding levels may impact the relative influence and capabilities of different AISIs. The UK leads with £100 million secured until 2030, while others like the US face funding uncertainties.
    • International Cooperation: While AISIs aim to foster global collaboration, tension between national interests and international cooperation remains a challenge for AI governance. Efforts like the UK-US partnership on model evaluations highlight the potential for effective cross-border cooperation.
    • Regulatory Approaches: There's a spectrum from voluntary commitments (UK, US) to hard regulation (EU), with ongoing debates about the most effective approach for ensuring AI safety while fostering innovation.
    • Focus Areas: Most AISIs are prioritising AI model evaluations, standard-setting, and international coordination. However, the specific risks and research areas vary among institutions.
    • Future Uncertainties: The evolving nature of AI technology and relevant geopolitical factors create significant uncertainties for the future roles and impacts of AISIs. Adaptability will be key to their continued relevance and effectiveness."

    This work from The International Center for Future Generations - ICFG is quite helpful for understanding the existing institutes and their overlaps and differences. Link in comments.

