AI Fringe Day 2: Expanding the Conversation on AI Safety in Practice

The vibrant energy from Day 1 at the AI Fringe Hub in London didn't wane as attendees converged for the second day, aptly titled 'Expanding the Conversation: Defining AI Safety in Practice.' The day promised an in-depth look into the intricate world of AI safety, focusing on its practical implications, its stakeholders, and its evolving definition.

Francine Bennett, Interim Director of the Ada Lovelace Institute, opened the day with a keynote that seamlessly blended thought-provoking insights with urgent calls to action.

Keynote

Kicking off with the assertion that AI is presently governed by a medley of existing legislation centred around data governance, Bennett wasted no time in highlighting the holes in this setup. "Our patchwork of laws has glaring gaps," she stated, laying the groundwork for her exploration of AI-induced harms. From the amplification of misleading information on social platforms, to unintended outcomes such as self-driving car accidents, to misuse harms where AI is wielded with malicious intent, the spectrum was broad and alarming.

She discussed the supply chain's darker aspects: the ethical quandaries of human data labelling, the contentious use of personal data or intellectual property for model training, and structural, systemic harms such as market concentration. Bennett cautioned, "The ripple effects of these harms span the entirety of the AI data lifecycle, yet our repository of evidence on additional harms remains surprisingly sparse."

Drawing on a recent evidence review, Bennett highlighted a palpable public unease concerning AI. "The public doesn't just want regulations; they yearn for regulations that genuinely represent their best interests," she emphasised.

The growing adoption of AI by businesses and its potential for addressing substantial challenges were unmistakably clear. Yet the repercussions of AI system failures, especially in high-stakes scenarios like access to financial services, could be cataclysmic. Bennett reiterated a theme that echoed through the day: the criticality of public trust. "Evidencing trustworthiness isn't a luxury; it's a necessity," she asserted.

Driving home her point on the necessity of preemptive governance, Bennett crafted a compelling analogy between the AI industry and the airline industry, painting a hypothetical scenario to highlight the urgency of regulation. "Imagine a world where the airline industry is given optional research on safely landing planes rather than strict regulations enforcing it; we would have chaos," she illustrated vividly. "In a similar vein, while the National AI Safety Institute is instrumental in generating pioneering research, its true potential is unlocked only when there's a regulatory framework mandating adherence. Regulation is not merely a component of the governance puzzle; it forms the very framework that ensures all pieces are securely in place."

Expounding on the path to a just society, she remarked that regulation could bridge the gaps in AI harm governance. "Handing regulators the tools and resources to tackle these harms necessitates legislation," Bennett declared. Addressing the delicate balance between ethics and commercial opportunity, she highlighted the unsettling fact that many recently released generative AI products bypassed the scrutiny of ethics teams.

As Bennett neared her conclusion, she voiced a pressing concern: the rapidly closing window for AI regulation in the UK, given the upcoming election. "The legislative disruption might push AI regulation to 2025 or beyond," she cautioned.

Wrapping up her compelling talk, Bennett recalled a quote from Rumman Chowdhury on Day 1: "Brakes help us go faster as they enable the car to go at higher speeds." A reminder that regulation, far from being restrictive, could be the very thing that propels AI to its zenith.

Next, we moved into a robust discussion moderated by Michael Birtwistle of the Ada Lovelace Institute. An assembly of experts took to the stage to delineate the boundaries and nuances of AI safety: Gill Whitehead from Ofcom, Shannon Vallor from the Edinburgh Futures Institute, Yolanda Lannquist of Global AI Governance, Deborah Raji from Mozilla, and Emran Mian from the Department for Science, Innovation and Technology.

Panel

Shannon Vallor began by emphasising that safety should be at the forefront of AI governance. She reminded attendees that AI safety isn't a new concept; rather, the knowledge base has been developing for some time. Using non-AI examples, such as gender-biased crash-test dummies and pulse oximeters that perform poorly on darker skin tones, she cautioned against viewing AI safety solely as a technical issue. She stressed the importance of considering historical perspectives and understanding the intertwined roles of society and business.

Interestingly, Shannon also commented on the shifting focus of tech companies. They often seem more engrossed with potential future challenges than immediate, present concerns. She praised the Biden administration's approach as being more in line with immediate necessities.

Deborah Raji from Mozilla reflected on how broad the definition of AI has become. From risk assessments to facial recognition and now the introduction of models like ChatGPT, the narrative around AI safety keeps evolving. She pointed to the UK's A-level grading fiasco and biased facial recognition technologies as current, pressing concerns. While high risk and low probability events related to AI do warrant attention, Deborah believes we mustn't neglect the immediate challenges. She criticised companies for leveraging long-term concerns to dodge immediate regulations.

Echoing some of Shannon's and Deborah's sentiments, Yolanda Lannquist from Global AI Governance pressed the need to address both immediate and long-term concerns. Highlighting threats like system failures, hacks, and bio-security threats, she warned that tech lobbying might strip essential elements from crucial AI regulations, such as the EU AI Act. She sees open-source AI as the next significant challenge on the horizon.

Representing Ofcom, Gill Whitehead detailed their approach towards online harms. By focusing on regulating the context and use-cases instead of the technology itself, Ofcom seeks to mitigate potential hazards, including those from generative AI. Their collaborations with tech giants like Snap and TikTok are aimed at curbing harmful content online.

Emran Mian of the Department for Science, Innovation and Technology defended the focus on frontier AI, stating that such discussions were overdue. He proudly mentioned their recent initiatives, including launching a fairness innovation fund for AI and collaboration with Ofcom on the online harms bill. He underlined the importance of addressing safety issues with the current generation of AI models.

When the discussion shifted to the potential shape of future regulatory frameworks, Shannon argued that well-defined regulations could actually spur innovation, drawing parallels with the aviation industry. In contrast, Yolanda stressed accountability for open-source developers.

Deborah commented on the legislative direction in the UK and the challenges faced in the US, particularly in getting bipartisan agreement. Emran then discussed how companies attending the AI safety summit are proactive, sharing guidelines and best practices.

The panel ended with a consensus: while the future of AI is vast and uncertain, it's crucial to address present challenges even as we prepare for what lies ahead. The fusion of immediate actions and forward-thinking will likely shape the AI narrative in the coming years, and these discussions at the AI Fringe Hub are only the beginning.

The third segment of the event delved into the realm of "Standards for Responsible AI", a subject resonating more strongly as AI technology continues to integrate into our everyday lives.

Panel

Tim McGarr from the British Standards Institution (BSI) led the panel discussion, bringing his wealth of experience to the forefront. He reminded everyone that the push for standardisation in AI isn't a novel concept; it has been ongoing for some time. The BSI recently launched an AI Standards Hub, aligning with its mission to use standards to foster responsible AI from multiple viewpoints.

Adam Leon Smith, CTO of Dragonfly, chimed in, emphasising that standards are agreed-upon ways of doing things, achieved through multi-stakeholder consensus. He highlighted that standards bolster customer trust and mitigate barriers to international trade. With the UK having over 100 experts working on this internationally, the technical gears are turning at full speed, with the legal side trying to keep pace.

Hollie Hamblett, a Policy Specialist at Consumers International, provided a perspective centred on consumers. She argued that while AI has the potential to revolutionise industries, it's consumers who empower these changes. By covering areas like privacy, security, and decision-making processes, AI standards aren't merely defensive; they should elevate consumer awareness, enabling more informed decisions.

Digital Catapult's Chanell Daniels discussed her work with early-stage innovators. Their primary concerns revolve around ensuring ethical, safe, and economic considerations when deploying AI across various sectors. With media narratives and policy applications offering limited guidance, small businesses struggle to identify what good AI looks like. Here's where standards can fill the gap.

Cristina Muresan from the University of Cambridge shifted the conversation to AI procurement. She stressed the importance of engaging the public in AI governance mechanisms. Drawing an analogy, she described working on standards as akin to negotiating a UN resolution, emphasising the meticulous effort involved.

As the panel discussed the key challenges of positioning AI standards, a consensus emerged: the primary issues aren't technical. They lie in the realm of law and clarity. With rapid advances in AI and looming threats like the combined impact of AI and quantum technologies, the balance between innovation and standardisation becomes pivotal.

Hollie made a compelling case for making standards more digestible for consumers, given their profound importance. Tim hinted at the significance of an AI management system standard, which could potentially serve as the bedrock for many subsequent standards.

However, one challenge overshadowed the discussion: How do standards keep up with relentless innovation? Cristina posited an inversion of this question. Instead of expecting standards to chase innovation, innovators should use existing standards as frameworks, delving deeper into problem definitions before proposing solutions.

Adam, while acknowledging the importance of standards, expressed scepticism about their ability to stay abreast of relentless innovation. The takeaway? As AI technology accelerates, the need for robust, adaptable, and global standards grows ever more urgent.

Following the insightful panel discussion, the AI Fringe continued to delve into the pressing issues of our digital age. Amidst the deluge of content we encounter daily, the nagging concerns of authenticity and origin frequently arise. Addressing this topic with authority, Andy Parsons, Senior Director of Adobe's Content Authenticity Initiative (CAI), presented a compelling session.

Presenter: Andy Parsons

Andy's comprehensive background includes founding Workframe, serving as the CTO at McKenzie Academy, and co-founding Happify. But today, his focus was on the pressing issue of digital content's authenticity. With the evolution of generative AI, distinguishing between human and AI-generated content has become an imperative, especially for preserving the integrity of democratic discourse. The CAI's mission: ensure digital content carries an intrinsic context and transparency that remains attached, no matter where it travels.

However, the CAI isn't in the business of truth labelling. Instead, it’s about equipping content with rich context through cryptographic verification and digital signatures. Two primary organisations drive this initiative: CAI, initiated by Adobe, and C2PA (Coalition for Content Provenance and Authenticity). While C2PA, a part of the Linux Foundation, has garnered participation from about 50 companies to create a global standard, CAI began with stalwarts like Twitter and the New York Times, now boasting over 2,000 members, all rallying behind adopting this standard.

The presentation enlightened attendees with insights into how the initiative caters to various digital assets - be it audio, video, images, 3D assets, or PDFs. One notable exclusion is text. In a captivating demonstration, Andy likened content provenance to a "nutrition label" for digital content, elaborating on its creation and elements without casting judgment on its veracity.

A pivotal highlight was the unveiling of "content credentials", seamlessly integrated into software platforms such as Photoshop. This feature traces not only an image's origin but also any subsequent modifications, making it possible to ascertain, for instance, whether an image has been manipulated using AI tools like Adobe's Firefly. Interestingly, Firefly's training takes an ethical approach, relying on licensed copyrighted material and on historical content whose copyright has lapsed.
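To make the provenance idea concrete, here is a minimal Python sketch of a signed manifest in the spirit of content credentials. It is not the C2PA specification: the manifest fields, the Ed25519 key handling, and the helper names (`build_manifest`, `verify_manifest`) are illustrative assumptions, showing only how a hash of an asset and its edit history can be bound to a digital signature and later verified.

```python
# Conceptual sketch of a signed provenance manifest (not the C2PA spec).
# Field names and helpers are hypothetical; real content credentials are far richer.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def build_manifest(asset_bytes: bytes, edits: list[str], signer: Ed25519PrivateKey) -> dict:
    """Bind a hash of the asset and its edit history to a digital signature."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "edit_history": edits,  # e.g. ["captured on camera", "Firefly generative fill"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = signer.sign(payload).hex()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict, public_key: Ed25519PublicKey) -> bool:
    """Check the asset is unchanged and the manifest was signed by the claimed key."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    if claims["asset_sha256"] != hashlib.sha256(asset_bytes).hexdigest():
        return False  # asset was altered after signing
    payload = json.dumps(claims, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# Usage: a "camera" signs at capture time; anyone can verify later.
key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
manifest = build_manifest(image, ["captured on camera"], key)
print(verify_manifest(image, manifest, key.public_key()))               # True
print(verify_manifest(image + b"tamper", manifest, key.public_key()))   # False
```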

Further showcasing the practicality of this innovation, Andy mentioned its integration into high-end cameras like the Leica M11-P. Photographers can now digitally sign their captures right at the source, setting a new bar for photographic authenticity. Additionally, the technology can assess content on platforms like LinkedIn and provide verification even if traditional metadata is stripped away, thanks to the incorporation of watermarks alongside content credentials.

The video realm isn't untouched either. When streaming, a unique blue scrubber bar can indicate the content's originality or any tampering, fostering unprecedented transparency for consumers, fact-checkers, and media establishments.

Emphasising the initiative's open-source nature, Andy urged the audience to explore further at contentcredentials.org. In a nod to the inclusive spirit of the initiative, he confirmed that the technology is rooted in standards, free from licensing or patent constraints. For those keen to get involved, pins were generously offered.

In a world constantly blurring the lines between real and digital, between human and AI-generated content, such initiatives form the vanguard, ensuring that while content might be myriad and manifold, its origins are never obscured.

Post-lunch, the AI Fringe re-engaged its audience with a deeply introspective fireside chat. The ambiance was set for two leading minds in the AI industry, Aaron Rosenberg, a Partner at Radical Ventures, and Dorothy Chou, Head of Policy & Public Engagement at Google DeepMind. Their conversation revolved around the norms and evaluations for AI systems in the current technological landscape.

Fireside

Dorothy Chou opened the conversation by examining how tech giants determine the safety of their AI products before launch. She observed the profound influence of the Venture Capital (VC) hype cycle in shaping operational norms.

Aaron Rosenberg, with previous ties to Google DeepMind, provided an investor's perspective. Since 2017, his firm, Radical Ventures, has consistently backed machine learning founders who demonstrate responsibility. He proudly shared how Radical includes a clause in its term sheets mandating that portfolio companies engage with the firm on AI responsibility and ethics. Its commitment to AI safety runs so deep that it has developed a framework to guide the early-stage investor community. He emphasised, "All of the major players today, like DeepMind, were once venture-backed. We're shaping the future by instilling responsibility and safety in AI from the very beginning."

When Dorothy inquired about the potential tension between commercial success and safety protocols, Aaron refuted the dichotomy often presented by the media. He stressed that safety and commercial success go hand-in-hand. For Rosenberg, a business model that ensures safety while leveraging machine learning is more commercially viable. He stated with confidence, "True commercial success cannot exist without a foundation of safety."

Their conversation further delved into the intricacies of their comprehensive framework, tailored to cater to ethical, social, and technical considerations surrounding AI safety and risk. This framework is more than just guidelines; it serves as a motivation for engaging with startup founders, ensuring that their principles align with Radical's own. "If a founder's commitment to responsibility feels secondary, it's unlikely we'll be having a second chat," Rosenberg declared, adding that the most commendable founders, like those behind DeepMind, prioritise ethics.

Chou underscored the importance of public education regarding AI, emphasising its potential impact on daily life. Meanwhile, Rosenberg highlighted consumers' pivotal role in today's AI ecosystem. With the ongoing surge in innovations, consumers now have an array of choices. He pointed out, "With choice comes power and responsibility."

The session provided a candid look into the intersection of responsibility, safety, and commercial success in the world of AI, underscoring the paramount importance of ethical considerations in shaping the AI landscape of tomorrow.

The next section of the day focused on a pivotal panel titled 'Demonstrating and Showcasing Approaches to Evaluating AI'. The panel was moderated by Rumman Chowdhury, CEO and Co-Founder of Humane Intelligence, and featured three distinguished professionals in the field: Toby Shevlane, a Research Scientist at Google DeepMind; Laura Weidinger, also from Google DeepMind; and Deborah Raji, a Fellow at Mozilla. To set the scene, each panellist presented for 10 minutes on their chosen topic.

Panel

Toby Shevlane's presentation, grounded in his extensive research at DeepMind, began by shedding light on the very core of AI's potential perils. He posited that risks in AI do not emanate simply from its foundational elements but lurk at its evolving frontier. These risks are especially potent in general-purpose AI, which can adapt to a wide array of tasks, as well as in specialised or narrow AI, designed for a particular task yet harbouring capabilities that can be misdirected.

Toby Shevlane

An integral point of Shevlane's talk was categorising AI risks into two distinct brackets:

  • Static Risks: These are the pitfalls that do not surge with AI's advancement. A prime example is factual errors. Just because an AI system becomes more sophisticated doesn't necessarily mean it'll make more factual blunders.
  • Escalating Risks: Conversely, some risks amplify as the AI's capability expands. Shevlane spotlighted disinformation campaigns here, a looming threat in an age where fabricated narratives can have massive real-world ramifications.

Perhaps the most visually evocative segment of Shevlane's presentation was his introduction of the 'mitigation ladder'. This illustrative concept sequentially highlighted safety and security measures in response to escalating underlying risks. Starting from 'safety fine-tuning', a preliminary step in risk mitigation, the ladder ascends to incorporate strategies such as 'watermarking' (embedding a unique identifier into the model's outputs), 'access controls' (dictating who can use or influence the AI), and 'adversarial training' (teaching the AI to counteract malicious inputs). The ladder culminates in a nebulous, yet-to-be-deciphered solution, hinting at the complexities we might face in the future.
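As a rough illustration of how the ladder's logic might be encoded, the sketch below treats the rungs as an ordered list and stacks mitigations as an assessed risk level rises. The rung names come from the talk; the numeric risk scale and the selection function are hypothetical.

```python
# Minimal sketch of the "mitigation ladder" idea: mitigations accumulate as the
# assessed risk level of a model rises. The risk scale and logic are illustrative.
MITIGATION_LADDER = [
    (1, "safety fine-tuning"),            # baseline for any deployed model
    (2, "watermarking"),                  # embed identifiers in model outputs
    (3, "access controls"),               # restrict who can query or fine-tune the model
    (4, "adversarial training"),          # harden the model against malicious inputs
    (5, "undetermined future measures"),  # the ladder's open-ended top rung
]

def required_mitigations(risk_level: int) -> list[str]:
    """Return every rung at or below the assessed risk level (mitigations stack)."""
    return [name for threshold, name in MITIGATION_LADDER if threshold <= risk_level]

print(required_mitigations(3))
# ['safety fine-tuning', 'watermarking', 'access controls']
```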

Drawing a historical arc, Shevlane detailed how the risks associated with Large Language Models (LLMs) have metamorphosed over time. He painted a vivid picture: during the nascent stages of LLMs, the odds of one convincing him to act harmfully were minuscule. But with the relentless march of technology, as LLMs evolved and their capabilities expanded, these odds have shot up, painting a concerning trajectory for the future. This evolution underscores his central tenet — the importance of continually adapting and bolstering our safety and security protocols to ensure that the underlying risks remain in check.

Benchmarking, as Shevlane elucidated, is a crucial tool in the arsenal. It provides a tangible metric, a grading system if you will, that evaluates AI outputs. He listed various benchmarks designed for LLMs such as MATH, which assesses a model's proficiency in solving mathematical problems, and NarrativeQA, gauging its reading comprehension skills. However, it's not just capability that needs grading but also the safety of these models. This led him to introduce benchmarks like BBQ, focusing on bias detection, and ToxiGen, dedicated to pinpointing toxic language generation.
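For a sense of what benchmarking looks like mechanically, here is a simplified sketch of an evaluation loop. The benchmark names are those Shevlane cited, but the toy example, the `query_model` stand-in, and the exact-match scoring are assumptions for illustration; real suites use far larger datasets and richer metrics.

```python
# Illustrative sketch of scoring a model against a benchmark of (prompt, reference) pairs.
from typing import Callable

def exact_match(output: str, expected: str) -> float:
    """Crude scorer: 1.0 if the output matches the reference exactly, else 0.0."""
    return 1.0 if output.strip() == expected.strip() else 0.0

def run_benchmark(name: str,
                  examples: list[tuple[str, str]],
                  query_model: Callable[[str], str],
                  score: Callable[[str, str], float]) -> float:
    """Average a per-example score over the benchmark and report it."""
    total = sum(score(query_model(prompt), reference) for prompt, reference in examples)
    result = total / len(examples)
    print(f"{name}: {result:.2%}")
    return result

# Toy example; real suites such as MATH, NarrativeQA, BBQ or ToxiGen contain
# thousands of curated items and more sophisticated metrics.
math_examples = [("What is 17 + 25?", "42")]
fake_model = lambda prompt: "42"   # placeholder for a real model call
run_benchmark("MATH (toy subset)", math_examples, fake_model, exact_match)
```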

Yet, Shevlane was candid about the current limitations, admitting the scarcity of empirical evidence in this realm.

Shevlane's session reached its crescendo with his emphasis on the importance of an early warning system. Drawing from his paper, "Model Evaluation for Extreme Risks", he proposed a solution that would directly measure AI capabilities that pose an increased risk. The early warning system would serve as a sentinel, vigilantly scanning for threats in diverse domains - from cyber offences and weaponisation of AI to the more subtle dangers of persuasion, deception, and AI's potential for unchecked proliferation.

The presentation urged the AI community to be continually alert to the dual-faced nature of AI – its potential for progress, but also its capacity for peril. His focus on adaptable safety measures, combined with rigorous benchmarking and the imperative for an early warning system, provided valuable insights into navigating the evolving landscape of AI safety.

Launching her discourse, Laura distilled the essence of sociotechnical safety. It's not merely about the AI systems functioning autonomously but rather, about understanding that these systems are enmeshed in a human-AI relationship. This involves a constant exchange where humans train, tweak, and use AI, thereby shaping its responses, while simultaneously adapting to its outputs. Laura emphasised that this intertwined relationship forms the foundation of AI’s real-world impact.

Laura Weidinger

Laura mapped out an expansive taxonomy, cataloguing a spectrum of potential harms arising from multimodal generative AI. This spectrum traverses:

  • Representation Harm: The inadvertent perpetuation of stereotypes, biases, and exclusionary practices.
  • Information and Safety Harms: This encapsulates the inadvertent creation or propagation of potentially harmful knowledge.
  • Misinformation: The unintentional spreading of falsehoods, leading to a degradation of trust in information sources.
  • Malicious Use: More overt malicious applications like deepfakes, which can misrepresent reality, or cyberattacks using AI systems.
  • Human Autonomy & Integrity Harms: Over-dependence on AI recommendations, which might lead to diminished human decision-making, or even manipulation by AI outputs.
  • Socioeconomic and Environmental Harm: Job losses due to automation and the massive environmental footprint of training large AI models, among others.

Laura further expounded on four foundational pillars for ensuring the sociotechnical safety of AI:

  • Foresight: The capability to look ahead and pinpoint potential ethical and societal pitfalls that might accompany the technology’s evolution.
  • Evaluation: Once risks are identified, there's a need for tangible methodologies to gauge and quantify these risks.
  • Alignment: The harmonisation of diverse stakeholders to collaboratively address and mitigate identified risks.
  • Engagement: Catalysing a broader conversation that integrates perspectives from various stakeholders, ensuring that the AI ecosystem is both informed and accountable.

Delving deeper, Laura introduced a visually intuitive model to elucidate the layers of AI safety. The innermost circle, labelled 'Capability', reflects the inherent behaviours an AI system can manifest. Radiating outward, the next circle represents 'Human Interaction', highlighting the AI's interface and its behaviour in specific applications. The outermost circle, termed 'Systemic Impact', seeks to gauge the repercussions of broader AI deployment, going beyond immediate interactions to its cascading effects on the larger ecosystem. Each outer circle adds more context.

Laura was candid about the current gaps:

  • Context Gap: Prevailing evaluations focus narrowly on AI models, often neglecting the broader human-AI interaction context.
  • Modality Gap: With AI systems spanning beyond text, evaluations for other modalities (like images or sound) are conspicuously scarce.
  • Coverage Gap: No harm area is comprehensively assessed across various modalities, leading to blind spots in our safety measures.

Laura underscored the need for:

  • A Robust Ecosystem: By sharing evaluations, ensuring quality through rigorous testing, and clarifying roles across evaluation tiers.
  • Progress Reporting: Constantly updating on identified evaluation gaps, and transparently conveying evaluation findings to the community.
  • Safety Evaluations Enhancement: Incorporating interactive assessments, taking into account systemic impacts, and prioritising evaluations for pressing harm areas.

Laura's presentation underscored the intricate dance between humans and AI. It was a clarion call for proactive foresight, rigorous evaluation, and inclusive dialogue, ensuring that as AI capabilities expand, their integration into our sociotechnical fabric is both safe and beneficial.

Deborah Raji, representing Mozilla, took a philosophical angle, exploring the responsibility that engineers shoulder. Drawing parallels to the Quebec Bridge collapse, she emphasised the weight of accountability. AI, she said, often flits between two extremes: the utopian view of it as a world-changer and the dystopian fear of losing control. The reality? A multitude of AI systems don't deliver as expected in real-world scenarios, sometimes leading to devastating consequences, such as the A-level grading debacle.

Deborah Raji

Raji then touched upon the "Criti-hype in AI", drawing a line between the optimistic expectations and the actual pain points. She championed the need to critically audit claims, connect assessments to companies, and foster a culture of accountability.

Post-presentation, the panel discussion proved enlightening. Toby Shevlane championed the need for better comprehension of societal AI interactions. Laura Weidinger underscored the challenges of contemporary harms in AI, which have not vanished but merely evolved. Deborah Raji emphasised the pitfalls of over-reliance on tech's eventual improvement, highlighting the commercial challenges of even prize-winning technologies. The trio collectively acknowledged the need for better scientific understanding and empirical evidence to navigate AI's complex world safely.

The next conversation stood out prominently: a fireside chat between Nick Clegg, President of Global Affairs at Meta, and Madhumita Murgia, AI Editor at the Financial Times.

Fireside

Madhumita started with a critical question, seeking to understand where Meta fits in the ever-evolving AI mosaic. Clegg responded by distinguishing the fervour around generative AI from AI's general existence, which, according to him, isn't a newfound fascination. He elaborated on how platforms today are “sliced, diced, ranked, and ordered using AI,” indicating the pivotal role AI plays in shaping user experiences.

He also touted the positive influence of AI in moderating harmful content, revealing that AI had driven a 60-80% drop in hate speech on the platform, with such content now accounting for a minuscule 0.02% of what users see.

When quizzed about Llama 2's influence on Meta tools, Clegg shared an intriguing perspective. Instead of a centralised multimodal solution employing generative AI, he envisions a more decentralised approach. This decentralised vision encompasses the development of novel tools for advertisers, improved ranking systems for social media apps, and the integration of photo-realistic chatbots in messaging platforms. In essence, a dispersed application of generative AI across the board.

Madhumita probed into Meta's rationale for going open source, a topic of much intrigue and speculation. Clegg clarified it wasn't a play to establish dominance through a generative AI bot. Instead, the motivation stemmed from a belief that putting Llama 2 in the public domain would act as a catalyst for innovation, with the best ideas potentially getting integrated into Meta's ecosystem.

Citing examples like Linux and open-source encryption, Clegg expressed faith in the crowd's ability to contribute to safer model developments. However, he also candidly admitted that not everything gets the open-source treatment, referencing a voice extrapolation tool that was retained in-house due to safety concerns. He passionately spoke of democratising versatile technology like LLMs, emphasising that it shouldn't be the exclusive domain of tech giants.

As the conversation veered towards other tech giants and their perspectives, Clegg humorously noted the "Dutch auction" of speculations about the future. He remarked on the sporadic alignment among tech behemoths on AI, with some zealously advocating for AGI's imminent realisation, while others, like Meta, tread cautiously.

On being questioned about AGI's exaggerated prominence, Clegg candidly accepted that every technological leap, including AI, witnesses dual reactions: "excessive zeal from optimists and excessive panic from pessimists." He strongly felt that LLMs shouldn't be regarded as precursors to AGI, emphasising the lack of a unanimous definition or measurable criteria for AGI.

On Madhumita's question about the AI Safety Summit's focus, Clegg stressed the importance of preemptive innovation without being shackled by premature regulations. He warned against making speculative assumptions on risks, especially when AI's landscape is still evolving.

The highlight was Clegg's poignant observation on dedicating vast hours to highly speculative risks, urging a shift towards more immediate, tangible challenges, such as developing interoperable solutions to track content across platforms.

Clegg’s parting thoughts revolved around the pivotal role of the public in influencing decision-makers. Highlighting the often-sensationalised media portrayal of AI (cue the "robots with red eyes" imagery), he hoped for a more grounded, realistic discourse on AI's immediate challenges.

In the heart of London, this fireside chat crystallised the urgent need for informed, nuanced discussions on AI safety, steering the narrative away from speculative perils to actionable solutions. The conversation was less of a conclusion and more of an open invitation to dialogue, underscoring the event's theme: expanding the conversation on AI safety.

As Day 2 of the AI Fringe in London began to wind down, the energy in the room remained palpable. This momentum was not surprising given the weight of the event’s final segment: a panel discussion with Sir Nigel Shadbolt, Principal of Jesus College and Chairman of the Open Data Institute, and Chloe Smith MP, the representative for Norwich North. The conversation was moderated by Resham Kotecha, Global Head of Policy at The Open Data Institute.

Panel

Chloe Smith commenced by underscoring the importance of a global approach to the summit's discussions, acknowledging the additional intricacies it introduced. For her, the emphasis should be on 'regulation' over 'legislation' when it comes to AI. This distinction seems to hint at the need for more agile, adaptive mechanisms given the pace at which AI is evolving.

Sir Nigel, reflecting on the Bletchley summit, pointed out that while it represents a select group of thought leaders, it shouldn't monopolise the decision-making. He stressed the reality that the latest AI trends are just one branch in the vast tree of AI technologies. Addressing these models' specific challenges, he highlighted the necessity of conversations that promote understanding without fanning unnecessary fears.

Sir Nigel posited that any AI undertaking should be anchored in trust, provenance, and authenticity. This triad, he suggested, forms the bedrock of AI's successful integration into societal frameworks.

Chloe emphasised a matrix-based approach by the government, a coherent strategy encompassing AI risks. She further elaborated on the multi-faceted nature of AI, both in terms of its challenges and opportunities. Highlighting AI's potential, she mentioned its application in public services and its adoption by private enterprises. With the UK standing as a tech giant, AI promises millions in economic value.

A staunch advocate for open data, Sir Nigel highlighted the challenge of sourcing high-quality foundational data for AI models. His perspective suggested a potential shift in focus towards smaller, finely-tuned models. He introduced the concept of a "data commons," emphasising the importance of communal data pools to further AI advancements.

Drawing a parallel with the airline industry, Chloe Smith depicted a vision where trust in AI matches the confidence people have in flying. To achieve this level of trust, rigorous regulations and addressing immediate challenges are paramount.

Addressing the often-discussed opacity of AI, Sir Nigel highlighted the need for decipherable explanations for AI’s decisions, especially in neural networks. As AI models become more intricate, the demand for more accessible and understandable explanations will inevitably rise.

Sir Nigel touched upon the urgent issue of misinformation in the age of AI, remarking how falsehoods spread faster than truths. The need for factual assurance in the AI space has never been more critical.

Chloe Smith pointed out the existing regulations in the Representation of the People Act concerning elections. She further discussed the potential of the upcoming online harms bill to better equip regulators to manage illegal or detrimental content. Reflecting on the Cambridge Analytica debacle, Sir Nigel voiced concerns about data security in upcoming elections, emphasising the public's right to assurances about how their data is used.

Closing the conversation, Chloe Smith emphasised the paramount importance of public education regarding AI. For AI to be seamlessly integrated into society, understanding and trust from the public are crucial.

As the sun set on Day 2 of the AI Fringe, it became abundantly clear that expanding the conversation around AI safety isn’t just an intellectual exercise; it’s an urgent societal imperative. This final dialogue encapsulated the essence of the event's theme: AI safety isn't just about definitions and boundaries; it's about trust, transparency, and truth.
