Navigating the Future: Shaping UK Policy in the Age of Emerging Technologies

Recently, I had the exciting opportunity to take part in two events that explored how the UK is shaping policies and rules for emerging technologies.

The first event was a talk called "Policy Making in the Age of AI," given by Sarah Munby and hosted by the Strand Group at King's College London. The second was the Tech Policy Leadership Conference 2024, organised by TechUK. There, I listened to various speakers and discussions on the topic "How Can the Next Government Use Technology to Build a Better Britain?"

In the following lines, I'll share a few insights that stood out to me from both events.

Sarah Munby’s lecture opened with a thought-provoking question: How do we effectively govern AI technology when it overlaps with many aspects of society and is accompanied by significant uncertainties? In essence, how should policymakers navigate the myriad uncertainties and 'unknowns' that come with governing AI technology's broad societal impact?

Sarah's first suggestion was to examine historical precedents when navigating the unknown challenges of new technologies, emphasising the value of drawing lessons from the past. Since ancient times, new knowledge and inventions have periodically remade human societies: new tools, processes, materials, crops, and techniques (what we now call technology) diffused swiftly throughout the world. On that scale, AI's influence falls somewhere between the internet revolution and the discovery of fire.

Turning to the present moment, Sarah cited Gartner analysts, who place AI at the peak of its hype cycle, a model that illustrates the maturity and adoption of technologies. This peak is the phase where expectations and excitement around a new technology reach their highest point, often without a clear understanding of its limitations or practical applications.

With this in mind, Sarah's speech raised some compelling questions:

  • How do we manage exponential growth, given the historically limited capacity of political institutions to adapt rapidly, as the COVID-19 pandemic showed?
  • What metric can truly reflect the 'big picture'? Keeping track of leading-edge models matters, but relying solely on training compute (FLOPs) doesn't fully capture progress. We need a consistent way to track gradual improvements between significant breakthroughs, so researchers and developers can measure and report the nuanced advances in their work (see the sketch after this list).
  • The focus often leans too heavily on inputs rather than outcomes. How can we ensure a more balanced focus that includes effective output management?
  • When will limitations begin to hinder us? Continuous exponential growth is unattainable: how much data can we realistically use, and can models be made to work within those constraints?
  • How can policies adapt to better support innovation and reduce the fear of making mistakes? Policy-making must evolve to address the frequent shortcomings of intuition and the swift obsolescence of evidence-based knowledge. With information often being uneven, challenging to understand and slow to arrive, the high risk of errors can dampen innovation and block vital support.

Sarah's speech also drew parallels between the present moment and the influential 1973 report that surveyed the field of artificial intelligence and significantly influenced government decisions on funding for AI research. The report, often referred to as the "Lighthill Report" after Professor Sir James Lighthill, who led the investigation into AI's progress and prospects, highlighted the overly optimistic expectations of early AI researchers and the field's failure to achieve its ambitious goals. As a consequence, it led to reduced government funding and support for AI research in the UK, had a similar effect in other countries, and marked the onset of what is known as the "AI winter," a period of notable decline in interest and investment in the field. This episode underscores the profound influence such assessments can have on the trajectory of scientific fields. However, as Sarah also pointed out, errors can occur in the opposite direction: there is often a tendency to overestimate short-term effects while underestimating long-term implications, as exemplified by Paul Krugman, who in 1999 wrote: "By 2005 or so, it will become clear that the Internet's impact on the economy has been no greater than the fax machine's." The main challenge therefore arises: how do we accurately assess the impact of such technological advancements?

This question was also grappled with during the Tech Policy Leadership Conference, which opened with the presentation of seven tech priorities for the next government, as outlined in TechUK's new report:

  • Deliver an updated AI strategy and incentives for vital capital investment,
  • Remove barriers to the digitisation of our public services,
  • Open up the opportunities of the digital economy to everyone,
  • Leverage new technology to keep us safe online and tackle fraud,
  • Build a new regulatory model that recognises the strategic economic importance of our regulators,
  • Help UK tech companies to scale across the country,
  • Develop a new approach to trade and technology in a more fractious world.

The report and TechUK's public surveys also surfaced some interesting statistics: narrow AI is seen as the biggest immediate opportunity in the UK economy, and Manchester, Birmingham, and Cambridge are among the cities most recognised for their technology scenes outside London.

The conference also revealed critical gaps in three key areas: the absence of a coherent industrial strategy, a well-defined trade strategy, and a robust, evidence-based implementation strategy.

The biggest highlight of the TechUK event was the keynote addresses delivered by Rt Hon Michelle Donelan MP and Peter Kyle MP. Both speakers emphasised the importance of putting innovation into action.

Michelle highlighted the need to strengthen three essential UK pillars: skills, scalability, and regulation. She noted that the UK tech sector retains the #1 spot in Europe and #3 in the world, and that the UK AI sector is projected to grow to 1 trillion by 2040, a figure that dwarfs the current valuation of the entire UK tech sector. She reflected on the role the UK played in the Global AI Summit and spoke about the need to invest more in technology literacy, as well as in infrastructure, computing power, and SMEs. Michelle also addressed data and efficiency, setting the next objective of enhancing public services through a data-driven strategy and forthcoming data legislation. Lastly, she spoke about the need for a better balance between start-up culture and the current challenges of scaling up, pointing to the BridgeAI programme.

Peter's speech focused on Labour's plan to publish an AI strategy, introduce a Regulatory Innovation Office, and look at streamlining DSIT. He said it currently takes approximately 2.5 years to execute a tech programme within DSIT. He spoke about the government's need to build trust with the public, as public enthusiasm about AI decreased by 5 per cent in 2023 and the word "AI" is often associated with the word "scary", a sentiment driven largely by fears of job displacement or loss. Peter also intends to set goals for regulatory measures and benchmark them against the practices of international rivals, to enhance openness about the government's own use of AI, and to mandate that companies building frontier models submit safety reports.

One of the highlights of Peter's speech was his discussion of the future funding strategy for the NHS to adopt AI technology in daily practice. Before his speech, there was a panel discussion in which Dr Saira Ghafur highlighted the current challenges facing the health sector in adopting AI. Saira emphasised that current AI technologies have a constrained research horizon. She proposed that training for doctors and medical practitioners should focus on integrating AI into their workflows and procedures, and that the design of these workflows should prioritise patient-doctor engagement, enriching the context of medical processes and fostering patient trust and confidence in the outcome. As a gold standard for the tech industry, she cited the diffusion and procurement of continuous glucose monitoring tools for diabetes care. Moreover, she stressed that healthcare policy and adoption incentives should focus specifically on accuracy and precision, given the unacceptable risk of errors that could endanger lives. She also highlighted the need for a clearer return-on-investment strategy, covering solution usefulness, due diligence, time, and adoption cost.
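To illustrate why Saira's emphasis on accuracy and precision matters in clinical settings, here is a small sketch of my own with hypothetical numbers (not data from the panel): on an imbalanced screening task, a model can report reassuring accuracy while still missing many real cases and raising many false alarms.

```python
# Hypothetical screening scenario: 1,000 patients, 50 of whom have the condition.
# All counts are made up for illustration only.
tp = 30    # sick patients correctly flagged
fn = 20    # sick patients missed
fp = 40    # healthy patients incorrectly flagged
tn = 910   # healthy patients correctly cleared

total = tp + fn + fp + tn

accuracy  = (tp + tn) / total   # 0.94 -- looks reassuring at a glance
precision = tp / (tp + fp)      # ~0.43 -- fewer than half of alerts are real cases
recall    = tp / (tp + fn)      # 0.60 -- 40% of real cases are missed

print(f"Accuracy:  {accuracy:.2f}")
print(f"Precision: {precision:.2f}")
print(f"Recall:    {recall:.2f}")
```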

During coffee break conversations with several civil servants, I uncovered a range of challenges they face. Firstly, the impending election and its associated time constraints make it difficult to commit to significant decisions, as circumstances could swiftly change. The voluntary nature of applying AI principles currently offers insufficient motivation for the public sector, highlighting the need for a more proactive governmental role in technology implementation. Financial investments in AI are often weighed against the need to compensate for previous budget cuts, leaving underfunded departments struggling to retain talent. Civil servants frequently manage high-risk situations where algorithmic biases and errors could result in critical mistakes in healthcare or housing, underscoring the necessity for government support to mitigate these risks. Additionally, there is an evident lack of a structured approach to building and maintaining an open dialogue with users, adopters, academics, other civil servants, and the private sector. The picture painted here underscores the need for the government to place civil servants in leading roles, equipping them with the essential knowledge, tools, and resources they need.

My favourite panel, hosted by Sabina Ciofu, addressed the question, "What is Britain's role in a more complex and uncertain world of great power competition and fast technological change?" I especially enjoyed the remarks from Rt Hon Sir David Lidington about the extended role of the supply chain, which can no longer focus just on cost and speed but needs to evolve into a monitoring function that ensures resilience, sustainability, and ethical practices in response to the complexities of today's technological and geopolitical landscape.

In conclusion, these events highlighted the evolving thought processes in technology policy, underscoring the dynamic approaches required to navigate the future's challenges. I remain hopeful about the strides we will make in the future, but there is a lot of work that remains to be done.

#TechPolicy2024 #TechUK #ArtificialIntelligence #EmergingTech
