New Tech, New Threats, and New Governance Challenges: An Opportunity to Craft Smarter Responses?
INTRODUCTION
Significant technological advances are being made across a range of fields, including information and communications technology (ICT); artificial intelligence (AI), particularly machine learning and robotics; nanotechnology; space technology; biotechnology; and quantum computing, to name but a few. These breakthroughs are expected to be highly disruptive and bring about major transformative shifts in how societies function.
The technological advances in question are driven by a digital revolution that commenced more than four decades ago. These innovations center on the gathering, processing, and analysis of enormous reams of data emerging from the information sciences, with implications for countless areas of research and development. These advances promise significant social and economic benefits, increased efficiency, and enhanced productivity across a host of sectors.
TECHNOLOGY’S DISRUPTIVE POTENTIAL
But there are mounting concerns that these technologies and how they are used will pose serious challenges, including labor force dislocations and other market disruptions, exacerbated inequalities, and new risks to public safety and national security. The technologies are mostly dual use, in that they can be used as much to serve malicious or lethal purposes as they can be harnessed to enhance social and economic development, rendering efforts to manage them much more complex.1 Relatively easy to access and use, most of them are inherently vulnerable to exploitation and disruption from both near and far.
In parallel, geopolitical tensions around the world are growing, and most countries increasingly view these technologies as central to national security. The potential for misuse is significant. And greater economic integration and connectivity mean that the effects and consequences of technological advances are far less localized than before and are liable to spread to countries and industries worldwide.
Technological innovation is largely taking place beyond the purview of governments. In many cases, the rate of innovation is outpacing states’ ability to keep abreast of the latest developments and their potential societal impacts. And even if one or a handful of national governments devise policies for managing these effects, the global reach of many emerging technologies and their impacts requires new approaches to multilateral governance that are much more difficult to agree on. A greater number and variety of actors must be involved to initiate, shape, and implement both technical and normative solutions.
Yet, like governments, many of these other actors do not have (or simply do not invest in) the means to consider the broader, cross-border societal implications of their investments, research, and innovations. When they do so, identifying the most relevant or effective policy-oriented or normative-focused platforms to discuss these implications can be challenging, not least because existing platforms sometimes do not consider the variety of actors implicated, the cross-border reach of the technologies in question, and the different value and political systems at play. This is particularly the case when technologies are designed for profit-making alone and their trajectories are entirely dependent on market forces.
Camino Kavanagh was a nonresident scholar at the Carnegie Endowment for International Peace, where her research focused on international security, governance, and emerging technologies.
Today’s technological advances are deemed disruptive not only in market terms but also in the sense that they are “provok[ing] disruptions of legal and regulatory orders” and have the potential to “disturb the deep values upon which the legitimacy of existing social orders rests and on which accepted legal and regulatory frameworks draw.”2
Complex, dynamic frameworks already govern some fields of technology. For instance, cyberspace is governed by an amalgamation of existing international law and an ever-growing, complicated array of political agreements; technical standards and protocols; trade, business, and human rights standards; research principles; national regulations; and self-regulation in the private sector. These legal and normative touchstones are underpinned by existing norms, values, and principles. They span different policy areas and issues, enlist a dizzying host of actors, and straddle numerous international regimes. What is more, they are complemented by a growing body of confidence- and capacity-building measures and efforts aimed at enhancing security and resilience.3 Yet important elements of this regime remain contested, even as new risks and vulnerabilities—such as those related to the Internet of Things, AI, and other technologies—emerge and need to be managed.
LOOMING GOVERNANCE DILEMMAS
The World Economic Forum’s (WEF) founder, Klaus Schwab, has described technological advances (for better or worse) as a “revolution” due to their “velocity, scope, and systems impact.”4 In discussing what he has dubbed a “Fourth Industrial Revolution,” Schwab emphasized a number of emerging policy and normative issues. Similarly, a 2019 paper prepared for the WEF stressed the need to “transform traditional governance structures and policy-making models” and adapt more “agile” methods of governance.5
But responding to the attendant risks and challenges will not require just exploring new governance structures, tools, and processes. This task calls for a deeper understanding of the social, cultural, economic, and geopolitical contexts in which policy, norms, and regulations are crafted as well as a firmer grasp of the overarching questions of power and conflict that shape humanity’s relationships with technology. Moreover, significant public engagement is sorely needed, since governments and companies cannot—and should not—expect to resolve these dilemmas alone.
Some of the key challenges include:
Identifying key principles and values: Relevant actors must articulate a vision of the principles and values—such as equity, equality, inclusivity, responsibility, transparency, and accountability—that might be negatively impacted by certain technological advances and how those values might best be protected. This task will require new thinking on how to ensure certain technologies (like predictive algorithms, biotechnology, or technologies allowing space resource exploitation) do not exacerbate existing socioeconomic inequalities, or how best to respond to new encroachments on personal privacy involving the body and the brain. Certain forms of research (like some aspects of biological engineering) may need to be restricted.
Determining appropriate principles and values requires more than a shared understanding of how technological innovations are evolving, how they have been designed, and how they might be used. Such normative reflection also calls for a clear sense of the different values and principles held by various communities worldwide as well as the means and actors by which such values and principles should be protected. It also necessitates going beyond the mere articulation and proliferation of principles to the practical application and acknowledgment of associated challenges and limitations.
Engaging stakeholders effectively: Legislators, regulators, private business leaders, researchers, and civic actors need to be ready to respond more responsibly and effectively to the effects and consequences of technological advances and the potential societal risks they pose. Ongoing experiments with new rules, principles, and protocols to deal with concrete relevant policy issues are certainly noteworthy, one example being the WEF’s “agile” governance initiative.6 Yet implementing these responses at scale and applying them to a host of cross-border socioeconomic and security challenges will be a complex challenge. This task likely will require a blend of approaches, increased spending in independent public research, and the involvement of many actors other than just states. To this end, it is vital that relevant actors delineate domestic and international responsibilities to determine which technology-related challenges should be addressed at home and which problems require cross-border coordination and cooperation.
Broadening existing platforms of multilateral engagement: Clarifying how various actors (and not just states) can responsibly contribute to the work of multilateral mechanisms focused on how existing international law or political norms apply to state uses of certain technologies is imperative. These platforms include the various United Nations (UN) Groups of Governmental Experts (GGEs), as well as other specialized working groups concerned with matters of technology (including ICT, machine learning, autonomous weapons, biotech, and space technology) and international security. Such efforts can also advance ongoing discussions about how to ensure that multilateral mechanisms are agile enough to determine when new global norms or rules (binding and/or nonbinding ones) are required to manage technology-driven challenges and risks and constrain certain applications of these technologies or specific behaviors by states and other actors.
Crafting suitable regulations: Fresh approaches to policy and regulation are also needed. For instance, advances in ICT, machine learning, biotechnology, and the convergence of these technologies are already driving multifaceted discussions on the focus and rationale of relevant policies and the most suitable type of regulation (precautionary, preventive, reactive, or a combination of all three). Questions also abound around whether to opt for hard regulation, soft policy initiatives (such as guidelines, certification procedures, and labeling schemes) that stop short of binding regulations, or the current trend of self-governance measures by private entities. As noted above, a key question is how to ensure these regulatory solutions are fit for purpose, and whether they should be coordinated nationally or internationally. Given the cross-border effects of the technologies at play as well as the growing convergences between them, uncoordinated national rules and policies will likely be ineffective.
Enhancing transparency, oversight, and accountability: Finally, new policy and regulatory approaches will require greater investment in transparency, oversight, and accountability mechanisms. This will necessitate agreeing on the nature of national regulatory oversight bodies and determining whether they should be public, private, or of a mixed composition. This task should also entail ensuring that technology companies and organizations accept greater scrutiny. For example, tech companies should heighten internal monitoring and external reporting of their self-regulatory initiatives, provide appropriately insulated, publicly funded researchers with safe access to their data, and, above all, ensure that accountability covers all aspects of the supply chain and that both the direct and indirect costs (such as labor and environmental costs) of the technologies in question are clearly understood.7 Agreeing on what is ethically acceptable in terms of industry’s role in funding and participating in oversight mechanisms (such as ethics councils and advisory boards) is an equally important lever for injecting legitimacy into some of these processes. Such scrutiny would also help identify remaining gaps and engage more actors to help gauge if and how a certain technology or its application should be regulated.
CHALLENGES TO EFFECTIVE GOVERNANCE AND COORDINATION
The international environment is hardly conducive to discussions of how best to coordinate responses to the complex, cross-border dilemmas emerging around new technologies. In some corners, existing multilateral platforms are increasingly perceived as unsuitable for resolving these challenges. The international community is notoriously slow at adopting new rules and institutions to deal with new challenges, and the quandaries posed by questions of national sovereignty and democratic legitimacy are persisting. In contrast, corporate actors appear to be racing ahead, intent on shaping the “science, morality and laws” of new technologies such as AI, with limited public debate underpinning or guiding their efforts.8 Many of these same companies and the technologies they produce or exploit are increasingly viewed as instruments of state power, a fact that only adds to these sovereignty and legitimacy-related questions.
Meanwhile, growing strategic competition between the world’s leading powers, especially in high-tech sectors, does not bode well for multilateral efforts to respond cooperatively and effectively. Such a competitive landscape is contributing to regulatory fragmentation and will likely delay much needed normative and regulatory action. This potential impasse places strains on existing efforts and could further delay the attainment of pressing social and economic objectives such as the 2030 UN Sustainable Development Goals, which are already under stress. Moreover, the resulting trust deficit between countries poses a significant threat to international peace and security, one that existing political institutions are not necessarily prepared to handle.
Throughout history, new challenges (including those relating to technology and governance) have generally opened new opportunities and channels for cooperation. Today is no different, although the challenges at hand are highly complex and are emerging at a time of systemic political change and a rising sense of conflict and crisis. More meaningful dialogue and cooperation—however difficult—on how technological developments are affecting societies and the uses and applications of technology generating the most disruption and contestation are urgently required. Such an approach would likely afford greater legitimacy to emergent governance efforts, while also tethering them to the common good. As one expert noted in the context of the WEF’s Fourth Industrial Revolution initiative:
The moment you start saying that technology is the most powerful force today, then what you have done is argue that people have a right to shape how technologies are developed and used. In practice, this means consciously trying to shape technological trajectories, it means setting research agendas, it is to direct foreseeable technologies, to articulate uses that might benefit the majority and not the few; placing technology inside of democratic politics.9
Developed on the basis of interviews with experts and extensive research, this analysis assesses four areas of technology around which governance dilemmas are evident and discusses the emergent responses. These four areas include ICT, AI, biotechnology, and space technology. The decision to focus on these fields was informed by their disruptive character, the potential national and international security risks associated with their dual-use character, and the growing degree of state competition emerging around them. Importantly, this focus is also informed by the fact that these same issues are underscoring the urgent need for greater cooperation and strengthened governance frameworks to manage the associated risks. Greater understanding of these efforts can help inform how relevant actors approach current—and prepare for new—technology-related governance challenges.
INFORMATION AND COMMUNICATIONS TECHNOLOGY
While ICT is not a new area of technology, continuous advances in networked computing and other aspects of the field are converging with advances in other technological domains, greatly increasing human dependence on these digital tools.10 Governments around the world are establishing new institutions, identifying the policy implications of this growing digital dependence, and developing integrated frameworks for whole-of-government approaches to manage the resulting economic and societal transformations. Despite the associated benefits, humanity’s growing dependency on ICT continues to present significant risks. Cybersecurity and the stability of ICT systems more generally have become top policy priorities.
THE CURRENT NORMATIVE LANDSCAPE
The “regime complex” governing ICT stems from multiple sources: the existing body of international law, political agreements, and voluntary norms; an ever-growing body of technical protocols and standards on internet governance managed by technical groups and research communities; national regulations; self-regulation by the private sector; and other forms of nonbinding soft law.11 These norms span different topical areas, straddle numerous international regimes, and require the engagement of multiple actors.
These norms and governance mechanisms aim to strengthen cybersecurity in national and global terms; enhance international security and stability more broadly as well as general human well-being; and promote responsible behavior by drawing on existing rules, values, and principles. For instance, human rights, privacy norms, and the freedom of opinion and expression have been bolstered by a norm upholding the right to privacy in the digital age and the confirmation by the UN Human Rights Council that human rights apply online as they do offline.12
Many states, however, do not appear to be upholding these norms, and beyond basic privacy and human rights questions, there are increasing concerns surrounding the human costs of cyber operations, notably operations that affect healthcare and industrial systems or those that can generate systemic effects.13 Where cyber crime is concerned, progress is equally slow. States have not managed to agree on an international framework to counter cyber crime, although many states are leaning on the existing Council of Europe (Budapest) Convention and bilateral treaties as they adopt national cyber crime legislation and cooperate on this issue.
As for international peace and security, the work of a series of UN GGEs (five to date) has reaffirmed the applicability of international law, including the UN Charter, to cyberspace and has recommended a number of voluntary, nonbinding political norms aimed at encouraging states to use ICT responsibly.14 The norms seek to promote restraint, best practices, and other forms of positive behavior.
Specifically, many of the norms draw from existing principles of international law and address several facets of responsible use of ICT by states, including the importance of recognizing the challenges of attribution, preventing the use of a state’s territory for the commission of internationally wrongful acts, not conducting or knowingly allowing activity that damages critical infrastructure and essential services to the public, safeguarding national computer emergency response teams (CERTs) and their systems from malicious cyber activities, responding to requests for assistance by other states, and reporting and sharing information on vulnerabilities. The norms also include ensuring the integrity of supply chains, upholding human rights online and privacy rights in the digital age, and enhancing cooperation and information sharing with regard to terrorist and other criminal cases that involve cyber operations.15 Subsequently, the UN General Assembly recommended that states “be guided in their use of ICT” by the 2015 GGE report.16
Several international, regional, and other specialized bodies have since endorsed the GGE recommendations. Nonetheless, efforts to advance this work stalled when a new GGE (established in 2016) failed to produce a consensus report, mainly due to disagreements on international law and the future work of the group. As the topic became more politicized, the UN General Assembly’s First Committee agreed in December 2018 to establish two new parallel processes: an Open-Ended Working Group involving the entire General Assembly and a new twenty-five-member GGE.17
Other intergovernmental organizations complement these UN-led efforts. For example, regional organizations such as the Association of Southeast Asian Nations (ASEAN), the European Union (EU), and the Organization of American States (OAS) all have endorsed the GGE norms. So, too, have the Group of 20 (G20) and international financial institutions. Additionally, the G20 has developed guidance on strengthening the cyber resilience of the financial system and has sought to foster a norm aimed at protecting the integrity of financial data.18 Some specialized organizations, such as the International Atomic Energy Agency, have been actively developing capacity-building tools, guidance, and standards including, for instance, resources for protecting the computer and information systems of nuclear infrastructure.19
In national terms, states are under increasing pressure to ensure that government agencies, cybersecurity firms, and researchers discover and disclose cyber vulnerabilities in a more timely fashion and prevent these vulnerabilities from being illicitly traded or otherwise misused. In some states, these efforts are evolving into vulnerability equities processes (VEPs) or coordinated vulnerability disclosure mechanisms. While a principal aim is to strengthen transparency and oversight of government use of discovered zero-day vulnerabilities, there are concerns that such processes are bureaucratically complex and expensive and that they might remove pressure on companies to produce more secure products and services. Moreover, explicit processes for managing vulnerabilities might be seen as legitimizing government hacking.20 Yet, realistically, governments will unlikely eschew all use of vulnerabilities, so imposing greater due diligence, transparency, and oversight in this domain would be more beneficial than not doing so.
On a related note, another important factor is designing more secure ICT products and systems so states do not use vulnerabilities in the technology products and services that underpin people’s daily lives against citizens and other states. The costs to the global economy are certainly significant, and there are growing concerns about the potential human costs.21 The principle of security by design (see below) has been gaining currency among many engineers and entrepreneurs.
In light of persisting cybersecurity risks, governments also are moving toward more regulatory-focused solutions, many of which stop short of formal regulation. For instance, in 2018, the EU adopted a broad instrument called the Cybersecurity Act, which includes a voluntary certification framework to help ensure the trustworthiness of the billions of devices connected to the Internet of Things underpinning critical infrastructure, such as energy and transportation networks, and new consumer devices like driverless cars.22 The framework aims to “incorporate security features in the early stages of their technical design and development (security by design),” ensuring that such security measures are independently verified and enabling users to determine a given product’s “level of security assurance.”23 The effectiveness of such initiatives has yet to be gauged, although skeptics often point to challenges around voluntary certification schemes in other sectors. For instance, a scandal involving the automobile manufacturer Volkswagen (an incident commonly referred to as Dieselgate) showed the limitations of one such voluntary scheme. In such cases, the objectives may be good, but inherent conflicts of interest in process design, monitoring, and oversight tend to undermine these goals in the longer term.24
The Cybersecurity Act follows on the heels of the EU’s General Data Protection Regulation (GDPR), which seeks to bolster EU citizens’ data privacy and harmonize data privacy laws across Europe. The 2016 EU Directive on Security of Network and Information Systems was the first piece of legislation on cybersecurity that the EU adopted.25 In the United States, there is increasing pressure on companies to prioritize consumer protection and citizen safety, as well as to introduce “proactive responsibility and accountability into the marketplace,” including through product liability. Such an approach might be particularly useful when security flaws are easily prevented “by commonly accepted good engineering principles.”26
Meanwhile, technical issues related to internet infrastructure remain largely within the purview of the so-called I* organizations, which include the Regional Internet Registries, the Internet Corporation for Assigned Names and Numbers, the Internet Engineering Task Force, the Internet Architecture Board, the Internet Society, and the World Wide Web Consortium, as well as the regional associations of country code domain name registries.27 Initiatives such as the Internet Governance Forum promote multi-stakeholder policy dialogue on internet-related issues, while intergovernmental bodies (including the World Summit on the Information Society, the International Telecommunication Union, and the European Telecommunications Standards Institute) deal with some policy aspects of internet governance. And a number of UN departments and agencies provide governments with capacity building as well as technical, legislative, and other forms of support, as do national technical bodies such as CERTs and computer security incident response teams (CSIRTs), in line with a provision (Action Line 5) of the World Summit on the Information Society’s Tunis Agenda.28
Initiatives promoted by or otherwise involving other societal actors are also proliferating. For instance, in 2018, former UN secretary general Kofi Annan established the Annan Commission on Elections and Democracy in the Digital Age, which aims at “reconcil[ing] the disruptive tensions between technological advances and democracy.”29 Another body known as the Global Commission on the Stability of Cyberspace, established in 2017, is studying how norms can enhance stability in cyberspace. Following consultations, it produced a 2018 “norm package” intended to shape the behavior of both state and nonstate actors on issues ranging from preventing supply chain tampering to combating offensive cyber operations by nonstate actors.30 An earlier initiative, the Global Commission on Internet Governance, has advocated for an internet that is “open, secure, trustworthy and accessible to all,” stressing the need for a “new social compact” designed to protect the rights of users, establish norms for responsible public and private use, and ensure the kind of flexibility that encourages innovation and growth.31
Industry actors are also active in multiple ways. While some of these efforts may be seen as an attempt to forestall regulation, many aim to pressure industry actors or states to commit to behaving more responsibly. For instance, in 2018, Siemens launched a Charter of Trust for a Secure Digital World at the Munich Security Conference, a document that outlined principles that the initial signatories (eight companies and the Munich Security Conference) believe to be essential for establishing trust between political actors, business partners, and customers as the world becomes more dependent on digital technology.32 The number of charter signatories has since grown to sixteen.33
Microsoft, too, has promoted norms of responsible behavior for both state and industry actors, and the company has reportedly responded more positively than other corporate peers in terms of complying with new regulations such as the EU’s GDPR.34 The firm has also raised the idea of a “Digital Geneva Convention,” a binding instrument that would protect users from malicious state activity.35 Along with several other industry leaders, Microsoft has also announced the Cybersecurity Tech Accord, which advocates for increased investment in (and heightened responsibility for) cybersecurity by leading industry actors.
In 2018, the company launched a new initiative entitled Defending Democracy Globally.36 This initiative aims to work with democracies worldwide to (1) “protect campaigns from hacking”; (2) “increase political advertising transparency online”; (3) “explore technological solutions” to protect electoral processes; and (4) “defend against disinformation campaigns.”37 The initiative emerged in tandem with the company’s launch of a Digital Peace Now campaign, which calls for greater government action as well as improved cyber hygiene on the part of users. Interestingly, this campaign is silent on private sector action.38
In November 2018, the French government incorporated many of these initiatives under the umbrella of an initiative called the Paris Call for Trust and Security in Cyberspace, which scores of governments, industry players, and civil society actors joined.39 Yet the announcement that has produced the most headlines is Facebook founder Mark Zuckerberg’s call for greater government and regulatory action, notably in the areas of “harmful content, election integrity, privacy, and data portability” following the March 2019 attacks in Christchurch, New Zealand.40 Importantly, he stressed the need for more effective privacy and data protection in the form of a “globally harmonized framework” urging, somewhat ironically, that more countries adopt rules such as the EU’s GDPR as a common framework. Zuckerberg’s opinion piece received a lukewarm reception, and many experts remain skeptical of his intentions.41
WHAT LIES AHEAD?
Despite this progress, significant governance challenges remain for cyberspace. Efforts to not only protect data, privacy, and human rights online but also attend to national and international security concerns are improving in some cases. For instance, according to one assessment, the EU’s GDPR provides much stricter guidelines and “strict security standards for collecting, managing, and processing personal data.” But the instrument does provide exemptions for data controllers or processors when it comes to “national defense, criminal investigations, and safeguarding the general public.”42 Progress remains much more limited or has even regressed in other countries and regions.
On cyber crime, despite concerns about the growing scale, economic and societal costs, and other risks of online criminal activity, states have not been able to (and likely will not) agree on a common framework for dealing with cyber crime or other malicious online activity that imperils users and hampers economic growth and development. This state of affairs is unlikely to change, given that some states continue to insist on the need for a new, common framework, while others remain wedded to expanding the existing Budapest Convention.
There are other challenges too. Progress remains slow in terms of achieving public and private sector commitments to bridge existing technological divides and move the digital transformation agenda forward. Inequalities within and between states (and cities) are growing even as technological advances continue to be made. This situation may make it even more challenging to meet the UN Sustainable Development Goals.43
Some countries will further challenge modalities of internet governance, particularly states that view greater state involvement in internet governance as crucial to national security. These divergences over how the internet should be governed continue to foment tensions among states and other stakeholders.44 Meanwhile, several countries have announced they will seek to build their own national alternatives to the global internet, possibly further fracturing an (already fractured) world wide web, though some observers have questioned the feasibility of such alternatives.45
As for international security, while tensions between countries continue to fester around normative restraints on state behavior, two new multilateral processes will commence this year through an Open-Ended Working Group (September 2019–July 2020) and a new GGE (December 2019–May 2021).46 Many observers view the two processes as conflicting and competing initiatives (given that the former was proposed by Russia and the latter by the United States). Some parties also view them as outdated (since the main actors in both mechanisms are states, although, importantly, both have included consultative mechanisms to engage with other actors such as regional organizations, private companies, academia, and civil society). But there is a signaled interest in ensuring the processes are both complementary and constructive.
That said, there are concerns, for instance, that GGE discussions on how international law applies to cyberspace might once again hit a wall. As noted, disagreements on international law–related issues were partly what impeded the last GGE from producing a consensus report. Since then, some states have decided to publicly share their views on how international law applies, although, to date, only a few states—Australia, France, the United Kingdom (UK), and the United States—have done so.47 Nonetheless, it is hoped that the two new processes will make further contributions to the discussion, as per their mandates.48 Importantly, the resolution establishing the GGE also suggests that the report include an annex in which participating governmental experts can include national views on how international law applies to the use of ICT by states.49
Meanwhile, some countries—and some nonstate actors—appear to remain committed to a binding international treaty. Yet the likelihood of such a treaty is perhaps slim, not least because the key actors crucial to any agreement view cyberspace and cybersecurity in very different strategic terms; at present, there appear to be limited incentives to agree on a new regime.
Beyond binding international law, the coming twenty-four months will offer an important window into the progress made toward socializing and institutionalizing the political, nonbinding norms and confidence-building measures that past GGEs recommended. The two new UN processes and their associated consultative mechanisms will serve as important platforms for sharing national experiences and lessons on how states are implementing the norms and confidence-building measures nationally and through regional organizations, and how other actors actively contribute to this end.50
Other groupings will likely be proactive. For example, the Group of Seven (G7) members recently committed via the Dinard Declaration to sharing lessons and experiences on these norms, and it is likely they will channel those lessons and experiences into the multilateral process.51 Similarly, multi-stakeholder groups such as the Global Commission on the Stability of Cyberspace or industry initiatives such as the Tech Accord will surely present lessons and experiences from the norms they too have been advocating.
How countries hold each other accountable for violating norms is just as important. Indeed, some states’ persistent misuse (and potentially lethal use) of ICT is driving a dangerous security dilemma involving tit-for-tat activities that have significant escalatory potential. Beyond the fact that such activities raise serious questions about the rule of law, most related crisis-management or confidence-building mechanisms would likely prove ineffective in the event of escalation if there are no real channels of diplomatic dialogue between key states. Such channels are largely nonexistent at present. In this respect, more political and financial investment in operationalizing existing commitments to confidence building, track 1 or 1.5 dialogues, and other cooperative measures is imperative.
The growing number of initiatives aimed at fostering greater cybersecurity and stability do not (and perhaps cannot) deal with some of the structural issues driving insecurity and instability. This is particularly the case with respect to ICT products and services, which remain highly vulnerable to exploitation by actors with malicious intent.52 Greater, and more participatory, dialogue on the nature of global ICT market trends and the structural levers for making ICT products and services safer and more secure is urgently required and should not be inhibited by the growing (and valuable) focus on VEPs and other similar measures.
Finally, existing threats and vulnerabilities will surely be compounded by new problems. This means that conceptions of security will need to be reconsidered over time and that existing normative and governance frameworks will likely need to be adapted. For instance, new threats and vulnerabilities related to the Internet of Things are emerging: as the lines between human agency and “smart agent-like devices” become increasingly blurred, the safety and security of related services and devices remain serious problems.53 Likewise, new threats are also developing in relation to critical systems dependent on AI (such as the growing number of sectors and industries reliant on cloud computing), critical satellite systems, and information and decisionmaking processes, which are increasingly susceptible to manipulation for political and strategic effect. Heightened strategic competition and deteriorating trust between states further compound these challenges. More than ever, countries need to invest in diplomacy to foster greater dialogue, cooperation, and coordination on the ICT-related issues that pose the greatest risks to society.