AI and international technical standards


International standards are a vital but frequently overlooked aspect of the international trade system. Technical standards set out specific characteristics that a product is required to meet—such as its size, shape, design, functions, and performance, or the way it is labelled or packaged—before it is put on sale. In most cases, the negotiation of international technical standards is a dry, technical exercise that takes place under the auspices of little-known international bodies. However, for AI and other frontier technologies, shaping international standards has become a geostrategic imperative, with governments including the US and China announcing an intent to shape international standards and vying for leadership positions in the key international standard-setting bodies.

Standards are an important policy instrument for regulating AI technologies and ensuring that they are reliable, trustworthy, and accountable. In most countries, AI standards are at an early stage, tend to be sector-specific rather than horizontal (applying to all uses of AI), and their stringency varies from sector to sector. For instance, while the use of AI in health and civil aviation is heavily regulated, its use in sports is much less regulated (Ciuriak and Rodionova, 2021). As is often the case for new technologies, many jurisdictions rely principally on voluntary standards and industry self-regulation, including in controversial areas like connected and autonomous cars. Deference to industry reflects governments’ lack of technical expertise and the challenges of designing mandatory performance standards for emerging technologies that are complex and evolving rapidly (Peng, 2021). Many guidelines aiming at ‘ethical AI’ have been proposed by industry, raising concerns of ‘ethics washing’, whereby industry players adopt a light-touch approach to self-regulation in a bid to reassure consumers without making substantive changes to their practices (Radu, 2021).

As concerns about the ethical impacts of AI have grown, governments have started to develop cross-cutting regulations that will apply to all AI technologies. The EU has proposed the world’s first comprehensive attempt to regulate AI. Its proposed AI Act would ban some uses of AI (such as social scoring), heavily regulate high-risk uses (such as hiring and admissions software, and credit scoring), and lightly regulate less risky AI systems (such as customer service chatbots); meanwhile, its proposed AI Liability Directive would make it easier to enforce civil law claims for compensation for damage caused by AI systems. While the EU is a first mover, other national and sub-national governments are following suit: the Chinese municipality of Shanghai has adopted its own AI regulation, and several legislative proposals have been tabled in the US.
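The tiered structure described above can be illustrated with a short sketch. The tier names, example mappings, and the `tier_for` helper below are illustrative simplifications drawn from the examples in the text, not the Act’s legal definitions or categories:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers, loosely following the EU AI Act proposal."""
    PROHIBITED = "prohibited"      # banned outright, e.g. social scoring
    HIGH_RISK = "high_risk"        # heavily regulated, e.g. hiring software
    LIMITED_RISK = "limited_risk"  # lightly regulated, e.g. chatbots

# Hypothetical mapping of the use cases mentioned in the text to tiers.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.PROHIBITED,
    "hiring and admissions software": RiskTier.HIGH_RISK,
    "credit scoring": RiskTier.HIGH_RISK,
    "customer service chatbot": RiskTier.LIMITED_RISK,
}

def tier_for(use_case: str) -> RiskTier:
    """Return the illustrative tier for a use case (default: limited risk)."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.LIMITED_RISK)
```

The point of the tiered design is that regulatory burden scales with the classification, so the classification step itself becomes the high-stakes question for exporters.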

National AI standards have implications for trade, as a firm wishing to export must be able to demonstrate that its product conforms to the AI standards of the importing market. Under the EU’s proposed AI Act, for instance, exporters of AI products deemed ‘high risk’ will need to implement a risk management process; conform to higher data standards; more thoroughly document their AI systems and systematically record their actions; provide information to users about AI functions; and enable human oversight and on-going monitoring (Engler, 2022). They will also need to undergo conformity assessment by designated bodies before entering the EU market. Third-party suppliers in the AI supply chain must also be able to show compliance. Data suppliers, for instance, will have to explain how data were obtained and selected, the labelling procedure, and the representativeness of the dataset.
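The data-supplier obligations just described amount to structured documentation of a dataset’s provenance. A minimal sketch of such a record is below; the field names and the example values are hypothetical, chosen only to mirror the three points in the text (how data were obtained and selected, the labelling procedure, and representativeness):

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Hypothetical provenance record for a dataset supplied into an AI
    supply chain, covering the documentation points named in the text."""
    name: str
    collection_method: str         # how the data were obtained
    selection_criteria: str        # how records were selected or filtered
    labelling_procedure: str       # manual, crowd-sourced, automated, ...
    representativeness_notes: str  # known gaps or sampling biases
    known_limitations: list[str] = field(default_factory=list)

# Example record a data supplier might hand to a downstream AI exporter.
record = DatasetRecord(
    name="cv-screening-corpus",
    collection_method="collected from public job boards with consent notices",
    selection_criteria="postings from 2019-2022, English language only",
    labelling_procedure="two independent annotators per record, adjudicated",
    representativeness_notes="under-represents non-EU labour markets",
    known_limitations=["gender imbalance in engineering roles"],
)
```

Keeping this information in a structured record, rather than prose, is what lets compliance be checked mechanically at each link of the supply chain.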


It will be updated in the next post.


#Artificial_intelligence #AI #global_economy #innovation #technology #digital #training #management #data #development #opportunities #intelligence

