FHI: Technical report on AI standards governance

Artificial intelligence (AI) presents novel policy challenges that require coordinated global responses. Standards, particularly those developed by existing international standards bodies, can support the global governance of AI development. International standards bodies have a track record of governing a range of socio-technical issues: they have spread cybersecurity practices to nearly 160 countries, prompted firms around the world to incur significant costs to improve their environmental sustainability, and developed safety standards used in numerous industries, including autonomous vehicles and nuclear energy.

These bodies have the institutional capacity to achieve expert consensus and then promulgate standards across the world. Other existing institutions can then enforce these nominally voluntary standards through both de facto and de jure methods. AI standards work is ongoing at ISO and IEEE, two leading standards bodies. But these efforts focus primarily on improving market efficiency and addressing ethical concerns, respectively. There remains a risk that the resulting standards will fail to address further policy objectives, such as fostering a culture of responsible deployment and the use of safety specifications in fundamental research. Furthermore, leading AI research organizations that share concerns about such policy objectives are conspicuously absent from ongoing standardization efforts.

Standards will not achieve all AI policy goals, but they offer a path towards effective global solutions where national rules may fall short. Standards can influence the development and deployment of particular AI systems through product specifications for explainability, robustness, and fail-safe design, among others. They can also affect the larger context in which AI is researched, developed, and deployed through process specifications.

The creation, dissemination, and enforcement of international standards can build trust among participating researchers, labs, and states. Standards can serve to globally disseminate best practices, as previously witnessed in cybersecurity, environmental sustainability, and quality management. Existing international treaties, national mandates, government procurement requirements, market incentives, and global harmonization pressures can contribute to the spread of standards once they are established. Standards do have limits, however: existing market forces are insufficient to incentivize the adoption of standards that govern fundamental research and other transaction-distant systems and practices. Concerted efforts among the AI community and external stakeholders will be needed to achieve such standards in practice.

Ultimately, standards are a tool for global governance, but a tool that institutional entrepreneurs must actively wield to promote beneficial outcomes. Key governments, including China and the U.S., have stated priorities for developing international AI standards. Standardization efforts are only beginning and may become increasingly contentious over time, as has been witnessed in telecommunications. Engagement sooner rather than later can establish beneficial and internationally legitimate ground rules that reduce risks in international and market competition to develop increasingly capable AI systems.

In light of the strengths and limitations of standards, this paper offers a series of recommendations. They are summarized below:

  • Leading AI labs should build institutional capacity to understand and engage in standardization processes. This can be accomplished through in-house development or partnerships with specific third-party organizations.
  • AI researchers should engage in ongoing standardization processes. The Partnership on AI and other qualifying organizations should consider becoming liaisons with standards committees to contribute to and track developments. Particular standards may benefit from independent development initially and then be transferred to an international standards body under existing procedures.
  • Further research is needed on AI standards from both technical and institutional perspectives. Technical standards desiderata can inform new standardization efforts, and institutional strategies can chart paths for standards to spread globally in practice.
  • Standards should be used as a tool to spread a culture of safety and responsibility among AI developers. This can be achieved both inside individual organizations and within the broader AI community.

Read the full report
