Google publishes AI ethics white paper calling for global standards

Google has published a white paper highlighting five specific areas where concrete, context-specific guidance from governments and civil society would help to advance the legal and ethical development of AI:

Explainability standards
• Assemble a collection of best practice explanations along with commentary on their praiseworthy characteristics to provide practical inspiration.
• Provide guidelines for hypothetical use cases so industry can calibrate how to balance the benefits of using complex AI systems against the practical constraints that different standards of explainability impose.
• Describe minimum acceptable standards in different industry sectors and application contexts.

Fairness appraisal
• Articulate frameworks to balance competing goals and definitions of fairness (the sketch after this list shows how two common definitions can conflict).
• Clarify the relative prioritization of competing factors in some common hypothetical situations, even if this will likely differ across cultures and geographies.
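To make concrete why competing definitions of fairness need explicit prioritization, here is a minimal Python sketch on made-up data (the groups, outcomes and predictions are assumptions for illustration, not taken from the white paper). It shows that a single set of predictions can satisfy demographic parity (equal selection rates across groups) while violating equal opportunity (equal true positive rates).

```python
# Hypothetical illustration: two common statistical definitions of fairness,
# demographic parity and equal opportunity, can disagree on the same
# predictions, which is why frameworks for prioritizing between them matter.

def rate(values):
    """Fraction of 1s in a list (0.0 if the list is empty)."""
    return sum(values) / len(values) if values else 0.0

# Toy data: (group, true outcome, model prediction) -- all made up.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 0),
    ("A", 0, 0), ("A", 0, 0), ("A", 0, 0), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0),
]

for group in ("A", "B"):
    preds = [p for g, y, p in records if g == group]
    preds_given_positive = [p for g, y, p in records if g == group and y == 1]
    # Demographic parity compares P(pred = 1) across groups;
    # equal opportunity compares P(pred = 1 | y = 1) across groups.
    print(group,
          "selection rate:", rate(preds),
          "true positive rate:", rate(preds_given_positive))
```

Running the sketch prints equal selection rates (0.4 for both groups) but different true positive rates (0.8 versus 0.5), which is exactly the kind of trade-off the requested frameworks would need to adjudicate.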

Safety considerations
• Outline basic workflows and standards of documentation for specific application contexts that are sufficient to show due diligence in carrying out safety checks.
• Establish safety certification marks to signify that a service has been assessed as passing specified tests for critical applications.

Human-AI collaboration
• Determine contexts when decision-making should not be fully automated by an AI system, but rather would require a meaningful “human in the loop”.
• Assess different approaches to enabling human review and supervision of AI systems (one common confidence-threshold pattern is sketched below).
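As a rough illustration of what a meaningful “human in the loop” can look like in practice, the sketch below (the threshold, names and outcomes are assumptions, not from the white paper) lets the model decide only high-confidence cases and routes everything else to a human reviewer, recording who made each call so it can be audited later.

```python
# Hypothetical sketch of a human-in-the-loop decision flow: automate only
# high-confidence cases, defer the rest to a person, and log both paths.

from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    outcome: str      # "approved" / "denied"
    decided_by: str   # "model" or "human"
    confidence: float

AUTO_DECISION_THRESHOLD = 0.95  # assumed policy value for illustration

def decide(case_id: str, model_score: float, human_review) -> Decision:
    """Automate only high-confidence cases; defer the rest to a reviewer."""
    confidence = max(model_score, 1.0 - model_score)
    if confidence >= AUTO_DECISION_THRESHOLD:
        outcome = "approved" if model_score >= 0.5 else "denied"
        return Decision(case_id, outcome, "model", confidence)
    # Low confidence: a person makes the call, with the model score as context.
    return Decision(case_id, human_review(case_id, model_score), "human", confidence)

# Example usage with stand-in reviewer functions.
print(decide("case-001", 0.98, lambda cid, score: "approved"))
print(decide("case-002", 0.60, lambda cid, score: "denied"))
```

The threshold is the policy lever here: lowering it automates more cases, while raising it sends more decisions to people, which is the kind of context-specific choice the guidance above asks governments and civil society to help determine.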

Liability frameworks
• Evaluate potential weaknesses in existing liability rules and explore complementary rules for specific high-risk applications.
• Consider sector-specific safe harbor frameworks and liability caps in domains where liability laws might otherwise discourage societally beneficial innovation.
• Explore insurance alternatives for settings in which traditional liability rules are inadequate or unworkable.

Google’s white paper noted that while differing cultural sensitivities and priorities may lead to variation across regions, it should be feasible to agree on a high-level checklist of factors to consider. In the longer term, it may also be helpful to work with standards bodies (such as ISO and IEEE) to establish global standards as ‘due diligence’ best-practice processes for developing and applying AI.

“So far much of the current AI governance debate among policymakers has been high level; we hope this paper can help in evolving the discussion to address pragmatic policy ideas and implementation,” Google wrote.

Read the full white paper
