Google has published a white paper highlighting five specific areas where concrete, context-specific guidance from governments and civil society would help to advance the legal and ethical development of AI:
Explainability standards:
• Assemble a collection of best-practice explanations, with commentary on what makes them praiseworthy, to provide practical inspiration.
• Provide guidelines for hypothetical use cases so that industry can calibrate how to balance the benefits of complex AI systems against the practical constraints that different standards of explainability impose.
• Describe minimum acceptable standards in different industry sectors and application contexts.
Fairness appraisal:
• Articulate frameworks for balancing competing goals and definitions of fairness.
• Clarify the relative prioritization of competing factors in some common hypothetical situations, even though this will likely differ across cultures and geographies.
Safety considerations:
• Outline basic workflows and standards of documentation for specific application contexts that are sufficient to demonstrate due diligence in carrying out safety checks.
• Establish safety certification marks to signify that a service has been assessed and has passed specified tests for critical applications.
Human–AI collaboration:
• Determine the contexts in which decision-making should not be fully automated by an AI system, but should instead require a meaningful “human in the loop”.
• Assess different approaches to enabling human review and supervision of AI systems.
Liability frameworks:
• Evaluate potential weaknesses in existing liability rules and explore complementary rules for specific high-risk applications.
• Consider sector-specific safe-harbor frameworks and liability caps in domains where liability laws might otherwise discourage societally beneficial innovation.
• Explore insurance alternatives for settings in which traditional liability rules are inadequate or unworkable.
Google’s white paper noted that while differing cultural sensitivities and priorities may lead to variation across regions, it should be feasible to agree on a high-level checklist of factors to consider. Longer term, it may also be helpful to work with standards bodies (such as ISO and IEEE) to establish global ‘due diligence’ best-practice processes for developing and applying AI.
“So far much of the current AI governance debate among policymakers has been high level; we hope this paper can help in evolving the discussion to address pragmatic policy ideas and implementation,” Google wrote.