Building on the work of its group of independent experts, the European Commission is launching a pilot phase to ensure that the ethical guidelines for artificial intelligence (AI) development and use can be implemented in practice. The plans are a deliverable under the AI strategy of April 2018, which aims to increase public and private investment to at least €20 billion annually over the next decade, make more data available, foster talent and ensure trust.
VP for the Digital Single Market Andrus Ansip said in a statement: “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
Commissioner for Digital Economy and Society Mariya Gabriel said in a statement: “Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”
Artificial intelligence (AI) can benefit a wide range of areas, such as climate change mitigation and financial risk management, and can also help to detect fraud and cybersecurity threats. However, AI also brings new challenges for the future of work and raises legal and ethical questions. The EC is taking a three-step approach: setting out the key requirements for trustworthy AI; launching a large-scale pilot phase to gather feedback from stakeholders; and working on international consensus-building for human-centric AI.