7-Step Guidelines Released by EU for Trustworthy AI
No technology raises outright fear quite like artificial intelligence, and it is not only ordinary people who are worried. Google and Facebook have invested in dedicated AI ethics research centers. Similarly, Canada and France teamed up last year to form an international panel to debate AI's "responsible adoption." Now the European Commission has released its own guidelines calling for "trustworthy AI."
The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines are not meant to be, or to interfere with, policy or regulation. Instead, they offer a loose framework built around seven requirements. The emphasis is that AI should adhere to the basic ethical principles of respect for human autonomy, prevention of harm, fairness, and accountability. The guidelines call for attention to protecting vulnerable groups, such as people with disabilities and children, and state that citizens should have full control over their data.
The Commission will work with stakeholders to identify areas where additional guidance might be required, and to work out how best to implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. Given the potential to build things like autonomous weapons and fake-news-generating algorithms, it's likely more governments will take a stand on the ethical concerns AI brings to the table.
A synopsis of the EU's guidelines is listed below.
1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
4. Transparency: The traceability of AI systems should be ensured.
5. Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills, and requirements, and ensure accessibility.
6. Societal and environmental well-being: AI systems should be used to foster positive social change and enhance sustainability and ecological responsibility.
7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
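To make the Commission's assessment-list idea concrete, the seven requirements could be tracked per system with a simple checklist. The following is a hypothetical sketch only; the Commission's actual assessment list is far more detailed, and the names and structure here are assumptions for illustration:

```python
# Hypothetical sketch of an assessment checklist for the EU's seven
# requirements. The real assessment list is far more detailed; this only
# illustrates recording which requirements a system has addressed.
REQUIREMENTS = [
    "Human agency and oversight",
    "Robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination, and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def unmet_requirements(assessment: dict) -> list:
    """Return the requirements a system's self-assessment has not met."""
    return [req for req in REQUIREMENTS if not assessment.get(req, False)]

# Example: a system whose self-assessment covers everything but transparency.
assessment = {req: True for req in REQUIREMENTS}
assessment["Transparency"] = False
print(unmet_requirements(assessment))  # ['Transparency']
```

A checklist like this would flag gaps early in development rather than after deployment, which is the spirit of the Commission's pilot phase.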
Source: https://www.dw.com/en/artificial-intelligence-the-eus-7-steps-for-trusty-ai/a-48250503 , http://europa.eu/rapid/press-release_IP-19-1893_en.htm
Kevin Jones
Kevin Jones, Ph.D., is a research associate and a cybersecurity author with experience in penetration testing, vulnerability assessments, monitoring solutions, surveillance, and offensive technologies. Currently, he is a freelance writer covering the latest security news and other happenings. He has authored numerous articles and exploits, which can be found on popular sites like hackercombat.com and others.