7 essential factors for developing trustworthy AI

Written by Our News Team

The European Commission has publicly recognised the benefits of using Artificial Intelligence to boost productivity, increase accuracy and lower costs in areas such as healthcare, farming, financial risk management, fraud and cybersecurity threat detection, driver safety, and law enforcement. The Commission also recognises the ethical challenges of introducing AI and the need for governance over technologies that have the potential to harm, discriminate against, expose, or supersede people.

The Commission has outlined three stages for building trust in AI: publishing the seven essential factors for developing trustworthy AI; launching a large-scale pilot involving members of the European AI Alliance; and building international consensus on ethical AI with countries outside Europe, including members of the G7 and G20.

The seven essentials for trustworthy AI are listed below. In addition to respecting all applicable laws and regulations, trustworthy AI should meet the following key requirements; specific assessment lists aim to help verify that each requirement is met (a minimal illustrative sketch of such a checklist appears after the list):

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to foster positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
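
To make the idea of an assessment list more concrete, here is a minimal sketch of how a team might track a self-assessment against these seven requirements. The requirement names come from the Commission's list above; the checklist structure, field names, and example questions are illustrative assumptions, not the Commission's official assessment tool.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Requirement:
    """One of the seven key requirements, with yes/no self-assessment checks."""
    name: str
    # question -> answer (True = satisfied, False = not satisfied, None = unanswered)
    checks: dict[str, bool | None] = field(default_factory=dict)

    def open_items(self) -> list[str]:
        """Return checks that are unanswered or not yet satisfied."""
        return [question for question, ok in self.checks.items() if ok is not True]


# Illustrative checklist: requirement names follow the Commission's list;
# the example questions are assumptions for demonstration only.
assessment = [
    Requirement("Human agency and oversight", {
        "Can a human override or halt the system's decisions?": None,
    }),
    Requirement("Robustness and safety", {
        "Is behaviour tested against errors and inconsistencies across the life cycle?": None,
    }),
    Requirement("Privacy and data governance", {
        "Can data subjects access, correct, and delete data about themselves?": None,
    }),
    Requirement("Transparency", {
        "Are data sources, design choices, and decisions traceable?": None,
    }),
    Requirement("Diversity, non-discrimination and fairness", {
        "Has the system been evaluated for accessibility and disparate impact?": None,
    }),
    Requirement("Societal and environmental well-being", {
        "Have social and environmental impacts been assessed?": None,
    }),
    Requirement("Accountability", {
        "Is there a named owner and a redress mechanism for outcomes?": None,
    }),
]

if __name__ == "__main__":
    for req in assessment:
        for question in req.open_items():
            print(f"[{req.name}] open: {question}")
```

Running the sketch simply prints every requirement whose checks remain open, which is the kind of gap an assessment list is meant to surface before an AI system goes into production.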

Cathal McGloin, CEO of conversational AI platform company ServisBOT, comments: “The European Commission’s Ethical AI Guidelines will help to guide conversational AI developers to write code that protects us from ‘rogue’ chatbots that push the boundaries. In the realm of conversational AI, where natural language is fast becoming the new engagement interface, there are considerable benefits to be gained from implementing chatbot technology that does not breach customer privacy and that can adhere to solid ethical approaches. The challenge is that conversations, by their nature, are fluid, nuanced, and can take many different directions. The opportunity for conversational AI to transform engagement models and yield greater operational efficiency is highly attractive, but only when done right.”
