
The European Commission has publicly recognised the benefits of using Artificial Intelligence to boost productivity, increase accuracy and lower costs in sectors such as healthcare, farming, financial risk management, fraud and cybersecurity threat detection, driver safety, and law enforcement. The Commission also recognises the ethical challenges of introducing AI and the need for governance over technologies that have the potential to harm, discriminate against, expose, or supersede people.
The Commission has outlined three stages for building trust in AI: publishing the seven key requirements for developing trustworthy AI; launching a large-scale pilot involving members of the European AI Alliance; and building international consensus on ethical AI with countries outside Europe, including members of the G7 and G20.
Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements. The seven essentials for trustworthy AI are:

- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability
Cathal McGloin, CEO of conversational AI platform company ServisBOT, comments: “The European Commission’s Ethical AI Guidelines will help to guide conversational AI developers to write code that protects us from ‘rogue’ chatbots that push the boundaries. In the realm of conversational AI, where natural language is fast becoming the new engagement interface, there are considerable benefits to be gained from implementing chatbot technology that does not breach customer privacy and that can adhere to solid ethical approaches. The challenge is that conversations, by their nature, are fluid, nuanced, and can take many different directions. The opportunity for conversational AI to transform engagement models and yield greater operational efficiency is highly attractive, but only when done right.”
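As a purely hypothetical illustration of the kind of privacy guardrail McGloin alludes to (not ServisBOT's actual product or method), a chatbot pipeline might redact obvious personal identifiers from a user message before it is logged or passed on. The regex patterns and the `redact_pii` helper below are assumptions for the sketch, covering only simple cases such as email addresses and phone numbers:

```python
import re

# Hypothetical guardrail sketch: scrub regex-detectable PII from a chat
# message before logging. Real systems would need far broader coverage
# (names, addresses, account numbers) and likely ML-based detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(message: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    message = EMAIL.sub("[EMAIL]", message)
    message = PHONE.sub("[PHONE]", message)
    return message

print(redact_pii("Reach me at jane.doe@example.com or +353 1 234 5678."))
# → Reach me at [EMAIL] or [PHONE].
```

A redaction step like this sits naturally between the user-facing channel and any persistence or analytics layer, so that fluid, unpredictable conversations do not leak customer data into logs.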