Analysis: the EU is seeking to protect its citizens from bad decisions made by high-risk AI systems
As part of its strategy for data, the European Commission has unveiled its first ever regulatory framework for artificial intelligence. It aims to protect the safety and fundamental rights of citizens from decisions made by high-risk AI systems. The proposal imposes specific obligations and requirements on public authorities and businesses that use, or intend to use, high-risk AI-based decision systems.
Much of this AI works by modelling human decisions through supervised deep learning, in which a neural network learns a prediction model. Training data is fed into the network, which can then be used to make predictions about new data or events.
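The loop described above can be sketched in a few lines of Python. This is a deliberately minimal, illustrative example: a single artificial neuron rather than a deep network, with made-up data, but the workflow is the same: fit a model to labelled training examples, then use it to classify a new, unseen input.

```python
# Minimal sketch of supervised learning: fit a single neuron to
# labelled examples by gradient descent, then predict on new data.
# The dataset and parameters here are purely illustrative.

def train(examples, labels, epochs=1000, lr=0.1):
    """Fit weights w and bias b by stochastic gradient descent
    on squared error between prediction and label."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(examples, labels):
            pred = w[0] * x1 + w[1] * x2 + b
            err = pred - y
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify a new point: 1 if the neuron's output is >= 0.5."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0.5 else 0

# Training data: points labelled 1 when both coordinates are "high".
X = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8)]
y = [0, 0, 1, 1]

w, b = train(X, y)
print(predict(w, b, (0.85, 0.90)))  # a new, unseen point
```

A real system would use a deep network with many such neurons and far more data, but the principle is identical: the quality of the predictions depends entirely on the quality of the training data.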
But there is a caveat: there must be a degree of trust that the data used to train the neural network does not contain 'noise' in the form of missing data, erroneous data or data bias. For instance, a recent MIT study highlighted how 10 of the most widely used datasets for training AI neural networks, including those behind self-driving cars, medical imaging devices and credit-scoring systems, contain data label errors.
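Some of this 'noise' can be caught before training ever begins. Below is a minimal sketch of the kind of data-quality audit that flags missing values and labels outside the allowed set; the dataset, field names and label set are illustrative assumptions, not taken from any real system.

```python
# Minimal sketch of a pre-training data audit: flag records with
# missing feature values or labels outside the allowed set.
# All names and data here are illustrative assumptions.

ALLOWED_LABELS = {"cat", "dog"}

dataset = [
    {"features": [0.2, 0.7], "label": "cat"},
    {"features": [0.9, None], "label": "dog"},   # missing feature value
    {"features": [0.4, 0.4], "label": "dgo"},    # mislabelled record
]

def audit(records):
    """Return (index, problem) pairs for every suspect record."""
    problems = []
    for i, rec in enumerate(records):
        if any(v is None for v in rec["features"]):
            problems.append((i, "missing feature value"))
        if rec["label"] not in ALLOWED_LABELS:
            problems.append((i, "unknown label"))
    return problems

print(audit(dataset))
# → [(1, 'missing feature value'), (2, 'unknown label')]
```

Checks like these catch only the mechanical errors; subtler problems, such as the historical bias discussed next, survive them untouched.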
Furthermore, networks can be trained on datasets that are inherently prejudiced and biased. In 2015, Amazon found that its AI recruiting system was not rating all applicants fairly and showed a bias against female applicants. The root cause was that the system had been trained on a 10-year historical dataset in which men dominated technology roles across the industry. As a result, the AI neural network learned to prefer male candidates over female candidates for roles at Amazon.
There are also wider ramifications of AI decision-making in terms of decision-making drift, where businesses and public authorities become dependent on AI systems for both simple and complex decisions. Left unchecked, and without human judgment or intervention, this may allow biases to propagate through the trained neural networks.
There are some controls to stem this. The European Commission is taking proactive steps to create the first ever legal framework on AI, which will apply to providers and end-users of high-risk AI. While 'high-risk AI' is not explicitly defined, what constitutes high risk can be inferred from the criteria listed in Articles 6 and 7 of the framework. This would include AI used in self-driving cars, the assessment of student state exams or the processing of patients' medical x-ray images.
Specifically, the proposed legal framework recognises that the "use of AI with its specific characteristics (e.g., opacity, complexity, dependency on data, autonomous behaviour) can adversely affect a number of fundamental rights enshrined in the EU Charter of Fundamental Rights". The European Commission has used a risk-based approach to enable businesses to foster the creation of "Trustworthy AI" systems.
The framework outlines specific obligations intended to prevent biased AI-based decision-making in education and training, employment, policing, the judiciary and essential services, where it can infringe on fundamental human rights. Ultimately, the new AI legal framework will create trust, transparency and accountability between public authorities, businesses and citizens. It will also create legal certainty and bolster enforcement mechanisms across European Union member states, and will provide a roadmap for public authorities and businesses to innovate, reducing obstacles to providing AI-related products and services.
Irish businesses that currently use, or plan to use, high-risk AI should prepare a strategy to ready themselves for the introduction of the new regulations. The proposed legal framework will create a European AI Board to oversee enforcement of the framework in member states. Additional measures will be put in place to support start-ups and small and medium-sized enterprises in complying with the framework.
Irish businesses will have to create robust risk management controls. Human oversight will also be of paramount importance to ensure compliance with the proposed regulations so that fundamental rights are not contravened. The regulatory framework is currently being presented by the European Commission and will go to the European Parliament and the Council for review.
Dr Trevor Clohessy is a lecturer and researcher in the Department of Enterprise and Technology at the School of Business at Galway Mayo Institute of Technology. Fearghal McHugh is a lecturer in the Department of Enterprise and Technology at the School of Business at Galway Mayo Institute of Technology.
The views expressed here are those of the authors and do not represent or reflect the views of RTÉ