Opinion: algorithms increasingly rule our world, but most of us lack the training or knowledge to understand how they work
By Birgit Schippers, St Mary's University College Belfast
The row over errors in the Leaving Cert calculated grades mirrors the political furore over algorithmic grading in Britain and Northern Ireland. For the class of 2020, algorithms became a symbol of computer-automated social inequality and a barrier to social mobility. They produced a disproportionately negative outcome for high-performing students from historically low-performing schools.
From RTÉ Drivetime, guidance counsellor Brian Mooney and Brian Stanley TD discuss the error in the Leaving Cert calculated grading system
Algorithms, such as those used in exam grading, are automated data processing techniques that proceed via a series of computational steps. These steps transform data input into output – creating a 'decision' or prediction – with limited or no human involvement. And they work at scale: they deliver a fast analysis of huge datasets that can be composed of diverse sources, such as text, sound or images.
As someone who researches how new technologies impact on our human rights and on democratic politics, I am concerned that the use of opaque and little understood technological systems will replace debate, contest and the deliberative quality of democratic will formation. What is particularly worrying is how policy decisions based on algorithmic prediction present a veneer of technological accuracy and impartiality, which appears to be superior to flawed human judgement.
To most of us, the workings of algorithms seem to lack any apparent connection to the real world. But they have real-life consequences: from credit scoring to job applications, performance management and workplace surveillance; from high-frequency trading to dynamic pricing in online retail; and from welfare decisions to predictive policing.
From RTÉ Radio 1's Liveline, listeners react to the news of errors in the Leaving Cert calculated grading system
Algorithms can be a force for good, for example in medical diagnosis, in design and architecture, or even in music composition. They propel our Internet search engines and Netflix or Amazon recommendations. But these search engines can provide us with information that is biased, fake or downright harmful. For example, research has shown that the racial and gender bias of Internet search engines amplifies harmful racist and misogynistic representations of black girls and women.
The actions of Facebook and Cambridge Analytica in the 2016 Brexit referendum and the US presidential election of the same year have become textbook examples of algorithmic interferences in the democratic process. Practices of misinformation and the microtargeting of prospective voters through individually tailored political advertising exploit individual fears and vulnerabilities for political gain. They undermine the shared information basis of political communities and destroy what philosopher Hannah Arendt called our 'common world'. Meanwhile, the use of facial recognition algorithms, already deployed at Irish airports, has a chilling effect on our right to freedom of assembly, association and expression, and on our democratic political culture.
These algorithms also illustrate how data sets are rarely free from human bias. For example, risk scoring algorithms used by US courts and parole boards to establish the probability of reoffending have revealed a significant racial bias. There are worries that risk scoring and need classification in welfare assessment will create a 'digital welfare dystopia'. Instead of attending to individual needs, opaque algorithmic scores, which are usually beyond scrutiny or challenge, extrapolate welfare assessments from the behaviour patterns of a group.
From RTÉ Radio 1's Drivetime, Philip Boucher-Hayes looks into the algorithm behind this year's Leaving Certificate results
The opaque nature of algorithmic tools raises serious concerns. Even if we know and understand the data that goes into the algorithm (and we usually don't), most citizens are unable to open up the black box of algorithmic decision-making. This could be because we are not sufficiently trained or knowledgeable, or because the algorithm is simply too complex. The proprietary or classified nature of some algorithms – as intellectual property of a corporation or as a state secret – can also prevent transparency.
This lack of transparency means that algorithmically generated decisions, such as those used to grade our children’s educational performances, bypass scrutiny and challenge. It has led organisations such as the British Ada Lovelace Institute and the US-based AI Now to call for the introduction of algorithmic audits and algorithmic impact assessments. Such robust and independent scrutiny of the algorithmic black-box, and efforts to improve algorithmic literacy, are urgently required.
Algorithms wield power, but we should not view them as separate from the social world of humans. Contrary to British Prime Minister Boris Johnson's claim that a mutant algorithm was responsible for the UK's exams fiasco, we should think of algorithms as elements of a socio-technical system that includes data and algorithmic code, as well as human values and policy preferences. Because they extrapolate from the past into the future, and from the collective to the individual, there is a real danger that algorithmic decision-making and prediction fail to consider the qualities, quirks, needs and rights of individual human beings.
Dr Birgit Schippers is Senior Lecturer in Politics at St Mary's University College Belfast.
The views expressed here are those of the author and do not represent or reflect the views of RTÉ