Last month, I had the privilege of being sponsored by the Future Leaders Network to attend the two-day New York Digital Summit of the World Humanitarian Forum (WHF), which according to its website is the world’s ‘largest and most inclusive nonpartisan forum in humanitarian aid and international development.’
The annual Forum convenes decision-makers from across the public, private and non-profit sectors, along with emerging leaders. This was a special year for WHF: it celebrated its 75th anniversary and, thanks to COVID-19, it was the first time the conference was held virtually.
There were plenty of talks and roundtable discussions but all were united by a common concern: is the international community able to deliver the Sustainable Development Goals (SDGs) by 2030? What will be the impact of COVID-19?
The aim of this blog series is to shed light on the main themes that were explored and to suggest actions we can all take to achieve the SDGs. These themes include:
Youth
Tech-for-good and innovation
Eradicating poverty and hunger
Philanthropy and partnerships
We’ll begin with the first theme: Youth.
The main question we’re concerned with here is whether young people have what it takes to pioneer change in an age of deep uncertainty.
The speakers at WHF believe so. And so do I.
In this article, I’ll be referring mainly to the roundtable titled Youth: Our Future.
The panelists included
Vivian Onano (moderator), Founder/Director at Leading Light Initiative and Youth Advisor at Global Education Monitoring Report, UNESCO
Safoora Biglari, Director, Community, One Young World
Siddarth Satish, Young Ambassador, Ariel Foundation International
Sophie Daud, Chief Executive Officer, Future Leaders Network
Mete Coban MBE, Executive Director, My Life My Say
Joseph Watson, Youth Advisory Board, EY Foundation
Franco Perez Diaz, Global Vice President of Business Development and External Relations, AIESEC
A lot was said, but I’m going to limit my discussion here to what were, in my view, the main issues the panelists sought to address:
Getting young people in decision-making positions
Addressing the employment gap
Supporting young entrepreneurs
Creating an inclusive space for young people to engage with politics
Let’s start with issue #1.
Young people in decision-making positions
Sophie dealt with this question in spectacular fashion. Her answer can be divided into three parts: (1) increase youth representation, (2) equip young people with the skills they need, and (3) recognise young leaders.
First, Sophie pointed out that while opportunities exist to sit on boards and attend important events like the G7 and G20 Youth Summits, these opportunities are ‘far too few and far between’. In her view, we need to create more mechanisms and open up more platforms that put the youth voice at the heart of international development and multilateral decision-making.
Second, Sophie made the fantastic point that it is not enough merely for young people to have a seat at the table. We must go a step further and equip them with the skills to have a tangible impact; otherwise we effectively ‘set them up to fail’. Sophie made the subtle but important point that the training needed to become an effective leader is often limited to those who demonstrate leadership qualities early. This leads to a vicious circle wherein those who are perceived as possessing ‘innate’ leadership skills are disproportionately represented in leadership positions. The way to break this circle is by recognising that while some people may be naturally gifted in the skills associated with effective leadership, leadership itself is a skill that can be learned and refined through training. This is something FLN is doing through their Academy.
Finally, Sophie submitted that young leaders need to be elevated and promoted for their work as a way to debunk the notion that being a leader requires one to be ‘older and wiser’. That is not to say we should not accord more seasoned leaders the respect they deserve. However, to Sophie’s second point, leadership is not like wisdom teeth that sprout when you hit a certain age — it’s a skill that is distributed across demographics and requires cultivation. I wholeheartedly endorse Sophie’s points.
I also appreciated Sophie’s other contributions during the talk, including her transparency about the opportunities and challenges she has faced throughout her leadership journey, having previously worked in the public sector. Growing up, Sophie was discouraged from entering the public sector because it was perceived as ‘boring’ and ‘terribly paid’. She discovered for herself that these perceptions were false and has since done outreach work to give children and students a more accurate depiction of what the sector is like.
In dealing with the challenges of being a young professional, Sophie expressed the importance of backing yourself, becoming a master of your craft and working collaboratively. As a civil servant, you’re almost guaranteed to work with individuals with whom you’ll disagree. It’s important, therefore, to become ‘attuned to your values’ and to recognise the ‘shades of grey’ in which you and your colleagues can find common ground.
In the next blog, I’ll summarise what the speakers had to say on the remaining issues.
Disclaimer: The views expressed in this article, unless otherwise stated, are those of the participant(s) and their organisations and do not necessarily represent those of the World Humanitarian Forum or its advisory board members or my own. This article does not imply official endorsement or acceptance of the views expressed or the support of specific agendas.
The science fiction (Sci-Fi) genre can teach us many things about humanity. Chief among them: new technologies usually either terrify us as we imagine the dystopian consequences that may accompany them, or fill us with boisterous optimism about the revolutionary benefits they hold. Sci-Fi is a genre of ideas, of speculation and, in a sense, of hype about what’s next.
Legal technology (LegalTech) is no exception.
*Cue the anecdote*
Last year I had to write a report and deliver a presentation on what I thought a top-tier law firm would look like in 30 years’ time. The group project was part of my legal scholarship at Freshfields Bruckhaus Deringer — a leading commercial law firm — and gave me the opportunity to interview a suite of experts about their views on the future of law.
The experts included partners and other executives at McKinsey & Company, Goldman Sachs, Deloitte and the Bank of England, as well as Freshfields. Contrary to the ‘scary robots will take all our jobs’ narrative peddled by the futurist Ray Kurzweil in ‘The Singularity is Near: When Humans Transcend Biology’ (2005), these experts gave nuanced accounts of the potential impacts of technological innovation on the legal industry.
After hearing from these experts and engaging in some scholarly debate of our own, the group concluded that LegalTech and artificial intelligence (AI) capabilities have been hyped, really hyped.
In my last post, I made the point that technologists tend to possess an unbridled optimism towards the prospects of new technologies. What I didn’t explore were the political and social factors that give rise to their hyperbolic claims.
This blog is not intended to breathlessly chase the headlines regarding law firms’ latest innovation efforts. Instead, I’m interested in how hype and promises legitimise and constrain present actions for better and for worse.
What do I mean by ‘hype’?
I’m writing this blog one month after The Legal Geek conference — the world’s first LegalTech start-up event. The conference is a reflection of the explosion of LegalTech events. It included sessions on how AI can differentiate firms from their competitors.
Through practical workshops, the conference emphasised AI’s potential for saving time and scaling legal work. Examples of LegalTech include BCLP’s Sharedo, which verifies property details against public records, and Kira, used by Freshfields, which leverages proprietary machine learning technology to automatically identify and extract relevant information from contracts.
Clearly, the current meaning of AI in law is far removed from science fiction concepts. Yet there’s such a strong temptation to characterise the emergent field of LegalTech as revolutionary and inevitable, rather than as an incremental advance on existing technologies and techniques that may or may not have huge dividends.
Until the technologies mature and the processes are fine-tuned, the pervasive optimism that underpins the marketing and speculation surrounding LegalTech will remain largely in the realm of fiction.
Why the hype?
A major factor is the ability to attract investment. LegalTech is becoming a household name in the commercial world, partly in response to clients’ demands for the death of the billable hour, but also because of the global competitiveness and unpredictability of the legal market. No law firm aspires to be the next Nokia or Blockbuster by failing to respond to the shift in legal service delivery.
By boldly declaring that LegalTech is a revolutionary field of technological innovation with exhilarating prospects for the world of law, LegalTech providers and start-ups generate a flurry of excitement among investors and law firms.
As Selin explains, hype is ‘active, shaping and constructive’ and it has an impact on day-to-day ‘decision-making, alliance building and resource allocation’. We can see this in the growing number of law firms forming partnerships with LegalTech start-ups. Take Neota Logic: it recently partnered with the Irish law firm McCann FitzGerald, promising brand differentiation and a higher return on investment (ROI) by providing the firm with a digital extension of its business.
Paradoxically, expectations of LegalTech seem to be at their strongest even though the technology is still in its infancy. Hype around LegalTech, and AI generally, has established a narrative of inevitability: the myth that LegalTech is an unstoppable force guaranteed to fundamentally alter the way lawyers do their jobs.
A similar law-like inevitability was attributed to the exponential growth in the processing power of computers through Moore’s Law. But recent studies have shown that the law which promised us that processor performance would double every two years is, to put it crudely, dead.
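The compounding implied by a ‘doubling every two years’ law is worth spelling out, because it explains why the law’s apparent death matters so much to forecasts built on it. A minimal sketch (the function name is mine, not from any source cited here):

```python
# Projected growth under a "doubling every `period` years" law:
# value(t) = value(0) * 2 ** (t / period)

def doubling_projection(base: float, years: float, period: float = 2.0) -> float:
    """Return the projected value after `years`, doubling every `period` years."""
    return base * 2 ** (years / period)

# Over 20 years, the law predicts a 2**10 = 1024x improvement --
# which is why exponential narratives feel revolutionary, and why
# they collapse so visibly when the doubling stops.
print(doubling_projection(1.0, 20))  # 1024.0
```

The point is not the arithmetic itself but how steeply the promised curve bends: a claim of ‘law-like’ exponential growth commits you to four orders of magnitude over a career, which is exactly the kind of forceful fiction hype trades on.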
The reality is that corporate strategy gives impetus to LegalTech. There’s no perceivable law in nature which states that LegalTech will define the future of the legal industry and yet the hype around it would have you believe otherwise.
In this sense, hype is a handmaid of technological determinism. It would have you assume that LegalTech and legal AI will determine the development of law firms’ structure and values in a primary and all-encompassing way. Hype is rhetoric at its most powerful. As Van Lente (1993) notes, hype as promises has the ability to create ‘forceful fictions’ that are implicated in the innovation efforts of law firms.
But we can’t assume the present position of LegalTech is some kind of absolute that doesn’t warrant further probing. Just like any business trying to survive the rigours of the corporate jungle, LegalTech start-ups care about their bottom line. Law firms, too, need to convince clients they are innovating in order to preserve client relationships.
What’s the big deal?
The LegalTech market is highly fragmented. But the hype around LegalTech that is currently gathering headlines tends to conflate diverse technologies under the single umbrella term of AI, which creates a speculative social bubble. Even legal AI, as a subset of LegalTech, consists of various solutions.
On one end, legal AI solutions like Kira, LawGeex and Luminance make lawyers’ lives easier and more productive by performing contract review and analysis. On the other end, we have Lisa and Neota Logic, which supposedly threaten to replace lawyers entirely by capturing and automatically replicating their expertise through machine learning.
By conflating these technologies, the hype around LegalTech makes it hard to disentangle fact from fiction, the mundane from the extraordinary. It’s not that either side of the spectrum is theoretically invalid. It’s not impossible, for example, that AI systems will perform better than human lawyers at some tasks. At least that’s what we can infer from the AI program called Case Cruncher Alpha that beat commercial lawyers at predicting whether the Financial Ombudsman would allow a claim of mis-sold payment protection insurance (PPI) after being given the basic facts of hundreds of cases.
But the conflation makes it hard to reflect on the inherent qualities of the various technologies across different contexts. As Cambridge law lecturer Felix Steffek and Ian Dodd from a company called Premonition note, had PPI experts been doing the predicting, the AI program probably would not have outperformed the humans. The lesson we can take from this example is that we do ourselves a disservice by taking the sci-fi-inspired leap of logic of concluding that AI software like Case Cruncher Alpha spells doom for lawyers. Instead, we must keep a critical eye on the nature of LegalTech and legal AI under various conditions.
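To make concrete what ‘predicting outcomes from basic facts’ can amount to, here is a deliberately crude sketch in the spirit of, though certainly not the actual method behind, Case Cruncher Alpha: each basic fact simply votes for the outcome it historically co-occurred with. All fact labels and training cases below are hypothetical.

```python
from collections import Counter, defaultdict

def train(cases):
    """cases: list of (facts, outcome), where facts is a frozenset of basic
    fact labels and outcome is 'allowed' or 'rejected'."""
    stats = defaultdict(Counter)
    for facts, outcome in cases:
        for fact in facts:
            stats[fact][outcome] += 1
    return stats

def predict(stats, facts):
    """Each fact votes for the outcomes it co-occurred with in the history."""
    votes = Counter()
    for fact in facts:
        votes.update(stats[fact])
    return votes.most_common(1)[0][0] if votes else "rejected"

# Hypothetical PPI-style claims reduced to basic fact labels.
history = [
    (frozenset({"policy_undisclosed", "pressure_sale"}), "allowed"),
    (frozenset({"policy_undisclosed"}), "allowed"),
    (frozenset({"claim_out_of_time"}), "rejected"),
    (frozenset({"claim_out_of_time", "pressure_sale"}), "rejected"),
]
model = train(history)
print(predict(model, {"policy_undisclosed"}))  # allowed
```

A real system would weight facts and validate against held-out cases, but the sketch shows the underlying shape of the task: outcome prediction is pattern-matching over labelled history, which is why domain experts who already know those patterns are much harder to beat than generalist lawyers.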
Without clarification and clear definition of what LegalTech providers mean when they weave their accounts of futures and ‘revolutionary’ technologies, uncertainty and political agendas give rise to bias.
Not everyone is buying into the hype. A 2018 report by Jomati, a legal profession consultancy, found that firms’ keenness to adopt AI solutions has been exaggerated, as many firms have taken a ‘watch and wait’ approach before choosing to invest. Richard Susskind expressed a similar sentiment in ‘Tomorrow’s Lawyers’, where he explains that AI systems are costly to build and maintain (p186). Some have gone as far as asserting that most forms of LegalTech are ‘glorified excel spreadsheets’. There are other concerns with LegalTech and AI broadly, too, that are dividing opinions.
For example, facial-recognition technology has become so advanced that it can reportedly detect someone’s sexual orientation. In the wrong hands, such technology could lead to outrageous privacy violations. Law-enforcement officials across the globe are using AI to identify criminals, but they may also encroach on the privacy of ordinary citizens. Countries with long records of surveillance and human-rights abuses, including China, are already using AI to monitor political activity and quash dissent. AI can also be biased when it comes to criminal conviction, sentencing and the prediction of crime.
There’s also the prospect of legal AI contributing to the rise of monopolies in the legal industry, which would stifle innovation and consumer choice. Retailing is an example of how AI can help huge corporates gain a lion’s share in the market. Amazon, which makes extensive use of AI, controls around 40% of online commerce in the US, helping it build moats that make it difficult for rivals to compete.
Hype has the potential to distract us from these other applications of AI even though they could have cataclysmic effects on civil liberties and resource allocation.
Perhaps we need to hype them up, too?
The other side effect of hype is that it could lead LegalTech providers and recipients to fall into what Rayner (2004) describes as the novelty trap. As I established in the last section, the LegalTech project being propelled by firms and start-ups has inevitably invited suspicion. The response of some firms has been to assert that some continuity will be maintained.
For example, law firms are joining standards-setting organisations that will produce the technical standards which seek to maintain the conventions and traditions that law firms typically promise their clients.
But most standards are voluntary in the sense that they are offered for adoption by industries without being mandated in law. It’s only when regulators start breathing down law firms’ necks that the standards become legal requirements.
How can we be sure that the damage won’t have already been done by the time this happens? Plus, a pivot back to standards and traditions seems suspiciously at odds with the revolutionary aspect of the LegalTech promise that made it so attractive to begin with. What is so appealing about LegalTech if it is dramatically altered in the course of actual implementation in law firms through standards and traditions?
These implications underscore the importance of hype in the material construction of reality — past, present and future — and provide a justification for thinking about the future and LegalTech’s place in it.
It’s not very clear how we resolve the issues to which hype gives rise. Is the hype still justified despite its neglect of the impingement on fair and equal treatment? Does this neglect constitute an erasure or flight from truth? Whose view ultimately matters?
I feel like there isn’t an ultimate solution. We’re dealing with uncharted territory, so of course, it’s not going to be straightforward.
But I don’t think we should rely on distorted fictions to drive innovation.
Principles matter, especially when it concerns the law.
Almost thirty years have passed since the racist murder of Stephen Lawrence. The event was a watershed in legal history as it exposed fundamental weaknesses of the criminal justice system (CJS). Six years of campaigning by Baroness Lawrence led to a public inquiry that found the Metropolitan Police — the UK’s largest police force — guilty of ‘institutional racism’. Coined in Sir William Macpherson’s report (1999), the term became a cornerstone for anti-racism and contributed to equality reforms like the Race Relations (Amendment) Act 2000. Such reforms represented a paradigm shift in race relations in the UK and helped the country move beyond the race riots of the ’80s and casual bigotry of the ‘no blacks, no dogs, no Irish’ variety. There was even talk that Britain was becoming a post-racial society.
Then, in May 2020, came a discordant ring. News of George Floyd’s murder at the hands of US law enforcement reignited global conversations about race relations. As with the murder of Stephen Lawrence, George Floyd’s treatment by the police was an affront to the rule of law. It was a stark reminder that the life of a black man can be typecast as not deserving the same respect and protection from officers of the law. Of course, issues of stereotyping and unconscious racial bias are less explicit nowadays, but they remain commonplace. We must accept that they still disadvantage litigants in civil, criminal and immigration cases, and hamper the prospects of aspirant lawyers.
Just under 14% of practising barristers and 15% of first-six pupils in 2019 were from BAME backgrounds, according to the Bar Standards Board (BSB). According to the Lammy Review (2017), black people are six times more likely to be stopped and searched by the police than white people. But it is not just the police that appear to be guilty of racial bias; so, too, is the judiciary. The Lammy Review showed that BAME defendants were 240% more likely to be given a prison sentence for a drug offence than white defendants. Black people make up 3% of the general population yet account for 12% of prisoners and 21% of children in custody.
These disconcerting statistics naturally raise the question: how should the law and the legal profession respond? In summary, I will argue that we must adopt a pluralistic approach to generate a fair consensus. Like the proverbial human cell that renews itself after injury, our generation has the opportunity to restore trust and confidence in the CJS despite its long history of racial bias. I propose we do this in three ways.
Implement a plurality of initiatives
Lord Chancellor and Secretary of State for Justice, Robert Buckland, rightly asserts that the CJS is ‘not a single entity’ but rather ‘an ecosystem of interconnecting and mutually dependent parts’ (Ministry of Justice, 2020). The complexity of the CJS presents challenges that cannot be resolved through any single intervention. As such, we need interdisciplinary conversations and partnerships — from mental health professionals and economists to charities and grassroots communities — and a plurality of initiatives. One is publishing sentencing remarks in the Crown Court in video, audio and written format. This would address the ‘trust deficit’ identified by Lammy by making justice more transparent, pandemic-proof and comprehensible for victims, witnesses and offenders. Digitisation is a costly but worthwhile investment.
Second, I recommend compulsory unconscious bias training for the Courts and practitioners. The first step to behavioural change is recognition. By requiring judges and practitioners to confront and reflect on their own biases, we can reduce their impact. Another outcome is that judges will learn to adapt their style of communication to suit the different cultures of those appearing in court.
Commit to responsible innovation
It is a truism that navigating the challenges before us will not be feasible if we retain the mindsets and fallacies that created them. Take, for example, the Gangs Violence Matrix (GVM): an intelligence tool used by the Metropolitan Police to identify and risk-assess gang members across London. Two major issues with the GVM are (1) it treats knife crime primarily as an enforcement issue and (2) it conflates criminals and victims of gang-related crimes. (1) is problematic because this approach only deals with the eventual manifestation of the problem rather than the underlying causes. To address disproportionality you must address deprivation. (2) is problematic because the demographics of those on the database do not reflect reality: 78% are black yet black people are responsible for 27% of serious youth crime (Amnesty, 2018). There is no denying that innovation through data collection serves a purpose. The danger, however, is in treating trivial factors like one’s music preferences as emanations of one’s criminality. When datasets are conferred this level of symbolic value, they threaten to pass as incontrovertible proof of causal relationships that are empirically invalid. By deferring agency to the data and elevating it to the level of the symbolic, it has the ability to incriminate unfairly.
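The scale of the mismatch in the Amnesty figures quoted above can be made concrete with a one-line calculation:

```python
# Figures from the Amnesty (2018) finding quoted above.
gvm_black_share = 0.78    # share of people on the Matrix who are black
crime_black_share = 0.27  # share of serious youth crime attributed to black people

# How many times over-represented black people are on the database,
# relative to their share of the offending the database claims to track.
over_representation = gvm_black_share / crime_black_share
print(round(over_representation, 1))  # 2.9
```

In other words, taking Amnesty’s figures at face value, black people appear on the Matrix at nearly three times the rate their share of serious youth crime would predict, which is precisely the kind of gap that exposes the database as reflecting enforcement patterns rather than reality.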
Uphold individual sovereignty
Like an infinitely reconfiguring Rubik’s cube, we’re all a puzzle of identities that shifts with time, space, experience and context. Group identity is a useful starting point since many of our experiences are analogous. However, the BAME community is not a monolith. To improve representation in the legal profession, we have to avoid excessive intellectualisation and engage with the intricacies of the individual through contextual recruitment. The core skills that a lawyer requires e.g. communication, teamwork, resilience, are substantively irrelevant to one’s identity group. By providing one with equal opportunities to strengthen these skills through e.g. work experience, scholarships, mentorship, the stereotypes normally associated with one’s group identity have less weight, which reduces inequality of outcome. By working regeneratively, we can change perceptions and create the conditions for individuals and communities to adapt, evolve and thrive.
Data-driven technologies in public services can serve as instruments for control, manipulation and punishment. My central claim is that data-driven technologies that unfairly target marginalised groups are rationalised by a call for mechanical objectivity (MO). The aim of this essay is to show why MO is partly to blame for oppressive algorithms and why it is misguided to view MO as an epistemic virtue, tout court. I will reference two examples to support my thesis: (1) Galton’s composite portraits; (2) Eubanks’s ‘Digital Poorhouse’. There is no reason to think modern data scientists share Galton’s eugenicist ambitions. However, there are important parallels to be drawn between their methodologies and rationales.
I will submit two arguments. First, MO is viewed as an epistemic virtue partly because we tend to hastily conflate indexicality and symbolism (Sekula, 1986, p55). This is an error since photos and digital datasets are, at best, idealised models which provide a partial representation of phenomena.
At worst, they distort phenomena and provide plausible deniability when the technologies produce unfair outcomes. Therefore, we cannot rely on them to generate essentialist or universal claims. Second, if objectivity is to be moralised, then we must interrogate the moral limits of MO. History is replete with instances in which MO aided justice and democracy but the contrary also applies. Unless we accept MO as an absolute epistemic virtue, which I grant is implausible, then my thesis is valid.
However, critics might respond to my arguments by asserting the impenetrability and moral blindness of algorithmic reasoning. While this may be appealing for parties wishing to absolve themselves of moral responsibility, in Sections 2 and 3 I show that we cannot defer agency to these technologies. Instead, I advocate the interposition of human decency to circumvent automating oppression.
1. CRITICAL EXPOSITION
Let the machines do the talking
Daston and Galison (2007) define MO as ‘the insistent drive to repress the wilful intervention of the artist-author, and to put in its stead a set of procedures that would, as it were, move nature to the page through a strict protocol, if not automatically’ (p121). Their main argument here is that scientists held MO as an epistemic virtue because seeing nature clearly — that is, without subjective projections — could only be achieved through mechanically produced artefacts. This is because, in their view, mechanical apparatuses like cameras seemed to enable nature to ‘speak for itself’ (p120) in a way that surpassed human methods of interpretation. MO stood in contrast with the previous brand of objectivity which Daston and Galison term ‘truth-to-nature’. This view held that selecting, idealising, simplifying and beautifying were essential to the scientific representation of nature (p43). Whereas truth-to-nature encouraged wilful intervention of the scientist, MO required scientists — as a matter of ethical compunction and discipline — to eliminate their individual judgement (p48). The agency of the knower was limited to creating the appropriate conditions for the mechanically objective apparatus to perform its representation of nature. Even though Daston and Galison speak in historical terms, I agree that remnants of both brands of objectivity persist in scientific practice (p46).
PHOTOGRAPHY, AUTOMATISM, INDEXICALITY AND SYMBOLISM
Images do lie
In my view, the three components of MO that provide its initial warrant are (1) automatism, (2) indexicality and (3) symbolism. According to Daston and Galison, mechanical objectivists favoured cameras because they could produce images ‘untouched by human hand’, that is, in an automated fashion (p42). While they do not discuss indexicality explicitly, their account suggests that scientists’ advocacy of MO was partly driven by the notion that a photograph was an index. Or, in Mitchell’s terms, photographs were viewed as a ‘direct physical imprint’, akin to ‘a fingerprint left at the scene of a crime or lipstick traces on your collar’ (1992, p2).
There is no denying that on their own photographs record a physical trace of a contingent moment in time. The danger is in viewing them as literal ‘emanations of the referent’ (Barthes, 1982, p80–81) that reveal hidden truths by virtue of a camera’s automatism. When photographs are conferred this level of symbolic value, they threaten to pass as incontrovertible proof of ‘essential’ features, general laws and causal relationships that are empirically inadequate (Sekula, 1986, p55). Hence Peirce’s conceptual distinction between indexicality and symbolism. By deferring agency to the camera and elevating it to ‘the level of the symbolic’ (ibid), it has the ability to incriminate or vindicate (Sontag, 2008, p5). It is this very fact — the camera’s oscillating status of incrimination and vindication, of threat and promise — that concerns me the most about giving primacy to MO.
GALTON’S COMPOSITE PORTRAITS
It’s all in the type
Sir Francis Galton (1822–1911), who founded eugenics in Britain, employed MO as a mode of investigating the underlying traits that constitute the ‘criminal type’. Sekula (1986) explains how Galton fabricated composites:
‘[composites] worked by a process of successive registration and exposure of portraits in front of a copy camera holding a single plate. Each successive image was given a fractional exposure based on the inverse of the total number of images in the sample […] Thus, individual distinctive features, features that were unshared and idiosyncratic, faded away into the night of underexposure. What remained was the blurred, nervous configuration of those features that were held in common’ (p47)
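In modern terms, the procedure Sekula describes is simply a pixelwise average: with n portraits each given 1/n of the total exposure, features shared by every sitter remain at full strength while one-off features are diluted by a factor of n. A toy sketch with made-up ‘portraits’:

```python
def composite(portraits):
    """Equal fractional exposure of n portraits = the pixelwise mean."""
    n = len(portraits)
    return [sum(pixels) / n for pixels in zip(*portraits)]

# Three hypothetical 4-pixel faces: pixel 0 is a feature shared by all
# sitters; pixels 1-3 are idiosyncratic, bright in only one portrait each.
faces = [
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 0.0, 0.0, 1.0],
]
blend = composite(faces)
# The shared feature survives at full strength (1.0); each idiosyncratic
# feature 'fades into the night of underexposure' (down to 1/3).
print(blend)
```

Seeing the procedure as a mean also makes the statistical sleight of hand visible: an average of a hand-picked sample shows what the sample has in common, not an underlying ‘type’ with any causal standing.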
Galton (1878) wrote that a composite portrait:
‘represents the picture that would rise before the mind’s eye of a man who had the gift of pictorial imagination in an exalted degree […] The merit of the photographic composite is its mechanical precision, being subject to no errors beyond those incidental to all photographic productions.’ (p97; emphasis mine).
Galton placed a lot of explanatory power in composites because he maintained that automatism endowed the camera with a greater degree of ‘precision’ than even the most gifted artist or scientist. Further, Galton was possessed by the ideology that existing class relations in England could be naturalised and quantified and sought to devise a programme of social hygiene through selective breeding (Sekula, 1986, p42). Galton was motivated by (1) the classicist instinct to perceive ancient Greeks as a higher race (specifically, two ranks higher than the English according to Galton) and (2) a utilitarian vision of social betterment (pp 42, 65). That is, by taking measures to reduce the numbers of the ‘unfit’ Galton claimed to be pushing the English social average toward an invented, bygone Athens, and away from an equally invented, threatening residuum of ‘degenerate urban poor’ who were preordained for unhappiness (p44).
What Galton failed to see was that he had elevated the physiognomic descriptions captured by the composites to the level of the symbolic in a tautological fashion (ibid). Namely, Galton set out to demonstrate that those with a reputation for criminality bred criminals without locating an independent causal feature in the composites that could explain the priority of nature over nurture. For example, on what grounds could Galton presuppose that the criminal type could be found at the centre of the composite and that only the ‘gross features of the head mattered’ without regard for the periphery details of the image (p48)? By hypostatising criminality and generating essentialist claims that were broad in scope from empirically specific observations, Galton created a caricature of inductive reasoning and statistical inference. And yet, composite images based on Galton’s procedure, first proposed in 1877, persisted widely over the following three decades (p40). Why? Largely because Galton together with his proponents uncritically espoused the merits of the camera’s mechanical procedure of obtaining group characteristics. They blindly espoused MO.
2. MY CRITIQUE
EUBANKS’S DIGITAL POORHOUSE
Algorithms as false witnesses
My first argument is that MO is viewed as an epistemic virtue partly because we tend to hastily conflate indexicality and symbolism. I have shown that Galton’s composite portraits committed this error in his attempt to locate essentialist traits like criminality. So how does MO apply to the modern data-driven technologies used in public services, which Eubanks calls the Digital Poorhouse? The shorter answer: these data-driven technologies rely on a sequence of mechanically produced rules and conventions to automate inferences and predictions about large datasets. Automation embodies MO’s espousal of automatism and non-interventionism. Like photographs, digital data points are indexical, ceteris paribus. But the fact that the inferences and predictions are mechanically produced does not preclude the possibility that they are products of the historically contingent biases and assumptions of the people doing the coding. Hence the need to interrogate (1) the data’s symbolic claims and (2) the scope of those claims. The longer answer requires more exposition.
According to Eubanks (2018), the Digital Poorhouse comprises the databases, algorithms, risk models and other forms of digital technology that ‘quarantine’ the marginalised (p12), who face higher levels of data collection (p6). She appropriates the term ‘poorhouse’ because she observes a parallel between the nineteenth-century county poorhouses that housed and managed the poor and modern methods of poverty management in the public sector. She argues that automated decision-making subjects the marginalised to ‘invasive surveillance, midnight raids, and punitive public policy that increase the stigma and hardship of poverty’ (p12). Support for her thesis comes from three examples: (1) the automated provision of Medicaid in Indiana; (2) homeless services in Los Angeles, which used an algorithm to distribute scarce subsidised apartments; and (3) the Allegheny Family Screening Tool (AFST), an algorithm designed to reduce the risk of child endangerment in Allegheny County (p10). I will limit the discussion to examples (1) and (3), as they are sufficient to establish my thesis.
First, in early 2006, the Mitch Daniels administration released a request for proposal (RFP) to outsource and automate eligibility processes for TANF, food stamps, and Medicaid. While the project promised to ‘reduce fraud, curtail spending, and move clients off the welfare rolls’, it was a failed attempt to privatise and automate the process for determining welfare eligibility in Indiana (p46). Daniels blamed the state’s Family and Social Services Administration (FSSA) for ‘contributing to a culture of welfare dependency’ (ibid). He insisted that transitioning from interpersonal casework to electronic communication would make FSSA offices ‘more organized and more efficient’ (p47). Daniels’s claims were later contested factually, but what interests me here is that after IBM secured the $1.16 billion contract to automate the system (p48), Daniels’s mandate to ‘reduce ineligible cases’ (p46) and streamline eligibility determinations took precedence over helping the poor.
Daniels’s appeal to automation led to the system losing its human face, which supports my second argument about the moral limits of MO. Among other issues, caseworkers no longer had the final say in determining eligibility, and performance metrics designed to expedite eligibility determinations incentivised call centre workers to terminate cases prematurely (p50). As a result, ‘between 2006 and 2008, the state of Indiana denied more than a million applications for food stamps, Medicaid, and cash benefits, a 54 percent increase compared to the three years prior to automation’ (p51). This adversely affected desperately ill children and African Americans in particular. Eventually, the situation deteriorated to the point that Daniels had to acknowledge that the experiment had failed and cancel the contract with IBM.
I agree with Eubanks’ assessment that many of the administrative errors were the result of ‘inflexible rules that interpreted any deviation from the newly rigid application process […] as an active refusal to cooperate’ (p50). The words ‘inflexible rules’ here bear a striking resemblance to MO’s maxim of adhering to ‘strict protocol’ (Daston and Galison, 2007, p121) even if it results in patent injustice. MO seems to have provided Daniels’s administration with technological cover for its concerted effort to shunt people off welfare. Even though the technology was not particularly sophisticated, it gave the administration plausible deniability for refusing welfare to so many eligible applicants. This was compounded by the ‘noninterventionist’ approach imposed on the caseworkers, whose decision-making power was sacrificed at the altar of automation, and on the call centre workers, who were rewarded for blindly deferring to the dictates of the performance metrics. In principle, the automated system should have served as a tool for decision-making, not as a decision-maker. But in true MO spirit, it provided the administration with the ethical distance it needed to make the inhumane choice of denying eligible people welfare.
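The contrast between the automated system as decision-maker and the automated system as a mere tool can be made concrete with a deliberately simplified sketch. Nothing below reflects Indiana's actual system: the field names, the output strings and the `caseworker_follows_up` flag are all hypothetical, invented only to illustrate the 'inflexible rules' pattern.

```python
# Hypothetical caricature of an 'inflexible rules' eligibility check:
# any missing field is classified as a refusal to cooperate and the case
# is closed, with no path for a caseworker to exercise judgement.

REQUIRED_FIELDS = {"income_proof", "id_document", "signed_form"}

def automated_decision(submitted: set[str]) -> str:
    """The machine as decision-maker: any gap terminates the case."""
    if REQUIRED_FIELDS - submitted:
        return "denied: failure to cooperate"
    return "eligible"

def assisted_decision(submitted: set[str], caseworker_follows_up: bool) -> str:
    """The machine as a tool: a gap is flagged for a human to resolve."""
    if REQUIRED_FIELDS - submitted:
        if caseworker_follows_up:
            return "pending: caseworker follow-up"
        return "denied"
    return "eligible"

# An applicant missing one document is summarily denied by the first
# system, but routed to a human by the second.
print(automated_decision({"income_proof", "id_document"}))
# denied: failure to cooperate
print(assisted_decision({"income_proof", "id_document"}, caseworker_follows_up=True))
# pending: caseworker follow-up
```

The point of the sketch is structural: both functions encode the same rigid rule, but only the second leaves room for the human intervention whose absence Eubanks documents.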
Second, consider the AFST, a cutting-edge machine-learning algorithm developed by a team of economists at the Auckland University of Technology. Factoring in variables like a parent’s welfare status, mental health, and criminal record, the AFST produces a score that is meant to forecast a child’s risk of endangerment (p130). However, Eubanks observed that its predictions often defy common sense: ‘A 14-year-old living in a cold and dirty house gets a risk score almost three times as high as a 6-year-old whose mother suspects he may have been abused and who may now be homeless.’ ‘And yet’, she writes, ‘the algorithm seems to be training the intake workers’ (p142). As in Indiana, the workers tend to defer to high scores produced by inherently flawed software. Much like Galton’s self-validating assertions about criminality based on composites, the algorithm’s predictions invite more supervision, which in turn generates the very data that confirm the prediction. This cruel feedback loop is a product of flawed historical data shaped by patterns of racial profiling (p152), rather than an ‘objective’ representation of reality.
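The feedback loop just described can be caricatured in a few lines of code. This is emphatically not the AFST: the scoring rule and the `THRESHOLD` value below are invented solely to show how a score driven by recorded contacts with public services can end up validating itself.

```python
# Minimal sketch of a self-validating risk score: a family's score grows
# with its recorded contacts, a high score triggers an investigation, and
# the investigation itself is logged as a new contact, raising the next
# score. THRESHOLD is a hypothetical cut-off, not a real AFST parameter.

THRESHOLD = 3

def risk_score(recorded_contacts: int) -> int:
    # Stand-in for the model: the score simply tracks how much data
    # the system already holds about a family.
    return recorded_contacts

def screen(recorded_contacts: int, rounds: int) -> int:
    """Run several screening rounds; each investigation adds a record."""
    for _ in range(rounds):
        if risk_score(recorded_contacts) >= THRESHOLD:
            recorded_contacts += 1  # the investigation becomes more 'evidence'
    return recorded_contacts

# A heavily surveilled family accumulates ever more records, while an
# unsurveilled family never enters the loop, regardless of actual risk.
print(screen(recorded_contacts=3, rounds=5))  # 8
print(screen(recorded_contacts=0, rounds=5))  # 0
```

The divergence between the two runs depends only on prior data collection, which is the structural point of Eubanks' critique: the loop amplifies surveillance rather than measuring risk.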
3. OBJECTIONS AND REPLIES
One response to my thesis is that the issue here is merely one of flawed technology, which improved and morally neutral technology could overcome. Whether that is possible remains an open empirical question; my response relies on analytic retrospection. I contend that, ultimately, both examples serve as cautionary tales about the logical extreme of suppressing human interventionism. For example, there is room for subjectivity in determining what precisely constitutes neglect or abuse, and it is unlikely that an algorithm will possess the degree of emotional intelligence required to make this assessment. As Daston and Galison (2007) put it, ‘as long as knowledge posits a knower, and the knower is seen as a potential help or hindrance to the acquisition of knowledge, the self of the knower will be at epistemological issue’ (p40). Both examples suggest that the knowers here, i.e. the bureaucrats, the software developers, the electorate and the victims, would have been a help rather than a hindrance in circumventing the unfair outcomes of self-validating algorithms that captured, if not distorted, only a partial representation of ethically and technically complex phenomena.
I have shown that MO is not an absolute epistemic virtue and that it must be supplemented by human intervention. MO identifies a genuine threat to epistemology: unbridled subjectivity that can lead to distorted renderings of phenomena. The knower must be grounded in some discipline, and MO is one option (Daston and Galison, 2007, p48). In the Kantian tradition, however, MO is a regulative ideal that cannot be fully realised without the danger of straitjacketing scientists into strict adherence to mechanical procedures. A direction for further research is to show how MO can be reconciled with other epistemic virtues, namely ‘trained judgement’ (p376).
Barthes, R. (1982). Camera Lucida. London: Cape.
Daston, L., & Galison, P. (2007). Objectivity. New York: Zone Books.
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
Sekula, A. (1986). The Body and the Archive. October, 39, 3–64.
“A mind that is stretched by new experiences can never go back to its old dimensions.”
Oliver Wendell Holmes Sr
When I launched my crowdfunding campaign to fund my postgraduate degree at the University of Cambridge, I had to draw on multiple skills to reach my target of $80,000. I had to be a storyteller and craft a genuine narrative that people would feel compelled to share. I had to be an innovative strategist and think carefully about how I’d hit critical milestones before the deadline. I had to be an advocate and defend my cause against naysayers. By combining these skills, among others, I was able to raise over $106,000 in a month and a half, get featured in the national press and chart a new trajectory for myself and the communities that I serve.
I want to emphasise that these weren’t fixed traits I was born with but rather skills I’ve nurtured over time. Over the last decade, I’ve been privileged to be mentored and coached by executives at world-renowned firms like Goldman Sachs, Freshfields, McKinsey & Company and Google, as well as at leading advertising agencies like McCann. I’ve attended two top-tier universities (the University of Cambridge and UCL), I was elected a Fellow of the Royal Society of Arts, and I sit on the executive boards of two tech startups: TechShift and the LawTech Society.
Crucially, I embarked on a personal journey to pursue the optimal life and to develop a mind capable of solving the world’s most entrenched problems. I knew that if I could put together my insights and synthesise the interdisciplinary knowledge and experiences I’ve acquired, I could help thousands if not millions of changemakers and entrepreneurs across the globe who share my drive for positive social change.
And that, of course, is how OmniSpace was born.
Our mission is to help humanity thrive by empowering individuals and teams to achieve their most ambitious goals. We do this through our commitment to the transformative power of education and technology. We offer premium courses and personal coaching in organisational leadership, fundraising, public speaking and digital marketing.
Growth and learning are always choices for those who are willing. My hope is that OmniSpace will provide you with some vital guidance and inspiration to bolster that willingness.
To find out more about what OmniSpace has to offer, please visit our LinkedIn page.
Thank you so much for joining us in making this dream a reality.