Our benevolent overlord, the algorithm


// Jordan Young

Our outdated perceptions of technology and feeble regulators are allowing algorithms to exploit the worst in us

Although algorithms have gained a reputation as systems of impenetrable complexity, fundamentally they are nothing more than lists of instructions. That an algorithm can be used both to bake a six-ingredient Victoria sponge and to accurately scour the vastness of the internet on a crude search term is evidence of their near-infinite scalability. It is irrefutably more efficient to outsource the mathematical drudgery that drives search engines and colour-balances digital photographs. It is far less obvious why algorithms should also be ubiquitously deciding how the content of the internet is presented to us, an entirely more subjective matter.

Public unease around Artificial Intelligence (AI) is widespread and appears constantly in popular culture; takeover by superintelligent AI is surely up there with environmental catastrophe and nuclear holocaust as one of the most explored endings to the human story. These fears certainly aren't unfounded, although according to many, the actual machine apocalypse is likely to be comparatively dull, playing out over generations as the economy is slowly automated. The same anxieties, however, are not extended to the machine-learning (ML) algorithms that AI systems comprise, which have already firmly replaced human beings in screening CVs, recommending films and music, assessing risk and, crucially, deciding which ads, articles and social media posts we are shown. ML algorithms chew through huge, complex data sets (Big Data) and train themselves to become iteratively better at spotting patterns and trends; the more data, the more efficiently this process works. Algorithms of this sort will soon run the world, if they don't already, so it is deeply troubling that there is no comprehensive ethics framework in place to address the social malaise we are collectively experiencing, a repercussion of the extreme, algorithmically constructed online worlds we reside in.
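The "more data, better predictions" dynamic can be made concrete with a deliberately toy sketch. This is not any real platform's system, just a hypothetical next-item "recommender" that counts which item users tend to consume after another; with one noisy history it guesses from noise, but as histories accumulate the dominant pattern wins out:

```python
from collections import Counter

def train(histories):
    """Count which item users consumed next after each item."""
    follows = {}
    for history in histories:
        for current, nxt in zip(history, history[1:]):
            follows.setdefault(current, Counter())[nxt] += 1
    return follows

def recommend(follows, item):
    """Recommend the most frequently observed follow-up item."""
    if item not in follows:
        return None
    return follows[item].most_common(1)[0][0]

# With a single history, the model parrots whatever it happened to see...
small = train([["cats", "dogs"]])
# ...but with more data, the dominant pattern ("cats" -> "memes") emerges.
large = train([["cats", "memes"]] * 9 + [["cats", "dogs"]])

print(recommend(small, "cats"))  # dogs
print(recommend(large, "cats"))  # memes
```

Real recommendation systems are vastly more sophisticated, but the principle is the same: the model has no opinions of its own, only the statistics of the behaviour it has observed.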



A logbook of contradictions

The glittering mission statement of social media is to connect the human race and bring us closer together, but we are becoming increasingly aware that in reality the opposite seems to be happening. The real history is a logbook of contradictions. Instead of wealth and knowledge being democratized, both seem to be pooling in the hands of an untouchable tech super-elite, paradoxically young and liberal yet more out of touch than ever before. People want to know why social media is making them less happy, less attentive, less productive and less connected. It would be easier to believe the blame lies squarely at the feet of a cabal of greedy Silicon Valley capitalists, but in truth it is more likely a disastrous by-product built into the technology itself and the philosophies of the companies that wield it. Dataism, an ideology that heralds data-processing power as the foremost measure of innovation, is melded into technocapitalist work culture along with Mario Kart breaks and office table tennis tournaments. But it is flawed thinking to view people as outdated processors hindered by irrational impulses, and to relegate the subjectivity of human experience to a lower order of importance. As video games developer Zoe Quinn puts it, 'algorithms are not arbiters of objective truth and fairness simply because they are math'; they can only function as ethically and impartially as the data they are fed allows. The record of algorithmically driven systems implemented in recent years is pockmarked with embarrassing instances of discrimination. In 2016 it took a mere 16 hours for Microsoft's AI chatbot 'Tay' to be shut down after it quickly evolved from a cheerful digital BFF to a fanatical fascist, a personality learned entirely from its interactions with users on Twitter.
Interestingly, Tay's successor chatbot 'Zo' was programmed to explicitly avoid any and all 'controversial' talking points, instantly switching the topic when triggered, and as such quickly drew heavy criticism for being an intolerable, politically correct brat. Not all examples are as innocuous or as easily explained. In the same year, COMPAS, a program that assesses the reoffending risk of criminals in US courts, was found to be grossly overestimating the risk posed by black defendants, with consequential impact on the decisions of judges and court officials. Clearly, these incredibly complex systems can't be policed with bullish post-hoc censorship. The prevailing solution is that they need to be run on better data: a half-truth, because in the current tech landscape 'better' simply means more.
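The 'bias in, bias out' mechanism behind cases like COMPAS can be illustrated with a hypothetical sketch (this is not COMPAS itself, whose model is proprietary). A naive 'risk scorer' trained on historical records faithfully reproduces any skew in those records, even though the code contains no explicit prejudice; if one group was historically over-policed and thus over-represented among recorded reoffenders, the model simply learns that skew as fact:

```python
def train_risk_model(records):
    """Average the historical 'reoffended' label per group."""
    totals, counts = {}, {}
    for group, reoffended in records:
        totals[group] = totals.get(group, 0) + reoffended
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

# Invented, skewed history: group A was policed (and so recorded) far
# more heavily, inflating its apparent reoffending rate.
history = (
    [("A", 1)] * 30 + [("A", 0)] * 20 +  # group A: 60% recorded reoffending
    [("B", 1)] * 5 + [("B", 0)] * 15     # group B: 25% recorded reoffending
)

model = train_risk_model(history)
print(model)  # {'A': 0.6, 'B': 0.25}
```

The model's arithmetic is flawless; the injustice lives entirely in the data it was handed, which is why 'feed it more of the same data' is no fix at all.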

The resultant drive to gather as much data as possible has given rise to a new culture of surveillance capitalism, one in which the most profitable business models employ the algorithms that best predict human behaviour. Shortly after Facebook's inception, twenty-something CEO Mark Zuckerberg hailed the arrival of social networking, with his company at the helm, as some sort of grassroots movement: 'by giving people the power to share' he would in fact be 'making the world more transparent'. Online platforms and tech companies universally insist that mass data collection is essential to refine the browsing experience and, in an effort to appear more open about this, now subject users to obligatory pop-up consent forms on arrival. The suggestion that this data collection exists primarily to improve user experience, and further that these improvements reciprocally grant users even greater sovereignty over their data-selves, is a dubious one. Privacy 'preferences' are intentionally tedious and obscure to discourage scrutiny; it is usually apparent on closer inspection that much of the information collected will be used to serve targeted ads and tempting recommendations. Human attention is the precious finite resource big tech firms are competing for, the limiting factor in the upscaling of information consumption. Algorithms are ruthlessly efficient tools for capturing attention, and the interminable streams of data they run on are willfully given over. Another half-truth, operationally fundamental to the social media profit model, is the assumption on the users' part that these are 'free' services. We do pay a dear price; the currency is simply our attention, something that for many of us is scarcer and more precious than money itself.
Economising human attention is an irresistibly lucrative opportunity, and although the cost to the user is not monetary, there is a human one; when algorithms can be scaled to make incredibly complex decisions for us instantly and without warrant, exploitation is inevitable.


‘Move fast and break things...’ 

'...Unless you are breaking stuff, you are not moving fast enough', another Zuckerberg jingle, foretells the current predicament. Seventeen short years after its inception, Facebook is still moving fast, or at least faster than regulators can develop a potent ethical framework for Big Tech's data mishandling. Zuckerberg's testimony before the US Congress in 2018 has been the most heavily publicized attempt to curtail the absolute control a minority exerts over the flow of personal data. Three years on from the affair, little has changed. If anything, during the pandemic, digital usage and revenue have soared, with the tech oligopoly taking the biggest cut.

This isn't to say that social media hasn't also irreversibly altered society for the better. To a certain extent, anyone with an internet connection can now take part in a global conversation. Online platforms have given a voice to marginalized groups and have been an essential tool for communities and movements to organize. It's hard to see how the type of large-scale anti-government demonstrations witnessed during the Arab Spring and the more recent Hong Kong protests would have been possible without social media. The most obvious example of how social media is changing society is being felt by parents, teachers and academics across the globe. Younger people wield a larger influence and a louder voice than ever before and are changing culture at breakneck speed. Venerated institutions from the Academy Awards to the oldest universities are held to the ideals of the smartphone generation, or face very real reputation destruction. This power shift is at risk of being inverted: whoever controls the flow of information, and which information counts as truth, controls the narrative. If your reality is defined by what you know, and what you know is increasingly what is displayed on your social media feeds, then Big Tech has the biggest say in what is true.

How do we, as a decentralized group of digital consumers spanning the globe, hold these companies accountable while being helplessly coupled to them? Digital ethics boards certainly exist, though they are ill-equipped to keep pace with the rapidly evolving technology landscape and investigate and regulate mostly retroactively. Since December 2020, Facebook and Google have been the subject of respective landmark antitrust lawsuits launched by the Federal Trade Commission (FTC) and almost all US states for anti-competitive practices. The cases all concern the lengths each company has gone to in order to monopolize the markets they already dominate. Facebook is primarily under fire for buying out competitors Instagram and WhatsApp; Google for more subtle tinkering to ensure a firm grip on the search and advertising markets. This sounds like progress, but the cases are shaky at best, with oversights that Facebook and Google will surely exploit to the full extent of their legal power. Indeed, Facebook believes the FTC's accusations are so vague and littered with contradictions that it has requested the case be thrown out altogether. They have a point. The unlikeliness that either of these lawsuits will amount to anything, despite the seriousness of the allegations, illustrates just how out of step antitrust law is with the goals of tech regulators. Regulation has to be empowered and take precedence over litigation.

Remarkably, the most tangible regulatory powers are the various in-house committees set up over the last few years. Facebook's Oversight Board sounds great on paper: an independently commissioned group of 40 diverse and respected individuals that can investigate, judge and even reverse decisions made by Facebook itself. But the board was set up and financed by the same company it is intended to regulate, and only considers content policy and content decision-making. Despite the huge funding and virtue signalling on Facebook's part, the board has only just reached a verdict on Facebook's decision to censor Donald Trump indefinitely, four months after the incident. It judged the platform to be justified in making the suspension, but disapproved of its indefinite nature. Google's ethical AI team was essentially liquidated after its top researchers tried to publish studies disparaging machine-learning language models similar to those used by Google itself. Such an overt act of censorship is telling of the performative, borderline fraudulent culture of ethics in Big Tech. Firms are desperate to be seen as transparent in an effort to keep public faith and ward off more serious scrutiny. It would be a disaster for Google and Facebook if it were concluded that machine-learning systems are intrinsically flawed and pose huge threats to social cohesion. But while the sector remains hamstrung by this lax, conciliatory approach, it will continue to be mired in a bizarre conflict of interest that sees five of the biggest companies in the world commission their own custom-fit regulators.

There should be a concerted effort to consider the lasting impacts of such powerful technology on our society. We should first be figuring out what algorithmic firepower could do for us and to us. Regulators need to be independent, dynamic and unilaterally backed by governments and lawmakers.



From within the proportionally tiny group of people who work at the world's top tech firms, a loosely affiliated movement of agitators is emerging. Founded by ex-Google ethicist Tristan Harris and supported by Virtual Reality pioneer Jaron Lanier and a raft of other prominent industry figures, the 'Center for Humane Technology' (CHT) aims to draw public attention to the problematic coalescence of social media and mental health. CHT and similar independent bodies are crucially run by people with an intimate understanding of Big Tech who are no longer complicit affiliates. Existing governmental regulators have proven ineffective; there is little existing policy to base regulation on, and the hands-off approach taken by the US government towards regulating private enterprise has not helped. Despite operating in almost every country in the world, almost all the tech heavyweights are registered and regulated exclusively in the US. The issue with this, as Harris puts it, is that the task of 'ethically steering the thoughts and actions of two billion people's minds every day' has been left to tech companies whose fiscal success depends on which direction they are steered. The culture of digital ethics is too conciliatory; Big Tech should not be allowed to adopt ethical parameters at its own pace, and certainly not only when those parameters don't slash profits.

[Image: Tristan Harris at Collision Conference, 2018]


So, faced with feeble regulators and social networks with which our lives and livelihoods are already deeply intertwined, what meaningful action can we take as individuals? Some suggest disengaging from social media altogether, but for some of us this would be disastrous for business or for staying connected to our communities. Guillaume Chaslot, another ex-Googler, suggests compromising by using peripheral programs, add-ons and extensions to limit the functionality of algorithms; he even recommends an extension that disables the YouTube recommendation algorithm he himself helped build. Ultimately, the responsibility will fall on us to become conscious consumers. Users must foster a healthier relationship with social media, one in which they decide when and how they use it. Turn off notifications, unsubscribe from mailing lists, endeavour to do some of the vital decision-making that Google would have us believe is the grunt work of algorithms. One tedious task that should remain a sacred human burden, however, is deciding what we want and need. Even the fiercest social media critics are hopeful of a way through the fog. While lamenting Silicon Valley's regression from a cult of nerdy insurgents to nerdy despots, Lanier envisions the internet in undeniably optimistic terms: as a great mirror of society, 'the real us', a spectrum of all that humanity is, and 'seen in proportion, we can breathe a sigh of relief, we are basically OK'.


