What the Web Has Wrought

In 1989, Sir Tim Berners-Lee proposed the development of 'a large hypertext database with typed links', which eventually became the World Wide Web. It was rightly heralded at the time as a significant development and a boon for one and all as the digital age flourished, both in terms of universal accessibility and affordability. The general anticipation was that this could herald an era of universal friendship and knowledge-sharing, ushering in global cooperation and mutual regard. In November 2019, marking 30 years of the Web, Berners-Lee lamented that its initial promise was being largely undermined, and that we were in danger of heading towards a 'digital dystopia'. What happened?

The reality is very much at odds with this roseate view. The internet, the web, and particularly social networks have become forums for online hatred, with many websites exhibiting the most venal and vile characteristics of humanity. Given the centrality of the web in our daily lives, this is not something that can be ignored.
In November 2019, Sir Tim Berners-Lee (TBL) launched a global action plan to save the web from political manipulation, fake news, privacy violations, and other malign forces that threaten to plunge the world into a 'digital dystopia' [1]. This has taken the form of a 'Contract for the Web', a digitally-available document presenting nine principles designed to 'safeguard the web', targeting three constituencies: governments, companies, and individuals. I will refer to this throughout as The Contract. (NB This paper was written in December 2019, in the immediate aftermath of publication of TBL's call for a contract. It was revised in early May amidst the COVID-19 pandemic, a massively altered context which to some extent has overwhelmed TBL's entreaty, but which also adds a new urgency, as will be discussed in the concluding section.)

Berners-Lee used the disconcerting term dystopia, and as a perceptive reviewer of an earlier draft of this paper pointed out, the term can elicit a range of images, in many cases drawn from literature and dramatic films or TV series. It can, for instance, evoke an Orwellian 'Big Brother': all-seeing, privacy-destroying authoritarianism. It can also refer to an economic and ecological melt-down, leading to massive social deprivation and immiseration. I think that TBL has the former predominantly in mind, although the term itself is widely regarded as implying both, i.e., 'relating to, or being an, imagined world or society in which people lead dehumanized, fearful lives' (https://www.merriam-webster.com/dictionary/dystopian).

"I think people's fear of bad things happening on the internet is becoming, justifiably, greater and greater," Berners-Lee, the inventor of the web, told the Guardian. "If we leave the web as it is, there's a very large number of things that will go wrong. We could end up with a digital dystopia if we don't turn things around. It's not that we need a 10-year plan for the web, we need to turn the web around now." [2]

This raises a number of questions:

[Q1] What has gone wrong? Why has the principle of putting everyone in contact with one another, with everyone having access to a vast array of information, gone so drastically awry?
[Q2] Is there something wrong with the technology itself?
[Q3] Why is Berners-Lee arguing for a 'contract', and is his strategy appropriate and feasible?
In concise form, my responses to these questions are as follows.

Q1: The development of cheap and accessible digital media for communication and exchange on a massive scale, in the context of an increasingly 'individualized' society, has provided fertile ground for the proliferation and amplification of some of the most malign aspects of human behaviour and sentiment.
Q2: The fault does not simply reside in or with the technology. Moreover, we must be wary of arguments that assume or articulate the idea that technology is something distinct from other aspects of the social milieu.
Q3: Berners-Lee's initiative is specifically aimed at harnessing a wide variety of contending interests to work together for the common good and general benefit. Although not a legally binding contract, it is clearly intended to enlist the goodwill and benevolent intentions of governments, companies, and citizens. It is in many regards akin to a social contract, as will be explained in more detail in what follows. The strategy itself is, however, at best a partial one, and unlikely to prove successful to any degree unless supplemented with additional aspects that are far more complex and indeterminate.

An Avalanche of the Banal, Insipid, or Pathological
Berners-Lee's anguish at what the web has wrought is not an isolated lament. Vint Cerf, one of the 'fathers of the internet' (https://internethalloffame.org/inductees/vint-cerf) and 'chief Internet evangelist at Google' (an anachronistic and incongruous title, indicating that those in charge of Google see it as some sort of religion with universal claims; although one justification for the title was that Cerf would continue to serve the global internet community, rather than the specific corporate aims of his employers. The reality has proved more complex and mixed; see https://googletransparencyproject.org/articles/googles-evangelist), offered a similar view in 2017, using the lament 'What Hath We Wrought?':

While we have yet to determine what, exactly, is the online moral equivalent of shouting "Fire!" in a crowded theater, it seems clear that societies that are significant users of online resources need a way to cope with a wide range of harms that malevolent users might visit upon others. The need is transnational in scope, and users aren't exempt from responsibility. Adopting safe networking practices (supported by cooperating online services companies) should be a high priority, and providing technical means to implement them should be the business of the computing and networking community. Finding mutually supportive legal agreements between nations to sanction harmful online behaviors will be a challenge worth exploring. [3]

Cerf's use of the phrase was no accident. It is taken from the King James Bible, Numbers 23:23: 'Surely there is no enchantment against Jacob, neither is there any divination against Israel: according to this time it shall be said of Jacob and of Israel, what hath God wrought!' Samuel Morse used it in May 1844, when he sent the first message by telegraph from Baltimore to Washington. 'What Hath Man Wrought!' was the title of the editorial in The United States News written by David Lawrence, its founder, in the wake of the detonation of the atomic bombs over Hiroshima and Nagasaki in August 1945. Cerf's use of the phrase deliberately evokes these cries of anguish, and this is echoed by Berners-Lee.
Similar sentiments directed at technological 'advances' have appeared in the past. Technologies initially heralded as beneficial and constructive all too often turn out to be a mixed blessing or potentially damaging. Joseph Weizenbaum offered a telling example regarding the advent of commercial radio in the USA in the 1920s and the later emergence of TV. He noted that Herbert Hoover, then US Secretary of Commerce, saw these new media as the basis for a euphoric dream.
. . . these media would exert an enormously beneficial influence on the shaping of American culture. Americans of every class, most particularly children, would, many for the first time, be exposed to the correctly spoken word, to great literature, great drama . . . The technological dream was more than realized . . . But the cultural dream was cruelly mocked: magnificent technology . . . [an] exquisitely refined combination of some of the human species' highest intellectual achievements . . . delivering an occasional gem buried in immense avalanches of everything that is most banal or insipid or pathological in our civilization. (emphasis added) [4]

So, we have been here before, albeit that the internet/web delivers an incessant avalanche of malignity greater by several orders of magnitude. There is little disagreement in this regard, but the proposed analyses and remedies are diverse and differ widely.

Technology: Cause and Cure
Q2 concerns the extent to which the technology itself may be at fault. If it is, then might the solution also be a technological one? One of the prime candidates offered in this regard, as both cause and cure, is the algorithms that underlie various recommender systems, particularly those connected with Google and Facebook. Examples of the malignant nature of these systems abound:

- It has been found that Google and Google Translate perpetuate racism, sexism, and other forms of discrimination.
General forms of machine learning quickly reproduce biases found in the population at hand. See, for instance, the article from Vox.com and the review of Noble's book Algorithms of Oppression: How Search Engines Reinforce Racism: 'These search algorithms aren't merely selecting what information we're exposed to; they're cementing assumptions about what information is worth knowing in the first place. That might be the most insidious part of this.' 'But know: Machine learning has a dark side. "Many people think machines are not biased," Princeton computer scientist Aylin Caliskan says. "But machines are trained on human data. And humans are biased." Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does, Caliskan explains: from their creators.' [5,6]

- Microsoft's Twitter chatbot, Tay, turned into the apotheosis of a racist troll within 24 hours of its launch, and was rapidly shut down.
'Last year, a Microsoft chatbot called Tay was given its own Twitter account and allowed to interact with the public. It turned into a racist, pro-Hitler troll with a penchant for bizarre conspiracy theories in just 24 hours. "[George W] Bush did 9/11 and Hitler would have done a better job than the monkey we have now," it wrote. "Donald Trump is the only hope we've got."' [7]

- 'Last May, ProPublica published an investigation on a machine learning program that courts use to predict who is likely to commit another crime after being booked. The reporters found that the software systematically rated black people at a higher risk than whites.' [8,9]

- In a paper about the new study in the journal Science, the researchers wrote: 'Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes.' [10]

Algorithms are crucial components of many of the most widely used internet/web applications. They are the engines that drive the recommender systems that facilitate the use of various applications such as social networking, finding information on the web, keeping up to date with news and current events, listening to music, and shopping online. The recommendations that are offered are tailored to the individual user, based on the user's profile and history, and also draw on more general aspects such as overall popularity and relationships between items. These systems are of course also designed to maximize the revenue of companies such as Facebook, Google, YouTube, Twitter, and Amazon. Given the business models of these companies, maximizing revenue must now be regarded as the primary and overwhelming purpose of these algorithms.
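Caliskan's observation quoted above, that machines trained on human data simply inherit human biases, can be illustrated with a deliberately minimal sketch. The toy 'corpus' below is entirely invented for illustration; the point is that a model which does nothing more than tally co-occurrences in skewed data will faithfully reproduce that skew in its predictions.

```python
from collections import Counter

# A toy corpus standing in for "human data". Invented for illustration;
# real training corpora are vastly larger, but the mechanism is the
# same: skewed input yields skewed output.
corpus = [
    "nurse she", "nurse she", "nurse he",
    "engineer he", "engineer he", "engineer she",
]

# "Training": count which pronoun follows each occupation word.
counts = {}
for doc in corpus:
    occupation, pronoun = doc.split()
    counts.setdefault(occupation, Counter())[pronoun] += 1

def predict_pronoun(occupation):
    """Return the pronoun most frequently paired with the occupation."""
    return counts[occupation].most_common(1)[0][0]

# The model has learned nothing except the skew already in its data.
print(predict_pronoun("nurse"))     # reflects the corpus skew: "she"
print(predict_pronoun("engineer"))  # reflects the corpus skew: "he"
```

Nothing in the code mentions gender or prejudice; the bias arrives entirely through the data, which is precisely the point made by Caliskan and by Noble.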
Machine learning algorithms in recommender systems are typically classified into two categories, content-based and collaborative filtering methods, although modern recommenders combine both approaches. Content-based methods are based on similarity of item attributes, and collaborative methods calculate similarity from interactions. (emphasis added) [11]

The recommender applications are, however, far from perfect, and can have various side-effects. For several years, the UK satirical magazine Private Eye has included a section on malgorithms, offering examples of inappropriate juxtapositions of news items and advertising on Google and various on-line news sites. Examples from the current issue at the time of writing include:

- 'The Great Black Friday Swindle: 95% of "bargains" shown to be more expensive than at other times', followed by an advertising feature for 'AMAZING Black Friday deals';
- A report on the attack in the UK on London Bridge, in which two people died after being stabbed, accompanied by an advert for the film 'Knives Out' [12].
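The distinction drawn in the quotation above can be made concrete with a short sketch. The item names and vectors below are invented for illustration: content-based similarity compares item attribute vectors, while collaborative similarity compares which users interacted with which items.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Content-based view: items described by attribute flags
# (say: action, comedy, thriller). Invented data.
item_attrs = {
    "film_a": [1, 0, 1],
    "film_b": [1, 0, 1],   # same genres as film_a
    "film_c": [0, 1, 0],
}

# Collaborative view: items described by which of four users
# interacted with them. Invented data.
item_interactions = {
    "film_a": [1, 1, 0, 0],
    "film_b": [0, 0, 1, 1],   # watched by a different audience
    "film_c": [1, 1, 1, 0],   # shares two viewers with film_a
}

def most_similar(item, table):
    """Recommend the other item whose vector is closest to this item's."""
    others = {k: v for k, v in table.items() if k != item}
    return max(others, key=lambda k: cosine(table[item], others[k]))

print(most_similar("film_a", item_attrs))          # content-based view
print(most_similar("film_a", item_interactions))   # collaborative view
```

As the toy data is constructed to show, the two views can disagree; production recommenders at the companies named above blend both signals, along with popularity and revenue considerations, at vastly greater scale.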
There is now an extensive and growing literature demonstrating the ways in which these supposedly neutral algorithms are heavily influenced by, and then perpetuate, the assumptions and biases of their creators and their cultural backgrounds. Various authors, for instance in First Monday [13-16], have contributed to a growing understanding that algorithms are social artefacts as well as technical ones. They need to be understood and evaluated as such, with analyses consequently being far more wide-ranging. In the light of these critiques, and examples such as Microsoft's Tay and the system offering risk assessment in criminal sentencing, algorithms have justifiably become the focus of concern and suspicion. (I would also strongly recommend Criado Perez's Invisible Women [17] for further examples, both online and beyond.) But such efforts will at best only offer a partial analysis, and some have suggested that the focus on algorithms owes much of its appeal to the range of 'solutions' this perspective easily affords, resulting in remedies that readily imply one-dimensional policies and strategies for countering the more heinous aspects of internet/web malignancies. If the algorithms are at fault, then the simplest and most obvious remedy will appear to be a re-engineering of the algorithms to ensure that they operate in a far more equitable manner.
Such solutions, however, will be difficult to implement and are unlikely to prove effective. Although at first sight they offer a straightforward strategy, the reality is far more complex. In some cases, there will be broad agreement that the result of a particular algorithm, or operational combination of algorithms, is malign and requires revision or excision. In many other cases, opinions will differ regarding the actual functioning and outcome of specific algorithms. Policies based on these algorithmic diagnoses raise key questions that were probably never considered during the design stages of the algorithms themselves, and relate to complex issues that are not easily resolved with consensual or broad satisfaction. For instance:

- To what extent should algorithms be designed to detect and marginalize or exclude specific types of link?
- What specific issues should be addressed in the design and development of algorithms?
- Should algorithms be designed to prohibit certain categories of data, messages, and web sites, or should they offer alternative opinions?
Recent accounts by whistle-blowers working within Google attest to the ways in which Google employees have intervened to change the ways in which the search algorithm operates [18]. This may sound pernicious, and it certainly changes Google's mission 'from its founding philosophy of "organizing the world's information" to one that is far more active in deciding how that information should appear' (WSJ, quoted in the article). The article refers to two instances where this has happened:

(1) . . . down-voting any search results that read like a "how-to manual" for queries relating to suicide until the National Suicide Prevention Lifeline came up as the top result. According to the contractor, Google soon after put out a message to the contracting firm that the Lifeline should be marked as the top result for all searches relating to suicide so that the company algorithms would adjust to consider it the top result.
(2) . . . employees made a conscious choice for how to handle anti-vax messaging: One of the first hot-button issues surfaced in 2015, according to people familiar with the matter, when some employees complained that a search for "how do vaccines cause autism" delivered misinformation through sites that oppose vaccinations. At least one employee defended the result, writing that Google should "let the algorithms decide" what shows up, according to one person familiar with the matter. Instead, the people said, Google made a change so that the first result is a site called howdovaccinescauseautism.com, which states on its home page in large black letters, "They f-ing don't." (The phrase has become a meme within Google.)

These are clearly laudable examples, but a general and largely covert trend of Google employees and sub-contractors deciding how and whether information should appear in people's searches might not be welcome as a universal corporate policy. It also opens the way for more pernicious censorial practices. Google, Facebook, and their ilk should certainly develop greater awareness of the ways in which their seemingly neutral algorithms perpetuate existing biases and enhance a wide range of forms of discrimination, prejudice, and hatred. Nevertheless, this needs to be done in an open and participatory manner. Moreover, the focus on algorithms only goes so far, and the strategy it suggests fails to get to grips with wider and more complex issues underlying the ways in which the internet/web has developed in a manner so unwelcome and wildly divergent from Berners-Lee's optimistic vision of the 1990s.
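The first of the two instances above amounts to a hand-curated override layered on top of the ranking algorithm. A minimal sketch of such an override might look like the following; the keyword, URLs, and function names are all illustrative assumptions on my part, not Google's actual mechanism.

```python
# A hypothetical post-hoc override: whatever the underlying ranking
# algorithm returns, a designated resource is pinned to the top for
# matching queries. Keywords and URLs are invented for illustration.
PINNED = {
    "suicide": "https://suicidepreventionlifeline.org",
}

def rerank(query, algorithmic_results):
    """Promote any pinned resource to the top for matching queries."""
    for keyword, url in PINNED.items():
        if keyword in query.lower():
            rest = [r for r in algorithmic_results if r != url]
            return [url] + rest
    return list(algorithmic_results)

results = ["https://example.com/how-to", "https://suicidepreventionlifeline.org"]
print(rerank("suicide", results)[0])     # the pinned resource comes first
print(rerank("weather today", results))  # other queries are untouched
```

The point of the sketch is that the algorithm itself is untouched: a human-curated layer simply overrides it. That is exactly why the practice, however laudable in this case, raises the questions of openness and participation noted above.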

Moving beyond Algorithms and Malgorithms
Munger and Phillips [19] take up these limitations in the context of YouTube, particularly the functioning of its recommendation system, which is seen as providing the basis for alternative influence networks [AINs] [20] that all too often lead users along a path taking them to sites exhibiting increasing forms and levels of extremism and malevolence, a phenomenon that is particularly associated with racist, homophobic, and misogynist content. Taking issue with many current analyses, Munger and Phillips argue that the YouTube recommendation algorithm is not the key issue; more account needs to be given to the supply and demand of such material.
A prominent theme in theories claiming YouTube is a radicalizing agent is the recommendation engine ('the algorithm'), coupled with the default option to 'auto-play' the top recommended video after the current one finishes playing.
Their paper contributes further weight to the understanding that algorithms are social artefacts, extending this to include AINs. These networks are not the creations of YouTube; they draw on existing groups and the cultural backgrounds of system developers and users. Hence their focus on supply and demand: the supply of and demand for these opportunities to express and spread bigotries, and to exchange mutually supportive messages with fellow extremists, exist above and beyond YouTube. These provide the sources from which and the basis upon which AINs can grow and flourish. Accounting for this wider context necessitates recognizing the social and cultural complexities that have led to the overpowering of Crocodile Dundee style optimism by the current internet/web reality of the 'occasional gem buried in immense avalanches of everything that is most banal or insipid or pathological in our civilization'. Only once this has been understood can effective and acceptable remedies begin to be implemented. (Just as a first draft of this article was being completed, I was directed to an article by Mark Ledwich that takes an even harder line against what he terms 'algorithmic radicalization', building on the paper by Munger and Phillips; see https://medium.com/@markoledwich/youtube-radicalization-an-authoritative-saucy-story-28f73953ed17.) A moment's reflection supports what should be a fairly obvious and mundane point: the hatred and malignancies found on the internet/web are founded in existing and external sources, in many cases in the form of age-old prejudices and urban myths.
Wikipedia offers the following definition of 'an urban legend, urban myth, urban tale, or contemporary legend': . . . a genre of folklore comprising stories circulated as true, especially as having happened to a friend or family member, often with horrifying or humorous elements. These legends can be entertainment, but often concern mysterious peril or troubling events, such as disappearances and strange objects. They may also be moralistic confirmation of prejudices or ways to make sense of societal anxieties.
The term, dating from the 1960s, suggests that the circulation of these tales reaches new levels of intensity and wider proliferation in densely populated urban settings, as opposed to more sparsely populated rural ones.
Two recent examples illustrate some key aspects. The first, dating from March 2019, was reported in the DEMOS report Warring Songs [21], and refers to 'false rumours spreading on Facebook and Whatsapp in France about Roma people carrying out child abductions sparked violence against the Roma community; echoing attacks in India in which people were killed after similar rumours of abductions spread' [22]. The speed and extent of this spread are specific to the internet/web, but the core of the rumour is age-old. Ask your parents, grandparents, or great-grandparents; many will confirm that similar rumours abounded in their youth, i.e., the late 19th and early 20th century.
The second was reported on CNN in December 2019, and refers to the tale that '. . . men driving white vans are kidnapping women all across the United States for sex trafficking and to sell their body parts. While there is no evidence to suggest this is happening, much less on a national, coordinated scale, a series of viral Facebook posts created a domino effect that led to the mayor of a major American city issuing a warning based on the unsubstantiated claims' [23]. This is simply the most recent incarnation; one earlier version, in 2009, referred to child abduction by drivers of white vans in Australia [24]. These tales may well have developed from the trope 'white van man' in the UK, referring to aggressive and inconsiderate male drivers of small commercial vehicles; the drivers being, by implication, white and male, all too readily and volubly expressing right-wing, racist, and misogynist views.
Both examples support the contention that AINs, whether in the form of specific websites or more loosely-coupled viral forms of communication facilitated by social networks, build upon the beliefs of people who are already receptive to such messages. They then become the source for propagating these beliefs and claims via the internet/web.
AINs largely exist as filter bubbles, where one set of ideas dominates and excludes any alternatives, precluding any possibility of engagement and discussion. Even if there are attempts to criticize or challenge such claims, they are largely discounted and discredited within these bubbles, and in any case such challenges usually lag far behind the initial claims themselves. Mark Twain memorably noted that '[a] lie can travel around the world and back again while the truth is lacing up its boots', and that was a century or so before the internet/web existed. It is far worse now. (Some sources now question the attribution of this to Mark Twain; see https://quoteinvestigator.com/2014/07/13/truth/.)

Metaphors of Communication
Munger and Phillips point to the role of metaphors used in discussions and analysis of media. (The term 'media' is itself a powerful, resilient, and largely unacknowledged metaphor, encompassing the implication that radio, TV, etc., are forms of mediation that operate between the communicator and the audience. The implication is that media do not take an active part in this process, although see below for Ellul's idea of mediation as both active and passive. This original meaning of the term has now largely been effaced, and people assume, quite correctly, that 'media' are far more active in the process of communication, and certainly not neutral. NB Astute readers will also have noted that the term is itself plural, the singular being medium.) These metaphors are mostly tacit, commonplace, and highly influential, even though in many cases they are at best partial and often misleading. This is nothing new; as Munger and Phillips argue, it can be found in some of the earliest analyses of mass media, where metaphors such as 'the hypodermic needle' or 'the magic bullet' were used to explain the impact of radio and TV on the new audiences that were brought into being by these new technologies. Both metaphors are premised on the characterization of a largely passive and undifferentiated media audience, each member of which can be directly influenced or even controlled by media stimuli; i.e., messages from the media are injected or propelled directly into people's awareness, influencing or altering their opinions and possibly even prompting their behaviour. The epitome of this relates to the events surrounding the radio broadcast of H. G. Wells' The War of the Worlds by Orson Welles and his theatre company in the USA in 1938. The play was deliberately updated and revised so that the broadcast appeared to be live reporting from a nearby scene of a mass landing of hostile Martians.
According to many reports, this led to widespread panic amongst the audience, many of whom started to flee in search of safety (https://en.wikipedia.org/wiki/The_War_of_the_Worlds_(1938_radio_drama)). These reports are now regarded as suspect and exaggerated. Subsequent analysis of the aftermath of the broadcast indicated a far more nuanced and variable set of responses, laying the basis for more detailed studies into the impact of mass media: initially radio and cinema, later TV.
Metaphors based on the assumption of a largely passive audience, however, were not easily discarded, and have taken on new forms in the era of the internet>web. Munger and Phillips aver that '[N]ew cultural contexts demand new metaphors'. The old ones have been replaced by what they term 'the Zombie Bite model of YouTube radicalization'. People bitten by a zombie will become zombies themselves. In similar fashion, people who engage with AINs via sustained linking to increasingly extreme and malicious YouTube videos recommended by the algorithm will become infected/radicalized. They will themselves become the source of further 'infection' [25].
Munger and Phillips refer to a working paper by Ribeiro et al. [26], 'the most comprehensive quantitative analysis of YouTube politics to date', to exemplify this form of analysis. They argue that Ribeiro et al. see 'people who comment on videos produced by figures associated with the Alt-Right as infected', leading to further infection as these comments are picked up and further ones added. The Zombie Bite metaphor, or variations thereof, may now be part of the conventional wisdom, taken for granted well before any analysis commences. People simply assume the evidence for it exists, whether this is the case or not.
They accuse Ribeiro and colleagues of perpetuating this ill-founded metaphor. Yet in her review of Munger and Phillips' work, Martineau [27] quotes Ribeiro, on behalf of his colleagues, strenuously denying this. On the contrary, he alleges that their work has been deliberately 'misinterpreted to fit the algorithmic radicalization narrative by so many outlets that he lost count'.
Seemingly, Munger and Phillips have aimed their criticism at the wrong target, but this is perhaps an unwanted effect of the power of the metaphor itself. They regard the Zombie Bite metaphor as inadequate, the result of weak and ill-conceived ideas on the part of journalists and academics. The algorithmic model to which it is related leads to a model or approach that

. . . is incomplete, and potentially misleading. And we think that it has rapidly gained a place in the center of the study of media and politics on YouTube because it implies an obvious policy solution, one which is flattering to the journalists and academics studying the phenomenon. If only Google (which owns YouTube) would accept lower profits by changing the algorithm governing the recommendation engine, the alternative media would diminish in power and we would regain our place as the gatekeepers of knowledge. This is wishful thinking that undersells the importance of YouTube politics as a whole.

Their paper offers a useful and important corrective, but even its reception has not been immune (sic!) to misinterpretation along the same lines as that of Ribeiro et al. Martineau's otherwise useful review of their work asserts that their 'paper suggests that radicalization on YouTube stems from the same factors that persuade people to change their minds in real life, injecting new information, but at scale'. The power of metaphors should not be underestimated.

Technology is Society
Martineau's review of Munger and Phillips' paper in WIRED bears the title 'Maybe It's Not YouTube's Algorithm That Radicalizes People', noting that the authors' argument is that analysis needs to shift from the algorithm to 'the communities that form around right-wing content'. This begins to move the focus of attention away from any simple idea that the technology, at least in the form of the algorithm, is to blame (Q2). However, posing the question in this form assumes that technology can somehow be treated as a wholly distinctive category, which opens the way for the consideration of the potential impact of 'technology' on 'society'; i.e., it treats technology as something distinct from the social context in which it arises.
Permitting this distinction opens the way for 'technological determinism', a term coined by Thorstein Veblen, which evokes and encourages the argument that the technology of any era drives or determines the nature of society. Positions may vary regarding the extent to which existing technology exerts this influence, but all share some aspect of determinism. Counter-arguments emanate from several sources, including MacKenzie and Wajcman's work on the 'social determination of technology' [28].
A succinct counter to technological determinism is offered by Castells in the introduction to his trilogy The Network Society.
Of course technology does not determine society. Nor does society script the course of technological change, since many factors, including individual inventiveness and entrepreneurialism, intervene in the process of scientific discovery, technological innovation, and social applications, so that the final outcome depends on a complex pattern of interaction. Indeed the dilemma of technological determinism is probably a false problem, since technology is society, and society cannot be understood or represented without its technological tools. (my emphasis) [29]

Note that Castells is in effect arguing that technology is both a social product and has its own logic (see below for the discussion of the ambivalence of technology). A more eloquent and insightful account can be found in the work of Raymond Williams, a key figure in the development of cultural studies as it emerged in the UK in the 1960s. Writing in the 1970s, in the wake of the widespread uptake of television that had begun in the previous decade, Williams stressed that television cannot be seen simply in terms of the tangible technology; rather, the technology itself must be located within more complex processes including systems of consumption, entertainment, communication, leisure, and so on.
Williams rejected technological determinism, in which R&D is understood as self-generating in an independent sphere, giving rise to new technologies that then create new social conditions. He also rejected the converse view, which he termed symptomatic technology, in which the R&D underlying technological innovation is still regarded as independent and autonomous, but the resulting innovations are merely taken up and used within the surrounding social contexts.
He offered examples of each position as follows: Deterministic 1-TV was invented as a result of scientific and technical research. Its power as a medium of social communications was then so great that it altered many of our institutions and forms of social relationships.
Deterministic 2-TV was invented as a result of scientific and technical research, and developed as a medium of entertainment and news. It then had unforeseen consequences, not only on the other entertainment and news media... but on some of the central processes of family, cultural and social life.
Symptomatic 1-TV, discovered as a possibility by scientific and technical research, was selected for investment and promotion as a new and profitable phase of a domestic consumer economy.
Symptomatic 2-TV became available as a result of scientific and technical research, and its character and uses exploited and emphasised elements of a passivity, a cultural and psychological inadequacy, which had always been latent in people. [30]

Williams explained that although the deterministic and the symptomatic positions differ in many respects, they share the assumption that technology is an isolated facet of existence, outside society and beyond the realm of intention. Anticipating Castells' argument about the relationship between society and technology, Williams stressed that technology must be understood as being 'looked for and developed with certain purposes and practices already in mind', these purposes and practices being 'central, not marginal', contrary to what the symptomatic view would hold.
The complex history of the internet, particularly the way in which it was taken up by US Defense interests in the 1960s, partially illustrates Williams' position, although any specific technology will exhibit its own peculiarities, and Williams' account requires modification to allow for other factors and contingencies. Indeed, in many cases the original conception underlying a technological innovation proved to be at odds with its later evolution: radio was initially developed for one-to-one communication; the telephone was initially invented for something akin to broadcasting; computers were originally devised purely for calculating. (Brian Winston offers a detailed account of these and other technologies [31]; see also my discussion [32].) For our present purposes, the core point to take from all of this is that the context Berners-Lee finds so disturbing, a potential road to dystopia, cannot be understood purely in technological terms. One problem with technical advances is that they are almost always difficult to grasp for those not intimately connected with the technology itself. On the other hand, in considering the wider implications, the people usually least capable of appreciating the potential impact of innovation are the technical specialists. To paraphrase Oscar Wilde: non-technicians think they understand the technology, that is their misfortune; few technicians understand society, that is theirs. These collective misconceptions are everyone's misfortune. ('All women become like their mothers. That is their tragedy. No man does. That's his.' Oscar Wilde, The Importance of Being Earnest.)
In the past, these tensions often dissipated as new technologies stabilized and became more familiar, achieving a form of equilibrium along the lines indicated in the next section. The development of ICT has been more complex. It comprises a wide and expanding range of technologies that have become readily available and achieved near-ubiquitous penetration in a very short time. This has worked against stability and familiarization, although I would argue that some plateau may have been reached with the introduction of the smartphone in 2007, affording the basis for a wider understanding of the overall impact of these technologies.

Achieving Technological Equilibrium
Taking into account critiques of the algorithmic-cum-Zombie-Bite argument, and the ideas offered by Williams and those in his wake, the malignancies encompassed by 'the dark side of the internet' can be seen as social products finding expression in new technological forms that uniquely facilitate their communication and proliferation. On this basis Berners-Lee's plea is unlikely to prove effective, given that it focuses entirely on the internet>web and is largely silent on any wider factors.
Criticism of The Contract has come thick and fast, and has included a good deal of unease that the announcement emanated from a group that included many of the organizations that are seen as the key embodiments of the malaise itself-particularly Facebook and Google. (The Wikipedia page for The Contract lists several critiques-https://en.wikipedia.org/wiki/Contract_for_the_Web).
The Contract can be seen as an indication of how the development of the internet>web can be tracked along a variant of Gartner's 'hype cycle' (https://www.gartner.com/en/research/methodologies/gartner-hype-cycle). The Gartner model outlines five phases through which technological innovations pass: technological trigger; peak of inflated expectations; trough of disillusionment; slope of enlightenment; plateau of productivity. The model has been criticized as over-simplified and lacking in evidence, but such criticism misses the point: its main value lies in the caveats it offers regarding the uptake and adoption of any technical innovation, particularly those associated with ICT.
Although the Gartner model focuses on the adoption of technical innovations, predominantly in the private sector, it can be adapted to provide the basis for an outline model of the wider social impact of innovations, including the internet>web.
Five stages can be identified:

1 Initial Optimism: The innovation is welcomed and recognized as bringing significant benefits.

2 Anxiety and Doubt; Disavowals and Repudiations: Various aspects of the innovation begin to foster unease and warnings resulting from its use/misuse. This leads to discussion of the balance between the 'goods' and the 'ills' of the innovation: do the benefits outweigh the harms, or vice-versa? Those in favour of the innovation, particularly if they are benefitting politically and/or commercially, will respond with protestations, including disavowals and repudiations of harm. Others will highlight the harms or noxiants, and downplay the benefits, particularly if they see themselves and their interests as being impaired or diminished in the wake of the innovation.

3 Precarious Equilibrium: Eventually, with some level of acceptance regarding possible or actual harm, efforts will be made to encourage and promote self-regulation, often with an explicit or implicit threat of official and formal regulation. There may be several successive periods with differing levels of equilibrium.

4 Formal Regulation and Control: Failure of self-regulation may result in calls for more effective regulation, including formal legislation. The prospect of such constraints will elicit counter-efforts from opposing interests. Eventually, some forms of legislation may be enacted, although this may not necessarily be matched by actual enforcement.

5 Further Development: Like sharks, technologies must keep moving forward or they die. (This is only true of some species of shark. In Annie Hall, Woody Allen's character tells Annie that relationships are like sharks: they have to keep moving forward or they die.) Assuming a technology does not become obsolete, there will be forms of iteration around some or all of the preceding stages, potentially resulting in new forms of contestation, equilibrium, and regulation/legislation. 'Mature' technologies exemplify this, particularly if they are long-lived, and in some cases may evolve in ways completely at odds with their initial appearance and objectives.
This model can be tentatively applied to many innovations, for example the invention of printing in the 15th century. (What follows is a highly imperfect and sketchy outline, not intended as a precise account in any way. I hope, however, that it brings certain key features to the fore, and in anticipation I offer my apologies to those with greater knowledge of the topic!)

1 Printing was welcomed, since it meant that books would be far more widely available.

2 People began to express unease that wider access to printed texts would undermine the authority of the church in the interpretation of scripture and other matters. By the 16th and early 17th centuries, the wide availability of printing presses had resulted in a massive proliferation of printed matter and the development of mass communication, hence the concept of 'the press' as 'the fourth estate', in addition to 'clergy', 'nobility', and 'commoners'. Was the availability of books and newspapers a benefit to society or a threat? Would it result in greater demands for increased levels of literacy and education, leading to challenges to traditional authority? Or would there be a net benefit to all from a more educated and informed populace?

3 There was intense debate and dispute between contending interests, continuing through the 15th and 16th centuries. The technology itself developed in the ensuing centuries, and printed matter became far more widely and readily available to an increasingly literate population.

4 Efforts were made by governments to control the press and to censor what could be published in the 17th and 18th centuries. Various forms of taxation were introduced: in the UK, the first newspaper Stamp Duty was imposed in 1712, specifically to rein in the free press, and other governments followed suit. Successive increases in the duty into the early 19th century raised the price of any newspaper to well above what a worker could afford.
5 The battle between advocates of press freedom and freedom of speech and those seeking some form of control or censorship continues unabated. In some jurisdictions, such as the USA, freedom of speech is enshrined as a fundamental principle; in others, laws of defamation and libel curtail these freedoms. The advent of the internet>web has heralded a new stage, given the opportunities it affords for 'publishing' via blogs, social networks, and the like.
For the internet>web:

1 Initial Optimism: The internet was developed in the 1960s, but the World Wide Web only appeared in 1989, and the first optimistic phase dates from this time.

2 Anxiety and Doubt; Disavowals and Repudiations: Initial appreciation of the web was quickly followed by recognition of some of its drawbacks. Articles began to appear, both in print and online, listing the dangers of using the internet>web. (The two terms were often used synonymously or interchangeably.) A Google search using 'dangers of the web before:1995' lists articles focusing on pornography, fraud in e-commerce, cyber-stalking, and the dangers for children in chatrooms and MUDs. A search using 'before:2000' produces items relating to cyber-bullying, child pornography, and online recruitment to cults. By 2008, Nicholas Carr was asking 'Is Google making us stupid?' in a widely circulated article in The Atlantic, and soon after Susan Greenfield, a prominent UK neuroscientist, was claiming that social websites were harming children's brains. Greenfield's claims have been heavily criticized by many of her scientific peers, hence the term 'eminence-based research' that has been applied to her writings on this topic.

3 Precarious Equilibrium: The Contract is firmly based in stage 3; hence the support for this strategy of self-regulation forthcoming from Google and Facebook, amongst many others. In this instance, however, the self-regulation called for in The Contract applies across the board, encompassing commercial interests, governments, and individual users. For many critics, it is unlikely to prove any more effective than earlier internet>web strategies, all of which have amounted to a game of whack-a-mole, which according to The Online Slang Dictionary refers to 'the practice of repeatedly getting rid of something, only to have more of that thing appear. For example, deleting spammers' e-mail accounts, closing pop-up windows in a web browser, etc.' (http://onlineslangdictionary.com/meaning-definition-of/whack-a-mole)

4 Formal Regulation and Control: This is yet to develop, despite increasing calls for precisely such action. In some countries and regions, such as China and Russia, levels of control and constraint are already widespread, albeit emanating from different motivations (see for instance https://borgenproject.org/internet-censorship-in-russia-and-china/).

5 Further Development: The appearance of the smartphone in 2007 seems to represent a plateau of sorts in the rapid development of internet>web technologies, although there have been significant advances in areas such as financial technology (fintech), AI/robotics, and big data. Offering predictions regarding technology, however, is even more hazardous than other forms of forecasting. (Examples of poor predictions relating to technology abound; see for instance http://www.rinkworks.com/said/predictions.shtml.) As will be argued in the later sections, strategies such as Berners-Lee's Contract are unlikely to prove effective, leading to increased calls for formal regulation. This may take the form of an international effort, or other countries may try to follow the example of 'The Great Firewall of China', though that may not be an easy strategy to duplicate elsewhere [33]. Some internet>web technologies will evolve, outpacing or out-manoeuvring such efforts. Eventually, there may be periods of equilibrium between the contending forces, although we must also be aware of the increasing likelihood that the effects of climate change will result in a global deterioration and degradation of all existing technologies.

The Ambivalence of Technology and the Paradox of Social Contracts
This all amounts to a disheartening response to Q3: Berners-Lee's strategy is unlikely to prove effective, and might be regarded as merely delaying moves to tougher and more effective, but potentially less welcome, measures backed by legislation and enforcement. Such developments will be resisted tooth-and-nail, particularly by the main commercial interests; indeed, the armoury has been under preparation for some time. A stark illustration of the likely outcome of consensual or self-regulation is provided by the recent announcement of Facebook's new review board, which Vaidhyanathan describes as a strategy 'designed to move slowly and keep things intact' [34]. This is not to say that efforts should not be made to improve matters, and even to alleviate some of the worst aspects. Nevertheless, as indicated earlier, it is probably more realistic to plan and strive for new and more generally beneficent forms of equilibrium between contending forces than to seek long-lasting and comprehensive solutions.
Clearly there is no way that the technological developments emanating from and associated with the internet>web can selectively be put into reverse. Although our current state of technological development may well be halted and significantly degraded as our despoliation of the earth continues unabated, that will not be selective. Two fictional accounts offer some salutary lessons. The first dates from the 1950s: A Canticle for Leibowitz, by Walter M. Miller Jr. [35], traces developments over several millennia as people strive to rebuild civilization in the aftermath of nuclear devastation. The second is provided by William Gibson in The Peripheral [36], where he refers to the global cataclysm as 'The Jackpot' [37]. Gibson has form in astutely predicting technological trends, most notably in Neuromancer, where he coined the term 'cyberspace', defining it as: 'A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts... A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding...' [38] Assuming, and in the hope that, Gibson may not be as prescient regarding The Jackpot as he has been with his other insights, we have to learn to live with the internet>web as it is and as it will develop. We need to recognize that efforts to 'control' its malignant and damaging aspects require a more complex understanding of the ambivalent nature of technology, based on ideas derived from Williams and others. Technology is an aspect of the social milieu from which it arises, yet it also develops with its own logic; this is particularly the case with ICT. Jacques Ellul offers a way of understanding this ambivalence and its ramifications.
Writing in the 1970s in an eerily prescient manner, Ellul discussed what he termed 'the technological system' [39] rather than 'technology' per se. He identified five criteria for the system's sustained existence:

• a network of interrelations;
• a preference for constituent parts of the system to combine with each other, rather than with non-system components;
• parts of the system modify each other's behaviour, so encouraging innovation rather than repetition;
• the system as a totality can enter into relationships with other systems;
• the existence of feedback structures.
For Ellul, the technological system is far more than simply a 'means' or an 'instrument'; it is always a 'mediation', which for Ellul has the dual sense of passively forming a link between two elements and actively intervening between them. This goes beyond Williams' point about technology being looked for and developed, since Ellul contended that the modern technological system has taken on an ever more active mediation, until it forms a complex and dominating system that 'fragments, simplifies, splinters, divides; everything reduced to manageable objects' [30]. Moreover, this system is inherently dynamic and revolutionary, operating according to what Ellul terms its 'functional imperative': everything must be up-to-date, and the imperative operates with the system providing its own dynamic: 'technology requires its own transformation for itself'.
This self-perpetuating and widely transforming 'technological environment makes all problems and difficulties technological'. This is technicism. If scientism is 'science's belief in itself', then technicism can be characterized as technology's belief in itself as the means and measure of all things. Mowshowitz encapsulates this with his definition of technicism as a stance that 'mixes optimism and the entrepreneurial spirit with the engineering tradition' [40]. The ramifications of technicism include excessive optimism about technology and dismissiveness of its political and social implications.
Berners-Lee, quite understandably, was firmly in the technicist domain in 1989 but, judging by his recent actions and pronouncements, now recognizes that such a stance has its limitations, hence his call for a far more expansive and complex orientation to the internet>web. The Contract may well prove to be a useful and fruitful starting point, but it will only prove effective if it leads to recognition and acceptance of the ambivalence of technology, encompassing the ideas propounded by both Williams and Ellul, amongst others. (This is no small matter, since it will involve re-orienting and revising some of the fundamental categories of our thinking; further discussion of this, however, must be postponed.) In addition to the ambivalence of technology, a further set of complexities arises from The Contract itself. Why is this initiative framed as a contract, rather than a declaration, affirmation, or statement? The text includes an Annex that 'links to a selection of human rights and other frameworks that relate to the Contract's substance' [2]. Each of the nine principles is linked to specific paragraphs from a variety of international documents or statements, including The Universal Declaration of Human Rights, The Tunis Agenda from the World Summit on the Information Society, the UN Sustainable Development Goals, and The International Covenant on Economic, Social and Cultural Rights. This offers a lexicon of possible terms that might have been used for this initiative: declaration, framework, agenda, covenant, and so on.
At the online site, the homepage announces 'Contract for the Web: A global plan of action to make our online world safe and empowering for everyone'. A contract is intended to bind a group of clearly defined parties to a definitive set of statements clarifying a range of rights, duties, and responsibilities. Contracts are legal instruments that obligate all parties concerned to a range of actions and accountabilities; they will therefore include reference to general legal issues and legal processes, including penalties that can be incurred if any party breaches the contract.
The Contract is couched in quasi-legal terms and clearly encompasses the idea of rights, duties, and responsibilities for the three groups concerned: governments, companies, and citizens.

The preamble is clear in this regard:
Everyone has a role to play in safeguarding the future of the Web. The Contract for the Web was created by representatives from over 80 organizations, representing governments, companies and civil society, and sets out commitments to guide digital policy agendas. To achieve the Contract's goals, governments, companies, civil society and individuals must commit to sustained policy development, advocacy, and implementation of the Contract text.
Presenting this initiative as a contract was clearly deliberate, since Berners-Lee is intent on offering something that provides far more than a declaration. He is intent on providing the basis for a 'plan of action'. Moreover, the website itself invites people to 'endorse' The Contract, either as an individual or as a representative of a larger organization. Cynics might see this as nothing more than an outlet for 'virtue signalling'. (This is something of a contentious term, but seems potentially appropriate in this case. 'The action or practice of publicly expressing opinions or sentiments intended to demonstrate one's good character or the moral correctness of one's position on a particular issue. 'it's noticeable how often virtue signalling consists of saying you hate things'; 'standing on the sidelines saying how awful the situation is does nothing except massage your ego by virtue signalling'. https://www.lexico.com/definition/virtue_signalling.) In a similar manner large multinationals, including those directly or closely involved in energy supply, are accused of 'green-washing' [41].
In presenting a contract for the web, Berners-Lee et al. are seeking general consent for the ways in which the internet>web can be developed and controlled. This idea of a basis for consent derives from the idea of a social contract or social compact. Social contract ideas can be traced back to Epicurus (4th and 3rd centuries BCE), but more specifically to the work of Hobbes, Locke, Rousseau, and Kant in the 17th and 18th centuries, after which they fell somewhat into abeyance until John Rawls revived them in A Theory of Justice [42]. The Stanford Encyclopedia of Philosophy (SEP) (a rich and stimulating resource for all things philosophical: https://plato.stanford.edu/) offers a useful introduction and overview.
The basic idea seems simple: in some way, the agreement of all individuals subject to collectively enforced social arrangements shows that those arrangements have some normative property (they are legitimate, just, obligating, etc.). Even this basic idea, though, is anything but simple, and even this abstract rendering is objectionable in many ways. (emphasis added)

When discussing the ways in which authority and consent arise and can be sustained, political and moral philosophers have used the metaphor of a social contract. Unlike claims to authority based on oligarchy, traditional monarchy, or theocracy, contractual models argue that agreement on political processes and moral norms ought to be derived from mutual agreement: a contract or compact.
There are now understood to be two main strains of thought on social contracts: contractarianism and contractualism. Contractarianism stems from the work of Thomas Hobbes. It is premised on the idea '. . . that persons are primarily self-interested, and that a rational assessment of the best strategy for attaining the maximization of their self-interest will lead them to act morally (where the moral norms are determined by the maximization of joint interest) and to consent to governmental authority'. Contractarianism argues that we are each motivated to accept morality 'first because we are vulnerable to the depredations of others, and second because we can all benefit from cooperation with others'. In other words, people will sign up to the contract because they fear the consequences of not doing so. Hobbes, writing in Leviathan, argued that without a social contract people would live in a state of nature devoid of security, requiring constant vigilance: a war of all against all, where life was 'solitary, poor, nasty, brutish, and short' [43].
Contractualism, in contrast, stems from the work of Immanuel Kant, whereby 'individuals are not taken to be motivated by self-interest but rather by a commitment to publicly justify the standards of morality to which each will be held'. In contrast to the previous Hobbesian position, people sign up to the social contract because it is seen as good in itself and results in a benefit for everyone.
The most influential modern exponent of this orientation was John Rawls and his conception of 'the original position', a central part of A Theory of Justice [42].
SEP summarizes this as follows: The original position is a central feature of John Rawls's social contract account of justice, "justice as fairness," set forth in A Theory of Justice. The original position is designed to be a fair and impartial point of view that is to be adopted in our reasoning about fundamental principles of justice. In taking up this point of view, we are to imagine ourselves in the position of free and equal persons who jointly agree upon and commit themselves to principles of social and political justice. The main distinguishing feature of the original position is "the veil of ignorance": to insure impartiality of judgment, the parties are deprived of all knowledge of their personal characteristics and social and historical circumstances. They do know of certain fundamental interests they all have, plus general facts about psychology, economics, biology, and other social and natural sciences.
The Contract for the Web falls somewhere between these two positions. As evidenced by Berners-Lee's statements regarding the potential for dystopia, the motivation behind it clearly derives from something akin to a digital state of nature: the current internet>web is moving towards a digital war of all against all, if it is not already there. Hence the incessant and increasing demands for users to exercise constant vigilance and take responsibility for their own security.
On the other hand, the tenor of The Contract is far more akin to Kantian or Rawlsian contractualism. Governments, companies, and individuals are exhorted to endorse The Contract on the basis that it will lead to a benefit for one and all, regardless of their specific interests and current position.
This combination of Hobbesian and Kantian/Rawlsian motivations was clearly in evidence when Berners-Lee, preparing the ground for the announcement of The Contract, presented The Dimbleby Lecture on the BBC. (See the report in The Daily Telegraph, https://www.telegraph.co.uk/news/2019/11/18/social-media-content-ghastly-humans-monitor-tim-berners-lee/, and note its incorrect use of the term 'internet'. The lecture itself can be seen at https://www.youtube.com/watch?v=CmcIVdtVvJw; the discussion following the lecture is at least as important as the lecture itself, and the participants are listed at https://www.imdb.com/title/tt11307186/fullcredits.) The lecture refers to the dystopian aspects of the internet>web, particularly relating to social media, while also pointing a way ahead based on a commitment to a publicly justifiable and transparent set of standards of behaviour and action.
Hobbes' contractarianism encompasses the argument that people should sign up to a social contract acknowledging the authority of an absolute sovereign power, unlimited and undivided. (Again, the entry for Hobbes in SEP is an invaluable starting point: https://plato.stanford.edu/entries/hobbes-moral/#LimPolObl) This is an unpalatable conclusion, and many of those who followed in Hobbes' wake offered alternative ideas. Clearly, Berners-Lee and his colleagues are not endorsing the idea of an undivided and unlimited sovereign power for the internet>web; The Contract leans far more towards a consensual arrangement. It refers to the UN Internet Governance Forum in several places, but it is largely silent on any details regarding actual governance structures and processes. It might be unrealistic to expect a fully-fledged governance model at this initial stage, but one needs to be developed and discussed as a key priority. Without it, and given the earlier discussion of the hands-on approach now being taken by Google, the danger is that control will be ceded by default to a small band of Internet Leviathans, which may or may not include government agencies or international bodies such as the UN.
One line of criticism of all social contract models is that they are predicated on an individualist ontology that places the individual prior to and outside society. The issues and debates around this are complex, and again SEP offers a rich and varied set of accounts; relevant entries include https://plato.stanford.edu/entries/social-ontology/ and https://plato.stanford.edu/entries/communitarianism/. For present purposes, the key issue is the paradox that these theories are premised on an outcome in which all parties agree to be bound by the terms of the social contract. In other words, the problem of how social solidarity and stability arise is already a supposition and premise of the argument. What is to prevent people signing up to the social contract and then reneging? The Hobbesian account deals with this to some extent by granting absolute power to a sovereign or sovereign institution, including any form of sanction or punishment. The Kantian/Rawlsian account, however, rests largely on the goodwill and fellow feeling of all parties concerned, in which case why is a social contract needed in the first place?
Alternatives to social contract theories are on offer from communitarians, amongst others, and also from those who invert the argument, positing the existence of responsibility, ethics, and sodality prior to social development; i.e., there has to be some form of shared and approved understanding before any contract can be put in place and respected. Here, an ethical and moral orientation is required as the prerequisite to society and social cohesion. (This is again a complex and important topic that requires extensive discussion and moves well beyond the current issue.) The Contract exhibits this paradox in its appeal to interests that are not obviously or demonstrably reconcilable. The idea of an agreement between governments, private organizations, and citizens, allocating three principles to each group, founders if there are any inherent contradictions between the different interests and specific principles. Each of the nine principles is laudable, but they may not be easily reconcilable: several refer to access and availability, some to security, privacy, and trust, and others to ensuring that users can shape and contribute to the web. Enhancing many of these aspects without undermining or impairing others will be a key challenge, and will probably require compromises by one or more parties. Hence the need for a clear governance model, with agreed structures and processes. Unfortunately, the auguries are not promising!

In Conclusion . . . for Now . . .

At around the same time as The Contract appeared, it was announced that the dot org [.org] top-level domain was being sold off to a private equity organization. Cory Doctorow, a noted civil rights activist who has been a keen and visible promoter of the Creative Commons organization and the liberalization of copyright, pulled no punches in reporting this on his Boing Boing blog [44].
Earlier this month, management of the .org top-level domain underwent a radical shift: first, ICANN dropped price-caps on .org domains, and then the Internet Society (ISOC) flogged the registry off to Ethos Capital, a private equity fund, and a consortium of three families of Republican billionaires: the Perots, the Romneys, and the Johnsons. This doesn't just mean that nonprofits-for whom the .org top-level domain was created-will pay higher prices to maintain their domains, and it doesn't just mean that private equity funds-rather than a transparent, nonprofit NGO-will be able to censor what gets posted to .org domains, by kicking out any domain that it doesn't like (remember when everyone was cheering because Nazi websites were being stripped of their domain names by registrars? This cuts both ways: if registrars have the power and duty to respond to speech they object to by taking away organizations' domains, then that duty and power also applies to billionaires and private equity-appointed administrators).

(This echoes the point made earlier with regard to the way in which some Google employees have re-directed anti-vax enquiries.) By December 2019, pressures started to build from those critical of this move, and a campaign was established to persuade the Internet Corporation for Assigned Names and Numbers (ICANN) to reconsider its decision. Currently, the dot org domain registry is managed by Public Interest Registry (PIR), and the sale must be agreed by ICANN. The dot org domain is of critical importance to NGOs and civil society organizations in general. Ironically, ICANN is itself a non-profit organization with the web address ICANN.org! On 9 December 2019, the ICANN website gave an update on the proposed sale-here are the opening paragraphs.
The proposed acquisition of Public Interest Registry (PIR) by Ethos Capital was announced on 13 November 2019 by the parties and the Internet Society (ISOC). This announcement has raised many questions. In light of this, we want to be transparent about where we are in the process.
On 14 November 2019, PIR formally notified ICANN of the proposed transaction. Under the .ORG Registry Agreement, PIR must obtain ICANN's prior approval before any transaction that would result in a change of control of the registry operator. Typically, similar requests to ICANN are confidential; we asked PIR for permission to publish the notification and they declined our request.
According to the .ORG Registry Agreement and our processes for reviewing such requests, ICANN has 30 days to request additional information about the proposed transaction including information about the party acquiring control, its ultimate parent entity, and whether they meet the ICANN-adopted registry operator criteria (as well as financial resources, and operational and technical capabilities). [45] By late March 2020, it was far from clear how this situation might be resolved. Thanks to various efforts, including those coordinated through sites such as savedotorg.org, ICANN and PIR agreed to postpone any decision until early May. In April, savedotorg noted that . . .
[W]hen ISOC (The Internet Society) originally proposed transferring management of .ORG to PIR in 2002, ISOC's then President and CEO Lynn St. Amour promised that .ORG would continue to be driven by the NGO community-in her words, PIR would "draw upon the resources of ISOC's extended global network to drive policy and management." As long-time members of that global network, we insist that you keep that promise.
On 30th April, ICANN announced that the proposal had been rejected.
Today, the ICANN Board made the decision to reject the proposed change of control and entity conversion request that Public Interest Registry (PIR) submitted to ICANN.
After completing extensive due diligence, the ICANN Board finds that withholding consent of the transfer of PIR from the Internet Society (ISOC) to Ethos Capital is reasonable, and the right thing to do. [46] This must count as a significant moment, but the episode itself brings into question the likely effectiveness of The Contract, since it indicates that there are inherent contradictions and conflicts of interest between and within the three groups identified. Previous experiences regarding net neutrality (see, for instance, ACLU [47], also [48,49]) and government interference in internet traffic and encryption (for example, the various efforts by The Five Eyes to enforce de-encryption [50] and the efforts of many governments to gain access to anyone's data [51]) demonstrate that conflicts between public and private sectors, and between governments and citizens, are inescapable. Moreover, they usually only come to an end once the private sector prevails over the public sector, or governments do so over citizens. The save dot org campaign may have won for now, but who is to say that the transfer will not be accomplished, perhaps in a different but irrevocable manner, in the future?
And all this began before the COVID-19 pandemic became the overwhelming global concern. TBL's appeal for a consensual agreement for the future of the internet>web has faded into the background, part of a bygone era which seems increasingly alien to our current existence. Yet as the full ramifications of the pandemic unfold, the motivations and rationale underlying The Contract have in fact become more pertinent and crucial.
When I was working on the early drafts of this article in November/December 2019, I little thought that my brief discussion of the work of Hobbes, Locke, Kant, and Rawls would prove to be of such immediate and demanding relevance. The distinction between contractarianism and contractualism might have seemed incongruous and nugatory for a discussion centred around internet>web governance. Yet only a few months later, Hobbes' advocacy of the benefits offered by assenting to a sovereign power able to provide each of us safety and security, at the expense of some measure of our liberty, set against an orientation that places primary value on our collective and communitarian sensitivities, could hardly be more apposite and relevant. Each one of us is now caught in Rawls' 'veil of ignorance'. Most of us have no idea whether or not we have been infected with the SARS-CoV-2 virus. (SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) is the name of the virus; COVID-19 is the disease caused by the virus.) Even if we have been tested and found positive, we cannot be certain that we then have some immunity to reinfection. Moreover, as we engage with others in our daily lives, we have no idea who or what might represent a danger to us; people, animals, and even inanimate objects can be sources of transmission and infection.
Discussions regarding the balance between security and freedom are no longer of arcane interest solely to political philosophers and forums on civil liberties; they have an immediate and increasingly significant impact on our daily lives as governments across the world seek to utilize the internet>web to track and trace each one of us, on the premise that in so doing they can offer protection both to individuals and the general population.
The internet>web is proving a significant benefit to humanity in an era of self-enforced or government-mandated isolation-referred to as shelter-at-home or lock-down, respectively-although, despite the pervasiveness of the technology, significant groups of people have limited or no access to it to ameliorate their condition. I would argue, however, that in many societies, the widespread and relatively enduring adherence to voluntary or enforced isolation would not have been accomplished without the internet>web.
Yet the dystopian aspects, so alarming to TBL and others, have also found a new force; people are being swindled by fake cures and prophylactics, defrauded by phishing scams and other frauds, and persuaded by all manner of conspiracy theories, including renewed efforts by the anti-vaxxers and one that lays the blame for the virus on 5G technology.
Perhaps even more alarmingly, governments are taking the opportunity to push through peremptory legislation empowering extensive and intrusive surveillance of the general population, requiring access to everyone's smartphone and all manner of big data, across both the public and private sector. This may be warranted in the context of the pandemic, but the possible and likely abuses and threats to people's privacy and liberty cannot simply be cast aside, with little or no discussion or independent monitoring of the process. It is critical that various forms and processes of scrutiny are swiftly established, incorporating participation from a wide range of interests similar to those advocated by TBL. Many governments are evading scrutiny, and hurriedly enacting legislation while accepted and standard processes of debate and deliberation are in abeyance or take place in an abbreviated or constrained manner. Moreover, the informatics community needs to be involved in these discussions, offering a specific form of scrutiny that challenges and rectifies the characterization of the internet>web upon which many of these policies rely-i.e., that the technologies involved are safe, secure, and politically neutral, while being necessary and sufficient.
Two commentators who exemplify the sorts of critique I have in mind are John Naughton and Richard North. Naughton reports regularly in The Guardian and has a background that encompasses engineering, computing, the internet, and the public understanding of technology. (https://en.wikipedia.org/wiki/John_Naughton) His recent articles in The Guardian/The Observer and The New Statesman deal directly with the issues raised above. 'Slouching towards dystopia: the rise of surveillance capitalism and the death of privacy' [52] was written in February 2020, before the full implications of the pandemic had swept across the world, but the intimations of what was to come were all too evident. In effect, Naughton echoes TBL's fear of a coming digital dystopia, describing how '[O]ur lives and behaviour have been turned into profit for the Big Tech giants-and we meekly click Accept'. He wonders: 'How did we sleepwalk into a world without privacy?' The title of the article indicates that Naughton places prime responsibility for this state of affairs not on the technology, but on the emergence of a 'new economic order', which Shoshana Zuboff has termed surveillance capitalism [53].
. . . defined as: "a new economic order that claims human experience as the raw material for hidden commercial practices of extraction, prediction and sales". Having originated at Google, it was then conveyed to Facebook in 2008 when a senior Google executive, Sheryl Sandberg, joined the social media giant. So Sandberg became, as Zuboff puts it, the "Typhoid Mary" who helped disseminate surveillance capitalism.
As the pandemic has developed, Naughton has charted the way in which the process of slouching and 'sleepwalking' has accelerated as 'changes that would in pre-corona times have generated years of debate, dissent, hesitation, opposition and delay turn out to be possible overnight' [54]. To some extent, the rapid deployment of such measures may be warranted. There is little point in protracted and extended discussions and consultations when urgent action is required. But we must also be cognizant of the ways in which disaster capitalism [55] takes advantage of crises to shore up and even exacerbate existing inequalities and inequities. Klein's original use of the term was in her book The Shock Doctrine, published in 2007, but she has reiterated the key points of her argument in the current context.
Author, activist and journalist Naomi Klein says the coronavirus crisis, like earlier ones, could be a catalyst to shower aid on the wealthiest interests in society, including those most responsible for our current vulnerabilities, while offering next to nothing to most workers and small businesses. [56] Naughton offers his critique from an informatics background. Klein is a social activist and prodigious campaigner. Both concur with regard to the ways in which many aspects of governmental responses to the pandemic follow the logic of 'never let a crisis go to waste' (often attributed to Winston Churchill, although this is open to dispute). After 9/11, in 2001, invasive surveillance practices at airports, and other, more ubiquitous and covert measures, were introduced and quickly became routine and generally tolerated; yet previously, their introduction would have been far more controversial and widely resisted. So too, in 2020, we are witnessing a significant increase in the surveillance powers of states across the world, again with widely debated and disputed policies suddenly enacted with little or no room for discussion or dissent.
Unfortunately, the basis for such sweeping impositions is not well-founded, and the technology is nowhere near as effective as some would claim; for instance, the idea that tracking and tracing can be accomplished by everyone downloading a tracking app on to their smartphone is largely an ominous if legislatively convenient fantasy. Richard North, with a background in public health, has written extensively about the necessity for a boots-on-the-ground approach to tracking and tracing. (North blogs on a daily basis at eureferendum.com. Although I have no great sympathy with many aspects of his overall political position, I have great respect for his in-depth research and experience, particularly with regard to the pandemic and the ways in which it has been so tragically mishandled.) This demands a well-managed and localized strategy, established and controlled by community-based public health inspectors, carried out by a multitude of people trained in tracking and tracing. A technological and centralized solution is no substitute. North's argument received stark and noteworthy support in a recent article which argues that countries such as the UK and USA have a great deal to learn from those who gained experience in combatting Ebola and tuberculosis outbreaks in Africa and Asia [57].
Poor countries have advice to offer.
Contact tracing is used all over the world, including in the U.S. The idea is to track down anyone in recent contact with a newly diagnosed patient, then monitor the health of these contacts. In the developing world, it's been a valuable tool in fighting infectious diseases like Ebola and tuberculosis. Public health workers there have lots of experience. ... Partners in Health, which is known for its work in Haiti, Rwanda and Peru, is helping to set up a coronavirus contact tracing program in Massachusetts.
Gibson's characterization of the internet as a 'consensual hallucination' is apt; but in our current global predicament, unnervingly close to Gibson's 'Jackpot', the internet is all too real and vital. The problem is that the hallucination is in danger of blinding us to the reality of how best to move forwards to whatever transpires as 'the new normal'.
At the start of this discussion I posed three questions.
[Q1] What has gone wrong? Why has the principle of putting everyone in contact with one another, with everyone having access to a vast array of information, gone so drastically awry?
[Q2] Is there something wrong with the technology itself?
[Q3] Why is Berners-Lee arguing for a 'contract', and is his strategy appropriate and feasible? In taking up these issues, my main aim was to focus on the way we need to develop clear and effective processes of governance of the internet>web. This involved reference to a variety of topics that have become all too relevant in recent months. Overall, my argument has assumed that it is in all our interests that such mechanisms exist and are effective, but one of the reviewers for the initial version of this discussion raised several points, deliberately playing devil's advocate, that challenge this assumption. For instance: 'Whoever says that the Internet>Web needs to be nice and be run according to Twitter or Facebook's policies on speech? Other than for laudable moral reasons, why is this a good idea? What gives the person a right to make this decision? I don't pay anything to Facebook, YouTube, Twitter, etc.; what gives me the right to expect freedom of speech or respect of different cultures? The Twitter/FB as representing a "public square" argument aside, the right to post on social media isn't a naturally-born one.' Furthermore, the reviewer added: 'Why is it so bad that algorithms reflect societal biases? The author makes a great case for why it should be expected, but what does this say about society? Is most of the interconnected world just plain mean and worse?'
To an extent, this is a caricature of the misanthrope; someone who assumes the worst of humankind, seeing little or no prospect that things can or will improve, embodied by Samuel Johnson-'I hate mankind, for I think myself one of the best of them, and I know how bad I am'-and also, fittingly, by Bill Hicks: 'I'm tired of this back-slappin' "isn't humanity neat" bullshit. We're a virus with shoes.' It is also a perspective anchored in an individualized world, with everyone acting primarily for their own self-interest. But it is precisely the fear of this war of all against all that drives us to the sorts of complex solutions presented by Locke, Hobbes, Kant, and Rawls. We could, as individuals or even as part of a concerted and coordinated effort, all decide not to use Facebook, Google, Twitter and the like. As my reviewer points out, no-one has an inalienable right to use these forms of communication, and we do not directly pay to use them, so we have no consumers' rights (Richard Stallman and many others argue that if use of a platform is free, then the users are the product). On the other hand, GAFA and BAT (respectively, Google, Apple, Facebook, Amazon, and the Chinese companies Baidu, Alibaba and Tencent) are now massive monopolies in terms of the products and services they offer, with more power and influence than many governments, let alone groups of dissatisfied users. The free-marketeers have made great play of Adam Smith's idea of 'the invisible hand' to justify policies centred around diminished state interference with the market, complemented by active encouragement of the private sector and people's self-interest. But Smith only uses the term once in his 600+ page magnum opus, and he was far more concerned in that book with the problem of cartels and monopolies.
"People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices." [58] The free-marketeers somehow overlook this issue, but it is critical and draws attention to the necessity for markets to be continuously regulated and brought back from any major imbalances and inequities; something that must involve the state in the form of government policies and public-sector agencies. The key criticism of such an orientation is that once states take on or are granted these powers, the economy is at the command of the state, and producers and consumers must adhere to governmental dictates. Hence the importance of discussion and clarification of the nature and processes of effective governance, and particularly in the case of the internet>web, which transcends national and regional boundaries; a topic demanding global discussion.
The current pandemic has brought this issue to the fore. My reviewer wondered why it was 'so bad that algorithms reflect societal biases', suggesting that we just might have to accept the nastiness and callousness of the interconnected world. Yet this is to ignore the immediate and lethal consequences. Internet messages that advise ingesting bleach as a COVID-19 cure or preventative have spread, and people have died as a result. Similarly, postings that blame the pandemic on the introduction of 5G technology have led to arson attacks on phone masts-few of which carry 5G technology-in some cases impacting the communications facilities of nearby hospitals.
When TBL called for a contract for the web, there was a stark distinction between the freely available and uncensored internet>web and the Golden Shield Project, colloquially termed The Great Firewall of China [59]. If the former was problematic for the reasons referred to in the earlier sections of this discussion, the latter veered too far in its efforts to 'protect'. One of the prime objectives of The Contract was to find a consensus position that steered a path between the increasingly dystopian free-for-all and strict control and censorship. Yet in the light of the pandemic and incidents such as those referred to above, there have been calls for widespread adoption of the Chinese model. Thus, Matt Taibbi reports on a recent article in The Atlantic which argues that '[I]n the debate over freedom versus control of the global network, China was largely correct, and the U.S. was wrong' [60]. According to Taibbi, the authors Jack Goldsmith and Andrew Keane Woods state that . . .
Constitutional and cultural differences mean that the private sector, rather than the federal and state governments, currently takes the lead in these practices . . . But the trend toward greater surveillance and speech control here, and toward the growing involvement of government, is undeniable and likely inexorable.
They see this not only as inexorable but as potentially desirable, with Taibbi noting that, for the authors, one benefit of the coronavirus is that it is waking us up to 'how technical wizardry, data centralization, and private-public collaboration can do enormous public good'.
Although this position may not be widespread, Taibbi positions it amongst a wider trend which he summarizes as 'Let's rethink that whole democracy thing', something 'that began sprouting up in earnest four years ago'-i.e., in the wake of Trump's election campaign and victory. This may or may not be the case, but in the context of the present discussion, it further underlines the importance of our being able to develop an effective form of governance for the internet>web that steers between a free-for-all on the one hand, and strict, largely un-scrutinized, and overly robust censorship on the other.
All of this adds weight to TBL's call for a contract. Allowing ubiquitous access to the internet>web has not proved to be the boon TBL and others envisaged in the 1980s. Yet the technology itself, together with others that it helped spawn, now provides a powerful and increasingly significant and essential component of 21st century social existence. There is nothing inherently malevolent in these technologies per se, but we have to learn to live with the complexities and paradoxes of the internet>web as they have developed and threaten to lead us to a dystopian future. One way forward may well involve a contract of some sort, but taking account of the full gamut of stakeholders is likely to prove far more complicated and fraught, as evidenced by the uproar surrounding the plan to sell off the dot org domain. Ultimately, we have to hope that our concerted and effective efforts will avoid any road to dystopia, instead ushering in something far more akin to Crocodile Dundee's idea of friendliness.
Funding: This research received no external funding.