On being promoted to a personal chair in 1993 I chose the title of Professor of Informatics, specifically acknowledging Donna Haraway’s definition of the term as the “technologies of information [and communication] as well as the biological, social, linguistic and cultural changes that initiate, accompany and complicate their development” [1]. This neatly encapsulated the plethora of issues emanating from these new technologies, inviting contributions and analyses from a wide variety of disciplines and practices. (In my later work Thinking Informatically [2] I added the phrase “and communication”.)
In the intervening time the word informatics itself has been appropriated by those more focused on computer science, although why an alternative term is needed for a well-understood area is not entirely clear. Indeed the term is used both as an alternative and as an addition, as in “computer science and informatics”.
On the other hand the word informatics itself has become widely used in conjunction with a host of other terms—e.g. health informatics, biological informatics, medical informatics, social informatics, community informatics—and these terms are widely understood and generally accepted across cognate areas of research and study.
This is the basis for the broad sweep of topics and disciplines that we plan to cover in this new journal. It has the general title Informatics, which might lead some to see it as an outlet purely for papers in computer science, software engineering, and artificial intelligence (AI), but a glimpse at the four sub-sections that we intend to cover should dispel that view, although we hope it will not discourage computer scientists from submitting to the journal.
The four named sub-sections have been designated as Information Science; Biomedical and Health Informatics; Human Computer Interaction; Media Arts and Sciences. The Editorial Board will include Section Editors for each of these, and we anticipate that each area will be represented in some measure in each quarterly issue, although we also intend that some issues may focus on a specialized topic that deliberately cuts across the sub-sections or concentrates on only one or two of them.
We recognise that people will have their own conceptions of what these section titles actually encompass, but rather than seeking to impose a definitive set of characterizations—which is both unrealistic and unnecessary—we welcome submissions from a variety of perspectives based on different but defensible understandings of the terms. We would, however, encourage those submitting papers to give a clear indication of the sub-section which they feel is most appropriate for consideration of their work.
Issues centred on information and communication technologies (ICTs) have to contend with the paradox that while the hardware devices that embody these technologies are all too apparent and ubiquitous, understanding the activities with which they are involved is far more complex. Winston Churchill once remarked that “We shape our buildings; thereafter they shape us”; Marshall McLuhan rephrased this as “We shape our tools and they in turn shape us”. We need to understand that this applies to our technologies, and hence that studies of ICTs should bring this duality to the fore. We trust that submissions to this journal will specifically address this aspect, for instance in seeking to account for the ways in which technical advances initially developed as responses to technically defined problems or issues have now become core aspects of and drivers of social existence.
In many respects it can be argued that ICTs initially developed from the encounter between “the awesome task of the management of organizational processes on a grand, corporate and societal scale and the ambitions of the modern state and corporation” (Zygmunt Bauman, originally applied to sociological discourse, with the italicized terms added by the author; [3], p. 76).
], p. 76)). The very early efforts of Charles Babbage in the 19th century to develop a mechanical computer derived to a significant extent from his interests in formulating the most effective and efficient arrangements for management of a labour force in factories, as well as his desire to develop a robust technical means of providing accurate tables of the times of high and low tides. In the mid-20th century advances in computer technology were spurred on by the demands of the Allied War Effort, particularly for code breaking. By the early 1950s computers were beginning their move from their early mathematical confines to the modern corporation, exemplified in the first commercial application for the UK catering company J Lyons and Co—LEO or Lyons Electronic Office. This was in spite of much misunderstanding of the potential technology, (in)famously exemplified in faulty predictions such as “Computers in the future may weigh no more than 1.5 tons” (Popular Mechanics, forecasting the relentless march of science, 1949); “I think there is a world market for maybe five computers.” (Thomas Watson, chairman of IBM, 1943); “I have travelled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won’t last out the year.” (The editor in charge of business books for Prentice Hall, 1957)
One of the reasons these predictions were so wide of the mark was that these technologies were taken up and developed in ways that even their own progenitors and inventors could not have foreseen. This is not unusual in the history of technology: the telephone was first thought of as a device for broadcasting, while radio was originally developed for one-to-one communication. But technical solutions to what are initially seen as engineering problems often afford new possibilities—i.e. “they shape us”. Furthermore technologies often come about by accident, or are regarded as interesting but not really useful when first developed. An example of the former is the microwave oven, invented by Percy LeBaron Spencer, who noticed that a chocolate bar in his pocket had melted when he stood close to a magnetron, a device for transmitting the microwave signals essential to the operation of radar. The prime example of the latter is laser technology, first demonstrated in the 1950s but not considered “useful” until some time later; indeed, lasers were first described as a “solution looking for a problem”.
What all this demonstrates is that the relationship between technology and “everything else” is not straightforward. The phrase “technology and society” immediately raises the assumption that technology somehow exists apart from its social environment, leading then to questions regarding the influence of one on the other. Raymond Williams in his discussion of the development of television distinguished between those who argued in terms of technology as a social determinant and those who argued that it was a “symptom” of society [4]. In both cases technology comes to be seen as independent of social context. Technological determinism is the view that sees research and development as self-generating in an independent sphere, leading to new social conditions as the technology becomes widespread. The symptomatic perspective similarly sees technology R&D as independent, but with the results taken up and used by existing social processes.
Thus ICTs might be seen as deterministic if it is argued that these technologies were invented and developed as a result of scientific and technical research, subsequently gaining power and influence as a medium and mechanism for commercial application, social communication and managerial control that altered many of our institutions and central forms of social relationship, as well as cultural and social life. The symptomatic view would argue that ICTs, once developed and on hand, were selected and taken up for their potential profitability, and/or were exploited in order to promote management and state surveillance and control.
The two positions share the assumption that technology is an isolated facet of existence, outside society and beyond the realm of intention. Williams wanted to stress that technology must be seen as being “looked for and developed with certain purposes and practices already in mind”, these purposes and practices being “central, not marginal” as the symptomatic view would hold.
Manuel Castells offers a similar position in rejecting determinism either of society by technology or vice-versa [5]:
Of course technology does not determine society. Nor does society script the course of technological change, since many factors, including individual inventiveness and entrepreneurialism, intervene in the process of scientific discovery, technological innovation, and social applications, so that the final outcome depends on a complex pattern of interaction. Indeed the dilemma of technological determinism is probably a false problem, since technology is society, and society cannot be understood or represented without its technological tools.
Joseph Weizenbaum, one of the key figures both in the development of AI and in its later critique, encapsulates the arguments of both Castells and Williams, particularly in regard to the latter’s call for intentionality in technological development to be taken into account. In the 1980s Weizenbaum argued that information technology came “ready made” into the context that demanded its use; “the remaking of the world in the image of the computer started long before there were any electronic computers” [6]. This point was amplified by Bruce Berman: “If the growth of capitalism based on hierarchy, controlled sequence and rapid iteration remade the world in the image of the computer, then the seemingly uncontrollable growth of corporate and state bureaucracy was the crisis that was averted by the computer’s appearance” ([7], p. 23).
All this implies that we need to recognize technology, and in particular ICTs, as imbricated with all other facets of our social existence. There are already numerous outlets for highly specialized papers and reports on a whole host of technical subjects, so any newcomer, such as Informatics, needs to offer something distinctive. Hence the reference at the start to Haraway’s definition of the term, and my hope that it will evoke submissions on a wide range of topics that offer engagement with the full range of her vision.
2. A Note on Business Models for Academic Journals
When I was invited to become Editor-in-Chief of this journal I looked at MDPI’s website and noticed that they publish a vast range and number of journals, with particular emphasis on the biological and medical sciences. They also announce at the top of their homepage that “MDPI is a publisher of peer-reviewed, open access journals since its establishment in 1996”.
The open access model of journal publication is an appealing one for authors since it seems to afford the opportunity for publications to be accessed by anyone and everyone, whether or not they have access via some institutional affiliation. The model, however, requires financing and resourcing from somewhere, and so in many cases it relies on an “author pays” system. MDPI state their policy as follows:
Open access Publishers cover their costs for editorial handling and editing of a paper by charging authors’ institutes or research funding agency. The cost of handling and the production of an article are covered through the one-time payment of an Article Processing Charge (APC) for each accepted article. The APC of open access Publishers is only a fraction of the average income per paper that traditional, subscription-based Publishers have been earning. MDPI’s Article Processing Charge (APC) is the same regardless of the length of an article, because we wish to encourage publication of long papers with complete results and full experimental or computational details.
When I first saw this I was somewhat disconcerted, a reaction echoed by several colleagues, who responded along the lines of likening it to “vanity publishing”, also pointing out that it was particularly egregious to charge for publication given that researchers and other academics edit and review submissions for journals free of charge. When I raised this with the Managing Editor at MDPI she replied that the policy operated effectively since many authors were part of research teams whose funding and support specifically earmarked amounts for publication and dissemination. Moreover MDPI allocate a number of papers in any issue as free-of-charge, and for a new journal such as Informatics there would be no charges for the first two years of publication. In addition, all papers are peer-reviewed in a fashion identical to that of most other academic journals.
If we look at the “standard” business model for academic journals the following points arise:
The journals are for the most part owned by the publishers, sometimes in conjunction with academic institutions or research bodies.
The business model is based on low-volume, high-price subscriptions, with some differentiation, and volume discounts for individual subscribers, often via professional bodies.
Many of the costs of production are borne by voluntary efforts of editors, reviewers, and contributors.
Journals achieve some level of sustainability, with revenues exceeding costs, often based on the high cost of institutional subscriptions plus some advertising.
There is a form of branding, based in part on one or more of the following: the reputation of the publisher, the institution most closely associated with the journal, the editorial team, and the ratings and indices achieved by the contents of the journal itself in the past.
There is an assumption that there is a significant over-supply of submissions, and hence a lower number of acceptances, with a higher ratio of the former to the latter being seen as one indicator of quality.
This standard model became the accepted one in the era of printed journals. But it must be stressed that it relies on the goodwill and voluntary efforts of large numbers of people who edit, review, and respond to submissions. It also relies on fairly hefty subscriptions being forthcoming from institutional subscribers, which then become the main point of access, with associated costs of indexing, binding, storing and conserving. The model made sense in the pre-internet age when there were high transaction costs concerning receipt of submission, contacting reviewers, printing and distributing the journal and so on. This role was fulfilled by predominantly mainstream publishers, who were then the primary financial beneficiaries.
But with the advent of the internet the transaction costs of journals and all other forms of publishing sharply diminished. Furthermore these developments also opened the way for self-publication along the lines that Yochai Benkler in “The Wealth of Networks” [8] has termed “commons-based peer-production”—“a collective effort of individuals contributing towards a common goal in a more-or-less informal and loosely structured way”.
In fact even in pre-internet times, the “reader-cum-subscriber pays” model relied heavily on this sort of collaboration, and it continues to do so, as indeed does the open access model, embodying the principle of “author pays”. Although this facet of academic life, whereby countless unpaid hours are spent reading, reviewing, and managing journal submissions, has generally been accepted without comment, in recent times this has started to change. One recent example, almost a howl of anguish by Hugh Gusterson, appeared in The Chronicle of Higher Education [1]. Gusterson is worth quoting at length, because he encapsulates several core features of the situation.
... I get paid nothing directly for the most difficult, time-consuming writing I do: peer-reviewed academic articles. In fact a journal that owned the copyright to one of my articles made me pay $400 for permission to reprint my own writing in a book of my essays.
When I became an academic, those inconsistencies made a sort of sense: Academic journals, especially in the social sciences, were published by struggling, nonprofit university presses that could ill afford to pay for content, refereeing, or editing. It was expected that, in the vast consortium that our university system constitutes, our own university would pay our salary, and we would donate our writing and critical-reading skills to the system in return.
The system involved a huge exchange of gifted labor that produced little in the way of profit for publishers and a lot in the way of professional solidarity and interdependence for the participants. The fact that academic journals did not compensate the way commercial magazines and newspapers did only made academic publishing seem less vulgar and more valuable.
But in recent years the academic journals have largely been taken over by for-profit publishing behemoths such as Elsevier, Taylor & Francis, and Wiley-Blackwell. And quite a profit they make, too: In 2010 Elsevier reported profits of 36 percent on revenues of $3.2-billion. Last year its chief executive, Erik Engstrom, earned $4.6-million.
The older model relies to a significant extent on institutions—universities and research organizations—subsidizing publication. Consequently those without links to such organizations usually find it difficult or impossible to access the publications, and increasingly university libraries, with decreasing budgets, are reducing the range of journals to which they subscribe, or taking lower cost options such as those which only allow access 12–24 months after publication. As a consequence, not only do academics give freely of their time and resources for peer-review, and all that is involved in taking papers from submission to eventual publication or rejection, but once published, their work is only available to a limited audience.
The “author pays” model might seem a retrograde step, adding further burdens—now including payment as well as time and voluntary effort—to the process of getting published. But it does increase the potential readership of such publications since this model is usually wedded to an open access policy for any and all potential readers, so the papers will be immediately and widely accessible on the internet. This does engender dangers of over-production, with a burgeoning of such journals all competing for customers in the sense not of readers but of paying authors; but in time this may well be manageable in a manner similar to the ways in which the general Open Source model operates with appropriate quality measures (see below).
But it is critical that we are all aware that both “reader pays” and “author pays” models rely heavily on voluntary, unpaid efforts from the wider community; and both operate peer-review policies that build upon these voluntary efforts. It may be that the latter model results in well-funded researchers subsidizing those less fortunate, who in turn can reach a wider readership, and so gain in reputation and potential funding accordingly—a virtuous circle. On the other hand this feature may be stymied if research funding explicitly precludes allowances for publication and dissemination. Now a further aspect has developed in regard to this debate. In the UK the research councils [RCUK] together with the main university funding body [HEFCE] have recently endorsed the position taken in the Finch Report [9] that “a clear policy direction should be set towards support for publication in open access or hybrid journals, funded by APCs, as the main vehicle for the publication of research, especially when it is publicly funded”.
In a recent, cogent article Meera Sabaratnam and Paul Kirby [10] have argued that, as it stands, the UK government policy is a threat to academic freedom. Although the push towards “pay-to-say”, as opposed to “pay-to-read”, might seem to be what the Finch Report sees as the “golden route” to open access, it poses four threats to:
“academic freedom through pressures on institutions to distribute scarce APC resources and to judge work by standards other than peer review”
“research funding by diverting existing funds into paying for publications (and private journal profits) rather than into research”
academic equality “both across and within institutions, by linking prestige in research and publishing to the capacity to pay APCs, rather than to academic qualities”
“academic control of research outputs by allowing for commercial uses without author consent”
Fear of these threats is well-founded, given that the UK policy is critically vague, leaving far too much reliant upon goodwill and insightful actions on the part of publishers, research funders, and universities. Sabaratnam and Kirby urge academics to respond by lobbying for what they term “green open access of all post-peer reviewed work within journals and institutions”, as well as demanding “clear policies from Universities around open access funds”, and ensuring that “institutional resources are not unnecessarily spent on APCs”. They also stress the need for academics to work to “[P]rotect the integrity of scholarly journals by rejecting the pressure for ‘pay-to-say’ publishing”.
With regard to the detailed characteristics of, and differences between, the green and gold routes, interested readers can refer to the article itself, and to various commentaries such as that to be found at the RCUK website [11]. From these sources it appears that the gold route will apply to all research that is wholly or partially funded by any of the UK Research Councils, although there is always the danger that any research published elsewhere, whether or not it has had such funding, may be deemed less worthy—a particular problem given the penchant in the UK and elsewhere for researchers and institutions to be “rated” on the quality of their research outputs. Certainly giving the name “gold” to this route almost guarantees that anything else will be seen as less valuable. (And it must be stressed that studies across several disciplines have found no obvious positive correlation between the ranking of a journal and the number of citations received for the papers it publishes, which rather undermines the whole principle of such rankings in the first place.)
Moreover this option will employ what is termed a CC-BY copyright licence, which is defined on the Creative Commons [CC] website as a licence that “lets others distribute, remix, tweak, and build upon your work, even commercially, as long as they credit you for the original creation. This is the most accommodating of licenses offered. Recommended for maximum dissemination and use of licensed materials.” [12] The CC website lists five other types of licence, each of which is more restrictive than, and more weighted in favour of authors than, CC-BY, which, as Sabaratnam and Kirby point out, effectively removes key rights and control from authors over the use of their outputs.
The term “open access” in the context of the Finch Report might be seen as somewhat misleading, particularly with regard to the link to the Creative Commons Licence. The Creative Commons concept developed from work by Larry Lessig and colleagues seeking to ensure that an effective and fruitful public domain would flourish in the internet age, offering a clear alternative to the “all rights reserved” model of copyright that has predominated for many years [13]. Lessig has consistently called for an alternative to the prevailing model in which “creators get to create only with the permission of the powerful, or of creators from the past”. In so doing he has provided a sharp focus on issues around intellectual property rights [IPR] and the ways in which the internet provides the basis for a creative commons. His work can be seen to emanate from the model of Open Source, although the two are not identical, and indeed several trenchant criticisms of the CC licences have been made from within the Open Source community.
Benkler, basing his ideas on the Open Source model, uses the term “Free access”, and contrasts this with the traditional one centred on “Ownership”. The former is characterized by Participation, Creativity, Flexibility, Community, and Growth; the latter by “Propertizing”, Copyright, IPR, Extracting “rents”, Profitability, and Commodification. The traditional model of reader-pays leans predominantly to the latter, albeit that “the reader” is usually an institutional body. On the other hand the “open access” model put forward by RCUK and others is also fundamentally centred on ownership, with an even more slanted reliance on institutional bodies paying the costs.
Benkler also distinguishes between what he terms three different social forms, the “reds”, the “blues”, and the “greens”, using the metaphor of story-telling. I quote the passage here, altering it for the context of the publication of research papers.
Imagine three [storytelling] societies: the Reds, the Blues, and the Greens. Each society follows a set of customs as to how they live and how they [tell stories] report new discoveries. Among the Reds and the Blues ... there is one designated [storyteller] type of outlet for publication/dissemination. ... Among the Reds, the [storyteller] outlet is a hereditary or traditional position which includes the key decision making process regarding what to publish/disseminate. Among the Blues, the [storyteller] outlet is decided through periodic elections by simple majority vote. Every member of the community is eligible to offer him- or herself as an outlet or publisher ... and every member is eligible to vote. Among the Greens ... everyone publishes reports on new discoveries. People [stop and listen] read or attend to them if they wish, sometimes in small groups of two or three, sometimes in very large groups. [Stories] Research reports in each of these societies play a very important role in understanding and evaluating the world.
Both the existing “reader-pays” model and the RCUK-endorsed “author-pays” model veer towards the Red format; in both cases there are only a limited number of outlets, and each also encompasses the decision-making processes that determine what gets published. In Lessig’s terms each model, in slightly differing ways, adheres to an environment in which “creators get to create only with the permission of the powerful, or of creators from the past”. Moreover the role of the commercial publishers is largely unchanged; the flow of funds is slightly different, but their source remains predominantly located in universities and research institutions, albeit that the funding bases of these latter organizations are changing.
The Green variant is akin to the Open Source model, with both being made feasible by the almost negligible transaction costs and networking possibilities afforded by the internet. An approximation to the Blue variant might be found in professional bodies in which there are periodic elections for posts that encompass editorial control of their formal proceedings or other publications.
All three models require resourcing, and should also incorporate processes to ensure quality and accuracy. Benkler’s story-telling metaphor addresses these issues only in a somewhat tangential manner, since story-telling is peripheral to day-to-day activities, and quality is indicated largely by the size of the audience that can be attracted. As things stand, the state of play with regard to academic publishing cannot be regarded as a case of “if it ain’t broke don’t fix it”; it may not be “broke”, but it is clearly in a state of considerable flux. The traditional model is already being dismantled, and so it is important that the various communities of practice with an interest in the issue make their voices heard and keep an open mind on the various options.