Distorted Signals and Bias in (Science) Communication
This post was created for my graduate course in Digital Media, #MC7019.
“Signals were distorted as they passed across the circuits; the greater the distance, the worse the distortion.” - James Gleick, ‘The Information’
While Bell Telephone Laboratories basked in the media and technological attention surrounding its tiny new semiconductor device, the transistor, in 1948, Bell Labs engineer and mathematician Claude Shannon was working in relative obscurity on a different kind of invention. His was “A Mathematical Theory of Communication,” a theory of information: its units, its modulation, its transmission and its interpretation. Thanks to Shannon, information made its first leap to becoming quantifiable – it could be processed, stored and retrieved.
“Information theory began as a bridge from mathematics to electrical engineering and from there to computing.” - James Gleick, ‘The Information’
Fast forward to modern computer systems, artificial intelligence, and platforms that not only store, transmit, and process information, but create new forms of information themselves, allowing humans to communicate, or exchange information, in ways previously unimaginable. Today, in a turn that was itself previously unthinkable, our systems of communication and information production can actually overwhelm us. As Gleick puts it, “[we] have met the Devil of Information Overload.” What happens when your eyes and ears are met with so much information that only selective attention is possible? What do you do with a “tsunami of data,” as Richard Saul Wurman calls it[1]? Perhaps we have forced ourselves to become greater information filterers than information absorbers.
So perhaps we can’t blame ourselves for producing what Batya Friedman and Helen Nissenbaum describe as biased computer systems: systems that build shortcuts into the processing of the information we desire. But how can a computer system be biased? Think about your favorite social media platform. How can Facebook or Twitter, essentially a web interface backed by code and databases where your tweets are stored, be biased in the way a racial profiling strategy is biased? At the most fundamental level, the computer system itself may consist of unintelligent, inanimate lines of code, of bits of information. However, the individuals and social institutions that create those lines of code can be biased, and can, purposefully or ignorantly, implement algorithms or design traits that are inherently biased.
In a 1996 article in ACM Transactions on Information Systems[2], Batya Friedman and Helen Nissenbaum suggest that bias within computer systems can arise primarily through three different mechanisms: preexisting bias with roots in social institutions, technical bias and emergent bias. While preexisting bias and technical bias show up within the function and output of the computer system itself, emergent bias arises in the context of the user. According to Friedman and Nissenbaum:
“This bias typically emerges some time after a design is completed, as a result of changing societal knowledge, population, or cultural values. User interfaces are likely to be particularly prone to emergent bias because interfaces by design seek to reflect the capacities, character, and habits of prospective users. Thus, a shift in context of use may well create difficulties for a new set of users.” (p. 335)
The authors use the term bias to refer to “computer systems that systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others” (p. 332). But what if a different form of bias, more closely related to the goals of news production and mass communication, could arise from computer systems in the realms discussed by Friedman and Nissenbaum? While the authors discuss computer system bias in terms of people, what if we considered bias in terms of ideas?
A quick Google search for “Twitter Tips” yields a wealth of information and advice on how to craft the most effective tweet. According to WikiHow’s “How to Write a Better Twitter Headline,” effectiveness on this platform is all about temptation and grabbing the eye:
“Think about how you scan Twitter. Nobody opens every single link or even reads every single tweet; do you? What is it about the tweets you do read and follow through on that make you notice them? Basically, when there is a cacophony of headlines competing for attention, you're going to be looking for the tweets that reward your reading effort and that's precisely what your followers do too. The "rewards" focus on such things as the usefulness of the tweet, the sense of urgency compelling you to read it, and the unique nature of the tweet content[3].”
A social media platform that limits users to 140 characters per update may, by virtue of its design, create an emergent bias toward links to news articles, or particular ideas, garnished with short, sweet, tempting headlines. For entertainment news, this might not seem like much of a problem. But for science news and complex scientific ideas that cannot easily be summed up in five words, a link, and a couple of catchy hashtags, this “idea bias” might be more of a problem.
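To make the design constraint concrete, here is a minimal sketch in Python. The headlines and the assumed link length are my own invented examples, not real tweets, but they show how little of a 140-character update survives once a shortened link is attached:

```python
# Toy illustration of Twitter's 140-character limit (the limit at the
# time of writing). Headlines and link length are invented examples.
CHAR_LIMIT = 140
LINK_CHARS = 23  # a t.co-shortened link occupies roughly this many characters

headlines = [
    "A #Cure for #Down Syndrome in mice?",
    "Experimental compound reverses some Down syndrome-like learning "
    "deficits in a mouse model, though human relevance remains unclear",
]

for headline in headlines:
    used = len(headline) + 1 + LINK_CHARS  # headline + space + link
    verdict = "fits" if used <= CHAR_LIMIT else "over by %d" % (used - CHAR_LIMIT)
    print("%3d chars with link (%s): %s" % (used, verdict, headline))
```

The sensational version fits with room to spare; the contextualized version does not, and its caveats are exactly what the character budget squeezes out first.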
It is nearly impossible to accurately and honestly sum up most basic biology research studies with attention-grabbing words and phrases such as “revolutionary,” “groundbreaking,” “new treatment for…” or “cure in a petri dish.” Yet these phrases abound in the Twitter headlines of science news. Of course, individual news articles and blog posts may also feature these words and phrases in their headlines to grab readers. But is it possible that the design and use of Twitter itself biases the success of sensationalized science news headlines? The more sensational a science news headline on Twitter is, the more it grabs Tweeters’ eyes; the more they retweet it, the greater the likelihood it will “trend.”
Of course, this isn’t necessarily true. An accurate, well-worded tweet from a credible science news outlet will likely gain more popularity among science-savvy Tweeters than a sensational, inaccurate one. Among lay Twitter audiences, however, tweets that promise a “new cure for [X]” might gain more attention than tweets that stick to the scientific facts.
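The feedback loop itself is easy to caricature in code. Below is a toy cascade model, with every number invented for illustration: each retweet exposes a fixed batch of new viewers, and each viewer retweets with some small probability (the tweet’s “appeal”). A modest edge in appeal tends to push a tweet toward trending, while a soberer one usually fizzles:

```python
import random

def simulate_cascade(appeal, seed_viewers=100, reach_per_retweet=50, steps=10):
    """Toy positive-feedback model of retweeting. 'appeal' is the assumed
    probability that any one viewer retweets; each retweet then exposes
    'reach_per_retweet' new viewers. All parameters are invented."""
    random.seed(42)  # make the toy run reproducible
    viewers, total_retweets = seed_viewers, 0
    for _ in range(steps):
        retweets = sum(random.random() < appeal for _ in range(viewers))
        total_retweets += retweets
        viewers = retweets * reach_per_retweet
        if viewers == 0:
            break
    return total_retweets

print("sensational headline:   ", simulate_cascade(appeal=0.03))
print("contextualized headline:", simulate_cascade(appeal=0.01))
```

In branching-process terms, the sensational tweet expects 0.03 × 50 = 1.5 new retweets per retweet, so its cascade tends to grow; the contextualized tweet expects 0.01 × 50 = 0.5, so its cascade tends to die out.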
From Langdon Winner’s 1986 The Whale and the Reactor: A Search for Limits in an Age of High Technology:
“It is obvious that technologies can be used in ways that enhance the power, authority, and privilege of some over others, for example, the use of television to sell a candidate. In our accustomed way of thinking technologies are seen as neutral tools that can be used well or poorly, for good, evil, or something in between. But we usually do not stop to inquire whether a given device might have been designed and built in such a way that it produces a set of consequences…” (p. 125)
I think it would be interesting to test the hypothesis that sensational science news tweets are viewed more often, or shared more frequently, than non-sensational, contextualized tweets. Does “A #Cure for #Down Syndrome in mice? [Link]” get retweeted more often than “Experimental Compound Reverses #Down Syndrome-Like Learning Deficits in mice [Link]”? Does this “retweet effect” depend on the credibility of the Twitter handle or the scientific literacy of the audience? By providing a single platform where both of these tweets can become instantaneous best-selling headlines, potentially even grouped under the same hashtag or trend, does Twitter bias the success of the more sensational headline, simply because it is short and tempting?
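As a rough sketch of how that comparison could be run, assume retweet counts had been collected for matched pairs of sensational and contextualized headlines; the counts below are invented placeholders. Because retweet counts are heavily skewed, a nonparametric test such as the Mann-Whitney U is a reasonable default:

```python
# Sketch of the proposed comparison, not a real study: retweet counts
# are invented, and a real design would also control for follower count,
# posting time, topic, and account credibility.
from scipy.stats import mannwhitneyu

sensational = [42, 131, 17, 88, 260, 35, 74, 190, 12, 57]  # hypothetical
contextualized = [8, 40, 5, 31, 92, 11, 22, 60, 3, 19]     # hypothetical

# One-sided test: do sensational headlines tend to earn more retweets?
stat, p_value = mannwhitneyu(sensational, contextualized, alternative="greater")
print("U = %.1f, p = %.4f" % (stat, p_value))
```

Splitting the same comparison by follower count, or by an audience’s science background, would get at the credibility and literacy questions raised above.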