Via Culture Digitally
By Tarleton Gillespie
-----
I’m really excited to share my new essay, “The Relevance of Algorithms,”
with those of you who are interested in such things. It’s been a treat
to get to think through the issues surrounding algorithms and their
place in public culture and knowledge, with some of the participants in
Culture Digitally (here’s the full litany: Braun, Gillespie, Striphas, Thomas, the third CD podcast, and Anderson’s post just last week), as well as with panelists and attendees at the recent 4S and AoIR conferences, with colleagues at Microsoft Research, and with all of you who are gravitating toward these issues in your scholarship right now.
The motivation of the essay was two-fold: first, in my research on
online platforms and their efforts to manage what they deem to be “bad
content,” I’m finding an emerging array of algorithmic techniques being
deployed: for either locating and removing sex, violence, and other
offenses, or (more troublingly) for quietly choreographing some users
away from questionable materials while keeping them available for others.
Second, I’ve been helping to shepherd along this anthology, and wanted
my contribution to be in the spirit of its aims: to take one step
back from my research to articulate an emerging issue of concern or
theoretical insight that (I hope) will be of value to my colleagues in
communication, sociology, science & technology studies, and
information science.
The anthology will ideally be out in Fall 2013, and we’re still finalizing the subtitle, so here’s the best citation I have.
Gillespie, Tarleton. “The Relevance of Algorithms.” Forthcoming in Media Technologies, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press.
Below is the introduction, to give you a taste.
Algorithms play an increasingly important role in selecting what
information is considered most relevant to us, a crucial feature of our
participation in public life. Search engines help us navigate massive
databases of information, or the entire web. Recommendation algorithms
map our preferences against others, suggesting new or forgotten bits of
culture for us to encounter. Algorithms manage our interactions on
social networking sites, highlighting the news of one friend while
excluding another’s. Algorithms designed to calculate what is “hot” or
“trending” or “most discussed” skim the cream from the seemingly
boundless chatter that’s on offer. Together, these algorithms not only
help us find information, they provide a means to know what there is to
know and how to know it, to participate in social and political
discourse, and to familiarize ourselves with the publics in which we
participate. They are now a key logic governing the flows of information
on which we depend, with the “power to enable and assign
meaningfulness, managing how information is perceived by users, the
‘distribution of the sensible’” (Langlois 2012).
Algorithms need not be software: in the broadest sense, they are
encoded procedures for transforming input data into a desired output,
based on specified calculations. The procedures name both a problem and
the steps by which it should be solved. Instructions for navigation may
be considered an algorithm, or the mathematical formulas required to
predict the movement of a celestial body across the sky. “Algorithms do
things, and their syntax embodies a command structure to enable this to
happen” (Goffey 2008, 17). We might think of computers, then,
fundamentally as algorithm machines — designed to store and read data,
apply mathematical procedures to it in a controlled fashion, and offer
new information as the output.
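To make the definition concrete, here is a minimal, hypothetical sketch of an “encoded procedure” in Python. The scoring criterion (crude term counting) is entirely my own illustrative choice, not anything from the essay; the point is only that input data is transformed into a desired output based on a specified calculation, and that the procedure names both the problem (“relevance”) and the steps for solving it.

```python
def relevance_score(query, document):
    """One toy criterion of relevance: count how many times
    the query's terms appear in the document."""
    terms = query.lower().split()
    words = document.lower().split()
    return sum(words.count(t) for t in terms)

def rank(query, documents):
    """Transform the input (a query and a corpus) into the desired
    output: the documents ordered from most to least 'relevant'."""
    return sorted(documents, key=lambda d: relevance_score(query, d),
                  reverse=True)
```

Note that “relevance” here is simply whatever the calculation encodes; swap in a different scoring function and a different ranking, and a different knowledge logic, results.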
But as we have embraced computational tools as our primary media of expression, and have made not just mathematics but all
information digital, we are subjecting human discourse and knowledge to
these procedural logics that undergird all computation. And there are
specific implications when we use algorithms to select what is most
relevant from a corpus of data composed of traces of our activities,
preferences, and expressions.
These algorithms, which I’ll call public relevance algorithms,
are — by the very same mathematical procedures — producing and
certifying knowledge. The algorithmic assessment of information, then,
represents a particular knowledge logic, one built on specific
presumptions about what knowledge is and how one should identify its
most relevant components. That we are now turning to algorithms to
identify what we need to know is as momentous as having relied on
credentialed experts, the scientific method, common sense, or the word
of God.
What we need is an interrogation of algorithms as a key feature of
our information ecosystem (Anderson 2011), and of the cultural forms
emerging in their shadows (Striphas 2010), with a close attention to
where and in what ways the introduction of algorithms into human
knowledge practices may have political ramifications. This essay is a
conceptual map to do just that. I will highlight six dimensions of
public relevance algorithms that have political valence:
1. Patterns of inclusion: the choices behind what makes it into an index in the first place, what is excluded, and how data is made algorithm-ready
2. Cycles of anticipation:
the implications of algorithm providers’ attempts to thoroughly know
and predict their users, and how the conclusions they draw can matter
3. The evaluation of relevance:
the criteria by which algorithms determine what is relevant, how those
criteria are obscured from us, and how they enact political choices
about appropriate and legitimate knowledge
4. The promise of algorithmic objectivity:
the way the technical character of the algorithm is positioned as an
assurance of impartiality, and how that claim is maintained in the face
of controversy
5. Entanglement with practice:
how users reshape their practices to suit the algorithms they depend
on, and how they can turn algorithms into terrains for political
contest, sometimes even to interrogate the politics of the algorithm
itself
6. The production of calculated publics:
how the algorithmic presentation of publics back to themselves shapes a
public’s sense of itself, and who is best positioned to benefit from
that knowledge.
Considering how fast these technologies and the uses to which they
are put are changing, this list must be taken as provisional, not
exhaustive. But as I see it, these are the most important lines of
inquiry into understanding algorithms as emerging tools of public
knowledge and discourse.
It would also be seductively easy to get this wrong. In attempting to
say something of substance about the way algorithms are shifting our
public discourse, we must firmly resist putting the technology in the
explanatory driver’s seat. While recent sociological study of the
Internet has labored to undo the simplistic technological determinism
that plagued earlier work, that determinism remains an alluring
analytical stance. A sociological analysis must not conceive of
algorithms as abstract, technical achievements, but must unpack the warm
human and institutional choices that lie behind these cold mechanisms. I
suspect that a more fruitful approach will turn as much to the
sociology of knowledge as to the sociology of technology — to see how
these tools are called into being by, enlisted as part of, and
negotiated around collective efforts to know and be known. This might
help reveal that the seemingly solid algorithm is in fact a fragile
accomplishment.
~ ~ ~
Here is the full article [PDF]. Please feel free to share it, or point people to this post.