From Social Networks to Machine Learning Harassment
New tools, systems, and infrastructures have profound consequences for how we think of ourselves, relate to one another, organize collective life, and envision desirable futures.
Social networks originated decades before social media became ubiquitous on the Internet. Georg Simmel, a German sociologist, is considered the founding father of social network research.
Simmel asked how people relate to one another and introduced the term ‘formal’ sociology. ‘Formal’ means that attention is placed on the nature and number of connections rather than their content. Beginning in the 1950s, this approach was taken up by many American sociologists, though in a slightly different context.
Researchers began analyzing the connections that form within communities: families, professional networks, and so on. The idea then evolved into studying the density of those ties. In the 1970s the network became a popular theme among multidisciplinary research groups, leading to new methodologies and to the idea of weak ties, which is often used to describe phenomena on the Internet.
In this article I explore the effects of social networks not only on social media, but also on some recent ethical debates around the technological objectivity of anti-rape technologies and the online harassment enabled by some machine learning algorithms. I take a socio-technical lens in exploring this topic.
From Social Network to Social Media to Internet Research
Given this historical background, the phrase ‘social network,’ now widely used on the Internet, was borrowed directly from the sociological tradition. The Internet is, in that sense, a self-fulfilling prophecy: technological utopians anticipated it and tested out the language that would later describe it. Current social media essentially visualize something that was described long before. Facebook, for example, had an app called Friend Wheel, which would draw an individual’s social network, including family members, peers, colleagues, and so on. Barry Wellman, in the book Networked, observed that sociologists once treated people simply as representatives of groups, that is, of more or less stable communities like social classes. Now the concept of a group or community is being redefined: according to Wellman, a person is part of many different networks rather than of one large group that does not decompose into components.
Further, a sociologist can apply different types of analysis to a person. One of them is mathematized network analysis, through which they can, for example, study homophily in social networks. This is the phenomenon of people with similar perceptions and views clustering in the same network. The “Google bubble” is a situation in which only what aligns with our views enters the information stream around us, while anything that does not correspond is filtered out; it operates in both the Facebook feed and Google search results. If a person is a conservative, they see what is happening with their conservative friends, and they don’t see what the libertarians think at that moment.
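As a small, hedged illustration of what such mathematized network analysis can look like, the sketch below measures homophily in a toy friendship network with the networkx library; the names, the ‘politics’ attribute, and the edge list are all invented for the example.

```python
# A minimal sketch of mathematized network analysis: measuring homophily
# in a small friendship network. The graph, the "politics" attribute, and
# the edges are toy assumptions for illustration only.
import networkx as nx

G = nx.Graph()
G.add_nodes_from(["ann", "bob", "cat"], politics="conservative")
G.add_nodes_from(["dan", "eve", "fay"], politics="libertarian")

# Most ties connect like-minded people; one tie crosses the divide.
G.add_edges_from([
    ("ann", "bob"), ("bob", "cat"), ("ann", "cat"),  # conservative cluster
    ("dan", "eve"), ("eve", "fay"), ("dan", "fay"),  # libertarian cluster
    ("cat", "dan"),                                  # a lone bridging tie
])

# Assortativity ranges from -1 to 1; values near 1 indicate strong homophily.
print(nx.attribute_assortativity_coefficient(G, "politics"))
```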
The second affordance of social media is less mathematical: actor-network theory, developed by the sociologist Bruno Latour. (“Bruno Latour, the Post-Truth Philosopher, Mounts a Defense of Science,” Kofman) Within this theory, all interaction is described as networked: not only you and I participate in an interview, but also the voice recorder (which makes me speak more carefully, closer to written language) and the notebook (which may hold prepared questions and remarks jotted down during the conversation). Even the table and the tea are part of our interaction. When researching the Internet, the interface takes the place of these things. The American sociologist Danah Boyd, for example, suggests that the Internet is a public private space. (“Danah Boyd,” Data & Society) That is, a person may not be fully aware of how public their actions can be: the phenomenon of virality shows that when a person shares a picture with a few friends, it can spread across the whole Internet and reach unintended audiences.
The utopian idea of the Internet originated in the 1980s and 1990s, when many European thought leaders were discussing globalization and the idea of a single humanity. The Internet embodied that idea, especially through the campaign aimed at bridging the digital divide. Yet although the Internet has no borders, people did not simply start talking to strangers on the other side of the world. Instead, the Internet enabled smaller communities built around shared interests, places, and the like.

Imagine yourself sitting at the “Computers, Freedom, and Privacy” conference in the early 1990s, where people were discussing social responsibility in the information era and the ideas of privacy and publicity, and realizing that these conversations began well before the Internet took physical shape in everyday life. Much of this work was prepared by futurology, and the initial interest of many was humanitarian. In parallel, the technological side was developing. The ability to digitize data was used both by specialists in the digital humanities and by those who worked with statistics. In this period, two types of research organizations emerged. On one hand, there were societies of mathematicians, technologists, sociologists, and, say, biologists who began to use new statistical methods to, for example, extrapolate biological models to public life. On the other hand, studies emerged on how the Internet affects democracy, the economy, and other issues that concern society. At the beginning of the 2000s, regular polls about the Internet began, and large research centers appeared, such as the Pew Research Center’s Internet & American Life Project in the United States and the Oxford Internet Institute.
Another important strand in the institutionalization of Internet research is law. At Harvard and Stanford, research centers arose around legal issues. The background is largely the struggle over copyright, that is, over changes to copyright law and the free access to and distribution of content. This comes from Lawrence Lessig, who formulated a rather simple explanation of why the old way of talking about copyright in the digital era is outdated and wrong. (“Free Culture: The Nature and Future of Creativity,” Lessig) He gives this comparison: if you have an apple and I have an apple, and you give me your apple, then you have zero and I have two. But if you have an e-book instead of an apple, then you have nothing to lose by sharing it. This idea fascinated enough people that it became clear the notion of information ownership would have to change, and to this day many people loudly advocate for open-source software and open information. By the end of the 2000s, many universities and corporations such as Microsoft and Google had established their own research centers.
MIT Media Lab’s funding scandal began with the news of Jeffrey Epstein’s suicide in prison in 2019, shortly after he was charged with sex trafficking. It shook the whole scientific community, because the case involves urgent ethical issues around modern science and its members. Like a domino effect, the incident set off a series of other events that point to a much larger cultural issue, deeply rooted in history, of gender inequality and sexual assault in the scientific community. The sociotechnical imaginary poses several questions: who dominates the scientific community? Who has more power to shape scientific research: scientists, the government, philanthropists who donate billions to research communities, or the institutions themselves? Zooming out to the bigger picture, are technologists becoming the thought leaders of modern science, or are they building a bigger, better, supposedly perfect world designed through the applications of research at scientific institutions? It is important to acknowledge that although science presents itself as a purely truth-oriented practice, it is also configured around a commitment to social good at the core of its purpose. (“‘Significant Mistakes’: MIT Releases Details of Epstein Funding Scandal,” Chen)
However, the sociotechnical imaginary assumes not only the structural organization of the scientific process but also matches it with some kind of desirable future it is moving toward. Is social good the same as a desirable future? Did Epstein invest millions believing he would bring actual social impact to the future of society, or believing he would become part of a techno-elite, a technocracy shaping the future of the community? Who has more control over science and its technological advancement? A reasonable answer would be a collective effort, but how equal is that effort? Each stakeholder’s positionality influences scientific thinking and the assumptions made during hypothesis testing, raising the risk of unintended consequences. In this way, inequality can shape the future of the datafied world.
From Internet Research to Technological Objectivity
Who would have thought that the rise of the Internet and the advancement of new technologies would become so ubiquitous that they would be a key resource for people in power to leverage? The notion of technological objectivity is invisibly embedded in many social contexts and often results in unintended consequences.
The article “Victim Blaming Meets Technological Objectivity: Anti-Rape Technology and Its Design” examines how the cultural prejudice that casts sexual assault victims as liars and the belief in technological objectivity converge to shape the design of anti-rape technologies. Many new mobile applications have emerged with features such as push notifications, auto-reply, geo-location, and automatic alert messages. Some people celebrate the innovation in this space; many others, however, argue that these technologically enabled solutions perpetuate the underlying issue. “At the heart of these debates are notions of women’s agency, bodily regulation, and the liberatory potential (or not) of emerging technologies.”
Meanwhile, with the rapid growth of human data and more advanced statistical analysis, the idea of machine learning arose: algorithms that build models from historical data in order to predict future events. This powerful idea draws far more attention today to what we do on the Internet and how we interact with our own digital footprints.
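To make the idea concrete, here is a minimal, hedged sketch of that train-on-history, predict-the-future loop using scikit-learn; the features and labels are random toy data, not anything drawn from a real service.

```python
# A minimal sketch of the core machine learning idea described above:
# fit a model to historical data, then predict an unseen future case.
# The data here is randomly generated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_history = rng.normal(size=(200, 3))                 # past observations
y_history = (X_history.sum(axis=1) > 0).astype(int)   # past outcomes

model = LogisticRegression().fit(X_history, y_history)

x_new = rng.normal(size=(1, 3))                       # a new, unseen case
print(model.predict_proba(x_new))                     # predicted probabilities
```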
From Technological Objectivity to Machine Learning Harassment
Public data that is easily accessible and open to everyone has been leveraged for political ends too. For example, Google Maps data can be combined with consumer or even exposed data to infer household or neighborhood type, which can then be used to build detailed profiles of specific political audiences. In addition, a group of AI researchers from Stanford showed that voting patterns can be predicted from the cars visible in Google Street View imagery with the help of deep learning: “If the number of sedans in a city is higher than the number of pickup trucks, that city is likely to vote for a Democrat in the next presidential election (88% chance); if not, then the city is likely to vote for a Republican (82% chance).” (“Using Deep Learning and Google Street View to Estimate the Demographic Makeup of Neighborhoods across the United States,” Stanford Research)
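As a toy illustration only, and emphatically not the researchers’ actual deep learning pipeline, the published decision rule can be restated in a few lines of Python; the function name and the car counts below are invented for the example.

```python
# A toy restatement of the study's published decision rule. The real work
# used deep learning to count cars in Street View images; this sketch just
# encodes the headline rule and its stated probabilities.
def predict_city_vote(sedan_count: int, pickup_count: int) -> str:
    if sedan_count > pickup_count:
        return "likely Democratic (88% chance, per the study)"
    return "likely Republican (82% chance, per the study)"

# Hypothetical counts for a hypothetical city.
print(predict_city_vote(sedan_count=5400, pickup_count=2100))
```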
When machine learning algorithms operate in human-centered domains such as facial recognition, traffic regulation, or language, their outputs reflect the very dynamic behavior of the underlying data. Yet many technologists disregard the fact that at the source of that data are humans. From the initial stage of data collection to the empirical analysis, people introduce their own biases into those algorithms. Should a person’s social media activity be used to predict their likelihood of committing a crime?
In the article “Machine Learning Harassment,” the author describes the case of London’s Metropolitan Police, which in 2014 instituted a system designed by Accenture to analyze gang activity using five years of data from the police department together with social media indicators. The software draws from a range of sources, including previous offences and individuals’ online interactions. “For example if an individual had posted inflammatory material on the internet and it was known to the Met — one gang might say something [negative] about another gang member’s partner or something like that — it would be recorded in the Met’s intelligence system,” Muz Janoowalla, head of public safety analytics at Accenture, told the BBC.
ProPublica critically analyzed a risk assessment software powered by AI known as COMPAS, which has been used to forecast which criminals are most likely to reoffend. Guided by these risk assessments, judges in courtrooms throughout the United States make decisions about the futures of defendants and convicts, determining everything from bail amounts to sentences. The software estimates how likely a defendant is to reoffend based on his or her responses to 137 survey questions. ProPublica compared COMPAS’s risk assessments for 7,000 people arrested in a Florida county with how often they actually reoffended. The algorithm did show some ability to predict whether a convicted criminal would reoffend. However, when it was wrong, its errors fell differently on black and white offenders: black offenders were almost twice as likely as white offenders to be labeled higher risk and yet not actually reoffend, while white offenders were more likely than black offenders to be labeled lower risk despite going on to reoffend. (“Machine Bias. There’s software used across the country to predict future criminals. And it’s biased against blacks.”)
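The core of ProPublica’s finding is a comparison of error rates across groups. Below is a minimal sketch of that kind of disparity check in pandas; the column names and the tiny inline dataset are hypothetical stand-ins, not ProPublica’s actual data.

```python
# A minimal sketch of a group-wise error-rate comparison of the kind
# ProPublica ran: among people who did NOT reoffend, how often was each
# group labeled high risk? The dataset here is a made-up placeholder.
import pandas as pd

df = pd.DataFrame({
    "race":       ["black", "black", "black", "white", "white", "white"],
    "high_risk":  [1, 1, 0, 0, 0, 1],   # the algorithm's label
    "reoffended": [0, 1, 0, 0, 1, 1],   # the observed outcome
})

# False positive rate: labeled high risk among those who did not reoffend.
did_not_reoffend = df[df["reoffended"] == 0]
fpr_by_race = did_not_reoffend.groupby("race")["high_risk"].mean()
print(fpr_by_race)
```

A gap between the groups’ rates is exactly the kind of asymmetry the investigation reported, even when overall accuracy looks similar.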
This historical overview shows how much impact technology has had across social networks, institutional knowledge and research, and machine learning algorithms. Data-driven services and artificial intelligence-powered devices now shape innumerable aspects of our lives. Beneath the surface of these technologies, computational and increasingly autonomous techniques that operate on large, ever-evolving datasets are revolutionizing how people act in and know the world. These new tools, systems, and infrastructures have profound consequences for how we think of ourselves, relate to one another, organize collective life, and envision desirable futures.
Thanks for reading. I am currently a junior at UC Berkeley, an intern at The New York Times, and an ethics researcher for the Division of Computing, Data Science, and Society. Learn more about me here.
Works Cited:
Rainie, Harrison, and Barry Wellman. Networked: The New Social Operating System. MIT Press, 2014.
Sanchez, Julian. “Internet for Everyone Campaign Aims to Bridge Digital Divide.” Ars Technica, 24 June 2008, arstechnica.com/uncategorized/2008/06/internet-for-everyone-campaign-aims-to-bridge-digital-divide/.
Warren, Jim. “The First Conference on Computers, Freedom, and Privacy.” CPSR, 22 Dec. 2004, cpsr.org/prevsite/conferences/cfp91/home.html/.
Lessig, Lawrence. Free Culture: The Nature and Future of Creativity. Penguin, 2005.
Kofman, Ava. “Bruno Latour, the Post-Truth Philosopher, Mounts a Defense of Science.” The New York Times, The New York Times, 25 Oct. 2018, www.nytimes.com/2018/10/25/magazine/bruno-latour-post-truth-philosopher-science.html.
“Danah Boyd.” Data & Society, datasociety.net/people/boyd-danah/.
Chen, Angela. “‘Significant Mistakes’: MIT Releases Details of Epstein Funding Scandal.” MIT Technology Review, 10 Jan. 2020, www.technologyreview.com/2020/01/10/1/mit-jeffrey-epstein-donations-media-lab-seth-lloyd-funding-ethics/.
“Machine Bias: There’s Software Used across the Country to Predict Future Criminals. And It’s Biased against Blacks.” Benton Foundation, 23 May 2016, www.benton.org/headlines/machine-bias-theres-software-used-across-country-predict-future-criminals-and-its-biased.
Tactical Technology Collective. “Machine Learning Harassment.” xyz.informationactivism.org/en/machine-learning-harassment.
Tactical Technology Collective. “Victim Blaming Meets Technological Objectivity: Anti-Rape Technology and Its Design.” xyz.informationactivism.org/en/victim-blaming-meets-technology.