By Kent Anderson
Recently, in response to a number of scandals involving electronic communications their subjects thought had been shared in confidence, NYU professor Scott Galloway described the basic problem as “the digitization of mistakes.” In doing so, Galloway was pointing out that teens, young adults, and adults now have to deal with the fallout from moments of bad judgment or emotional vulnerability.
In a prior era, evidence of embarrassing moments was hard to gather, and even harder to broadcast. Cameras weren’t everywhere. When there were pictures, they were shared clandestinely, perhaps among a few people — a wayward Polaroid or a snapshot that captured something odd in the background. Taking pictures surreptitiously was difficult. Selfies (including those of your private parts) were difficult to take, and even more difficult to see through to completion. After all, unless you had a darkroom, someone else was going to see them as they were developed.
Reproducing an embarrassing photo you glimpsed was also difficult — you needed the negatives, or to take and develop a picture of the picture. Even if you had the negatives, your reach was still limited by a variety of factors — the time and cost of development, the mail, and so forth.
Now, everyone has the negatives, and everybody has access to a worldwide broadcast platform — one that never forgets.
By contrast, what happened in the government of the Commonwealth of Virginia, where pictures surfaced of the governor dressed in racist clothing, shows that old, traditional photography and printing can still derail careers when fed into modern information machinery.
The prevalence and permanence of words and pictures captured by smartphones, and broadcast either privately (so the participants think) or semi-publicly, have generated new fears and worries that are now being baked in at the generational level. As a result, the “digitization of mistakes” — the capture and broadcast of fairly normal or lightly weird but still embarrassing behavior — has led to skyrocketing rates of anxiety, depression, and suicidal behavior among teens, especially girls and young women.
Part of the untold story Galloway touches upon here is that two-bit blackmail is surely going on all the time in schools and universities, as well as other places where people gather or compete. It occurs with far lower monetary or reputational stakes than the scandals that make the news, but with perhaps a greater sense of vulnerability felt by those victimized. Adults with legal resources and financial means can weather a scandal touched off by an inappropriate text or picture. But for a developing personality in a school where he or she is already feeling insecure, with doubts about parental support, school support, friend loyalty, personal identity, and more — to have a bully threaten to release a topless or bottomless photo, evidence of drinking, a picture of illicit drug use, or just something embarrassing (a bloody nose, a bad hair day, an ugly smile) can be devastating.
This kind of bullying affects even successful, strong adults. Sara Wachter-Boettcher is an accomplished expert in digital privacy, and the author of a fantastic book I reviewed last year, “Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech.” Wachter-Boettcher was invited to give a talk at a Google office about online privacy. Afterwards, the talk was posted on YouTube. The ensuing harassment she received from sexist trolls was predictable, rapid, and demoralizing. This, along with her experience of the entire span of online garbage, has changed her view of technology: it used to be fun and enjoyable, but now it is miserable and exploitative. As she writes in the recent McSweeney’s issue entitled “The End of Trust”:
It’s not that technology broke my trust — at least not at first. But it broke my context: I don’t know where I am. . . . This used to feel freeing: I didn’t have to choose. I could simply exist, floating in a mix-and-match universe of my own design. But left unchecked for so long — by shortsighted tech companies, and by my own petty desires — the lack of context bred something sinister: a place where everyone’s motives are suspect. I don’t know who’s watching me, or when they’re coming for me. But I do know they’re there . . .
The digital world has changed. It is no longer simply fun. The pond has flipped, and what was once a novelty inside normalcy has become a new information establishment where civility, privacy, and personal autonomy are the novelties. The new establishment was built by people and companies out to exploit others, and there are few signs it will change soon.
We need to set publication practices and policies accordingly — which is why it bothers me that we’re still pursuing strategies hatched 20 years ago around access to information. This is why the idea of unfettered access to unvalidated or untested scientific papers via preprint servers seems unwise to me. This is why stripping funding from editorial processes strikes me as an unforced error. This is why underfunding Western academic libraries during an unprecedented rise in papers from China makes no sense. Things have changed, and people who are trained to handle and expected to handle information carefully are more important than ever. “Shortsighted tech companies” aren’t reliable, nor are their business models or attitudes. Motives matter. Editors, professionals, and ethics matter. Information and community integrity matter.
Major public health exploitation has emerged from public misunderstandings of a few things published in journals. One is a letter from years ago speculating that ingestion of HGH in steaks might have anti-aging effects. It was pure speculation, and was relegated to the dustbin of history — until a search engine surfaced it for a snake oil salesman. Now HGH supplements are all around us. The same goes for the anti-vaccination movement. Without search engines and social media distorting a lousy paragraph in a paper, we might not now be facing resurgences of measles and chicken pox.
A more salient example is the belief that the opioid crisis was fueled by a 1980 letter in the New England Journal of Medicine (NEJM), which asserted that addiction rates for opioid-based pain medications were negligible. The letter’s author didn’t think much of it, saying in an interview:
That particular letter, for me, is very near the bottom of a long list of studies that I’ve done.
Via intentional citation pressure from commercial entities, this speculative letter became a bedrock source for a crisis now causing thousands of premature deaths. These are all lessons in how scientific speculation can be intentionally misused to create public health dangers.
A common theme to these incidents is that each had an unpredictable incubation period, came out of nowhere, and has had major long-term consequences. These were not issues anyone could have identified as explosive or dangerous a priori. And once they were exploited, the consequences reverberated for decades in the echo chambers of the current information environment.
What makes sense now that we’ve seen what we’ve seen? Is discoverability an unalloyed good? Is transparency always in everyone’s best interests? Is access something someone could exploit to line their pockets and victimize others?
We need to adapt, and one place to start is to move beyond rehashing ideas from 1998. If we don’t hit “pause” and have a serious rethink, we too are culpable in the digitization of mistakes. And the digitization of our mistakes — the implications of having our ideas removed from their context — may be uniquely devastating.
A version of this essay was originally published on February 11, 2019, on “The Geyser” at https://thegeyser.substack.com
Author, The Geyser