Menu

Sanspoint.

Essays on Technology and Culture

Can Empathy Scale to the Internet?

Cynicism is easy, especially when everything you see serves as justification for it. It also blinds you to anything that might contradict that same cynicism. Reading about the growing Internet counterculture of racism and misogyny that festers in the Web’s darkest corners, and seeing it not only refuse to be disinfected by sunlight, but be actively encouraged—cynicism is an easy refuge.

The problem with cynicism is that it solves nothing. A cynic may be a failed idealist beneath the prickly exterior, but it’s not for want of trying. Because we cynics have tried—or tried to try—and seen our efforts go nowhere, we take our ball and go home to snark from the sidelines. Cynicism works for us because it feels good. If we can’t save the world, let’s at least enjoy watching it fall apart. Cynicism also eats away at our empathy, and empathy is what we need most now. This came into focus for me while reading a wonderful piece by Parker Molloy. Her conclusion stuck with me:

“[S]o long as we look at the world through the lens of objective good versus objective evil, we’ll never truly be able to understand why anyone does anything… The world would be a better place if we could all learn to empathize a bit more with one another… to not view people as pure evil or pure good, and to understand that we’re all in this world together so we might as well make the best of it we can, as one big happy human family thing.”

To get past the false dichotomy of “objective good versus objective evil,” we need to develop the skill of empathy—and it is a skill. Some of us have a more developed, innate sense of empathy than others, much in the same way that some of us have a more developed sense of musicality than others. That doesn’t mean whatever degree of skill we have in empathy can’t be improved through effort and practice.

The Internet and social media provide countless opportunities to practice empathy. What comes through our timelines and streams, whether we agree with it or not, comes from a real place and real emotion. This is true, whether we’re seeing anger, joy, sadness, bemusement, threats and abuse, or support and love. Even the darkest, cruelest, and most cynical attempts at humor come from a place of genuine emotion. Understanding this is the first step—and that step is the hardest of all.

Online communication remains text-based for the most part, meaning we lose much of the metadata of conversation: facial expressions, tone, and—too often—context. A tweet in isolation offers at most 140 characters of information. Its place in a larger conversation is lost, making it easier to decontextualize and for someone to apply their own meaning and agenda to it. There are imperfect methods—hacks, really—to bring that missing data back into our online conversations, but an emoji or a GIF can only go so far. For minds that expect more information in a conversation than the basic content of a message, communication, and thus empathy, becomes all the more difficult.

This may explain why so many view what happens online as being less than “real.” How can it be, when all the hallmarks of human interaction are lost to the medium? That unrealness also gives us license to be someone other than ourselves—in whatever capacity we could be said to have one self—when interacting online. The normal laws of behavior and propriety are suspended, and we are free to express ideas and behaviors we never would in the “real” world. This is known as the Online Disinhibition Effect. It is very real, and the data speaks for itself. For example:

“A Johns Hopkins University study in 2007 found that 64 per cent of bullied children were exclusively attacked online. That is, many children who were habitual bullies on social media would completely refrain from this behaviour when meeting their victims in person.”

The above data point comes from a remarkable essay called “Possessed by a Mask” by Sandra Newman for Aeon Magazine. Newman draws parallels between the historical role of masks in human society, and the way we behave behind the mask of the Internet.

Even if we’re using our real names, we’re so disconnected as to be masked by default. When we’re masked in a room of other masked people, the rules often stop applying to us. It takes a conscious act of will to see past the masks. As Newman says in her conclusion: “Above all, we should remember that, behind the masked figures that surround us, there are people as vulnerable, fallible, as real as ourselves.”

While empathy is hard even before you add the Internet, that’s an explanation, not an excuse. Ideally, empathy would be baked into every social and communication product from the beginning, but empathy, as a concept and a skill, is not something technology companies value in their products.

There are many reasons for this, beyond the nature of the medium. One is the massive amount of privilege afforded to those who build our communication tools. If you’ve never experienced abuse, harassment, or even the inevitable painful memories that come with time, you won’t think about it. It becomes a blind spot in product development, further deprioritized in favor of juicing the numbers, monetizing the service, and generally serving “investor storytime” to keep the money rolling in.

When the companies that define our online communication start to take the abuse of their platforms seriously, we’ll finally hit the turning point on the technical problems of harassment and abuse. Until now, it’s largely been lip service. We’ve seen it on Twitter through GamerGate, with Reddit, and even at South by Southwest. But we can no more place the blame for tech companies’ failures of empathy entirely at the feet of venture capitalists than we can place it entirely at the feet of the companies they fund, the engineering teams building the products, or the users.

We’re all to blame at some level, and we’re all responsible for finding a solution. It is possible to build systems that, if they don’t strengthen empathy outright, at least make it prohibitively difficult to be cruel and abusive before the fact. Makerbase is a great example, and Anil Dash’s “8 Steps” are a good start for anyone entering the social space. It starts with a willingness to think about these problems before they occur. And, as long as we’re sticking to the VC model, it takes a willingness from VCs to reward companies that think about these problems.

As for us, the end users who have to live with these flawed channels of communication? Thank goodness for the people at Crash Override Network and the Online Abuse Prevention Initiative, who are building strategies and tools to protect us from bad actors of all stripes. More can be done, but we’ve started having the conversation, and it’s slowly starting to pay off. Any solution, though, must have empathy at its core, for all users.

Here is where I need to make it clear that I am not equivocating. The behavior of certain groups, such as the deliberately offensive “Chanterculture,” is often indefensible. There is no excuse, no defense for the harassment, abuse, threats, and violence their victims have experienced. We need to develop empathy for the abusers who lack empathy, as well as for their victims. As I said before, even the abuse comes from a place of genuine emotion. There’s more to the callous cruelty than its visible manifestations. Understanding it will go a long way toward helping the perpetrators of online abuse mend their ways, and toward finding peace for all involved.

Empathy has to work both ways for it to be effective. The challenge in scaling empathy is the struggle of developing empathy for those we would prefer to have nothing to do with. Empathy doesn’t mean agreeing with their viewpoint, merely trying to understand where they come from. It doesn’t mean freeing abusers from the consequences of their actions, but those consequences must draw from empathy. No obstacle—a mask, a lack of information, a lack of the metadata of communication—should stand in the way. Surmounting these obstacles takes substantial effort, but it’s within our reach. It just requires us to care—and to drop our cynicism.

Internet and social media companies need to develop empathy for their users. Users need to develop empathy for other users. Top-down technological solutions can get us part of the way there, but we can try in our own digital lives to be more empathetic on a daily basis. I find myself going back to Jess Zimmerman’s excellent piece: “Can the internet actually be an empathy boot camp?” Many of the points she makes I have echoed above. I’ll echo another: we’re all going to screw this up, and often.

“It’s harder now to be convincing, and easier to put your foot in your mouth; you’re virtually guaranteed to accidentally hurt someone and have to apologize… But this lack of control over your audience forces you to consider more people’s needs more deeply, to become and remain more aware of the variety of human traumas, motives, histories and concerns.”

Each of our mistakes, each failure of empathy, is a chance to learn and strengthen our skills. It feels, however, like we often don’t bother to notice those failures, for all the reasons outlined above. If our empathy is going to reach Internet scale, we have to start building it here and now. Let’s, all of us, work towards a more empathetic Internet, beginning with ourselves. Stop, slow down, and think before you post another snarky comment. Try to understand the motivations of others, and try to get less outraged over outrage. Practice, practice, practice empathy, whether you’re an end user, an engineer, or a product manager. This applies to the Internet and the “real” world alike, but the former is where the bigger challenge lies.

Twitter is Not a Comment Section—It’s Worse

I’m no fan of comment sections. They’re all too often the worst of the worst of the worst of the Internet of Garbage. This is mostly because nobody wants to pay anyone to keep them from becoming a dumpster fire. Comments are easy social glue to keep people on your site, and for cash-strapped digital publishers, the ad views and metrics from a burning dumpster at the bottom of each page are still preferable to paying someone to spray a hose on it.

But there has been a trend of high-profile websites turning off comments. Popular Science led the charge in 2013, tech site Re/code switched theirs off a year ago, and back in July The Verge turned comments off as a temporary measure. And there have been more than a few others as well. The most recent high-profile site to ditch comments is Vice Media’s Motherboard, a move that led to a backlash from a particularly gross, and very popular, webcomic focused on video gaming. I won’t be linking to it.

You’d think I’d be all for this development. One less section of toxic ooze at the end of otherwise great writing, one less place for people to be horrible to other people without consequence. I should be running naked through the streets in celebration. (You can thank me for that image later.) But, I’m not. I’m sitting at my desk, grumpily writing about why it’s bad.

See, there’s a huge problem with just ditching comments on a high-profile, highly trafficked website, and that problem can be summarized in one simple word: Twitter. Twitter is not a comment section. Twitter is worse than a comment section. Comment sections on websites generally have the advantage of being self-contained. Twitter is a public network, with a public harassment problem that its new leadership has decided to ignore in favor of shuffling chairs around. So now, if you want to make a statement about an article, you’re putting yourself at risk of becoming another victim of rampaging hordes with pitchforks, virtual and real. How is this an improvement?

Oh, right. It’s an improvement because it gives the publication an avenue for “feedback” while freeing it of two burdens: first, displaying advertising next to the poorly spelled, vitriolic vomit of your typical comment-section dweller; second, churning through underpaid community moderators—or outsourcing the task to overseas moderation companies. The publisher comes out smelling of roses, with somewhat diminished overhead costs, while the sort of people who are going to be obnoxious assholes whenever they have a megaphone get free rein to be obnoxious assholes with the biggest megaphone in the world.

There are people who are trying to fix comments. You have ideas like the brilliant Jess Zimmerman’s proposal to “make comments cost money”. She’s “not proposing just charging to comment…” but also that “we should pay people when their comments reach a certain threshold of value.” It makes sense. If comments are as valuable to the online reading experience as some people would have you believe, why not provide some financial incentive for people to write good ones?

A more practical (read: “cheaper”) solution is something along the lines of Digg’s new Digg Dialogue, or the crowd-sourced model of Civil Comments, which is in beta. Their idea is:

Instead of blindly publishing whatever people submit, we first ask them to rate the quality and civility on 3 randomly-selected comments, as well as their own. It’s a bit more work for the commenter, but the end result is a community built on trust and respect, not harassment and abuse.
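The mechanics of that review-before-publish idea are simple enough to sketch in code. The following Python is purely illustrative; the class names, the 1–5 rating scale, and the publication threshold are my own assumptions, not Civil Comments’ actual implementation:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    ratings: list = field(default_factory=list)  # civility scores, 1-5

class CivilQueue:
    """A peer-reviewed moderation queue: a comment is only published
    once enough random raters have judged it sufficiently civil."""

    def __init__(self, ratings_needed=3, min_avg=3.0):
        self.pending = []          # comments awaiting review
        self.published = []        # comments that passed review
        self.ratings_needed = ratings_needed
        self.min_avg = min_avg

    def submit(self, author, text):
        # In the real scheme, the commenter would first rate three
        # random pending comments plus their own before submitting.
        self.pending.append(Comment(author, text))

    def tasks_for(self, rater, k=3):
        # Pick up to k random pending comments by OTHER authors to rate.
        others = [c for c in self.pending if c.author != rater]
        return random.sample(others, min(k, len(others)))

    def rate(self, comment, score):
        # Record one civility score; publish once the comment has
        # enough ratings and its average clears the threshold.
        comment.ratings.append(score)
        if (len(comment.ratings) >= self.ratings_needed
                and sum(comment.ratings) / len(comment.ratings) >= self.min_avg):
            self.pending.remove(comment)
            self.published.append(comment)
```

The interesting design property is that raters never review their own side of an argument: `tasks_for` hands each commenter a random sample of strangers’ comments, which is what makes brigading a particular thread harder.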

Elsewhere, MakerBase is being designed from the ground up to mitigate the possibility of abuse. It’s not a comment section, but if the same principles can be applied to one, along with some of the other ideas being floated around, a day might come where comment sections are actually worth reading. But first, publishers will have to care, and make investments in these ideas. As long as money is tight, and Twitter is ubiquitous, it’s not likely to happen. So, we all suffer for it.

Twitter Rearranges its Deck Chairs

This isn’t about stars versus hearts, or favorites versus likes. Changing a single icon without changing the basic functionality of a feature is basically shuffling deck chairs around on the SS Atlantus. The ship has already run aground. Twitter’s job is to free their ship and get it out to sea, before the action of the waves slowly turns it into a rotting hunk off the coast, fit for little more than a tourist attraction. That’s the problem Twitter under Jack Dorsey is facing down, and it is, so far, failing spectacularly.

Less than a day after the heart issue, Twitter’s only African-American engineer, who was downsized after Dorsey’s return, blasted the company’s abysmal diversity. Elsewhere, Brianna Wu revealed that since Dorsey’s return, Twitter’s response to harassment has fallen through the floor:

Not that Twitter’s ever been great about dealing with harassment, but it’s shocking and saddening to see what little progress they’ve made disappear under the new, old leadership of Jack Dorsey. Perhaps their community management team was also a victim of the layoffs.

It’s a long-running joke on Tech Twitter, revived with every board announcement or major hire, that nobody at Twitter actually uses Twitter. Every action since Jack’s return to the company has driven that narrative home. There’s a disconnect between what Twitter actually needs to do to improve the experience for new users and what it thinks it needs to do, probably due to the influence of Wall Street analysts who demand to see big numbers.

For too many people, Twitter is synonymous with online harassment. Who would wade in and start tweeting away when the wrong post could put them on the wrong end of the pitchforks? Twitter Moments, changing favorites to likes, giving stock to employees, incomprehensible TV ads: these are all solutions to the wrong problems. Jack, stop moving the deck chairs; start patching the leaks, and push the ship out to sea.

Why Online Abuse Matters

I had a brief conversation with a friend on a private Slack channel about South by Southwest and their decision to cancel a panel about harassment. Well, for values of conversation equal to two lines each before we all had to go back to work. He pointed out that if SXSW was going to host an anti-harassment panel, they’d be a martyr. If they dropped it, they’d be screwed from the backlash—and he’s not wrong, either—certainly not on that second point. As for being a martyr? Well, having the harassment conversation sure didn’t hurt XOXO Fest last year with Anita Sarkeesian’s talk, or this year with Zoe Quinn’s talk. (No video for that one yet.)

If there’s a hill any conference, or indeed any decent human being should be willing to die on, it’s the hill of saying “stop being terrible to other people.” It’s certainly a hill I’d choose to die on, having been on the receiving end of more than anyone’s fair share of real life harassment and abuse, and on the giving end of more than anyone’s fair share of digital harassment and abuse. It’s not something I want to suffer through again, and it’s not something I would wish to happen to anyone, either.

For reference, by the way, anyone’s fair share of harassment and abuse is zero. Harassment and abuse shouldn’t happen, but they seem to be an inevitability of the human condition. The entire point of civilization is to save ourselves from our animal instincts to harm each other, be it through word or deed. We’re all stuck on a rock, hurtling through the endless void at hundreds of thousands of miles per hour, and we’re all stuck here together, in an ideal case, for seventy or so years a pop. Maybe we can try to make those years as pain-free as possible for each other, yeah?

But let’s bring this discussion back down from the cosmic level. In the last year or so, online harassment and abuse have become a major issue, in no small part because of organized campaigns like GamerGate. Despite this, the attitude remains, among people who should know better, that online harassment and abuse are somehow less than real, and unworthy of their time. The idea that a harassed person can just turn off their phone, unplug their computer, and go about their life, blissfully free from the virtual slings and digital arrows of outrageous fortune, is a myth. It has been for over a decade. The Internet is real life, not some sort of magical cyberspace that we can slip into and out of at will. It’s a part of nearly everything we do, and the people who are decidedly not connected to it are a shrinking minority.

Zooming back out a bit, there are over seven billion people on this rock, and nearly half of them are online. We’ve managed to cram an estimated 3.2 billion people into a room, handed them all megaphones, and told them to go nuts. Admittedly, most of those 3.2 billion are hanging around with the other people in the room who speak their language, but that’s still a lot of people crammed together with megaphones. More than we’ve ever had to deal with before in the history of the species. This is going to have consequences.

A few days ago, I made a flippant tweet about how “[t]he average human capacity for empathy does not reach Internet Scale.” Judging from the number of retweets and favorites it got, including one from Arthur Chu (!), it struck a nerve. All the people bemoaning the passing of a more civilized Internet age are either privileged nerds, deluded about the past, or—likely—both. I mean, we’ve been having this same basic discussion about online harassment for over twenty years! (Trigger warning: descriptions of sexual assault.) We’re still no closer to solving the problem, and it’s not for lack of trying.

Actually, I’ll take that back. In many cases, of which the recent South by Southwest debacle is only the latest, harassment and abuse have been an afterthought at best. Whether in—for lack of a better term—real-world spaces or digital ones, most people are content to just cover their eyes and ears to the potential for humans to be thoroughly terrible to each other for reasons. To quote some musical philosophers, “If you can not see it, you think it’s not there. It doesn’t work that way.”

The creators and caretakers of our public and private meeting places, online and off, disregard the potential for abuse, either because they are unlikely to be victims themselves, or because they focus only on protecting themselves and the people who pay the bills. What they don’t know can’t hurt them, at least not until their space becomes a toxic hellstew of abuse towards people who aren’t like them. The numbers bear this out: it’s been shown time and time again that women receive a disproportionate amount of online harassment, and that women of color get an even worse deal than white women. These are facts. They are not negotiable.

For anything to change, more people need to see that online abuse matters. It matters to abuse victims, and it matters to abuse enablers, both the willing and the negligent alike. This is especially true for the people who create and run the online spaces we congregate in. It’s clear to see the tide is starting to turn in a few places, but only because there’s enough people screaming bloody murder about their victimization. We need to have conversations about how to design systems to prevent abuse before it happens. We need to engineer systems that reward constructive community building, not just the latest hot take, outrage, snarky comment, or threat of violence.

Cynical as my opinion that human empathy doesn’t reach Internet Scale may be, that doesn’t mean I don’t want to be proven wrong. A cynic is, to quote George Carlin, “a disappointed idealist,” after all. Online harassment and abuse matters, and the discussion around how to stop it matters. That people are willing to make threats of violence to shut down the discussion is proof alone of how much it matters. What will have to happen for the rest of the world to take it seriously? When I ask that question, the disappointed idealist in me has all kinds of answers, none of which I want to put down in words. It also desperately wants to be proven wrong. It’s time to take all that utopian rhetoric around the Internet, and actually do something with it.

KonMari for Social Media

Your social media feeds are a mess, and it can be super-stressful. Like PTSD-inducing stressful. There’s always something new to see, comment on, like, favorite, retweet, get outraged about. Especially the outrage. It’s enough to make you wonder why the hell you agreed to friend your Sarah Palin-loving cousin who you only see at weddings and funerals. When Twitter, Facebook, Tumblr, and all the rest have you frustrated and annoyed, it’s time to go all KonMari on that shit. I’m here to teach you how.

Okay, I’m not saying that, like proper KonMari, every single thing in your feeds has to bring you joy. You want to know about the bad things too, at least when they involve people close to you. And you’ll never be truly free from all the sheer frustrations and annoyances of social media. What these steps will do is make your feeds a place that’s just a little less anxiety-inducing.

It’s way too easy to add someone to your friends list on social media, so the best place to begin is to unfollow and unfriend as many people as you can get away with. Take your time on this. The friend of a friend of a friend you met at a party, whose posts you see and never comment on? Unfriend him. Your aforementioned Sarah Palin-loving cousin? Unfriend her, but only if you’re sure nobody’s going to guilt-trip you about it.

There is a better way, at least on Facebook, to deal with the guilt-tripping family and friends you have to stay friended with, if you want to keep up appearances. On most services, you have a binary relationship with the people in your feeds: you either follow them, or you don’t. I’m no fan of Facebook, but one of the few things they do right is let you “Unfollow” someone while still being friends with them. This means you don’t see anything they post, share, like, or comment on in your news feed, while they can still see yours—though there’s a way around that, which I’ll explain later.

Another way to tame your Facebook News Feed is to mark your friends as either “Close Friends” or “Acquaintances” instead of merely Friends. Close Friends shows you more of their posts, Acquaintances fewer. Another wonderful benefit of these lists is that you can set posts to be visible only to certain groups. To go back to your Sarah Palin-loving cousin: if you assign her to Acquaintances, and set any political posts you make to “Friends, except Acquaintances,” you won’t have to worry about drowning in a sea of angry notifications as a flamewar erupts on your profile page.

On Twitter, things can be a little more complicated. There are two ways to get your timeline in order, beyond just unfollowing people, and they both work best when you’re using a Twitter app that isn’t the official one. (I love Tweetbot.) The first is Twitter Lists. If you want to keep up with certain accounts, but don’t want them clogging up your timeline all the time, you can set up a list and check it whenever you prefer. I keep lists for the bands I like, technology news sites, apps I use, and local stuff.

The second thing Twitter lets you do is mute. Twitter offers basic muting features, but some apps, especially Tweetbot, let you mute more ruthlessly and effectively. You can mute keywords, hashtags, and even entire accounts for a single day, a week, a month, or for eternity. Tweetbot also lets you mute all retweets from a particular account, so if someone cool is retweeting a lot of stuff you’re not interested in, you’re only two taps away from a quieter timeline.
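If you’re curious what a Tweetbot-style mute filter looks like under the hood, here’s a rough Python sketch. To be clear, the class and method names are my own invention; Tweetbot’s actual implementation is certainly different. The core idea is just a set of rules, each with an optional expiry:

```python
import time

_NOT_MUTED = object()  # sentinel: distinguishes "no rule" from "muted forever"

class MuteFilter:
    """Sketch of keyword/hashtag/account muting with expiring rules.
    A rule maps the muted term to an expiry timestamp; None means forever."""

    def __init__(self):
        self.keywords = {}   # lowercased word or hashtag -> expiry
        self.accounts = {}   # lowercased handle -> expiry

    def _expiry(self, days):
        return None if days is None else time.time() + days * 86400

    def mute_keyword(self, word, days=None):
        # Hashtags are just keywords that start with '#'.
        self.keywords[word.lower()] = self._expiry(days)

    def mute_account(self, handle, days=None):
        self.accounts[handle.lower()] = self._expiry(days)

    def _active(self, rules, key):
        expires = rules.get(key, _NOT_MUTED)
        if expires is _NOT_MUTED:
            return False
        return expires is None or expires > time.time()

    def hides(self, author, text):
        """True if a tweet by `author` containing `text` should be hidden."""
        if self._active(self.accounts, author.lower()):
            return True
        return any(self._active(self.keywords, w)
                   for w in text.lower().split())
```

Expired rules simply stop matching, which is how “mute for a day” falls out of the same mechanism as “mute forever.”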

If only all the other social networks were as flexible in letting you manage your feeds. For networks like Instagram and Tumblr, where it’s just a binary follow or unfollow, the only way out is to unfollow anything that’s causing you more angst than it’s worth. Even if it’s your Sarah Palin-loving cousin. Especially if it’s your Sarah Palin-loving cousin. Don’t feel guilty about it.