
Sanspoint.

Essays on Technology and Culture

Who’s To Blame for Toxic Ads? The Network.

It’s easy to want to blame Reader’s Digest, or Yahoo, or Forbes, or Daily Mail, or any of these sites for screwing viewers by serving them malicious ads and not telling them, or not helping them with the cleanup afterward. And it’s a hell of a lot easier when they’ve compelled us to turn off our ad blockers to simply see what brought us to their site.

But the problem is coming through them, from the ad networks themselves. The same ones, it should be mentioned, who control the Faustian bargains made by bartering and selling our information.

— Violet Blue – “You say advertising, I say block that malware”

Programmatic ad networks are sometimes the only game in town for sites with general audiences. There's no quality assurance process for the ads they push out, and no incentive for the networks to check until something blows up in their faces. When an ordinary user can be infected by clicking a link on what should be a safe and trusted site, like Forbes, like Yahoo!, like Reader's Digest, can you blame them for taking steps to protect themselves?

In an ideal world, it would be impossible for an ad to inject malware into a system; in that same ideal world, advertising could be trusted to be secure, not to leak our private data, and not to infuriate people who just want to read some content. I work in online publishing, and I block ads. If the risk of supporting publishers is that your computer becomes compromised by malware, I'd suggest you do the same.

Anil Dash – “Toward Humane Tech”

The conversation about the tech industry has changed profoundly in the past few years. It is no longer radical to raise issues of ethics or civics when evaluating a new product or company. But that’s the simplest starting point, a basic acknowledgment that what we do matters and actually affects people.

— Anil Dash – “Toward Humane Tech”

That we’re finally asking questions and starting the conversation about what the technology industry has become is a great start. All too often we treat technology as its own thing, something with its own sense of agency and purpose. We cannot forget that technology is created by humans, and it is controlled by humans. It also inherits all the flaws and foibles of the people who create it. Technology should serve and benefit everyone, not just the bottom line of companies that leverage it. And we start changing this by challenging assumptions to the contrary.

Alice Maz on Twitter Randos and Splaining

Communication is hard. Like, really hard. Brain-to-brain state transfer is impossible, so we rely on an untold number of tools, signals, assumptions, wild guesses, and luck in the hopes that we can get someone else’s black box to generate something vaguely similar enough to our original for practical purposes. (And the bastards usually don’t even have the common courtesy to echo it back so we can see if we did it right.) What strikes me about “splaining” is that it’s so widespread–both the ostensible act and the complaints about it–and so consistent. Two reasonably distinct groups of individuals speaking on arbitrary topics, but the interactions generally resemble the same form and end up in the same place. While it would flatter me greatly if the vast majority of the people in my out-group turned out to be malicious and/or stupid, it seems more reasonable to conclude the groups communicate differently and as a result have a difficult time communicating with each other.

— Alice Maz – “Splain It To Me”

An interesting take on a social media phenomenon. This is well worth your time.

I suspect that a large part of the problem with “splaining” and other communication failures on Twitter and elsewhere is that we lose much of the metadata of conversation. To borrow Alice’s example, you can tell from tone of voice and mannerisms what someone means when they say “Get the fuck out of here!” to you. In a textual environment, we have to draw inferences from our relationship to this person and our previous encounters with them.

Even among nerds who value information sharing over other forms of communication, we still need some conversational metadata to fully divine meaning. A “Get the fuck out of here!” @-reply could be positive engagement, or it could be a threat. It's still possible to tell which, but the nature of the medium makes it harder. And with a “rando,” it becomes harder still.

Alice also writes a great footnote on Twitter mentions, and what “public” actually means online. It’s 323 words that could be a standalone essay of their own. One problem, of sorts, with Twitter is that we all use it differently. For some of us, it’s a salon; for others, it’s a megaphone. I think the average Twitter user is somewhere between the two. It would behoove Twitter to keep this in mind, and adjust the platform to give users more granular control over access from the world at large, rather than the binary options of “public” and “private”.

Whitney Phillips: “Everyone you encounter on the internet is a person.”

[B]efore we do or say anything online, before we retweet unconfirmed details about the latest gun-related tragedy, before we post a shrill, sensationalist article to Facebook, before we furiously peck out our own hot take, we have to ask ourselves: Does this have the potential to make someone’s day worse? Someone’s life worse? If the answer is maybe, back away from the computer. Go outside and look at a tree. And remind yourself: Everyone you encounter on the internet is a person.

— Whitney Phillips – “We’re the reason we can’t have nice things on the internet”

There are so many great things to pull from this piece, but the above is probably the biggest one. We’re so disconnected from each other’s genuine feelings by the nature of the Internet as a medium. Asking the questions “Does this have the potential to make someone’s day worse? Someone’s life worse?” is a small step towards bridging the empathy gap online. We may be the reason we can’t have nice things on the Internet. We can also be the catalyst to change that, even in some small way.

Change The Schema, Not The User

Despite the intention of opening new worlds and reaching millions of users, we select our identities from a drop-down menu. We enter one value for our names, gender, sexuality, relationships and ethnicity, constraining our digital personhood in a database schema. These designs limit expression while excluding and erasing marginalized identities. They reflect the restricted imagination of their creators: written and funded predominantly by a privileged majority who have never had components of their identity denied, or felt a frustrating lack of control over their representation. But human identity, relationships, and behaviour are all endlessly complex and diverse: our software needs to start expecting and valuing marginalized identities instead of perpetuating their erasure.

— Emily Horsman – “The Argument for Free-Form Input”

So much of humanity cannot be reduced to the simple choices that are “easy” to add to a database schema. Even those of us who seem to fit neatly into tech’s boxes should have the choice not to. Emily also makes an important point: how much of this data do they really need? Even for ad targeting, it would be better for the data collected to be accurate, and free-form input is the way to do that.
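
To make the contrast concrete, here’s a minimal sketch in Python of the two approaches. The field names and the allowed-values list are hypothetical, invented purely for illustration and not taken from any real product’s schema; the point is only that an enumerated column rejects anyone its authors didn’t anticipate, while a free-form, optional field does not.

```python
# A minimal, hypothetical sketch of drop-down vs. free-form identity fields.
# Field names and the ALLOWED_GENDERS set are invented for illustration.

from dataclasses import dataclass
from typing import Optional

# The drop-down approach: identity constrained to whatever the schema's
# authors thought to enumerate. Anything outside the list is rejected.
ALLOWED_GENDERS = {"male", "female"}  # the "boxes"

@dataclass
class DropDownProfile:
    name: str
    gender: str

    def __post_init__(self) -> None:
        if self.gender not in ALLOWED_GENDERS:
            raise ValueError(f"unsupported gender: {self.gender!r}")

# The free-form approach: store what the person actually tells us,
# optionally, in their own words.
@dataclass
class FreeFormProfile:
    name: str
    gender: Optional[str] = None   # any string, or nothing at all
    pronouns: Optional[str] = None

# DropDownProfile("Alex", "non-binary")
#   -> raises ValueError: the schema erases this person.
# FreeFormProfile("Alex", gender="non-binary", pronouns="they/them")
#   -> stored exactly as given.
```

The design choice is small in code and large in consequence: the free-form version also makes the field optional, which is one answer to Emily’s question of how much of this data a product really needs in the first place.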