It’s been a long journey with a lot of unplanned pit-stops, but the Mindful Tech series has reached its conclusion. Here, I gather all the posts together for your reading convenience.
Part of the problem that leads to technological overload is that we tend to use our gadgets on their terms, and not our own. How many people think to change the settings out of the box? How many people absent-mindedly tap “yes” to the pop-up dialogs of apps asking for permission to send notifications, access our personal data, track our location, etc.? Most of us will just take the path of least resistance, and when that path becomes crowded with annoyances, distractions, and frustrations, we’ll pin the blame on the gizmo, and not our thoughtless use of it in the first place.
Why does well-meaning technology get in our way? It’s a result of that well-meaningness. Default settings are optimized to please the majority of users, and notifications—when applied right—can be important and useful. There’s no world in which defaults will be the right fit for everyone, of course. If you’re the sort of person who thinks about your relationship to technology, you’re probably not in that majority of people.
Notifications, too, can be well meaning. We need to know things, like if our spouse can’t pick the kids up from school, when a sudden thunderstorm is bearing down upon our location, or a reminder to take our medication. This falls apart when some less well-meaning people tap into the same functionality to drive us nuts with ads, get us to take another turn in their mindless tile matching game, or some other nonsense.
Because online services cost so much to run, and because we users are willing to pay so little, our data is harvested to fill in the gap with relevant ads. That same data could be used to give us fantastic insights into ourselves and our habits. It could make information overload less likely, showing us what we need to know, when we need to know it. Sadly, those solutions are, like notifications, easy pickings for companies who want to catch us with the right ad at the right time instead.
It’s easier and easier to give up our data, and harder and harder to know what we’re giving up, let alone what we’re getting in return. When all we feel we’re getting from our technology is stress, frustration, spam, and an uneasy sense of being overwhelmed, it’s time to adjust the terms of the deal to favor ourselves.
Throughout this series, I’ve offered ways to rethink aspects of your digital life. I’ve stayed away from specific recommendations and how-tos, with some small exceptions, because our needs all vary. There are some of us who need a constant connection to our job, or to our family. There are some of us whose preferred leisure activity exists in the same space as where we do our work. We are on limited budgets, and have to deal with systems that are out of our control. There is no one-size-fits-all solution.
Now, it’s up to you. The first step is to identify the problem. What part of your digital life is giving you the most stress and strain? That’s where you tackle things first. Experiment with the tools available to you, as long as that experimentation doesn’t stop you from doing the thing you want to do. If there’s something you want to try, but aren’t sure if you can, then start Googling. I guarantee, you’re not the only person out there who’s frustrated and looking for a fix. And in the end, yes, the off-switch is there, if you need it. Just don’t expect the problem not to come back when you turn it back on.
Part of why I obsess a bit over technology, over workflows and setups, over people’s home screens, is that I want to know how they deal with the same issues I deal with. We have the greatest information sharing technology at our fingertips, and in our pockets. Let’s use it, and share our tips, our tricks, our solutions, and our failures. We’re all in this together, all struggling and coping in our own ways, but still together.
I want to know what you’re doing to use technology in a more mindful way. Get in touch.
Slowly, but surely, New York City is rolling out free Wi-Fi kiosks on street corners. They’ll come fully-loaded with USB charging ports, VOIP calling, and an embedded Android tablet for those folks who don’t have a device to connect to it. Oh, and giant screens on each side to show ads to everyone. Ads based on “an audience profile algorithmically derived from the information the kiosks collect from their users,” to quote Nick Pinto in the Village Voice.
Nick’s article is a bit more… alarmist… than I would be in describing the LinkNYC kiosks. I’m no fan of ads that use my personal data to serve me something an algorithm considers “relevant.” This is partially because those algorithms are so regularly off base, and partially because I don’t feel these companies have the right to that data in the first place. Yes, even if I’m legally opting in by connecting to the Wi-Fi in the first place. If you want to show me an advertisement, fine, but you don’t need to know anything and everything about me to show me one.
The point is, someone is always going to pay, one way or another. I can pay $50 a month to my local ISP for internet access, or I can pay in data for the local LinkNYC kiosk. (At least in theory. They won’t be installing them in my neighborhood until early next decade.) Sidewalk Labs is paying, but they want to make that money back, so they’re going to display ads. This shouldn’t be a surprise—it’s how the Internet works nowadays.
What pushes LinkNYC into the creepy zone is that instead of the ads showing up in just my web browser, they’re going to be displayed on 55-inch screens on the street. I hesitate to call it propaganda like Nick Pinto does. It’s more just potential embarrassment if I happen to walk past my local kiosk and see an ad for men’s underwear, because I happened to be shopping for some the other night.
Of course, I can pay for internet service that isn’t going to chop up my browsing habits and spit ads out for every passerby. The ostensible goal of LinkNYC is to connect all those poor people who can’t afford high-speed internet access, or much of any internet access. Even if I assume my browsing with my paid ISP is secure and unmonitored—and there’s no reason to assume it is—why should privacy be a luxury product?
Last February, the FCC classified internet access as a public utility, akin to water, electricity, and phone service. I have to wonder how something like LinkNYC would work in a world where internet access was regulated the way we regulate electricity and water.
Yes, you have to pay for those, too. Either you pay yourself, or someone’s taxes pay for it via welfare programs and utility assistance, or you get your water shut off. The difference is that there’s no solution for ad-supported water in the home. Yet. (“Before you take a shower, you need to watch this 30-second ad for Geico.”)
In the meantime, the biggest concern most people have about LinkNYC is that homeless people are using them to watch porn, or have late night dance parties. I’m all for more people having better access to the Internet. I just wish there was a way for it to happen without trading privacy for the privilege. It’s true, someone’s always going to pay. And whoever is paying is going to want a return on their investment.
Being overwhelmed by our technological lives is inevitable. How we deal with it is another matter. It’s never a bad idea to step away when things get crazy, but that doesn’t necessarily mean cutting the cord. A number of people have been experimenting with disconnection, digital sabbaticals, and other binary ways of taking time away from technology. If these work for you, great, but temporary abstention is not an effective way of dealing with the core problem. I should know, having taken my share of Social Media Sabbaticals.
The problem with plain old disconnection is that it doesn’t actually help you confront the underlying problem of being overwhelmed. As soon as you turn that switch back on, it’s incredibly easy to slip into old habits and undo any benefits of your experiment. Total abstention only works when you make it a permanent thing. Ask anyone who’s been in Alcoholics Anonymous. If you only want to moderate your behavior, however, disconnection is a bit like dealing with nail biting by wrapping your fingertips in Band-Aids.
Disconnection can be the right thing, as long as you understand what you’re doing, and why. For example, if you’re taking a vacation, you might fully disconnect from your digital life. This means not checking work email, not keeping up with the streams, and just taking photos for yourself. If the goal of your disconnection is to focus entirely on the experience of your trip, that’s a win. Just don’t expect it to impact your habits in any meaningful way once you get back.
So, what can you do when the inevitable technological overload happens?
You can scale back. If a particular social network, app, or service is demanding more of you than you feel up to, drop it for a while. Overwhelmed by the volume of your feeds? Unfollow, unsubscribe, unfriend, and filter, mercilessly if necessary. You can always add things back later, if you need—keep a list of what you’ve purged and why, just in case. And, on social media, be prepared with a decent excuse when someone comes by demanding a reason why you unfriended them. (I prefer to unfollow people on Facebook for this reason.)
When you’re overwhelmed by the complexity of your systems, try simplifying things, even as a temporary measure. Identify the bare minimum you need to still get things done, and pare back. You can even try going analog, at least for those things that don’t have to be 100% digital. The mode switching is good for the brain, and it’s a lot harder to overwhelm yourself with the general linearity of paper and ink or pencil.
If you’re good at self-control, you can try to establish rules for yourself. No Twitter during working hours, or check Facebook only on Friday. Set a deliberate time at which you’ll disconnect for the day. There are a number of tools to help you do this, from hardcore apps that’ll completely disable your computer’s Internet connection, to ones that’ll gently remind you how much time you’ve wasted on your iPhone.
For those of us with multiple devices in our lives, consider making a specific device your “communication” device. Configure all your messaging and other apps to only alert your specific communication device, and manage it all from there. And if things get hairy, you can stuff it in a drawer and ignore it while you focus on the important things—whatever those may be for you. There’s so much power in our tools, to both enable us and distract us, that it behooves us to use it well. Being able to route all that distraction to one place is powerful, though also dangerous.
And yes, don’t be afraid to just unplug the cable from your router if that’s what you really need. Don’t be afraid of Airplane Mode if it’ll get you through a tough spot. Just make sure you understand that it’s a temporary fix, not something that’ll change your relationship to all the gizmos that connect to it. Disconnection works, but it’s one tool of many in our arsenal. Use it wisely.
The computer is a cold, hard, logical, and rational thing. It does not care for your squishy human ambiguities, nuance, and irrationality. The computer lives in a world of zeroes and ones, of pure logic and reason, and it is always correct. Quibble all you want with its results, but you’ll get nowhere. The problem isn’t the computer, it’s between the chair and the keyboard. It’s you.
This is the attitude reflected in the technologists who dream of algorithms and software eating the world. Instead of all the ambiguous, flawed human beings doing all the work, the world will be run by intelligent, logical, hyper-rational software AIs. Humans will be freed to do all the creative work that the machines can’t—and also to program the machines, of course. Maybe, eventually, some of the machines can program themselves, but in the meantime, better learn to code, ’cause it’s gonna be the only job left.
Problem is, of course, that computers aren’t logical, or rational. They’re just good at following directions. Those directions might not make any logical or rational sense, but as long as they’re valid and executable, the computer will follow them to the letter. If the computer is given instructions that encode the biases, unconscious or not, of the programmer, the results will contain those same biases. Nobody likes to hear this, if only because so many developers love to see themselves as logical, rational, and unbiased. Problem is, there’s no such thing as a completely logical, completely rational, completely unbiased human being.
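To make that concrete, here’s a minimal, entirely hypothetical sketch. Nothing in it names a protected attribute, and the computer executes it faithfully, yet the programmer’s assumption about “risky” neighborhoods is baked right into the instructions:

```python
# Toy example (hypothetical): a loan-scoring rule that never mentions
# a protected attribute, yet encodes bias through a proxy variable.
def loan_score(income, zip_code):
    score = income / 1000
    # The programmer "knows" certain neighborhoods are risky; the zip
    # codes chosen here reflect that assumption, not any logic the
    # machine supplied. (These zip codes are made up.)
    risky_zips = {"00001", "00002"}
    if zip_code in risky_zips:
        score -= 25  # an arbitrary penalty baked into the directions
    return score

# The computer follows these directions to the letter. Two applicants
# with identical incomes get different scores based only on where they
# live; the bias is in the instructions, not the machine.
print(loan_score(50_000, "90210"))  # 50.0
print(loan_score(50_000, "00001"))  # 25.0
```

Wrap that function in a statistical model instead of an `if` statement and the proxy gets harder to spot, but the principle is the same.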
Those technologists who play up artificial intelligence as the solution to human foibles either don’t understand that AI will inherit those same human foibles, or don’t care. The latter is much more sinister, but no matter the case, you still end up with what Maciej Ceglowski puts so succinctly as “money laundering for bias.” The casual disregard for algorithmic bias, the blind ignorance of the very human problems that humans build into their systems, acting as if the compiler or interpreter washes away the original sin of human bias, is already causing problems.
But it’s not the technologists who are having their photos tagged as gorillas, seeing ads suggesting they might be criminals, not seeing ads for high-paying jobs because of their gender, or being asked to pay more for the same service based on the racial makeup of their neighborhood. Instead, technologists like Elon Musk are more worried about AIs becoming smarter than humans, the plot of a million terrible science fiction movies. If the AI missteps of the last few years are any indication, it will be a long time before any AI becomes smarter than us—especially since we’re the ones building them.
The sooner we wake up to this fact and start addressing the problems of bias in algorithms, the better. One solution is to open the playing field of development to a wider, more diverse group of people. Or, even better, we could rethink the degree to which we want to use automation and algorithms in the first place. If these pieces of software aren’t going to be any less biased and flawed than the humans who make them, what’s the benefit? Instead of the impassive black box of a computer algorithm making the decision, let’s go back to the human being. At least then we have a chance to appeal.