We have devices that know just about where we are. There are technologies being developed and deployed to know where we are inside buildings. Our smartphones have light sensors, microphones, fingerprint scanners, gyroscopes, and other ways to know about the environment they're in. Our pockets hold computers that can change interfaces on a whim, without the limitation of physical buttons. Add in the power of the sensor arrays on our devices, and I grow ever more convinced that the future of computing is going to be context-aware. We're already making the first, tentative steps, with “stone knives and bearskins.”
Knowing the potential, I find the current state of context-aware computing wanting. I've already complained about the difficulty of precise location tracking in dense urban areas, but there are workarounds for that. Living in the Apple ecosystem puts me at a disadvantage here, as it's too locked down for an app to silence my ringer, or switch out my home screen apps, when I connect to the office Wi-Fi. My phone knows when I'm in motion, but it doesn't know to shut off the ringer when I'm driving a car. I'm limited to geofenced notifications and whatever limited triggers I can get with IFTTT. There are so many other possibilities, but I know Apple won't implement them at the hardware and OS level until they can do it right by their standards.
The other obstacle to context-aware computing is that it would either require a lot more setup, or take time for the device to learn a user's habits. We see this now with services like Google Now, which tracks your location and tries to learn your habits. It takes time for it to realize that you try to get to work at 9 AM, and that you take the subway both ways, except on Wednesdays, when you meet friends for coffee after work. Asking a user to provide all their locations and settings up front is just asking for trouble. More than half of smartphone owners don't have a passcode on their devices, according to Apple. Asking them to configure one home screen for the office and another for home is pushing it. Adaptive is the way to go, but I would hope any context-aware tool would also offer an option for direct setup.
Context-aware computing also provides the use case for wearables—to a point. Google Glass style computing would be overkill, but a smart watch that could provide context-based alerts and information would be useful, if done right. 1 I'm still not sold on a wearable device that's little more than a way for your other devices to know you're near and buzz when you get a notification, à la Craig Hockenberry's theoretical iRing. There are too many failure points: battery capacity, forgetfulness, and marketing, to name three. Something like a Fitbit Force, with Apple's attention to detail and integration, would pry money out of my wallet. (I'm also the kind of nutball who wears a watch every day.)
The dream for me is to have my phone be my outboard brain—reminding me to leave home early when the trains are backed up, that I made lunch reservations somewhere, that I need to pick up my dry cleaning when I get off the subway, that it's time for me to turn off the screens half an hour before I go to bed. I want my devices to be smarter than me about the things where I am dumb. I want to be able to set up the requirements and forget about them, unless there's a dramatic exception to my routine. I want all of this, and I want it done in a way that avoids notification fatigue.
Wishful thinking? Of course. But, we're so close now. Some improvements to the sensors in our phones, another iteration or two of battery technology, and a few good apps are all that we need to have truly context-aware computing in our pockets. The future is just over the horizon, and I hope enough people want to go there.
And this won't be done right until we have low-power connectivity, attractive low-power displays, and high-capacity batteries that can fit in a watch. One out of three ain't good. ↩
Recently, I tried some more location-aware stuff on my iPhone: life-logging apps, context-sensitive notifications, automatic logging of when I get to work, that sort of thing. I love the idea in theory, but the implementation is rough in practice. I think I know why. Part of it is the inexactitude of cell phone GPS technology. Another part is that many of the companies and individuals building location-aware apps live in places like Silicon Valley, with far less human density. Most cities in the world, let alone the United States, aren’t as dense as mine.
I live in New York City. I work in Midtown Manhattan. Measured on Google Maps, my morning coffee shop is within 700 feet of my office. That sounds like a lot, but even the smallest geofence I can set up with IFTTT’s iOS location channel is large enough that it triggered a reminder to log my coffee intake while I sat at my desk. Three times. So, I turned that off. Looking into it, the smallest geofence I can create in the iOS Reminders app has a radius of 328 feet. (IFTTT’s smallest appears to be larger.) Combine that with the quirks of GPS data, and no wonder I’m seeing my Starbucks loyalty card on my lock screen when I sit at my desk. The geofence just isn’t narrow enough.
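For the curious, here’s a rough sketch of the iOS region monitoring that underlies all of this. The coordinates and the logging logic are made up for illustration, but the roughly 100 meter (328 foot) radius is the same floor I keep bumping into:

```swift
import CoreLocation

// A minimal sketch of iOS geofencing via region monitoring.
// The coordinates and identifier are illustrative, not from a real app.
class CoffeeFence: NSObject, CLLocationManagerDelegate {
    let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization() // region monitoring needs "Always" access

        let coffeeShop = CLLocationCoordinate2D(latitude: 40.7549, longitude: -73.9840)
        // Roughly 100 m (328 ft) is the practical floor for a reliable
        // geofence. An office 700 feet away sits close enough that GPS
        // drift can put you inside the circle while you're at your desk.
        let region = CLCircularRegion(center: coffeeShop,
                                      radius: 100, // meters
                                      identifier: "morning-coffee")
        region.notifyOnEntry = true
        region.notifyOnExit = false
        manager.startMonitoring(for: region)
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        // Fires when iOS believes you crossed the fence, which, with
        // urban multipath, can happen three times from the same chair.
        print("Entered \(region.identifier): time to log that coffee?")
    }
}
```

When GPS error in a Midtown canyon can be tens of meters on its own, a fence that size can’t tell my desk from the espresso machine down the block.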
How much of this is a hardware limitation, and how much is my environment’s density being a special case, I’m not sure. For what it’s worth, I’ve been a geolocation edge case from the moment I moved into an apartment over a coffee shop in West Philadelphia. (Am I home? Am I getting coffee? Is there even much of a difference?) But if we’re working towards a context-aware world, with our little GPS-laden phones at the center, it behooves the people making the technology to figure out how to pin someone’s location down better. Until then, over eight million people will be left out of the revolution, all because their phones don’t know if they’re at the office or at the bar across the street.
App developers working in this space, or any space that relies on making generalizations about human behavior, need to think a little more about their potential audience. Not everyone drives a car, not everyone commutes above the surface of the earth, and not everyone gets their coffee more than 1,000 feet from their desk. For location-aware apps and services, catching those small differences is the difference between an app you can use and an app that just frustrates. It’s early days, and these are early adopter blues. Better to point the issues out now, before the normals get on board.
Another high-profile case of a company poking around in a user's email (Microsoft, in this case) has led to certain tech pundits espousing self-hosted email. Again. While they're right that email you host yourself is (mostly) immune from a third party accessing your data, there's an intolerable air of arrogance around their idea of self-hosted email. What I read, when Ben Brooks writes about “owning” email, is: “I have the time, money, and technical skill to administer an email server. If you don't have all of those things, you're getting what you deserve.”
Yes, Google crawls through your email to target ads. Yes, that sucks. But even if you “personally don't even like emailing people who use Gmail,” you don't get the right to act high and mighty because you have time, money, and skills that the majority of people don't. Furthermore, unless you're hosting your server in your own house, where you control all access, remote and physical, there's still the possibility of someone getting at your email. Even Fastmail, with Australia's laws protecting data, would probably cave if someone knocked on their data center door with a court order and a bunch of men with guns.
So, unless you're totally fine with your email being accessible to the government and the company hosting it, I suggest you go host it yourself.
But that's the biggest problem: self-hosted email is outside the reach of most people. I have the chops to set up a basic email server and install a spam filter (or have MailRoute point to my server). What I don't have is the money to spend on hardware and hosting, or the time to keep a mail server up to date with security patches and other administrative crap. If you expect your average Gmail user to “own” their email to the tune of a few hundred dollars up front, plus the monthly price of hosting, you expect too much. Suggesting, as Marco Arment does, that a user who handles truly sensitive data pay for a Fastmail account is more reasonable, and not nearly as condescending. (And why has Google allowed unencrypted connections for this long, again?)
There are problems with the arrangement around free email services, to be sure. I'm not happy about Google's ad algorithms poking through my email, but it's a trade-off I'm willing to make to not do it myself. I'm really not happy about the idea of my government poking through my email either, but I'm not going to blame Google for that. [1] We can address these issues, and educate people about what they're giving up when they sign up for free email services, without the intolerable air of technological privilege. I suggest people like Ben Brooks try that before being smug about how secure their ivory towers are.
See previous comment about men with court orders and guns. ↩
Microsoft is apparently working on “no-touch” screens. It’s the latest salvo in the push towards a Minority Report sort of touchless UI, and I remain heavily skeptical. While a touch-free UI might be a good way to interact with a large display over a distance, it’s very difficult to build a device that tracks hand movement while ignoring false input. An easy hack would be to require the user to wear some sort of bracelet or ring to trigger the tracking, but that has its own set of issues. You might get someone to do it at work, but nobody wants to put on a bracelet to change channels on their TV.
What really confuses me is the idea of touchless smartphone and tablet screens. What makes smartphones and tablets so inherently usable is the touch UI. Human beings are tactile creatures. Touching things is our primary way of interacting with the world, and the touchscreen UIs we have now are extremely intuitive to people because of it. A touchless tablet/smartphone interface removes that direct interaction, and adds another layer of abstraction. It’ll be harder to learn, and for what benefit? No fingerspoo on your shiny smartphone? The touchless UI is the latest bit of sci-fi UI hotness we all want on our desks. Unlike Star Trek: The Next Generation’s touchscreens, [1] the Minority Report UI is of limited usefulness. If you can manipulate something directly, and you’re right by it, direct will win.
I’m not saying that touchscreens, as we know them, are the end-all of user interface design. What really excites me is incorporating haptics into touch devices. The latest episode of John Gruber’s The Talk Show, with guest Craig Hockenberry, touched on this, using the example of the old iPhone “slide to unlock” interface. If there were a way for a glass touchscreen to simulate the texture of buttons, to signal resistance to an object being dragged, and otherwise mimic a real physical interface, that would open up a ton of new possibilities. The accessibility benefits for blind users alone would be a boon.
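To make that concrete, here’s a hypothetical sketch of how a “slide to unlock” control might ask for haptic cues, borrowing UIKit’s real feedback generator classes. Canned taps are a long way from simulated texture, but they hint at the direction:

```swift
import UIKit

// Hypothetical slide-to-unlock control with haptic cues.
// UISelectionFeedbackGenerator and UIImpactFeedbackGenerator are real
// UIKit classes; the control itself is illustrative.
class UnlockSlider: UIControl {
    private let detentTick = UISelectionFeedbackGenerator()
    private let endStop = UIImpactFeedbackGenerator(style: .heavy)

    override func beginTracking(_ touch: UITouch, with event: UIEvent?) -> Bool {
        // Warm up the Taptic Engine so the first tick isn't late.
        detentTick.prepare()
        endStop.prepare()
        return true
    }

    // Call as the finger crosses an evenly spaced detent; a faint
    // tick stands in for the ridges of a physical slider.
    func thumbCrossedDetent() {
        detentTick.selectionChanged()
    }

    // Call when the thumb hits the end of the track; a firm thump
    // stands in for a physical end stop.
    func thumbReachedEnd() {
        endStop.impactOccurred()
    }
}
```

It’s vibration pretending to be resistance, not the real thing, but it’s a first rung on the ladder.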
In a Twitter conversation with Joseph Rooks, he said, “Figuring out why something can’t be done well is the easy part though. It’s more fun to imagine how it can be good.” He’s right, but there’s a pragmatic streak in me that has me questioning the utility. There are problems that a touch-free UI can solve, but they’re limited in scope. It’s the same with Google Glass style wearable computing. As I mentioned at the start, I can see this technology working, if not well, at least passably for large displays like TVs. A touch-free tablet or smartphone is a great demo, but it’s a reach to imagine it solving any problem a normal person would have.
In truth, even the Star Trek all-touch UI has its problems. How do you control the ship when the power to the screens is down? ↩
The former CEO of eMusic thinks the price of music is too high. David Pakman has some stats on his side, and it's worth checking them out. The numbers make an almost convincing case, but the conclusion he draws reads as “We'll make it up in volume.” That scares me as a music fan. The cost of streaming services is a huge problem. After my screed against streaming music, I've decided to give music streaming one more chance to impress me. While it's still early days in the experiment, I can't say any of the services I've tried offers a compelling value proposition at $10 a month. Half of that might sway me. It's a fraction of the $60 to $100 I spend per month on music in one form or another. 1
There are three things that worry me. First is that streaming royalties and streaming profits are borderline non-existent. While digital distribution of music as downloads is almost free for labels and indies alike, streaming has serious overhead costs, and the more people who use it, the more it will cost, unless the services can pony up for a sweetheart deal à la Netflix and Comcast. If they can barely afford to keep the music playing, and barely afford to pay the artists, cutting the price in half isn't going to help, even if the labels allow it.
Second, it's the people on the technology side, not the creative side, who are leading the charge about music being overpriced. I'm reminded of ex-indie rocker Tim Quirk, now with Google Music, who claimed “[Y]ou can't devalue music.” He's out of the music-making game: not recording, not performing, not worrying about whether he'll ever see a royalty check because his album hasn't made back the advance. The technology people make money when the company they work for makes money. They make money when more people pay to use the service, or at least when advertisers buy ads against it. The last person to see any of that money is the artist.
This isn't new. The artist typically got a pittance even when music was sold on Big Black Discs. Unless they became a Top 40 sensation, multiple times running, or at least managed to build a large enough fan base to sustain them, most music acts keep the money coming in through performance or licensing deals. And the label will take a cut of that, too. David Pakman talks a lot about consumers, but the word “artist” or “band” does not appear a single time in his piece. The supply of music is taken as a given, and unfortunately, he really can get away with that line of thinking. People aren't going to stop making music if the money dries up. It still comes off as callous.
What worries me most is that the streaming services and the download services alike are competing against free, and free is very dangerous to compete with. YouTube is an insanely popular way to listen to music, and while it does share ad revenue, that sharing isn't available everywhere, or to every artist. Unless you're Psy, you're probably not getting anything more than the price of a cup of coffee. Naturally, YouTube is popular with the teenage demographic, the ones who typically don't have much money to spend on music anyway. Even the ad-supported free tiers on the various music streaming services compete with the paid versions, and I don't think even stopping users from skipping songs and inserting one-minute ads after each track would drive paid growth.
Music, like every other form of art, is an inelastic good: the demand is perpetual and constant. But music consumers have so many ways to get what they want, and many of those ways are a lot cheaper than $9.99 a month, so demand for any one source behaves elastically. Even so, an artist who is willing to play the game can still find a way to make money. That's the foundation of the relationship between labels and artists—if the independent hustle is too much, a label can take over some, or all, of the business aspects for a price. Part of why labels set streaming rights prices so high is to cover their expenses and hold up their end of the bargain with their artists. (Though a lot of it is, yes, gratuitous pocket-lining.)
Something will have to break before we see things level out again, and it looks like streaming music might be it, if it can't start making money. Downloads may be losing ground to streams, but only because of price. Somewhere a balance is tipping, and I don't know what it will take to right it again. I worry that when it finally tips, it's going to take a lot of the artists I admire with it. That's why I'm doing what I can to support them with my money, directly when possible. Unfortunately, most people don't have that much of a connection to the music they listen to. If whatever way people opt to get their music in the future can forge that connection, we may be in business.
And that is a tough row to hoe.
I am, however, an outlier, considering the average music listener spends that amount per year. ↩