Sanspoint.

Essays on Technology and Culture

Finding a Place for iPad

The iPad has always struggled to find a place in my computing life. Not that I haven’t wanted to use it, but it’s historically been a much more limited device than my laptop, while lacking the portability of my iPhone. It didn’t help that I opted for an iPad 3, a compromised device whose hardware could barely keep up with its Retina display. While it was fine for a couple of OS versions, later updates only dragged the performance down—and as for the fancy new features? Forget it.

It did find a niche: it became the device where I read my RSS feeds in the morning, read comic books at night, and occasionally banged out words on the go. Not that I did much of the latter. The iPad 3 lacked the portability of the previous models—it felt heavy in my bag, so I mostly left it at home on the dining table. In some ways, it felt like I’d spent $500 on an entertainment device that I could occasionally use for “real” work, if I was willing to put up with the limitations of the hardware and software.

About a month in with the iPad Air 2, however, I’m singing a very different tune. The iPad 3, fun as it was, never made me want to use it more, even before OS updates caused it to slow down. The Air 2 is fast and flexible enough that it not only does a huge chunk of what I can do on my Mac, but does it well enough that I want to use it more. I understand how Myke Hurley feels about his iPad Pro now. Doing some stuff on the iPad is slower, but it feels… better somehow.

Case in point: I’ve wanted to see if I could use the iPad for my web programming project. Matt Birchler posted a guide to his web development workflow, but it didn’t fit my needs. I’m developing in JavaScript and storing my code on GitHub, so Coda was out. I’d found a couple of Git clients for iOS, so I could access my code on the go; it was just a question of editing and testing it. After listening to the second episode of Canvas, a podcast on iOS productivity, I found a way.

It turns out the Git client Working Copy works as an iOS Document Provider. So I can use a programming text editor like Textastic to do the editing, and the changes propagate back into Working Copy, where I can test them in its integrated browser. It’s not perfect: Textastic hasn’t been updated for split-screen multitasking yet, but it works well enough that I was able to push some bug fixes back to my GitHub repository right from my iPad. That’s incredible.

I don’t expect I’ll be sitting at my dining table banging away at JavaScript when I have a more functional coding environment on my Mac. When I’m away from the Mac, though—and the Air 2 is two-thirds the weight of the iPad 3, so I’m carrying it around a lot more—it’s great for quick fixes. Besides, that’s just what I can do with it now. Who knows what functionality iOS 10 will bring, or what apps someone will develop that make the work I do on my Mac more appealing to do on the iPad?

Maybe the iPad won’t have a niche. Maybe it’ll become the computer I choose to do most of my work on. I don’t see that happening any time soon. There are still too many limitations to iOS and the iPad hardware right now, but that’s a temporary problem. Apple’s shown they want to make the iPad into something powerful enough for more than just content consumption. I wouldn’t be surprised if, in a year or two, Apple releases a version of Xcode for the iPad, if only because I can’t imagine iOS engineers not wanting to write code for their platform on the platform. Until then, I’m happy with my Air 2 and its capabilities—but I’m also eyeing the iPad Pro with more than a bit of gadget lust.

Why New Features Win Out Over Fixes

Back when I worked for The Startup, my job also included some QA testing. I took this aspect very seriously, since, as the Community Lead, I saw myself as the public face of our product. When our users—or would-be users—ran into technical trouble, I’d lobby for a fix. The onboarding process was a particular sticking point. It was a multi-step, multi-page mess, and people often abandoned it halfway through. They’d still be registered, and we could count them in our user numbers, but they never came back.

Our CTO was more interested in rolling out new features than in fixing core functionality. When nobody used the new features, he blamed me for not promoting them enough. When people did use them, and the features broke—which happened a lot—I took responsibility, falling on my sword for our users. If you paid $99 to post a job and ended up with no listing but a charge on your credit card, having someone swoop in and make it right with a free upgraded listing is bound to make you feel better.

The CTO disagreed. Vehemently. He’d grudgingly get around to a fix, while conspiring with the Founder to propose and build out another quarter-baked feature at the next Scrum. Meanwhile, bugs and issues languished, despite piles of tickets in Jira—which the CTO administered and used to set priorities. When I pushed back against shifting the focus of development away from the core product, yet again, towards the new job board feature… well, that’s when I knew I was on the way out.

This isn’t a problem exclusive to my former employer, or even to startups. It’s prevalent enough that there are multiple names for it: “Shiny Object Syndrome” and “Featuritis,” for example. New features look cool. You can write a press release or a blog post. They look like progress, and can even drive user growth. Whether you’re a venture-backed startup that needs to get the numbers up for your next round, or a publicly traded company that needs bigger returns this quarter, new features provide the illusion of progress. Plus, new features are more fun for the development team to build.

So when Twitter ignores fixing long-standing bugs that allow bad actors to ruin the experience, while adding and proposing new features like Moments, Polls, and 10,000-character Tweets, they’re focusing on the illusion of progress for their investors. The product team isn’t loyal to the users; they’re loyal to the investors who keep the ship afloat. Worst of all, this is by necessity. Otherwise, how do they keep the lights on and cover the catered lunches for the staff?

There are only two ways out of this conundrum. The first is never to get into it: to have “a thousand no’s for every yes” as part of the culture from the beginning. A brief look at the technology sector will confirm the rarity of this approach, however. Companies with a culture like this are rarely rewarded, and are consequently thin on the ground. That leaves the other approach: having leadership stand up to investors for the good of the product—which rarely ends well.

In the case of Twitter, I can’t see Jack Dorsey having the temerity to stand up to public shareholders to put an emphasis on the tools users need to deal with harassment on the platform. If he wants to keep his gig, he’d be better off proposing new features to grow the company’s audience, get more advertisers, and improve the company’s finances. That’s all the investors care about, which really sucks for the rest of us.

It’s Not That Tech Doesn’t Care, It’s That Tech Doesn’t Care About What We Care About

Randi Harper, founder of the Online Abuse Prevention Initiative, posted a few things Twitter could do to prevent abuse. In the same thread, she also justifiably rips on Jason Calacanis and Vivek Wadhwa for their weak-sauce solutions to a very real problem that rarely affects them. And when it does, it certainly doesn’t rise to the level that many—mostly women and people of color—face every time they log in.

To give Calacanis and Wadhwa some credit: as successful VCs, they have experience and skills that help companies succeed, grow, and overcome many of the hurdles they’ll face. They’re also avid Twitter users, so they have a vested interest in keeping the platform alive. (They might be investors, too; I couldn’t confirm it with a quick Google search.) But they don’t get human psychology. They don’t get how systems can be twisted by bad actors for their own ends—only how systems can be “disrupted” by something more efficient.

Jason and Vivek care about Twitter, but they care about the wrong thing. The values of the technology industry, as we start 2016, are still rapid growth and high returns, damn the cost. It’s why we have nearly 150 startups valued at or above $1 billion. When your priority is getting a big return for your investors—and a nice little bonus for yourself—anything that doesn’t directly contribute to that is not a concern.

That includes diversity. That includes tools to prevent abuse. That includes, above all, empathy.

That needs to change.

As tech eats more and more of the world, the companies doing the eating will eventually have to confront these problems. We’re already seeing some of them do so, and others stick their heads in the sand. For the companies that address the problems head-on, with real, inventive solutions and ideas, I predict real success. For the companies that keep their eyes on the valuation, and not the quality? I predict nothing but gloom. It’s not too late for Twitter to pivot on this, but first they—and their investors and advisors—have to care.

I Am Not an Apple Pundit

When you write about technology, you have to write about Apple. They’re one of the prime movers in the technology world, along with Google, Facebook, Amazon, and (amazingly, still) Microsoft. You can’t avoid it, whether you like Apple’s products or not. And seeing as there are five Apple-branded products on my desk right now—and one on my wrist—I’d put myself in the former category. I even keep up with Apple news and rumors—though the two are conflated so much it’s hard to tell them apart.

But not everything Apple does is important to me. I don’t have a TV, so I don’t have an Apple TV. I don’t drive a car, so I don’t care about a theoretical Apple Car. I own a pair of Bluetooth headphones, so if Apple does drop the headphone jack on the next iPhone, I’ll be prepared. Not that we even know Apple’s going to do that. The degree of Kremlinology and tea-leaf reading in the Apple community is mind-blowing and often frustrating. How many hot takes and think pieces were written on the Smart Battery Case alone? How many words were written about Apple Watch Edition pricing, when it had no bearing on the product?

I use Apple products because they are the best tools for what I do, not out of a sense of loyalty to the company. I could probably do the majority of what I do on my devices with Android and Windows, or even with Android and Linux. My investment in Apple extends only as far as they continue to make the best tools for what I do. I don’t see that changing any time soon, though I sympathize with Marco Arment’s concerns. I also see no reason to worry about Apple’s hardware yet.

I’ll keep writing about Apple when I feel there’s a need. That doesn’t mean I need an opinion on every move the company makes, every company they acquire, and whatever rumor is floating around today. It feels like a distraction. Expect me to write fewer pieces on future device features, and more about the developing role of new hardware platforms. The latter gives me much more to chew on.

The Participatory Web

A few days ago, I mused on how we put the Web on a diet, by way of Maciej Cegłowski’s excellent “The Website Obesity Crisis” talk. The conclusion I drew from Maciej was:

“[W]e people who make stuff on the web strip this crap down and focus on making awesome stuff everyone can use without compromising a user’s computing power or their privacy, and make it easier for someone to get started making that awesome stuff.”

But doing that is going to be very, very hard, and there are many reasons why. I’m going to single out two big ones below.

The Audience of the Web has Changed

I got on the Web for the first time in 1996. I wanted to, because I was a 12-year-old geek who loved computers and was fascinated by this whole Internet thing. So I begged and pleaded to get online. My parents finally relented, buying a 56K modem to slap in my 486 and signing up with our local telco’s ISP. A few weeks later, I’d set up an MST3K fan site on Geocities, learning HTML out of a book.

Back then, the only way to put anything on the Web was to find hosting—either by setting it up yourself, or through paid or free services—write HTML, and upload it somewhere. By 1996, there were tools, like Geocities, that made this easier. If memory serves, Geocities even included a web-based FTP tool, so you could upload your pages and images from within the browser. None of it was difficult by any stretch, but it could be quite intimidating to a new user.

Over time, hosting platforms of all stripes developed page-building tools to make the process easier. Now you didn’t have to learn HTML, or pay through the nose for a WYSIWYG editor, to build a web page. This made it easier for the less technically adept users who were joining up to stake their claim.

At the same time, web technologies became more powerful, letting you do all kinds of crazy stuff. For a while, my personal Geocities site had giant images, several pieces of embedded audio, a scrolling text Java applet, and a CGI script hit counter—pre-broadband! (I didn’t keep it this way very long, though.)

The history of popular web platforms ever since has been one of tools that make it easier to put something in front of people, while reducing the amount of effort needed to do it. Twitter, Facebook, Tumblr, Instagram, YouTube, Medium: whatever web-based platform you want, they all operate on that same basic principle. The easier it is to put something in front of people, the more likely a person is to do it.

I don’t think the people who spend their time writing on Medium would have set up hosting and blog software fifteen years ago, because it was such a hassle. I should know: I did it. But even if you’re willing to put up with that hassle in 2016, it’s harder than ever to get started, because…

The Technology of the Web has Changed

In 1996, the fundamentals of the web were HTML files. Fancy folks might write Perl scripts and have a cgi-bin folder on their web host to summon them. Maybe there was a FileMaker database to spit out flat HTML files as a proto-CMS. I don’t know. I wasn’t a professional web person then. Whatever you were doing, you could likely hit View Source in your browser, and see HTML code that could be saved and tweaked. Not so much now.

Even if you’re building something basic in 2016, you need to factor in multiple screen sizes: phone, tablet, desktop, etc. JavaScript is essential for nearly everything. Heck, if you’re viewing this site on a phone, I’m using JavaScript to do the little slide-out navigation thing, and this site is almost 100% pure text content. In 1996, you could have one person, with a title like “Webmaster” who handled the whole thing. In 2016, you need a team for anything larger than a basic weblog—unless you’re offloading it to Squarespace.
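
To make that concrete, here’s a minimal sketch of the kind of slide-out toggle I mean. It’s not the actual script this site runs; the element IDs and class name are invented for the example.

```javascript
// A minimal slide-out navigation toggle, 2016-style.
// The IDs and class name here are hypothetical; real markup will differ.
document.addEventListener('DOMContentLoaded', function () {
  var button = document.getElementById('menu-button');
  var nav = document.getElementById('site-nav');

  button.addEventListener('click', function () {
    // Toggling a class lets a CSS transition do the actual sliding.
    nav.classList.toggle('nav-open');
  });
});
```

Even on a site that’s almost pure text, a little script like this rides along.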

We’ve incorporated a ton of dynamic crap into the Web. Flash might be dead, but its ghost lives on in Adobe Edge and other technologies for animation and interactivity on the web. These things didn’t just happen—we asked for them, and we built them. Some of them have been standardized. Some have not.

There are sites on the web whose HTML files merely call a series of JavaScript files that generate the entire page on the fly. That’s insane, when you think about it. The stack is huge and complex. No wonder people go to Medium if all they want to do is write words.
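
To illustrate, here’s a deliberately stripped-down sketch of the pattern. Real sites do this with large framework bundles rather than an inline script, and every name below is invented:

```html
<!-- A toy version of a fully client-rendered page. -->
<html>
  <head><title>Loading…</title></head>
  <body>
    <!-- The body ships essentially empty. -->
    <div id="app"></div>
    <script>
      // On a real site this would be several JavaScript files;
      // either way, the page is generated entirely on the fly.
      document.getElementById('app').innerHTML =
        '<h1>Hello</h1><p>Everything you see was generated by script.</p>';
    </script>
  </body>
</html>
```

Hit View Source on a page built this way and you get the shell, not the content. That’s a long way from the web of 1996.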

Jeffrey Zeldman, a web guy of the old school, recently wrote:

“[T]oo many developers and designers in our amnesiac community have begun to believe and share bad ideas—ideas, like CSS isn’t needed, HTML isn’t needed, progressive enhancement is old-fashioned and unnecessary, and so on. Ideas that, if followed, will turn the web back [into] what it was becoming in the late 1990s: a wasteland of walled gardens that said no to more people than they welcomed. Let that never be so. We have the power.”

I don’t recall the late 1990s on the web being that terrible, but he’s got a point about the technology stack. The more complicated we make the Web, the harder it is to participate, and the easier we make it for companies to create walled gardens for us to live in.

Is There No Way Out?

My hope is that we’ll find a way out in time, much like we did in the ’90s and early 2000s. I’m watching this debate as a technically inclined hobbyist user, not as a developer or designer. We’re at least having the discussion, which is a good start. What worries me is that any solution requires buy-in from the people who actually make what we see on the web. Back when that was largely other geeks, it was easier.

Now so much “content” comes from media companies that call themselves technology companies, and technology companies that call themselves media companies. Other companies control those content companies’ access to their audiences. They will all want a cut, and they will all want ways to lock down any solution to benefit their platforms, not the audience or the creators. Not that most people creating stuff will even notice, if only because they’re having an easier time of it.

Any solution will have to take both of these changes into account. We need to make it easy for people to participate in the Web, and we need to make the tools and technology for doing so open and safe. Otherwise, we might be able to participate, but only on Facebook, Twitter, Google, or Apple’s terms. At which point, is it even the Web anymore?