April 15, 2012

An evidence-based refutation of the Project Glass parodies

The following was written by Saul Reynolds-Haertle, a close friend of mine who is too busy starting his PhD at Georgia Tech to run his own blog. It is posted here at his request.

It seems like half of the internet is complaining about Google's new wearable computers. They say that they're distracting, they say that you can't see through them, and they ask why you would want to be hooked into the internet like that in the first place. However, all of the basic usability complaints are built on critically unsound foundations: none of the complainers have used one of the devices. I've at least tried the technology1, and I have some facts and some ten-second experiments that I hope will address the more common concerns.

Obstructed Vision

We'll begin with the claim that head-mounted displays would obstruct the wearer's vision. This is the most common argument of the parody videos that spread like wildfire immediately after the announcement. All of them go wrong before they start, mostly because they fail to consider the fundamental differences between the human eye and the recorded video used to make the announcement. Here are a few of the ways in which this difference is important.

First, people are already missing huge chunks of their visual field. To start with, your brain only really pays attention to your foveal region, which is about the size of your thumb at arm's length. On top of that, most adults have floaters, and everybody has the high-school-science "blind spot" where the eye's neurons displace actual sensing elements2. Add in your nose, eye socket, hair, and sunglasses, and your brain is continually compensating for large portions of your visual field being useless. Since the icons presented by Project Glass are in the region occluded by hair and eyebrows, you won't even notice them unless you go looking for the interface.

Secondly, and more interestingly, your cell phone makes you go blind every time you look at it.[pdf] The problem is focal length. It takes nearly half a second for the human eye to change focus from something on the horizon to something at arm's length, during which time you see just as much (or as little) as you do during a saccade. The trick here is that an HMD can be calibrated so that its image appears optically far away, near infinity, so your eye can take it in without wasting time bending its lens around. The effect, when experienced in person, is somewhat striking.

The third fact that this argument misses is more important still. You have two eyes, but Project Glass is only on one of them! In order to convey how critical this omission is, I want you to conduct a short experiment. First, hold up your left hand. Second, cover your left eye using your left hand. Third, continue reading.

Texting While Walking

The next major complaint is that the glasses will present a crippling distraction. I'll agree that the glasses will be distracting, but I'm equally sure that they'll be less of a distraction than people think.

The big thing this argument forgets is that people are actually pretty good at being distracted. Next time you're at the grocery store, count how many times you look down to read your shopping list while you're still moving. While you're walking down the street, pay attention to how often you look at things around you - eye-catching passersby, nice clothing in storefronts, birds flying across the edges of your vision, and so on. Count how many people you see with their noses buried in their book or their phone while they walk. Compared to all this, getting an email really isn't that much of a problem.

[Photo: Thad Starner's wearable computer]

I'll even provide some direct evidence that HMDs aren't distracting. The guy in the picture at right is Thad Starner. He's been wearing a head-mounted display continuously since 1993; he's also been deeply involved in Project Glass for a while. Unlike Project Glass, though, his HMD displays a full 1024x768 computer screen with an internet connection, a browser, and Emacs - far more distracting than Google's streamlined interface. I work with some of his students and talk to him occasionally, so I can tell you one thing: he doesn't run into walls very often.

Of course, "doesn't run into walls very often" isn't exactly a scientific measurement, so Dr. Starner has gone ahead and done some research.[pdf] In particular, he and his colleagues gave some German students a shelf full of bins and a series of orders to fill. Some students were given lists of items on paper, others were given pictures of which bins to grab items from, others were given orders over an earphone, and the final group was shown the order on a head-mounted display. Despite having an image floating right in front of them, HMD-equipped users made a third as many errors as their competitors and completed their tasks about fifteen seconds faster. Head-mounted displays aren't distracting; if anything, they're *less* of a problem than the average interruption.

Too Connected to Think

The final argument, and possibly the most important, is that a permanent internet connection is bad. As you might expect, I believe exactly the opposite.

Put simply, we use computers enough that we've become used to them: not only habitually, but to the point that our brains have physically reorganized to work with them. Research[pdf] shows that, when people are given some information and induced to memorize either the information itself or where it can be found, they are much better at remembering where it can be found. We've all observed this firsthand: we remember the name of the page on Wikipedia rather than its contents, and we remember the name of a function or class rather than its signature, which we look up in the documentation.

We're well on our way towards Charles Stross's "posthuman genius loci of the net", where a person's computer contains so much information that forced separation results in crippling amnesia. As another experiment, I challenge you to recall more than half of your address book (email, phone, or snail-mail) without using your computer. To recall a definition, a cyborg is "a person who is part machine". Not a human body with robot bits, but a _person_, a self-aware consciousness, which leans on computers to do some of its work. We are all cyborgs. Project Glass simply makes it harder for us to lose our machine halves or drop them in the toilet.

Given that, why is there so much naysaying in the first place? You already have a laptop or a smartphone that organizes your pictures, navigates for you, handles your calendar and your address book, and (by way of Wikipedia) remembers the capital of Albania so you don't have to. Most of you have it with you 24/7. I simply don't understand the objection to making it more powerful and easier to use. I make my computer handle as much of my life as it can; when I'm not doing arithmetic, I can be doing calculus, and when I'm not trying to remember someone's phone number, I can be talking with them about the meaning of life. My computer makes me a better human because it gives me the freedom to exercise my rationality and consciousness. Project Glass is something I welcome. It helps me be human even when I'm not sitting at my desk.

1 Virtual Retinal Display
2 Demonstration


April 10, 2012

Language Wars Are Pointless

Sometimes it really amazes me when anyone actually takes language wars seriously. If I casually mention "pointless language wars" and someone leaves a comment ignoring my entire blog post, informing me that language wars are not pointless, I seriously question their priorities (and possibly their sanity).

"As a veteran language warrior, I resent the claim that my efforts are 'pointless'. There's a lot of terrible software out there, and one of the reasons for this is that inappropriate choices were made at the outset."

Oh, was I not clear enough before? Language wars are pointless. They are pointless for two reasons: first, you are never actually arguing about the language, only about design philosophies; second, language wars invariably disregard context, which makes all their results completely meaningless.

In this crazy rant Alex Munroe1 made about PHP being a terrible language, he seems to be confusing "PHP is a bad language" with "PHP adheres to bad design philosophies", and even then it's just design philosophies he doesn't agree with. This is all well and good, but that doesn't make PHP a bad language. It's like saying a hammer is bad because you don't like the handle: just because it's not good for you doesn't mean it's not good for everyone. If you want to prove that PHP is fundamentally a bad language, you have to demonstrate that almost all the philosophies it follows are bad for everyone. You can't say a philosophy is good in one language and not good in another without accidentally tripping over yet another philosophy. A good example is his complaint about the '@' operator suppressing errors. You can suppress errors in Python too; it's just harder. The argument is that it's too easy to ignore errors, therefore it encourages bad programming, therefore PHP is the devil. This doesn't make sense, because you aren't saying PHP itself is bad; you are saying that you believe a programming language should make writing bad code difficult, which is a philosophy. Consequently, you can't just say that PHP is bad; you have to say that every single language that allows this is at least partially bad. Of course, the guy loves writing code in Perl, so I suppose he prefers using a language that makes it hard to do anything.2
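To make the comparison concrete, here is a minimal sketch of suppressing the same kind of error in Python; the function name is mine, invented for illustration:

```python
import contextlib

# PHP lets you silence an error by prefixing a single expression:
#     $data = @file_get_contents($path);
# Python can swallow errors too, but it takes a deliberate block,
# and you have to name the exception types you expect.
def read_file_quietly(path):
    with contextlib.suppress(OSError):
        with open(path) as f:
            return f.read()
    return None  # reached only if open() or read() raised

print(read_file_quietly("/no/such/file"))  # prints: None
```

Same capability, different amount of ceremony: whether that ceremony should be required is exactly the kind of design-philosophy question being argued about, not a property that makes one language "bad".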

Let's say you actually prove that something is bad for everyone. Have you proven that it's bad for everyone in every possible context? You have to do that too. The problem with language wars is that people go "Yo man whatcha using PHP for we got dis new trunk called Django that's so wicked sick goddamn get with the times homie" without ever asking what you are trying to do and in what context you are doing it. You cannot critique a language without first specifying the context you are critiquing it in. If you do not specify the context, even if it's something vague like "for most non-performance-critical applications", everything you say is an over-generalization and therefore invalid. There is quite literally NOTHING that you can absolutely guarantee any piece of code should do, not even that it should work (deliberately broken code written to test a static analyzer, for example). Because of this, unless you are saying X is a bad language for doing Y, rather than simply that a given language is bad, forever, no matter what, you are doing a bad job of critiquing. Even brainfuck is useful for illustrating how a Turing machine works. As the old saying goes, never say never (or always).

With this said, most of the points Alex Munroe makes against PHP are actually quite valid.3 In fact, if he had instead argued that PHP is a poor choice of language for building a modern, professional website, I'd probably agree... but that doesn't make it a bad language. Maybe it's a bad language for that purpose, but that's as far as you can go and still hold a legitimate opinion. There is an endless, endless torrent of people insisting that C++ is a terrible language, and it's not that their reasons are wrong; it's that they are ignoring the context of what I use C++ for. C++ just follows design philosophies they don't agree with in their line of work. It's fine that they don't agree, but that's why we have lots of different programming languages and why we need to stop assuming all programmers are the same. What are you going to do, tell Linus Torvalds to write Linux in Lisp instead of C because Performance Doesn't Matter?

The problem with blindly following idioms like "Performance Doesn't Matter" or, in the database world, "SHARD EVERYTHING" is that you are separating the advice from the context it was given in. This is what makes language wars pointless, and it causes serious issues with the new wave of databases that are pissing off a lot of people (BUT MONGODB IS WEB SCALE): fanboys who don't understand the context those databases were designed for simply assume you should use them for everything, and therefore that anyone using MySQL or PostgreSQL is a dinosaur. You can't just forget about unknown unknowns (things you don't know that you don't know). If someone is using a language and you don't understand why, don't assume it's because they're an idiot clinging to a language that's bad no matter what; assume there is a variable you forgot to account for.

It's kind of like watching a carpenter yell at a plumber for building a house upside-down, except the plumber actually built a toilet and the carpenter just thinks it looks like an upside-down house.

1 whose name took me almost 2 minutes of digging around his blog, and then his site, until I finally found it buried in the fine print on the footer.