For this week’s article on making decisions, I thought that exploring how we decide to communicate with computers would be timely, since we seem to be in the middle of a transition away from strictly QWERTY-and-mouse input paired with a monitor and audio output. In fact, with the popularity of tablets like the iPad, many people have already migrated to a substantially different way of communicating with computers.
In discussions of this type, input (human to computer) gets a lot of space, but output (computer to human) is fascinating. Think of how relatively quickly LCD screens replaced CRTs (except on NCIS, where CRTs are still on some agents’ desks — that is probably an accurate portrayal of governmental speed in conversion). Output evolved from single-font yellow letters (no images) on a green background to 32-bit full color and HD, but it is still a 2D screen displaying computer output. The addition of audio output was primarily for alerts, but has morphed into 5.1 surround sound that can crack plaster in your ceiling.
Broaden your definition of computer and you quickly realize that audio output is the primary way to communicate with various computers over the telephone.
What other senses could be used for output? Look to gamers and other edgy people. Motion actuators and tactile response are already a reality in niche markets. Will they ever become as ubiquitous as screens? Maybe. Audio output was a niche thing at first, also.
But we should not limit video output to a screen. I have a pair of glasses that allow me to display video on the lenses. A control varies the percentage of light coming in from “real space” as opposed to light coming to my eye from the video. These glasses are a toy, but similar things are rapidly becoming useful for navigation and purely weird things. They are a cousin to the heads-up displays on fighter aircraft. I might give up my BlackBerry for an Android or iPhone. RIM seems to have a nice augmented reality app called Wikitude, but not for my model. Gaps like that breed fickle customers.
On the input side, we generally do not think of motion as an input, but with tablets and augmented reality apps, the location, orientation, and motion of a computer is truly a form of input. To this we can add continually improving speech recognition, of the sort airlines use to give you the latest flight data over the phone. But there are other types of input gradually gaining acceptance. Laptops can look at you and decide if they want you to operate them. Strangers are rejected. Maybe chemical sensors will become popular to distinguish between operators.
The best input obviously depends on the application. If you call a number for information, being led down a menu tree by pushing buttons on command is annoying; voice recognition is a better choice than a modified keyboard input. If you are writing a book or blog, and you are reasonably good at typing, then voice recognition is not as good as a keyboard. This is true both because typing can be faster and because written and spoken language are sufficiently different that effective dictation is not as simple as you might think.
The best output also depends on the application. If I want to watch an epic movie like Lawrence of Arabia, a large screen projector is best. Watching it on an iPhone might be interesting, but misses the point.
Consolidation of devices also affects the nature of both input and output. While washing my car, I routinely listen to music stored on my phone. That would have been a foreign concept 10 or 20 years ago. I talk to my car about several things, and it listens. My refrigerator is not as smart, but when it wears out, the next one will likely talk to me and give me a warning when the milk is getting old or I am running low on eggs.
Whatever I say here will likely be hopelessly dated, because the whole concept of what constitutes a computer is changing rapidly, along with the economics of selecting methods of input and output. A refrigerator that monitors my milk is technically possible today, but not yet economical; one day it will be commonplace. Technology has a fine history of reducing costs, particularly for computers and peripherals.
All of this reminds me of my first trip through the Metropolitan Museum in New York. There is a hall with a progression of weapons from purely sharp edges and blunt objects to automatic rifles. However, in between, when the art of making firearms was rapidly evolving due to the economic pressure of being able to kill more people cheaply, we find hybrid objects such as a single-shot pistol with a nasty mace head at the end of the grip. It was designed so that when you made your one shot, you could turn the pistol around and use it as a club to finish the job. Other pistols had knives under the barrel for a similar purpose. That is, as the new technology advanced, it was not good enough to completely eradicate the old. We see a similar process in Windows 8, which offers both the new Metro interface and the traditional desktop, catering to different tastes.
So what will happen with methods of input? Continue the analogy with weapons technology a bit further. Even to this day, most countries issue bayonets to their military to convert shooting devices into cutting devices (with an interesting twist, some bayonets can be mounted perpendicular to the barrel to act as a stabilizing monopod — cute). In the same way, I expect that regardless of how popular tablets and alternate inputs become, we will still have qwerty and mouse (or equivalent) input devices for computer control. After all, we still have brooms to supplement our vacuum cleaners.