Brainloop is a brain-computer interface that senses a user’s thoughts about motor commands (e.g., “move left hand”) and uses them to control software. The demo is beautiful and engaging simply because it shows a user controlling Google Earth. In past brain-computer interface demos I’ve seen, the user was usually doing visually boring things like moving a cursor or surfing the web. Using the same type of input to control Google Earth makes it spectacular.

Note to self: when out to impress with a new input device, try to design the demo to include flying around the globe.

(via Smashing)

My hometown's newspaper puts those annoying quotes around the word "feel"

The San Diego Union-Tribune recently published an article about haptics:

On one computer, users could “feel” the contours of a virtual rabbit.

Do users “feel” the contours of a virtual rabbit, or do they just feel them? Do we “read” text on the internet, or just read it? When we watch a movie do we “see” the actors? Harumph.

The article is about Butterfly Haptics, a company that makes a haptic interface based on magnetic levitation. I “felt” it at SIGGRAPH ’08, and it was extraordinarily crisp and strong. The only problem is that the workspace (range of motion) is tiny compared to that of other haptic interfaces, and there doesn’t seem to be a clear development path for expanding the workspace using magnetic technology. Nevertheless, it’s great to be able to add magnetic field actuation to the relatively limited set of technologies that can be used for haptic display.