XD Immersive interview

The following interview was part of a promotion for the first XD Immersive conference, which took place in October of 2018. At the conference, I spoke about my experiences designing augmented reality interfaces. The interview is also available on the UX STRAT website.

Paul: Can you tell us a little bit about yourself? Your current job role, the company, maybe a bio that kind of led you to this point?

David: My name is David Birnbaum and I direct the UX design team at Immersion Corporation. You may not have heard of Immersion, but you’ve probably felt our technology. We’re a small company, but we work with some of the biggest companies to implement haptics. If you’ve ever felt rumble in a game controller, it was likely our technology. We also developed some of the first force feedback joysticks and steering wheels for gaming.

We transitioned into mobile and developed haptic feedback systems for mobile phones. When a phone uses its motor to signal information to the user, that capability probably has its roots in our research. We license our technology and IP to companies around the world, including many marquee names, to implement haptic feedback for mobile.

The other exciting thing about Immersion is that we have a pedigree in VR. Way back in the day, during the “VR 1.0” era, we manufactured exoskeletons for arms and hands that would let you feel virtual objects. They were extremely expensive, and we would sell only a few per year to large industrial R&D facilities. So, we have that behind us, and as VR 2.0 debuts, and AR develops, we can draw upon our expertise and institutional knowledge.

AR and VR are my current focus areas, although I work on mobile technologies as well. I still find mobile user interfaces really interesting because there are lots of small “microinteractions” that can utilize haptics to make a more user-friendly product. As we move into a world where virtual objects are mixed with reality in a seamless way, there’s going to be a situation where half the stuff around you is physical and you can touch it, while the other half is ghosts that slip through your fingers, and that’s a problem, right? So, we’re currently looking at that and trying to solve it with haptic interfaces for mixed reality.

Paul: Cool. Well, what about yourself? Did you study user experience; how did you get into it in the first place?

David: I did not study user experience. I have two music degrees. After school I worked for a short time in the record industry. When file sharing sank the long-standing music industry model, I went back and studied musical instrument design, and I became fascinated by what it is about a musical instrument that enables expert, even virtuosic, performance; what it is about an instrument’s interface that lets you practice every day and keep getting better at playing it, as opposed to a toy that you figure out how to use and then put down.

For example, the mouse is similar to a toy in the sense that you know how to use it, but you’re not getting better and better at using it. You learned how to use it once and now that’s it. And there are specific reasons that the mouse has that property. The first one that comes to mind is that its positioning system is relative, not absolute. The second one is that it has very limited tactile feedback and gesture sensitivity. The point is, looking at musical instruments taught me that you can break down physical interfaces into components that will tell you how deep or flexible the interface is for expression and control. That research led me to haptics, and I knew as soon as I felt my first haptic interface that the haptic field could sustain my attention for a long time. I remember very clearly, I was in a summer program and I learned to build a force feedback device out of an old hard drive and a microcontroller. It was an “a-ha!” moment for me. First of all, this is an untapped design area. This is something that people aren’t really thinking about. The field of haptics was driven by engineering labs; there were not a lot of tools, and not a lot of effort behind the creative side. There was some, but it was neglected compared to visual and audible media. So, I thought, “I could get involved in this, and I could make a difference.”

And, by the way, this will one day be a multi-billion-dollar industry, right? Because all of the technology we’ve had so far, all the problems that we solve in UX, they’re for your eyes and a little bit for your ears. And yet the sense that haptics engages, your embodied sense of touch, is the most important sense for telling you who you are, where you are in the world in relation to other objects, what you’re holding, and what or who you’re interacting with… and it was almost completely overlooked until recently.

So, I decided I wanted to devote my efforts to developing tactile or haptic design. I joined an R&D team at Immersion, so I was very much on the technical side at first. I was developing hardware prototypes and software prototypes. I was a UX person at heart, but I didn’t know what that was called at the time, and I don’t think anybody else did either. I would just draw these diagrams of what turned out to be storyboards and interaction design specifications, but I didn’t have that background. So, I had to not only teach myself how to do that and go back and take the courses and get involved in the community and learn from other people, but I also had to kind of sell the idea of UX internally as something that Immersion needed. That was also a formative experience because it forced me to articulate and communicate the value of UX to a diverse group of technologists and business people.

That’s why I love UX STRAT, because I went through a lot of the problems that others talk about at that conference. At the first UX STRAT conference, we were all talking about the same problems: How do we get a seat at the table with the executive team? How do we present these ideas? How do we make sure we’re aligned with product? All these things. It was really amazing; the timing was amazing for me, because it made a huge difference in the way that I approached my job.

Paul: So, David, could you tell us about your upcoming workshop?

David: Well, let me just start by giving a little bit more detail into augmented reality and how we’re trying to solve that problem from a haptic perspective, and then I’ll get into kind of what we’re showing at XD IMMERSIVE.

In augmented reality, you have a major problem for haptics. This is different from VR, where haptics is more or less a solved problem. Since the 1990s, there have been lots of research papers written on the benefits of haptics to VR for productivity, the feeling of immersion, and the illusion of presence. For example, if you add well-designed haptics to a VR experience, you are almost certainly going to make it a better one. We don’t need to convince anyone of that; we just need to continue developing the technology and the tools. We’ve been doing that as we have developed prototypes and products for VR controllers. VR controllers are friendly to haptics. A controller is charged on a charging station, it has triggers and buttons on it, and there’s a lot of room to integrate haptic features. And, as the user, because you’re already strapping a big, inconvenient headset on, being asked to put a big glove or controller on your hand is not that big of an ask. So, haptics for VR is an easier problem to solve than it is for AR.

With augmented reality, you need to allow the people using the interface to go about their daily lives. They have to be able to hold a steering wheel, hold the handle of a shopping cart, interact with their phones and their pens. They have to turn doorknobs, they have to shake other peoples’ hands. These things cannot be interrupted or degraded by the need for a peripheral held in the hand. So, this is the problem we set out to solve, or at least make progress against over the past few months. We’ve been doing generative research, interviewing product designers and artists, drone pilots, live streaming influencers in China, and other kinds of interesting, diverse people who have some touchpoint with AR.

This led us to the creation of what we call the Ring, which is a thin component that you wear around one finger. It has wires coming off of it today, but one day we hope those go away. The Ring allows you to touch objects in mid-air. We decided that it should not simply vibrate. It needed other unique characteristics. We’ve been doing vibration for many years, we know how to do that. We need to move beyond it. We actually think that a vibration ring could enhance the AR experience on its own, but we want to show a visionary demo of next-generation haptics. We thought, AR is coming in five years, so what’s going to be happening in haptics in five years? We need to show that. So, we needed something that was more organic and more nuanced, with a wider palette of design possibility than just vibration.

So, we used a kind of exotic material to fabricate the actuator, which allows the Ring to squeeze and flutter your finger in ways that just feel distinct from vibration. When we were looking back through the generative research, a question we asked ourselves was, “Which use cases would best leverage this organic feel?” Where we landed was a mid-air painting app we call ARt: Augmented Reality Touch, which, of course, spells “art.” It lets you paint in mid-air, and you can feel the liquid of the paint. We also have a palette that you hold in your other hand, made out of a large touchpad. There’s no screen on it, but when you look at it through your AR smart glasses, the palette is augmented. You can see graphics floating around, and you can place virtual graphics on it. That’s where you can mix color. So, you can suck up paint from a paint blob, and you can mix together a new color, painting and feeling the medium as you do it.

All of this stemmed from an ambition to show the future of haptic AR. But having said that, it’s not our intention to say, “Painting is the future of haptic AR.” That’s not it at all. We’re just using that as a context to show off this technology.

Paul: So which parts of that do you plan to teach people? What can they learn in the workshop?

David: Today, people generally understand what the word “haptics” means. With the Taptic Engine and with VR coming to fruition, people now understand that word, so we can skip most of that. However, I’ll still introduce a short section about haptic design. How is haptics useful to designers? What are the tropes, and the tools, and the approaches that we think about all the time? For example, we think about things like texture or about transmitting levels of urgency–that works really well with haptics. You can imagine knocking on a door in the usual way – the knocking pattern and intensity can feel friendly or urgent. We can do that with haptics. There are certain design tools that we have in our tool kit that we know we can go back to, and we have a design system.

I’ll present how you, as a designer, can think “haptically.” And that does matter to you, because even if you’re not building products with next-generation haptic technologies, you are probably designing apps, and these apps are experienced on phones, which have an interface to the actuator. The design of apps can often be stronger when designers think multimodally about how visuals, audio, and touch work together to present information to people. It’s untapped value that people don’t realize they can use.
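As a rough illustration of what that “interface to the actuator” can look like in practice, here is a minimal sketch, assuming Android’s public Vibrator API rather than Immersion’s own tooling. It encodes the friendly-versus-urgent “knock” idea mentioned earlier as two vibration waveforms; the timings and amplitudes are made-up values for illustration.

```kotlin
// Minimal sketch using Android's public Vibrator API (an assumption; not Immersion's tools).
// Requires <uses-permission android:name="android.permission.VIBRATE"/> in the app manifest.
import android.content.Context
import android.os.VibrationEffect
import android.os.Vibrator

// Timings are alternating off/on segments in milliseconds; amplitudes range from 0 to 255.
private val friendlyKnock = VibrationEffect.createWaveform(
    longArrayOf(0, 40, 140, 40),        // two soft taps with relaxed spacing
    intArrayOf(0, 90, 0, 90),
    -1                                  // -1 = play once, no repeat
)

private val urgentKnock = VibrationEffect.createWaveform(
    longArrayOf(0, 60, 60, 60, 60, 60), // three hard taps in quick succession
    intArrayOf(0, 255, 0, 255, 0, 255),
    -1
)

fun playKnock(context: Context, urgent: Boolean) {
    val vibrator = context.getSystemService(Vibrator::class.java) ?: return
    vibrator.vibrate(if (urgent) urgentKnock else friendlyKnock)
}
```

On hardware without amplitude control, the two patterns still differ in rhythm, which is often enough on its own to convey urgency.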

So, that’s the first thing, and I’ll show some concrete examples. We have demonstrations on mobile phones of various haptic use cases, and we can go through some of those. We can also go through the design story of the LG V30. This was a phone that was released just a few months ago, and it won several design awards. It’s the first time I’ve seen technology reviewers and journalists saying on the internet, “This thing has a great feel, and you need to demand this level of haptics in your next Android phone.”

Then, beyond mobile devices, I plan to talk more about the challenges around AR and how, as we move into this new world of mixed reality, we can start to think multimodally. We should be able to think about what gesture you’re using, what haptic feedback that gesture should generate, and the problems associated with virtual object scaling and how haptics can reflect them.

After that, I plan to show mobile demos. I can show ARt, the experience, as well as a demonstration of a 3D user interface with haptic navigation. If we want to go even further into it, I can talk about haptic media also. Over the past few years I worked extensively on designing custom haptic tracks that synchronize to video content. This technique was deployed to the market and used for movie trailers and ads. What’s interesting about all that is we did a neurological study about the impact of haptics on people’s brains while they experienced video. We gained insights about how haptics affects people’s tendencies to engage with the ad, to recall images from the ad, and things like that.

Paul: Okay, cool. As you look into the near-term future, the next three to five years, what’s your crystal ball say about haptics? What do you see as the next gen of haptics that’s coming up?

David: I believe that if and when VR becomes something that is in everyone’s living room, then we’re going to see an explosion of innovation making that experience more immersive. Console gamers have had haptics for a long time; rumble has been around for 20 years. Game controllers vibrate, and they almost always have. So, gamers expect that experience. I see VR as kind of an evolution of the console experience, and I would expect there to be increasingly advanced haptics as VR improves.

There’s going to be a variety of things that you can do with haptic technology in the future. Haptics is an umbrella under which a lot rests. You can define haptics as technology for touch, but it’s more complicated than audio and video. Instead of hearing in stereo or seeing with the rods and cones in two eyes, with touch you feel mechanical deformation of the skin through four different channels. Your ability to perceive external objects is a result of the integration of those four channels of information that takes place in your brain in a way that we don’t fully understand. Then there’s your sense of hot and cold, which is significant because these are two different channels, and you can use the conflict between those two channels in interesting ways. So, there’s just a lot of possibility. If you assume that the various games and content pieces that are produced for VR will need to differentiate from each other, then we fully expect haptics to be a part of that differentiation, and so that would drive some innovation. It’s an exciting time to get involved with haptics, and I’m proud to be at the forefront of this trend.

Playground Space

Cross-posted from LinkedIn. All images © SpaceX.

A lot of people are talking about how the Falcon Heavy is going to open up new frontiers in space, since it can lift robots, building materials, scientific instruments, and even habitats at low cost. The Falcon Heavy’s successor, the Big, uh, “Falcon” Rocket, may even let us colonize deep space and Mars. Exciting, for sure. But I want to talk about something different. I want to talk about Starman.

Starman is a 1984 romantic science fiction film starring Jeff Bridges and directed by John Carpenter. It’s also the name of the mannequin sitting in a Tesla Roadster that the Falcon Heavy placed in solar orbit on Tuesday. The Roadster’s purpose during launch was to stand in for a real payload of the kind SpaceX customers are expected to lift with the Heavy system. But I believe Starman is much more than that – I think he’s the most important breakthrough in space travel since Apollo.

How could that be? Why would a mannequin pointlessly revolving around the sun be more important than the Space Shuttle program, or the International Space Station? Because Starman has tipped us into a new era of space exploration. Before Starman, space was serious business. Today, space can be approached with a sense of humor and a spirit of improvisation.

Media reported that the Roadster was a test payload, used because no customers wanted to take the risk of launching an expensive satellite on an untested launch system. That’s true, as far as it goes, but think about the extra trouble it took to mount a sports car inside the rocket so that it would have a low likelihood of compromising the mission. Could you imagine the headlines if the first Falcon Heavy mission seemed to be going well, until something unexpected happened with the Roadster, and then everything fell apart? Elon Musk would have been criticized for a hare-brained idea that cost ungodly amounts of money. (Maybe the car is just similar enough to a satellite or probe that seeing it survive the launch will convince customers that SpaceX has what it takes to keep their payloads safe. But if that was the goal, there were other ways of achieving that while maintaining a closer resemblance to a real payload.)

The key to understanding Starman is that he’s pointless – and at the same time, deeply meaningful. He’s pointless because there’s no reason to go through the effort and risk to launch a car into solar orbit aside from wanting to be funny and create viral content. It’s frankly still astonishing to me that it even happened.

Starman signifies that space is now a place where we can make jokes and pull pranks. Having a sense of humor is a sign of confidence, and pulling an expensive prank signals an excess of resources.

The center console in the car, which permanently displays the words “DON’T PANIC!”, has several meanings, as far as I can tell. First, it’s a funny cultural reference. Second, seeing it from the perspective of the dummy in the driver’s seat makes you think about how it would feel to float through space in a sports car for a billion years while staring at those words, which is hilarious. But most importantly, it’s a message to us still trapped on Earth. Don’t panic – our destiny to become a spacefaring civilization is finally around the corner.

Welcome to playground space!

Tactile Design: New tools let you create user experiences for the sense of touch

Cross-posted from LinkedIn.

UX is a key differentiator for intrepid brands testing the boundaries of what is possible in the mobile form factor. Now that mobile devices are central to people’s lives, consumers’ expectations have quickly evolved. No longer tolerant of disruption or poorly designed apps and ads, people expect high-quality, immersive experiences. At the same time, the attention span of a mobile device user is shorter than that of a goldfish. This creates an immense challenge for brands that want a piece of that attention span. Never before has it been more important to engage mobile users in ways they’re not likely to forget.

In the fight for eyeballs and brain cycles, the onus is on designers and developers to use technology as effectively as possible to get their messages across. Today’s sophisticated mobile users expect technology to treat them like people, and touch is an excellent, if underappreciated, way of doing that. The sense of touch and the technologies that engage it, called haptics, are underutilized in their capacity to trigger a different effect on the brain than visuals and audio do – one that’s more emotional, intimate, memorable, and human.

A turning point as haptic tech comes of age

Designers who are familiar with the power of human touch know that it can elevate their work by bringing new depth and excitement to their designs. For example, industrial designers invest significant time and effort into engineering the feel of products in order to convey the values that the product embodies. However, until recently, the domain of tactile design was limited to a few dimensions, such as material selection and the weight and shape of physical objects. As far as digitally synthesized touch was concerned, the pace of innovation was slow and the options were limited. The tools and techniques available to sculpt tactile sensations in expressive and creative ways were unfamiliar, and there was not a long history of best practices and training to draw from.

There are a few reasons that this was the case. As mobile devices increased their capabilities, people were seduced by the idea of mobile video, which prompted focused investment in the development of high-resolution screens. Battery life was a persistent and practical concern. The desire to document and share life’s moments drove unprecedented investment in camera technologies. The result was that haptics was rarely the focus of the design cycle of consumer devices. Haptic technology, invented back in the 1940s, tended to be bulky, complicated, expensive, and often inconsistent. Tactile design was rarely seen as an efficient way to engage people.

But that’s now changing.

New tools for today’s generation of haptic designers

Only recently has haptic technology matured to the point where tactile design can be programmed into software, just like the audio and visual elements of our devices. New advanced haptic features such as tactile video, smart notifications, and tangible user interfaces require high-quality haptic tech in order to work, not the buzzy, low-quality haptic motors of the past. High-definition actuators, the components inside devices that move to create haptic effects like textures and patterns, are proliferating. The V30 handset from LGE is an early example of a device that has haptics carefully incorporated into its design language. The L16, from innovative camera company Light, uses high-resolution haptics both to make the device more user friendly and to give it a premium product feel.

Moreover, there are advanced tools now that let designers play, experiment, iterate, and refine, using interfaces similar to those for editing graphics, audio, and video. Designers and developers can quickly create a rich texture, encode it, synchronize it to video or interaction events, and render it on any haptic endpoint. These tactile design tools are flexible and integrate with creative software suites, audio editors, video editors, and many other platforms designers are familiar with. This opens up countless possibilities for the integration of haptics into media, advertising, and even AR.
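To make that workflow concrete, here is a hedged sketch of the “encode and synchronize” step: a tactile track stored as keyframes on the video timeline and rendered through a phone’s vibration actuator. The data shape and function names are illustrative assumptions, not the API of any particular tool mentioned above.

```kotlin
// Hypothetical sketch of a tactile track synchronized to video playback.
// Names (HapticKeyframe, renderHapticWindow) are illustrative, not a vendor API.
import android.content.Context
import android.os.VibrationEffect
import android.os.Vibrator

// One keyframe: where it sits on the video timeline, how long it plays, how strong it is (1-255).
data class HapticKeyframe(val startMs: Long, val durationMs: Long, val amplitude: Int)

// A hand-authored track for a short clip: a soft texture cue and a strong impact.
val demoTrack = listOf(
    HapticKeyframe(startMs = 1_200, durationMs = 80, amplitude = 110),
    HapticKeyframe(startMs = 4_500, durationMs = 300, amplitude = 255)
)

// Call this from the video player's progress callback. Any keyframe whose start time
// falls within the window since the last callback is rendered on the actuator.
fun renderHapticWindow(
    context: Context,
    track: List<HapticKeyframe>,
    previousPositionMs: Long,
    currentPositionMs: Long
) {
    val vibrator = context.getSystemService(Vibrator::class.java) ?: return
    track.filter { it.startMs in previousPositionMs until currentPositionMs }
        .forEach { vibrator.vibrate(VibrationEffect.createOneShot(it.durationMs, it.amplitude)) }
}
```

In a real toolchain the track would be exported from the design software and the renderer would target whatever actuator the endpoint exposes; the point is simply that the tactile layer is data, editable and synchronizable like an audio stem.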

This trend of treating tactile experience as just another part of the consumer experience, one to be controlled and designed intentionally, is industry-wide, and demand for tactile design will soon be on par with industrial design, visual design, and audio design as another dimension of great products and content. There has never been a better time for the most innovative designers and developers to consider touch technology as a critical new element in their design process.

For haptic designers, one of the most important challenges is understanding how touch fits together with other modalities to create convincing user experiences. There are many technologies and design elements that fall under the umbrella of tactile design, from forces and vibrations that help gamers become immersed in virtual worlds, to social touch features that let people feel more connected with each other, to tactile tracks that synchronize with video. A good creative tool provides enough degrees of freedom, and also enough constraints, that people can improvise and experiment in a “clean sandbox.” The latest generation of tactile design tools does exactly this.

Creating the future of UX with haptics

There is no doubt that haptic technology is the biggest evolution in human-computer interaction since the GUI. Touch is the missing element from almost all our digital advances. Microprocessors, displays, and graphics have enjoyed an enormous amount of investment and mindshare. But haptics is one of the next technologies that will change everything. It has been a dark horse, but not for much longer, and designers and developers who understand and use it early will join a small but powerful group of professionals who can think multimodally about content and product design.

I’m calling on these early adopters to think creatively about using touch tech in their work, to learn these early-stage tools and technologies, incorporate them into projects, and most importantly, take the time to play and ideate with haptics for heightened memorability, stronger emotional response, and better design. Early adopters of this new tech are at an advantage, helping lead the development of mobile experiences that define the future.

Hit print

Cross-posted from LinkedIn.

In our digital and environmentally conscious workplaces, paper is becoming rarer. Our tools are digital, and our deliverables are posted to shared servers or passed around through email and chat apps. Printing on paper is even actively discouraged in some cases. I know someone whose email signature encourages readers to think twice before printing in order to preserve natural resources.

That’s an admirable sentiment. However, we have found that printing documents and posting them on our walls has driven our team to new heights of efficiency and creativity. Lately, we’ve been printing almost everything we create and posting it in a common area near our desks. Here are some reasons we’ve been doing that, and some things we’ve learned along the way.

Printouts let you see relationships

Because the human eye saccades extremely quickly, we are able to take in a vast amount of visual information by simply glancing around. The reasons for that are evolutionary and beyond our scope here. The key thing to know is that you’re able to take in a lot more information when it’s arranged spatially in front of you than you are by accessing chunks of it in a serial fashion, such as when you click through a slide deck or navigate a computer folder. That’s why infoviz guru Edward Tufte has long advised that you buy the biggest computer display you can afford — the efficiency you gain when you can use eye movement instead of clicks and gestures to compare information is massive. For the foreseeable future, we will not have digital displays that cover our surrounding walls and let us quickly and easily post vast amounts of high resolution visual documents to them. For now, we have printouts.

Don’t just print the new stuff, either. Sometimes, when I dig into my archives, I’m surprised to find gems that are highly relevant to current work. Printing old, relevant work and posting it next to current work is a better way of exploring the relationship between the two than putting them in the same file directory would be. It also gives your work a sense of continuity and purpose, since it reminds you that many problems that seem very different aren’t really so, and that you’ve steadily been gaining expertise in your area of focus.

Showing other people printouts is courteous and effective

Studies have shown that people report reading on paper to be more relaxing, and better for understanding complicated information, than reading on a screen. If you want a good user experience for the stakeholders for whom you’re creating stuff (and you should), let them enjoy your work on paper, at least some of the time.

It’s easier to remember information you saw on paper, too. We can retain information seen in digital files, of course, but current evidence points to paper as superior for memory formation. One theory to explain this is that we use multiple senses when we encounter paper, including haptics and smell, so we have richer sensory input associated with the information. It also may be because paper allows us to use spatial memory. By laying out documents on a wall, you give people another way to retain the information you’re trying to communicate. For example, viewers might associate one of your ideas with the top corner of the wall next to the door, and this may make it stickier in their minds.

The spatial relationship between your viewers and the printouts is also something to consider and design intentionally. Important material that you really want people to see should be posted so that its center is about 57 inches above the floor. This is the same height many galleries use when they hang artwork, and allows a comfortable viewing angle for most people regardless of height. Oftentimes, if we have a document with many design concepts in it, we will post our favorite ones at eye level, and let the others proliferate above or below so that passersby who stop to spend more time exploring can discover them.

Immersing yourself in your work lets you enjoy the fruit of your labor

There’s something to be said for the way documents up on walls establish the presence of your team to other departments and promote your team’s value to the organization. While it’s important that the wider org understands the value you bring, perhaps more important is the feeling of being immersed in your own good work as you spend time in the office. Humans do work for a variety of reasons, but one of them is for the satisfaction of a job well done, and seeing your own work up in your surroundings makes you feel competent and plugged in.

Another advantage is that the team’s physical environment will change naturally to reflect the state of the team and the organization. When a project gets old, if it was really cool and people still enjoy remembering it, leave it up. You’ll instinctively know when the posted material gets stale. When that happens, take it down and recycle it. If it’s stale, it’s unlikely you’ll need it again very soon, and if you do, you can always print it again. What you don’t want is to feel burdened by the weight of past work or bothered by seeing work that you always wished you had iterated one last time, or that has negative feelings associated with it. If you get a heavy feeling when you see posted work, take it down. If you love it, consider it a part of the decor.

A new name for a new era

This blog has had three names, including this latest one. When I started blogging in 2007, it was called Tactilicious. Then it changed to Hapticity. Both are plays on words having to do with the sense of touch, which has been a key focus of my career.

The new name, Contrary Motion, is different – it comes from music theory, and refers to two melodic lines moving in opposite directions.

This change doesn’t signify a move away from the themes I’ve been posting about for these past (almost) ten years. I’ll continue to write about anything I find interesting. But it signifies that 2017 marks the year that content creators, artists, and even the investor community will recognize on a mass scale that technology for the sense of touch can no longer play second fiddle to technologies for our other senses. Touch matters, and it will only matter more as VR, wearables, AI, and robotics really get going.

We’ve been building to this moment for a long time. Lots of people have been doing amazing things with haptics for many years. But I noticed something change in 2016 – when I met new people and told them what I do, a majority of them already knew what haptics was. Sure, I live in one of the most tech-forward regions of the planet, but signs of change show up here earlier than other places. We’ve passed a tipping point.

Contrary motion is a term from music, which fits with my background. But the meta-concept of two threads moving toward and away from each other to generate results that humans find compelling can be found in many other domains – for instance, dramatic conflict, or visual contrast. We are watching a fascinating shift where haptics stops being a technology and starts being just another form of creative expression.

In haptics, contrary motion even has a literal meaning, unlike in music, where it’s only a metaphor. Vibration is an oscillation, or back-and-forth motion. Forces push against people, and people push back, and through that interaction, meaning is generated.

We’re about to see haptics flourish on a scale never before imagined. Let’s talk about it!

My social media channels are another great place to continue the discussion. Follow me on Instagram, Twitter, and Snapchat.

Sensing everything

Presented at CHI 2012, Touché is a capacitive system for pervasive, continuous sensing. Among other amazing capabilities, it can accurately sense gestures a user makes on his own body. “It is conceivable that one day mobile devices could have no screens or buttons, and rely exclusively on the body as the input surface.” Touché.

A great light has gone out.

Remembering that I’ll be dead soon is the most important tool I’ve ever encountered to help me make the big choices in life. Because almost everything – all external expectations, all pride, all fear of embarrassment or failure – these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart.

– Steve Jobs

ADDED: I’m reminded of this quote by Picasso – it describes Jobs just as well:

When I die, it will be a shipwreck, and as when a huge ship sinks, many people all around will be sucked down with it.