Tod Szewczyk: Wave Hello to the Future of Marketing


Between Samsung TVs where your hands are the new remote and a biometric payment system Whole Foods is trying out, we’re heading toward a gesture-driven world. Tod Szewczyk, VP, director of emerging technology and digital strategy at Leo Burnett Chicago, dives into this reality and explores its implications for brands.


Will new gesture technologies fundamentally change the way consumers interact with brands?

Isn’t it curious how strongly language and gesture are linked? Together, the two form the foundation for interpersonal communication; take one away from the other, and you can lose half the meaning of what’s being said. In his book “The Expression of the Emotions in Man and Animals,” Charles Darwin observes that “all of the chief expressions exhibited by man are the same throughout the world.” Modern studies have taken our understanding of gesture even further. A 2013 paper in the Annual Review of Psychology asserts that gesture can reflect our unspoken thoughts as much as our articulated ones, while another article, in the Journal of Memory and Language, documents gesture’s ability to make communication more memorable.

In other words, gestures help our communications be better understood and better remembered. Those two takeaways should catch marketers’ attention; after all, what brand doesn’t want to be universally understood and remembered?

Not surprisingly, brands are noticing the power of gesture—look no further than your smartphone to see how its role is on the rise. Apple, Google and Samsung all released gesture-enabled flagship phones this year.

A swipe, pinch, wave of the fingers

Samsung has taken a pretty significant step with the One UI interface on its latest phone, the Galaxy S10: when a user turns on gesture controls, the three standard Android navigation buttons disappear (yikes!), and the only way to control the phone is to swipe up in certain places. Apple’s newest iPhones and iPads have improved gesture controls baked into the latest operating system, and the new U1 chip included in the iPhone 11 family is rumored to be “a stepping stone to gesture-based controls to better interact with smart devices,” a.k.a. smartglasses. Google’s latest phone, the Pixel 4, includes a radar-based sensor called Soli that, according to CNET, is “able to detect 3D movement in space,” allowing users to control the device with simple hand gestures. As Google puts it, “You are the only interface you need.”

These may seem like incremental advances, but since the release of smart devices like the iPhone and iPad, we have been seeing a fundamental change in the way children engage with the world around them. Educators in the UK recently noted that children tend to swipe the pages of paper books, assuming they are screens; you may have seen a toddler do the same. As our phones become gesture-first, consumers will learn to apply that interaction to other aspects of their lives.

In the home, on the go and in stores everywhere

Samsung already leads the way on replacing the remote, allowing you to manipulate the TV with your hands through swipe, zoom, like and grab gestures. From Audi to Volvo, nearly every major car manufacturer has a model with a hands-free liftgate; the gesture here is a wave of the foot under the rear bumper.

We are even seeing it in our apparel. Google’s Project Jacquard, which uses conductive threads woven into items to create touch-sensitive areas, can be found in the Levi’s Commuter Trucker jacket and the Yves Saint Laurent commuter backpack, allowing you to control your phone during a bike ride or on the commuter train with minimal distraction or interference.

Moving into the brick-and-mortar store, Amazon is currently testing a biometric payment system at Whole Foods stores in New York. It uses a combination of computer vision and depth geometry to recognize a consumer by the wave of their hand, then links to the credit card on file to enable speedy checkout.

Next-level ad campaigns and brand interactions

As the conversation around gesture continues to grow, it will also continue to be refined. There’s a higher-order concept called Natural User Interface (NUI), which the Interaction Design Foundation defines as “user interfaces that are natural and easy to use… where the interaction is direct and consistent with our ‘natural’ behaviour.”

Why is all of this important? In a not-too-distant world, our phones will start to be replaced by products like AR-enabled glasses, which means we will likely interface with the world through gestures that don’t involve tapping a screen; instead, we will simply manipulate the air. Most notably in this pursuit, Apple and Facebook are racing to release smart glasses within the next three to four years. Amazon’s Echo Frames, available now with Alexa embedded, are a first step in moving interaction off the phone and into the air. In this world of controlling our devices without touching them, brands will need to consider whether there is a reason for their consumers to employ gesture when they interact. And at some point in the next few years, 5G will be a reality, making consumers’ connections to any connected object effectively immediate.

So, finally, we ask, “What does this mean for brands?”

This isn’t an exhaustive list of every device that can or will be controlled by NUI, but it’s enough evidence to demonstrate that our future interface with devices will be through this invisible layer. Just imagine, for instance, that you could summon the Nike Run Club app by making a swoosh gesture in the air, then use your fingers or voice to set the distance and pace you’d like to run, and go. How about McDonald’s, which has been investing heavily in new technologies like voice to improve the consumer experience? Could it be that in a gesture-first world a consumer could sign the Golden Arches in the air, then place an order by speaking or by swiping through a menu that appears in their field of view? Or what if you could hail an Uber by making a U, then use your finger to confirm your location on a virtual map floating in front of you and watch as the driver makes their way to you?

The implications of these new interaction points are significant for brands and marketers, providing new opportunities to reach consumers more naturally and forge unique bonds with them that will be hard to break. Gesture represents an unprecedented, user-centric channel, and bringing it to the forefront of consumer interaction presents a once-in-a-generation opportunity for brands to define or own a new interface. And what brand manager doesn’t have that on their roadmap?

It’s a reality that all brands must prepare for. From ordering a burger to hailing a ride, we are entering the gesture-first world.