BU Today

Science & Tech

Opening Movements

Your next security ID may be a defining gesture


To the casual passerby, Janusz Konrad seems a bit fanatical about tai chi: standing in his office, waving one arm to and fro, then spreading both arms and bringing them together. Duck inside, however, and you’ll notice he’s not stretching for his health; he’s stretching for a camera, and images on a computer monitor are responding to each gesture—zooming in and out of photos or leapfrogging through a photo series.

Konrad, a College of Engineering professor of electrical and computer engineering, and Prakash Ishwar, an associate professor, designed the computer’s algorithms to recognize specific body motions. They’re not making video games. This, they hope, is the future security portal to your smartphone, tablet, laptop, or the locked door: software programmed to recognize a gesture, from your torso, your hand, or perhaps just your fingers.

Armed with an $800,000 grant from the National Science Foundation and collaborating with colleagues at the Polytechnic Institute of New York University, the BU duo is developing algorithms for ever-smarter cameras. In doing so, they have to thread a tricky technological needle. “On the one hand,” says Ishwar, “you want security and privacy; nobody else should be able to authenticate on your behalf” by aping your gesture. On the other hand, if the system demands a perfectly precise gesture, you may have to flail your arms or other parts 10 times to get into your own account. “That’s annoying,” says Ishwar. (And people may think you’re either crazy or infested with lice.)
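
To make that tradeoff concrete, here is a minimal Python sketch of the idea, not the researchers' actual algorithm: the gesture_distance function, the feature format, and the threshold value are all illustrative assumptions. A strict threshold keeps impostors out at the price of rejecting the legitimate user more often; a loose one does the opposite.

import numpy as np

def gesture_distance(attempt, template):
    # Both inputs are assumed to be arrays of shape [frames, features]
    # describing a motion over time; this crude frame-by-frame distance
    # is for illustration only.
    n = min(len(attempt), len(template))
    return float(np.mean(np.linalg.norm(attempt[:n] - template[:n], axis=1)))

def authenticate(attempt, template, threshold=0.5):
    # threshold is an arbitrary placeholder, not a tuned operating point:
    # lowering it favors security (more false rejects); raising it favors
    # convenience (more false accepts).
    return gesture_distance(attempt, template) <= threshold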

A workable system must be able to screen out distractions, like the motion of someone moving behind you or of the backpack you’re wearing, or changes in ambient lighting.
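
One generic way to screen out such clutter (not necessarily the approach taken in this project) is background subtraction, which keeps a running model of the static scene so that gradual lighting changes are absorbed and only genuinely moving pixels survive. A minimal sketch using OpenCV, assuming a webcam at the default index; isolating a single user from other moving people would take further steps, such as depth sensing or person tracking.

import cv2

# MOG2 maintains a per-pixel statistical model of the background; shadow
# detection marks shadow pixels separately so they can be ignored.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

cap = cv2.VideoCapture(0)  # assumed default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # 255 = moving foreground, 127 = shadow
    mask = cv2.medianBlur(mask, 5)  # suppress small speckles of noise
    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()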

Yet using gestures as keys to cyber-locks would have some great advantages. A gesture, like a lateral swipe of your hand, has “subtle differences in the way people do it,” Ishwar says, and people vary in arm length, musculature, and other traits that might help a detector distinguish between you and Arnold Schwarzenegger or Elle Macpherson. True, gestures aren’t as unique as fingerprints, irises, or faces, for which authentication scanners already exist. But those traits are theoretically vulnerable if someone hacks the database storing them, and they can’t be reissued: an authenticating gesture that’s been compromised by an impostor can be replaced immediately, whereas getting a new fingerprint—well, “you wouldn’t like it,” says Ishwar.
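
A standard way to compare how two performances of a gesture unfold over time is dynamic time warping, which tolerates the same person moving a little faster or slower than when they enrolled while still penalizing a differently shaped motion; it is offered here only as a generic illustration of matching gesture dynamics, not as the BU team's method. Each trajectory is assumed to be an array of per-frame features.

import numpy as np

def dtw_distance(a, b):
    # a and b are trajectories of shape [frames, features]; a smaller
    # result means the two performances are more alike.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])    # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # stretch trajectory a
                                 cost[i, j - 1],       # stretch trajectory b
                                 cost[i - 1, j - 1])   # advance both
    return float(cost[n, m])

A verifier would accept an attempt only if its distance to the enrolled template falls below a threshold, which is exactly the security-versus-convenience dial described above.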

Security passwords pose another problem: the most effective ones tend to be inconveniently complex. Konrad surveyed one of his classes and found that no one used a smartphone passcode longer than four digits. An effective motion sensor could “simplify, make more secure and more pleasant the process of logging in,” he says. He and Ishwar are developing gesture-based authentication algorithms to be test-run on Microsoft’s depth-sensing Kinect camera, used with the Xbox video game console and the Windows operating system. “It can track your body,” says Ishwar, “get some skeleton approximation for your body, and then that information is provided to you in some real-time format.”
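
To give a sense of what that skeleton stream looks like once it reaches a matching algorithm, the sketch below turns per-frame joint positions into a position-invariant feature sequence. The joint names and the read_frame callback are placeholders for whatever the sensor's SDK actually provides, not a real Kinect API.

import numpy as np

# Placeholder subset of the roughly 20 named joints a Kinect skeleton reports.
JOINTS = ["head", "spine", "shoulder_left", "shoulder_right",
          "elbow_left", "elbow_right", "hand_left", "hand_right"]

def to_feature_vector(skeleton):
    # skeleton: dict mapping joint name -> (x, y, z). Subtracting the spine
    # position makes the features independent of where the user stands.
    spine = np.asarray(skeleton["spine"], dtype=float)
    return np.concatenate([np.asarray(skeleton[j], dtype=float) - spine
                           for j in JOINTS])

def record_gesture(read_frame, num_frames=90):
    # read_frame is an assumed callback returning one skeleton per call;
    # 90 frames is roughly three seconds at the Kinect's 30 fps.
    return np.stack([to_feature_vector(read_frame()) for _ in range(num_frames)])

The resulting array could then be compared against an enrolled recording with a distance measure like the one sketched earlier.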

They also hope to use start-up company Leap Motion’s smaller depth-sensing device for tablets and laptops. The company claims that its device, the size of an iPod, will be able to read “micro-motions of your fingers,” says Konrad. In the next three to four years, “we want to develop something that’s extremely simple, inexpensive, and can be embedded into other products and could be used daily by millions of people.”

One thing that is clear is that certain body parts, like hands, lend themselves to identity authentication better than others. “The degree of freedom that you have with your hands is significantly higher,” Ishwar says. “Maybe if I’m a yoga master, I can move my right leg and put it across my left shoulder, but most people can’t do that.” They’d also like to experiment with the torso, says Konrad, since people’s posture varies. Then there’s Leap Motion and its potential for recognizing finger gestures.

“We plan to involve more and more body parts” as the research progresses, Konrad says. If that sounds vaguely Frankenstein-ish, consider that today’s security technology already involves fingerprints, iris scans, and face recognition. “Wouldn’t it be nice,” muses Ishwar, “if we could do that using our everyday body language or gestures?”

Rich Barlow

Rich Barlow can be reached at barlowr@bu.edu.

8 Comments on Opening Movements

  • David on 03.26.2013 at 9:02 am

    It is undeniable that gesture recognition has a future in many areas. However, I believe a sophisticated password generator or a fingerprint recognizer would make for a far more secure and realistic system. Anyone could simply imitate a gesture, since the “input” of the “password” is so obvious.

    • C.H. on 03.26.2013 at 7:50 pm

      What if biometrics were combined with gesture-based authentication? That would be much harder to fake.

      • David on 04.23.2013 at 12:25 pm

        Well, that would be a completely different field of research.

    • Visual Information Processing (VIP) Lab on 06.18.2013 at 2:13 pm

      Hello everyone,

      Thank you for your comments and interest in our work. We would like to respond to the questions that arose in these comments.

      First, it is important to note that gesture-based user authentication is not the same as gesture recognition. In gesture recognition, the goal is to identify the gesture being performed irrespective of who is performing it. Our goal, on the other hand, is to identify the *individual* performing the gesture and not the gesture itself.

      Next, in our envisaged application context, each user chooses a gesture that uniquely authenticates him or her. In this sense gestures are similar to signatures and passwords. The skeletal structure (relative lengths of limbs, etc.) is somewhat unique to each individual. In this sense a gesture is somewhat like a biometric, e.g., fingerprint, iris, voice, face — a physical property that is fairly unique for an individual. Now a gesture, like a signature, will typically be chosen by a user to be distinctive yet consistently reproducible by the user. This can be done, for example, by combining simple gestures in a sequence much like letters are combined to form words in language.

      In our preliminary experiments we have found that even when the gesture is fairly simple, it is non-trivial for someone watching it to reproduce it to the fidelity needed to trigger false authentication. It appears that like skeletal structure, the *gesture dynamics* too are somewhat unique to an individual even if he or she is performing someone else’s gesture with the express intent of accurately duplicating it.

      In summary, gestures, especially complex ones, are difficult to imitate accurately. On the other hand, a high-resolution photograph or a voice recording can easily spoof a system that uses face or voice recognition. Fingerprints and iris scans are harder, but they too can be “picked up” and used for false authentication.

      Another key advantage of a gesture biometric over a conventional one such as a face, voice, fingerprint, or iris, is that it can be *revoked* when compromised. A gesture, if compromised, can be replaced by another. A face, fingerprint, iris, or voice cannot (at least not without significant pain to the user).

  • No Name on 03.29.2013 at 9:50 am

    Has anyone considered the issues involved when someone is injured and no longer has the same mobility or range of motion they had when they created their “gesture” password? What would they do then? For example, they may have dislocated a shoulder playing a recreational sport and now have to wear a sling, and it just so happens that one of their gestures involves rotating that arm at the shoulder socket. Will they now be locked out until they heal?

    • Visual Information Processing (VIP) Lab on 06.18.2013 at 2:14 pm

      Excellent point. If such a situation should arise, the user can replace his/her old gesture with a new one or even repeat the old gesture under the new mobility constraints. As an analogy, imagine a person who needs to sign a bank-check but who has severely injured the fingers of his or her writing hand(s).

  • Ivanna on 11.08.2013 at 5:29 pm

    What would happen if the gesture sensor breaks and for some reason stops working? How would someone then use their security ID with his/her gesture?
