When Navneet Alang takes the stage as part of the New Network Innovation Lab at the MIT Media Lab tomorrow, he’ll share an unusual insight: his research group at the University of Illinois has built a “virtual AR stand-in for world mapping.”
Published in the journal Nature this week, the research might not sound like the most groundbreaking thing, but it is key to demonstrating how intelligent, emotionally aware machine learning can be harnessed to discover and understand our world in new and interesting ways.
Our world, of course, has changed significantly in recent years. Most new inventions, such as smartphones, audio-visual devices and self-driving cars, no longer use human-directed, analog tools like physical maps or hand-labeled images to teach machines what's real. Instead, they rely on automated, computer-driven models.
Think about how your smartphone's map quietly drops a road when it comes across an unmarked or unrecognized junction. Or imagine standing at a dead end as a self-driving car misjudges it and rolls into the ditch next to you, or a grocery-store robot blocking the aisle and knocking over your coffee because it never registered that you were there.
Where this stuff gets interesting, and where machine learning can lead to big changes, is when technology and humans are joined, because humans and machines are looking at the same world. Recognizing faces and approaching people is something humans do effortlessly, but it remains a genuinely difficult task for computers, which is why so many systems take a "fake it till you make it" approach. Humans can tell the difference between a real person in a crowd and an avatar, sense when to talk and when to listen in a two-person conversation, and know when a greeting card is called for. Machines have none of that social intuition built in.
In this AI world, the potential for future breakthroughs keeps growing, because many of today's applications and tools are already derived from some form of AI, and there are undoubtedly many years of research left to do. Machine learning, broadly, is about classifying a disparate sea of information, but conventional software struggles to recognize the pictures, textures and other cues that are unique to a specific object or scene. To get around this problem, research teams are designing models built on neural networks. These networks handle not only photos and video but also human-directed tasks, including participating in conversation.
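To make that idea concrete, here is a minimal sketch of the kind of neural network alluded to above: a small convolutional classifier that learns to label images from pixel data rather than from hand-written rules. The architecture and the 32x32 RGB input size are illustrative assumptions, not the models from the research described in this article.

```python
# Minimal sketch of a convolutional image classifier in PyTorch.
# The architecture and input size are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallImageClassifier(nn.Module):
    """Two convolutional blocks followed by a fully connected head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)                 # halves spatial size
        self.fc = nn.Linear(32 * 8 * 8, num_classes)   # 32x32 input -> 8x8 after two pools

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # low-level edges and textures
        x = self.pool(F.relu(self.conv2(x)))   # higher-level object patterns
        x = torch.flatten(x, 1)
        return self.fc(x)                      # raw class scores (logits)

if __name__ == "__main__":
    model = SmallImageClassifier()
    fake_batch = torch.randn(4, 3, 32, 32)     # four 32x32 RGB images
    logits = model(fake_batch)
    print(logits.shape)                        # torch.Size([4, 10])
```

The same pattern, stacked much deeper and trained on far more data, underlies most of the photo and video recognition systems discussed below.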
In other words, while computers can indeed identify faces in a video stream, often with little task-specific training, they have no reliable way of distinguishing a real person's voice and behavior from a computer-generated one's. That's why speech recognition systems lean heavily on facial recognition and on how people respond to facial emoji (like the crying-laughing face). Meanwhile, the Audi Video Image Recognition Research Group at Audi AG has taken a different approach, building facial recognition to the point where it can identify people who don't answer their phone, a genuinely useful capability in congested places.
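The claim that computers can pick faces out of a video stream is easy to demonstrate with off-the-shelf tools. The sketch below uses OpenCV's pretrained Haar-cascade detector on a webcam feed; it is a generic illustration, not the Audi system or the speech systems described above.

```python
# Minimal sketch of face detection in a video stream using OpenCV's
# pretrained Haar cascade; a generic illustration, not the specific
# systems mentioned in the article.
import cv2

# Load the frontal-face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

capture = cv2.VideoCapture(0)   # 0 = default webcam (assumption)
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces at multiple scales; returns (x, y, w, h) boxes.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
        break

capture.release()
cv2.destroyAllWindows()
```

Note that this draws boxes around faces and nothing more: it says nothing about who the person is or whether the face belongs to a real human, which is exactly the gap described above.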
Even within the photo and video processing arenas, there are bright spots: the Microsoft Cognitive Services SDK was recently rolled out to improve image and video recognition capabilities in devices and, according to the company, "will fundamentally change the way people and machines interact in the years to come."
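For a sense of what that looks like in practice, here is a hedged sketch of calling the Cognitive Services image-analysis REST endpoint to get a caption and tags for a photo. The API version, endpoint, key and image URL below are placeholders and assumptions for illustration; check the current documentation before relying on them.

```python
# Minimal sketch of calling the Microsoft Cognitive Services (Azure
# Computer Vision) image-analysis REST API. Endpoint, API version, key
# and image URL are placeholder assumptions, not verified values.
import requests

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder
API_KEY = "YOUR-KEY"                                            # placeholder

def describe_image(image_url: str) -> dict:
    """Ask the service for a caption and tags for a publicly reachable image."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",          # API version is an assumption
        params={"visualFeatures": "Description,Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = describe_image("https://example.com/photo.jpg")    # placeholder URL
    captions = result.get("description", {}).get("captions", [])
    if captions:
        print("Caption:", captions[0]["text"])
    print("Tags:", [t["name"] for t in result.get("tags", [])])
```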
But there is still much work to be done. Scientists and engineers continue to develop cognitive algorithms that can understand context, for instance recognizing that two people standing next to each other are probably together, and use that understanding to add value.
Research like this will aid our collective understanding of our world as it changes. And it will lead to greater awareness of those facing existential challenges, such as the homeless population in San Francisco, who rely on data for survival.
This is what inspires me and others in the field of AI. Humanistic AI is the first step in realizing the promise of the remarkable future we can create together.