4 Predictions for the First Year of the Ambient Computing Decade
We’re straddling the border between two decades, and it’s now clear that the most impactful technological development of the past 10 years was the evolution of the smartphone into the most widely used computing platform in the world. Looking forward, the next decade will be marked by ubiquitous ambient computing: sensors, displays, processors, and communications capabilities embedded throughout the physical environment.
In 2020, a couple of technologies that are essential building blocks in ambient computing will be ready for broader adoption and will start popping up in consumer products. Here are the four emerging developments I think will have the biggest impact:
1. Edge processors with artificial-intelligence engines
Putting processors with powerful neural-network accelerators in smart speakers, smart displays, security cameras, and other Internet of Things (IoT) devices will make them faster, more reliable, and more private. Right now, if you ask Alexa to turn on the living-room lamp, the trigger word is detected on your device, and then your voice is streamed to a data center in the cloud for speech recognition and interpretation. A message is then typically sent to a second cloud facility, which communicates instructions back to the lightbulb.
New, efficient neural-network compute technology and new methods for compressing hugely complex neural networks will give these devices the power to recognize speech, interpret images, and identify patterns without sending data to the cloud. Video and audio of your home’s most intimate moments will thus stay in your home, rather than being transmitted to some unknown data center in the cloud.
Many high-end smartphones already contain chips with specialized circuitry that efficiently handles the enormous number of computations neural-network algorithms require. Advances in this technology now allow it to be built into processors economical enough for devices that retail for less than $100.
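The compression techniques vary, but the simplest to picture is post-training quantization: storing each 32-bit floating-point weight as an 8-bit integer plus a shared scale, cutting memory and bandwidth roughly fourfold. Here is a minimal sketch of the idea, using a toy linear scheme rather than any particular chip’s format:

```python
def quantize(weights):
    """Map float weights onto 8-bit integers with one shared linear scale."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0   # guard against all-equal weights
    q = [round((w - lo) / scale) for w in weights]  # each value fits in 0..255
    return q, scale, lo

def dequantize(q, scale, zero_point):
    """Recover approximate float weights for inference on the device."""
    return [v * scale + zero_point for v in q]

weights = [0.42, -1.3, 0.05, 2.2, -0.77]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)

# each weight is recovered to within one quantization step
assert all(abs(w - r) < scale for w, r in zip(weights, restored))
```

In practice the accelerator performs its arithmetic directly on the 8-bit values, which is what makes inference cheap enough for sub-$100 hardware.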
2. Perception through computer vision
Thanks to the combination of low-cost, high-performance image sensors and advances in computer vision through the use of deep-learning algorithms, a multitude of devices can now interpret images, not just capture them. Your doorbell will play different sounds for your neighbor, the UPS carrier, and a stranger, and it will recognize the different vehicles in your driveway.
Instead of simply receiving a stream of motion events that may or may not contain anything interesting, you will be notified only when a meaningful event occurs. Furthermore, the AI engine technology previously mentioned will allow these computer-vision algorithms to run on edge devices, preserving your privacy as they perceive the world around you.
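The filtering logic itself is simple once a classifier runs on the device. A sketch of the idea, with a stand-in `classify` function in place of a real on-device vision model (all names here are hypothetical):

```python
# Labels worth interrupting the user for; everything else is ignored.
MEANINGFUL = {"person_at_door", "package_delivered", "vehicle_in_driveway"}

def classify(frame):
    """Stand-in for an on-device neural-network image classifier."""
    return frame.get("label", "nothing")

def filter_events(frames):
    """Yield a notification only when a frame contains a meaningful event."""
    for frame in frames:
        label = classify(frame)
        if label in MEANINGFUL:
            yield {"event": label, "timestamp": frame["timestamp"]}

stream = [
    {"timestamp": 1, "label": "tree_branch_moving"},  # motion, but not interesting
    {"timestamp": 2, "label": "package_delivered"},
    {"timestamp": 3, "label": "nothing"},
]
alerts = list(filter_events(stream))
assert alerts == [{"event": "package_delivered", "timestamp": 2}]
```

The point is that raw frames never leave the device; only the one-line alert does.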
In addition to working on live camera streams, computer-vision technology can be applied to recorded video. Your streaming video box will be able to skip or select scenes based on their content. Imagine, for example, watching the highlights of a baseball game with a computer-vision system that zooms past everything but the plays where the ball is hit. Or perhaps you want to watch only the clips that include your favorite player. Your smart video device will be able to create an endless assortment of personalized highlight reels.
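Once the vision model has labeled each clip, assembling a personalized reel is just a filter over those labels. A toy sketch, assuming the model has already tagged each clip with an event label and the players it recognized (all names hypothetical):

```python
def highlight_reel(clips, *, want_labels=None, want_player=None):
    """Select the clips whose detected content matches the viewer's interests."""
    selected = []
    for clip in clips:
        if want_labels and clip["label"] in want_labels:
            selected.append(clip)
        elif want_player and want_player in clip["players"]:
            selected.append(clip)
    return selected

game = [
    {"start": 12,  "label": "pitch",    "players": ["Rivera"]},
    {"start": 95,  "label": "ball_hit", "players": ["Ortiz"]},
    {"start": 340, "label": "crowd",    "players": []},
    {"start": 511, "label": "ball_hit", "players": ["Rivera"]},
]

# zoom past everything but the plays where the ball is hit...
hits = highlight_reel(game, want_labels={"ball_hit"})
assert [c["start"] for c in hits] == [95, 511]

# ...or watch only your favorite player's clips
rivera = highlight_reel(game, want_player="Rivera")
assert [c["start"] for c in rivera] == [12, 511]
```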
3. Proactive interaction with devices
Today, to interact with a smart device, you need to wake it and issue an explicit command. “Hey Siri, show me the video from the baby monitor.” Increasingly, the technology will be able to take the initiative, handling some tasks automatically and alerting you only when necessary. You won’t have to check on the baby every 10 minutes; the computer-vision system will notify you if she wakes up.
Your tea kettle might notice a pattern in when you come home from work and ask, “Would you like me to heat water every weekday afternoon when the garage door opens?” These proactive capabilities are enabled by advances in computer vision and by the ability to run the algorithms on edge devices.
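Behind a suggestion like that is nothing more exotic than counting recurring events. A minimal sketch of the pattern-spotting step, assuming the device logs timestamps of garage-door openings (the threshold and phrasing are illustrative):

```python
from collections import Counter
from datetime import datetime

def suggest_routine(events, min_repeats=3):
    """Propose an automation once an event recurs at the same weekday and hour."""
    slots = Counter((e.strftime("%A"), e.hour) for e in events)
    for (day, hour), count in slots.items():
        if count >= min_repeats:
            return f"Heat water every {day} around {hour}:00 when the garage door opens?"
    return None

garage_opens = [
    datetime(2020, 1, 6, 17, 40),   # three Mondays in a row, around 5:40 p.m.
    datetime(2020, 1, 13, 17, 35),
    datetime(2020, 1, 20, 17, 45),
    datetime(2020, 1, 11, 9, 5),    # a one-off Saturday morning
]
print(suggest_routine(garage_opens))  # suggests a Monday-evening routine
```

Running this history through an on-device routine like the above is how a kettle could volunteer the offer without ever reporting your comings and goings to the cloud.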
4. Ambient computing at work
Most of the innovation in the IoT has been around the development of smart-home devices. Workplaces already have security, communications, and computing infrastructures, but they are built on older generations of technology. This year, expect to see more companies deploying sensors, voice interfaces, and edge processing to embed connected intelligence into the physical environment of their offices.
Think about the time you’d save if you didn’t have to set up the slides and video conference for a presentation. As soon as you walked into the meeting room, your face would be recognized, the people on the invite list would be connected, and your presentation would appear on the screen. If you needed to change something, you’d just ask: “Office assistant, connect to Chris in accounting and bring up the third-quarter sales projection.”
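Strung together, that scenario is just a pipeline of recognitions and actions. A toy sketch of the flow, where every function and name is hypothetical, standing in for real identity, calendar, and conferencing services:

```python
def recognize(face):
    """Stand-in for an on-device neural-network face matcher."""
    return face["id"]

def start_meeting(face, room):
    """Kick off a meeting automatically once a presenter is recognized."""
    presenter = recognize(face)
    meeting = room["calendar"].get(presenter)  # look up what they booked
    if meeting is None:
        return None
    return {
        "connected": meeting["invitees"],      # dial in everyone on the invite
        "on_screen": meeting["slides"],        # and put the deck on the display
    }

room = {"calendar": {"chris": {"invitees": ["dana", "lee"],
                               "slides": "q3_projections.pdf"}}}
session = start_meeting({"id": "chris"}, room)
assert session == {"connected": ["dana", "lee"], "on_screen": "q3_projections.pdf"}
```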
In 2010, you could already see the rapid evolution of smartphones with touchscreen interfaces, selfie and world-facing cameras, and always-connected data. Nevertheless, few could imagine all of the ways those technologies would be woven together and applied. This leads me to one final prediction: Prepare to be surprised by what the IoT does for you over the next year.
We’re at an exciting point in time when lower costs and higher performance enable widespread deployment of smart connected sensors and interface devices. We’re also learning a great deal about how to build useful applications, many employing artificial intelligence, that take advantage of these technologies. As much as these four developments are going to impress people in 2020, remember that the decade of ambient computing is only just beginning.
Patrick Worfolk is CTO of Synaptics.