To enable next-generation sensing applications, the sensor industry is moving toward a goal of measuring 10 axes: a three-axis accelerometer, a three-axis gyroscope, a three-axis magnetometer, and a pressure sensor.
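The 10-axis count can be made concrete with a small sketch. The structure below is purely illustrative (the field names and units are assumptions, not any vendor's API): three axes each from the accelerometer, gyroscope, and magnetometer, plus one pressure reading.

```python
from dataclasses import dataclass

# Hypothetical 10-axis sample; names and units are illustrative only.
@dataclass
class TenAxisSample:
    accel: tuple[float, float, float]   # m/s^2, three-axis accelerometer
    gyro: tuple[float, float, float]    # rad/s, three-axis gyroscope
    mag: tuple[float, float, float]     # microtesla, three-axis magnetometer
    pressure: float                     # Pa, barometric pressure sensor

sample = TenAxisSample(
    accel=(0.0, 0.0, 9.81),            # at rest, gravity on the z axis
    gyro=(0.0, 0.0, 0.0),
    mag=(22.0, 5.0, -43.0),
    pressure=101_325.0,
)

# 3 + 3 + 3 + 1 = 10 measured axes
axes = len(sample.accel) + len(sample.gyro) + len(sample.mag) + 1
```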
When emerging opportunities in the microwave and general electronics markets are discussed, sensing is often mentioned. Touchscreens and accelerometers have already seen vastly increased adoption in mobile phones. Yet sensing data can also be used for medical applications, to track altitude, or to detect touch or presence over a screen or object. Freescale Semiconductor offers a white paper explaining these various opportunities for sensing.
Titled “Transformation of Sensing to Address the Expanding Contextual World,” the six-page paper begins by looking at the market landscape. This past spring, iSuppli released a report predicting that the microelectromechanical-systems (MEMS) market will almost triple from 2011 to 2016. Wireless communications will lead this growth with 26% of the $12.5-billion market, followed by consumer electronics (21%) and the industrial (19%), medical (16%), and automotive (13%) markets.
Sensing is evolving from standalone sensor hardware to a “contextual sensing” and “sensor fusion” approach. As Freescale describes, such approaches combine sensor outputs to achieve a result that is more detailed, accurate, and useful than a reading from a single sensor. Today, for example, the iPhone and iPod Touch use an accelerometer to enable simple functions, such as switching from portrait to landscape view according to the angle of the device. In the future, however, gesture recognition will become far more sophisticated. Computers will be able to sense and interpret natural human movements as a way of interacting with devices. If a user holding a tablet computer moves it further from his or her face, for example, the tablet would automatically zoom out on a map or photo. Conversely, moving the tablet toward one’s face would cue it to zoom in.
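Even the simple portrait/landscape switch described above is a small inference over raw sensor data. The sketch below is a minimal, assumed illustration (not Apple's or Freescale's actual algorithm): gravity dominates whichever accelerometer axis points downward, so comparing the x and y components reveals how the device is being held.

```python
def orientation(ax: float, ay: float) -> str:
    """Classify device orientation from accelerometer x/y readings (m/s^2).

    Simplified sketch: when the device is roughly upright, gravity loads
    the y axis; when it is turned on its side, gravity loads the x axis.
    Real implementations add hysteresis and filtering to avoid flicker.
    """
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

# Device held upright: gravity mostly along y
upright = orientation(0.3, 9.7)
# Device rotated onto its side: gravity mostly along x
sideways = orientation(9.7, 0.3)
```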
In this way, sensor fusion will merge the output of two or more sensors to obtain a result that is more intelligent and useful than the output from a single sensor. The result will be intelligent contextual sensing, in which sensors enable decision-making capabilities within the context of their environment. A mobile device, for example, will know the user’s location [thanks to Global Positioning System (GPS) capability] as well as what and whom he or she is near. Going one step further, it will provide access to information about those people and places.
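A classic, textbook example of this kind of fusion (offered here as a general illustration, not a technique claimed by the white paper) is a complementary filter that estimates device pitch: the gyroscope is fast but drifts over time, the accelerometer is drift-free but noisy, and blending the two gives a better answer than either sensor alone.

```python
import math

def complementary_pitch(prev_pitch: float, gyro_rate: float,
                        ax: float, az: float, dt: float,
                        alpha: float = 0.98) -> float:
    """One step of a complementary filter, a simple sensor-fusion technique.

    Blends the gyroscope's fast but drifting angle integration with the
    accelerometer's noisy but drift-free gravity-based pitch estimate.
    """
    accel_pitch = math.atan2(ax, az)          # pitch implied by gravity
    gyro_pitch = prev_pitch + gyro_rate * dt  # integrate angular rate
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Start with a wrong estimate (0.1 rad) on a stationary, level device.
# With zero rotation and gravity on z, the filter pulls the estimate
# toward the accelerometer's 0-rad reading, correcting gyro-style drift.
pitch = 0.1
for _ in range(200):
    pitch = complementary_pitch(pitch, gyro_rate=0.0, ax=0.0, az=9.81, dt=0.01)
```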
As sensing evolves, so will the sensors themselves. More intelligence is moving from the central processing unit (CPU) into the sensors. For example, by putting a microcontroller (MCU) on the sensor itself, one intelligent-motion-sensing platform allows the sensor alone to control an application. Options are being explored to add intelligence while keeping costs down, allowing flexibility, and maximizing energy efficiency.
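The energy-efficiency argument for sensor-side intelligence can be sketched as follows. In this assumed example (the class and threshold are hypothetical, not a real Freescale part), the sensor's embedded MCU evaluates a motion threshold itself and interrupts the host only when an event occurs, so the host CPU can sleep instead of processing a raw sample stream.

```python
from typing import Callable

class SmartMotionSensor:
    """Hypothetical sensor with an on-board MCU doing threshold detection."""

    GRAVITY = 9.81  # m/s^2, expected magnitude for a device at rest

    def __init__(self, threshold: float, on_event: Callable[[float], None]):
        self.threshold = threshold  # deviation from 1 g that counts as motion
        self.on_event = on_event    # host callback, fired only on events

    def process_sample(self, magnitude: float) -> None:
        # Runs "on the sensor": the host is only woken when this fires.
        if abs(magnitude - self.GRAVITY) > self.threshold:
            self.on_event(magnitude)

# The host registers a callback instead of polling raw data.
events: list[float] = []
sensor = SmartMotionSensor(threshold=2.0, on_event=events.append)
for m in (9.8, 9.9, 15.0, 9.7):  # only one sample exceeds the threshold
    sensor.process_sample(m)
```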