Using MCUs to Create A “Human Interface” with Touch Sense Features
While not a new concept, the availability of enabling technology that can revolutionize the way we interact with electronics, together with the urgency to incorporate that technology, demonstrates the increasing importance consumers are placing on interface design.
But what defines good human interface design and how can system designers implement a compelling solution? To begin answering this question, it is helpful to view human interface as simply a set of functional interactions with end users. These interactions can be subdivided into two logical groupings: inputs and outputs.
Input events are those in which a user causes, either directly or indirectly, a specific action to be performed. Examples of input events are:
Touch detection – single finger touch, multi-finger touch, finger slides, taps, etc.
External stimulus detection – proximity, motion, hand waving, voice, etc.
Environmental detection – ambient light, temperature, etc.
Physical detection – rotation, inclination, shock, vibration, etc.
With advances in fields such as touch sensors, proximity sensors, ambient light sensors, and accelerometers, the ability and sophistication of devices to accept inputs has dramatically changed the entire human interface landscape.
However, it is equally important to tie each input event to a tangible output event, because it is the output event that informs the user that an action has taken place as a result of the input provided. Examples of output events are:
Switching items on or off – screens, speakers, lights, safety features, etc.
Adjusting controls – volume, backlight, brightness, stabilization, etc.
Providing tactile feedback – auditory (“hear”), visual (“see”), haptic (“feel”), etc.
The types of input events and output responses desired will vary greatly, as they are dependent on the type of device being built. Figure 1 shows how a simplified human interface process flow can be envisioned.
Figure 1 - Simplified Human Interface Process Flow
As shown in Figure 1, each of the input events is tied to a specific threshold level that must be met in order to trigger one or more output events. Similarly, each output event is tied to one or more subsystems that will be affected as a result of the input trigger. For example, a handset device may be kept in a sleep-mode for power-saving reasons. However, upon the detection of a touch that exceeds a certain pre-defined pressure threshold, the handset will turn on the screen, provide an auditory confirmation over a speaker, and turn off the screen locking feature. In this example, a single input event has been directly tied to three different output events affecting two different subsystems (screen and speaker).
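The handset example above can be sketched in firmware-style C. This is a minimal, illustrative sketch of the input-threshold-output mapping, not a real vendor API; the threshold value, struct, and function names are all assumptions made for the example.

```c
/* Hypothetical sketch: one input event (a touch with a measured
 * pressure) crosses a pre-defined threshold and fans out to three
 * output events across two subsystems (screen and speaker), as in
 * the handset example above. All names and values are illustrative. */
#include <stdbool.h>

#define TOUCH_PRESSURE_THRESHOLD 120  /* made-up ADC counts */

typedef struct {
    bool screen_on;
    bool speaker_beeped;
    bool screen_locked;
} handset_state_t;

void on_touch_event(handset_state_t *hs, int pressure)
{
    if (pressure < TOUCH_PRESSURE_THRESHOLD)
        return;                 /* below threshold: no output events  */
    hs->screen_on = true;       /* output 1: switch the screen on     */
    hs->speaker_beeped = true;  /* output 2: auditory confirmation    */
    hs->screen_locked = false;  /* output 3: disable the screen lock  */
}
```

Keeping the threshold and the list of affected subsystems in one place like this is what makes the mapping easy to retune in firmware as requirements change.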
The good news for designers is that technological innovation has dramatically improved the ability of a device to offer a wide variety of creative input and output choices that can add significant appeal to an end product. However, this rapidly improving capability comes at a price: design complexity. The sheer number of input and output event possibilities is challenging even for the most experienced designer, and it is becoming exceedingly difficult to predict what users may find appealing not just today, but in the future as well. From an implementation perspective, what is really needed is the ability to create an interconnected framework that enables a tight coupling of these input/output interactions while still leaving flexibility to adjust for ever-changing market requirements. One possible solution is to have a range of sensing elements closely tied to a flexible, software-configurable platform. An example of this type of human interface subsystem is shown in Figure 2.
Figure 2 - Human Interface Subsystem
To analyze how a human interface subsystem could be used to create an effective human interface platform, let’s examine the touch sensor.
Traditional human interface designs relied on mechanical buttons as the mechanism by which a user would provide an input to the system. However, more recently the popularity of smart phones, portable gaming devices, personal navigation devices, and other appliances has highlighted a potentially more attractive user interface approach: the touchscreen. Touchscreens are displays that can detect the presence and location of a touch. They enable users to interact directly with the device via the screen itself rather than with mechanical buttons or other indirect devices like a mouse. Many microcontrollers today incorporate embedded circuitry which enables them to be used for touchscreen control applications. The microcontroller itself can be used to establish thresholds, provide noise cancellation to minimize false triggers, and host firmware for many different types of touch inputs such as single touch, multi-finger touch, taps, etc.
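One common way firmware minimizes false triggers is to smooth the raw sensor samples and apply hysteresis: a higher threshold to declare a touch and a lower one to declare a release. The sketch below illustrates this technique with a simple moving-average filter; the window size and threshold values are made-up, and the function is not a real microcontroller library call.

```c
/* Illustrative noise-cancellation sketch: a 4-sample moving average
 * over raw capacitance samples, plus hysteresis thresholds so that
 * brief noise spikes do not toggle the touch state. All constants
 * are assumptions for the example. */
#include <stdbool.h>

#define WINDOW      4
#define PRESS_THR   60   /* counts above baseline -> touch           */
#define RELEASE_THR 30   /* counts above baseline -> release (hyst.) */

static int samples[WINDOW];
static int idx;
static bool touched;

/* Feed one raw sample; returns the current debounced touch state. */
bool touch_update(int raw, int baseline)
{
    samples[idx] = raw;
    idx = (idx + 1) % WINDOW;

    int sum = 0;
    for (int i = 0; i < WINDOW; i++)
        sum += samples[i];
    int delta = sum / WINDOW - baseline;

    if (!touched && delta > PRESS_THR)
        touched = true;          /* firm press detected    */
    else if (touched && delta < RELEASE_THR)
        touched = false;         /* clean release detected */
    return touched;
}
```

The gap between the press and release thresholds is what prevents chatter when the filtered signal hovers near a single threshold.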
Silicon Labs continues its leadership position in analog performance by adding touch sense to its high-speed 8-bit MCUs. This is achieved through a true capacitance-to-digital converter (CDC) that enables a robust, accurate and responsive touch sense implementation while remaining easy to configure and use. The F700 and F800 Touchsense MCUs satisfy five key customer requirements:
- Highly responsive and accurate—the combination of high resolution CDC, 40 µs acquisition time and 25 MIPS CPU allow the implementation of high-quality and sophisticated touch sense functions in end-products, even when large arrays of touch sense elements are used.
- Robust performance—the CDC offers best-in-class noise immunity for reliable, worry-free performance in challenging conditions and configurations such as thick laminate overlays, electrical noise, or variances in PCB manufacturing that can affect the capacitance of a switch. A typical signal-to-noise ratio (SNR) of 100 can be achieved in most applications.
- Easy-to-use—an intuitive software GUI allows fast and easy configuration. An API library is provided for all common touch sense configurations such as virtual buttons, wheels and sliders. Auto-scan, threshold presets, and an accumulator allow the CDC to function with little CPU intervention and software overhead.
- Wake-on-touch—the MCU can be placed in power saving modes, yet wake quickly upon a touch sense event, ultimately saving overall system power.
- Industry leading number of inputs—supports up to 32 touch sensing inputs.
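As a rough illustration of how a slider built from discrete capacitive pads can report a continuous position, firmware can compute a weighted centroid over the per-pad signal deltas. The function below is a hedged sketch of that technique; it is not the actual QuickSense API, and the scaling and names are assumptions.

```c
/* Hypothetical slider sketch: interpolate a continuous position from
 * the capacitance deltas of n adjacent pads using a weighted centroid.
 * Returns a position scaled 0..(100*(n-1)), or -1 if no pad is active. */
int slider_position(const int *deltas, int n)
{
    long num = 0, den = 0;
    for (int i = 0; i < n; i++) {
        if (deltas[i] > 0) {            /* ignore sub-baseline noise */
            num += (long)deltas[i] * i * 100;
            den += deltas[i];
        }
    }
    return den ? (int)(num / den) : -1;
}
```

For example, a finger resting evenly between pads 1 and 2 of a four-pad slider yields a position halfway between those pads, giving finer resolution than the pad count alone would suggest.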
Through the use of an interconnected subsystem that is closely tied together via a flexible, software-configurable platform, it is possible to create an effective human interface implementation that is both functional and scalable. As market requirements change or as new ideas emerge, firmware can be adjusted to quickly and easily implement these changes without the need to re-architect the entire system. In addition, by leaving some GPIO pins on the microcontroller reserved for future use, it is also possible to quickly make hardware additions while leaving the core architecture the same.
In summary, by using a scalable architectural framework, system designers can create touch sense solutions that deliver a truly “human” interface. Silicon Laboratories offers a suite of human interface products to enable high-performance, highly accurate sensing.
For more information about Silicon Labs QuickSense, please email: email@example.com