1. Introduction
The latest advances in artificial intelligence (AI) are leading to increased interaction between humans and smart robots; we are also witnessing the rapid development of the metaverse.[1] More immersive and convenient human-machine interaction (HMI) devices are thus urgently needed to facilitate natural and continuous interactions in both the real physical world and online virtual ones.[2-4] Because the human hand is capable of performing complex tasks and executing elaborate movements, hand-gesture recognition provides a natural arena for the development of more advanced HMI.[5] Many types of equipment, including cameras,[6,7] electromyographic (EMG) sensor-based armbands,[8,9] and data gloves,[10-14] have been used to acquire hand-movement information. With the help of these technologies, several real-time and accurate hand-gesture-recognition devices have been developed.[15] However, they have major drawbacks: vision-based devices require a line-of-sight optical path between cameras and hands, surface EMG signals are difficult to detect and are often overwhelmed by noise, and glove-type devices are uncomfortable to wear.
By contrast, wristband-type devices are comfortable and inexpensive. More importantly, they have no impact on a user’s day-to-day activities, and their remarkable imperceptibility significantly improves the user experience.[16] Shull et al. have described a gesture-recognition wristband (GRW) equipped with ten modified barometric-pressure sensors that can recognize up to ten different hand gestures and estimate finger angles.[17] Liang et al. demonstrated a wristband consisting of five PDMS-encapsulated capacitive pressure sensors; three gestures could be correctly recognized with an accuracy higher than 90%.[18] Recently, Tan et al. developed a GRW with eight sensing units, each containing a triboelectric nanogenerator (TENG) and a piezoelectric nanogenerator (PENG).[16] Combined with a machine-learning algorithm, the wristband achieved letter-by-letter recognition of sign-language actions with a maximum recognition accuracy of 92.6%. Although these wristbands show great promise for application in HMI, their practical use may be restricted by electromagnetic interference (EMI), crosstalk noise, and complexity in data processing.
Using photons instead of electrons as signal carriers is an ideal strategy to address the EMI and crosstalk issues. For this reason, flexible optical waveguides, including fiber Bragg gratings, polymer optical fibers, and optical micro/nanofibers, have attracted increasing interest for use in tactile sensors,[19-22] data gloves,[23,24] and HMI devices.[25] Optical micro/nanofibers with diameters close to or below the vacuum wavelength of visible or near-infrared light can offer engineerable waveguiding properties, making them attractive for applications in ultrasensitive sensors with small footprints.[26] Already, optical micro/nanofiber-based sensors have demonstrated high sensitivity, fast response, and a tunable working range for pressure sensing,[27,28] strain sensing,[29,30] and bending-angle monitoring.[31-33]
A highly sensitive sensor for obtaining mechanical information from the surface muscles of the hand would improve the accuracy of a GRW and reduce the number of required sensing units. A single gesture typically generates different signals at different sensors; the gesture can be recognized by combining basic signal-processing methods with a machine-learning algorithm. However, in a GRW, sensors are not attached directly to the user’s skin: a wristband tends to slip as the fingers move, potentially resulting in a significant change in the sensing signals. Therefore, location insensitivity of the mechanical sensors is a key factor in the success of a GRW. Nevertheless, most previously reported optical-micro/nanofiber-based pressure sensors have adopted thin-film structures in which the sensing signal depends critically on the stimulus position.
To overcome this problem, we propose and demonstrate a flexible optical-nanofiber sensor with a soft liquid sac structure that can effectively mitigate the impact of stimulus position on the response of the pressure sensor without sacrificing sensitivity. This sensor can be used for precise acquisition of the arterial pulse with negligible position drift. Furthermore, we develop a GRW with only three such optical-nanofiber pressure sensors. Using the support-vector machine (SVM) machine-learning model, we decode the signals from the three sensors; the proposed GRW achieves a maximum hand-gesture-recognition accuracy of 94% for testers with different physiques. As a proof of concept, a robotic hand was successfully controlled by different testers through hand gestures, indicating the excellent adaptability of the GRW. This study offers promise of major advances in the tactile interfaces needed for a comfortable and immersive user experience in the metaverse.
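The paper does not detail how the SVM decoding is implemented, so the following is only a minimal sketch of the idea: an SVM classifier trained on feature vectors derived from the three sensor channels, using scikit-learn and synthetic data (the gesture labels, per-gesture channel responses, and noise level below are illustrative assumptions, not values from the study).

```python
# Hypothetical sketch: SVM classification of gestures from three pressure
# channels. Gesture names, channel responses, and noise are invented here
# purely for illustration; the paper's real training data is not public.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assume each gesture produces a characteristic mean pressure on each of
# the three NFPSU channels; simulate noisy samples around those centroids.
gesture_centroids = {
    "fist":  [0.9, 0.2, 0.4],
    "point": [0.3, 0.8, 0.1],
    "open":  [0.1, 0.3, 0.9],
}
X, y = [], []
for label, centre in gesture_centroids.items():
    X.append(rng.normal(loc=centre, scale=0.08, size=(60, 3)))
    y += [label] * 60
X = np.vstack(X)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = SVC(kernel="rbf", C=10.0)  # RBF-kernel SVM, a common default choice
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

With well-separated synthetic classes the held-out accuracy is close to 1; real wrist signals would of course be noisier and require careful feature design.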
2. Results and Discussion
Figure 1a shows the configuration of the proposed GRW. It consists of three nanofiber-based pressure-sensor units (NFPSUs) placed at different locations on a person’s wrist to capture the mechanical signals generated by finger movements. Multiple groups of muscles control finger movements; most of the muscle bellies and tendons lie near the superficial epidermis of the wrist.[34] Therefore, when a finger moves, the corresponding groups of finger muscles contract or relax, causing deformations in the skin surface of the wrist. The skin deformations associated with different hand gestures are collected by placing NFPSUs at different locations on the wrist. With the assistance of machine-learning algorithms, a hand gesture can be recognized by decoding the corresponding multichannel sensing signals.
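The decoding step described above presumably reduces each channel's raw time series to a compact feature vector before classification; the paper does not specify its pipeline, so the sketch below uses a hypothetical reduction (mean, peak, and peak-to-peak amplitude per channel) as one plausible choice.

```python
# Hypothetical feature extraction for one gesture window. The per-channel
# statistics chosen here (mean, peak, peak-to-peak) are an assumption for
# illustration, not the paper's actual signal-processing method.
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: shape (n_samples, 3), one time window of 3-channel NFPSU data.
    Returns a 9-element vector: mean, peak, and peak-to-peak per channel."""
    mean = window.mean(axis=0)
    peak = window.max(axis=0)
    p2p = window.max(axis=0) - window.min(axis=0)
    return np.concatenate([mean, peak, p2p])

# Example: a simulated 1 s window at 100 Hz with one pressure pulse on
# channel 0, mimicking a tendon pressing on a sensor during finger flexion.
t = np.linspace(0, 1, 100)
window = np.zeros((100, 3))
window[:, 0] = np.exp(-((t - 0.5) ** 2) / 0.01)  # Gaussian pulse
features = extract_features(window)
print(features.shape)
```

A vector like this, computed per gesture window, is what a classifier such as an SVM would consume.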