A new, electrically active smart skin can quickly decipher hand movements such as typing and sign language, and can even recognize the shape of familiar objects, all with limited training data.
Developed at Stanford University, the new smart skin lets people type on an invisible keyboard, identify objects by touch, and communicate with apps through hand gestures in immersive environments.
In a paper just published in the journal Nature Electronics, researchers describe a new type of stretchable, biocompatible material that can be sprayed onto the back of the hand like sunscreen. Embedded in the material is a tiny electrical mesh that senses the stretching and bending of the skin, allowing researchers to use AI to interpret a myriad of everyday tasks from hand movements and gestures. The researchers say it could have applications in a wide range of fields, including gaming, sports, telemedicine, and robotics.
So far, several promising approaches have been explored to enable a variety of hand tasks and gestures, such as measuring muscle electrical activity with wristbands or using wearable gloves. However, these devices are bulky because they require multiple sensor components to identify the movement of every joint. In addition, they need a large amount of data per user and per task to train their algorithms. These challenges have made it difficult to adopt such devices in everyday electronics.
This work is the first practical approach whose form factor is lean enough, and whose functionality is adaptable enough, to work for essentially all users, even with limited data. Current technologies require multiple sensor components to read each knuckle of the finger, which makes them bulky. The new device also takes a leaner approach to software, enabling faster learning. Such accuracy could be the key to conveying highly detailed motion in virtual reality applications for a more realistic experience.
The innovation that enables this is a sprayable, electrically sensitive mesh network embedded in polyurethane, the same durable yet stretchy material used to make skateboard wheels and to protect hardwood floors from damage. The mesh consists of millions of gold-coated silver nanowires that touch each other to form dynamic electrical pathways. It is electrically active, biocompatible, breathable, and stays put unless washed away with soap and water. It conforms snugly to the creases and folds of each finger that wears it. A lightweight Bluetooth module can then simply be attached to the mesh to wirelessly transmit the signal changes.
“As the fingers bend and twist, the nanowires in the mesh are compressed or stretched, changing the electrical conductivity of the mesh. By analyzing those changes, we can tell exactly how a hand or finger is moving,” explains Zhenan Bao, KK Lee Professor of Chemical Engineering and senior author of the study.
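The principle Bao describes, strain altering the resistance (and hence conductivity) of a stretchable conductor, can be sketched with the standard piezoresistive gauge-factor relation. The gauge factor and resistance values below are hypothetical placeholders; real nanowire meshes are characterized experimentally.

```python
# Toy piezoresistive model: stretching a conductive mesh raises its
# resistance, i.e. lowers its conductivity. dR/R0 = GF * strain is the
# standard strain-gauge relation; GF here is an illustrative value.
def resistance(r0, gauge_factor, strain):
    """Resistance of the mesh under uniaxial strain."""
    return r0 * (1.0 + gauge_factor * strain)

# A bent finger stretches the mesh along the joint.
r_rest = resistance(100.0, 2.0, 0.0)   # relaxed finger: 100.0 ohms
r_bent = resistance(100.0, 2.0, 0.25)  # 25% strain from bending: 150.0 ohms
```

Reading this single resistance trace over time yields the signal stream that the downstream software interprets.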
The researchers chose a direct-to-skin spray approach so that the mesh is supported without a substrate. This key engineering decision eliminated unwanted motion artifacts and allowed the team to use a single trace of conductive mesh to generate finger articulation information.
The spray-on nature of the device allows it to fit hands of all sizes and shapes, and opens up the possibility of adapting the device to the face to capture subtle emotional cues. That could enable new approaches to computer animation, or lead to avatar-driven virtual meetings with more realistic facial expressions and hand gestures.
Then machine learning takes over. A computer monitors the patterns of changing conductivity and maps those patterns to specific physical tasks and gestures. Type the letter X on a keyboard, for example, and the algorithm learns to recognize that task from the changing pattern of conductivity. Once the algorithm is suitably trained, a physical keyboard is no longer necessary. The same principle can be used to recognize sign language, or even to recognize objects by tracing their surfaces.
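The idea of mapping conductivity patterns to keystrokes can be illustrated with a deliberately simple nearest-centroid classifier over short signal windows. This is a hedged sketch with synthetic traces, not the paper's actual model, which uses a trained neural network.

```python
import numpy as np

# Sketch: learn one "template" conductivity pattern per key from a few
# labeled recordings, then label a new window by its nearest template.
def train_templates(windows, labels):
    """Average the windows recorded for each key into one template."""
    keys = sorted(set(labels))
    return {k: np.mean([w for w, l in zip(windows, labels) if l == k], axis=0)
            for k in keys}

def classify(window, templates):
    """Return the key whose template is closest to this window."""
    return min(templates, key=lambda k: np.linalg.norm(window - templates[k]))

# Synthetic 4-sample conductivity traces for two keys, "x" and "o".
X = [np.array([1.0, 0.8, 0.6, 0.8]), np.array([1.0, 0.9, 0.5, 0.9]),
     np.array([0.4, 0.6, 0.9, 0.6]), np.array([0.5, 0.6, 1.0, 0.5])]
y = ["x", "x", "o", "o"]

templates = train_templates(X, y)
pred = classify(np.array([0.9, 0.85, 0.55, 0.85]), templates)  # resembles "x"
```

Once trained on enough examples per key, such a mapping needs no physical keyboard at all: the signal window alone identifies the intended keystroke.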
And while existing technologies are computationally intensive, requiring humans to label vast amounts of data, the Stanford team developed a far more computationally efficient learning scheme.
“We brought in an aspect of human learning known as ‘meta-learning,’ which rapidly adapts to a task in just a handful of trials. It allows the device to quickly recognize new hand tasks and new users,” said Kyun Kyu “Richard” Kim, a postdoctoral researcher in Bao’s lab and lead author of the study.
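The few-shot adaptation Kim describes can be sketched in miniature: start from gesture templates learned across many users, then personalize them with only a handful of trials from a new user instead of retraining from scratch. The blending rule and all numbers below are illustrative assumptions, not the team's actual meta-learning algorithm.

```python
import numpy as np

# Sketch of few-shot personalization: combine a population-level prior
# template with the mean of a few new-user trials. The blend weight is
# a hypothetical hyperparameter.
def adapt(prior_template, new_samples, weight=0.7):
    """Blend a prior gesture template with a few user-specific trials."""
    user_mean = np.mean(new_samples, axis=0)
    return weight * user_mean + (1 - weight) * prior_template

prior = np.array([1.0, 0.8, 0.6])       # template learned from many users
trials = [np.array([1.2, 0.9, 0.5]),    # just two quick trials
          np.array([1.4, 1.1, 0.5])]    # from a new user
personalized = adapt(prior, trials)
```

The point of the sketch is the data budget: two trials are enough to shift the template toward the new user, which is the kind of rapid adaptation meta-learning aims to achieve at scale.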
“Moreover, this is a remarkably simple approach to a complex task. The nanomesh captures the subtle details of the signal, so less data and less computational processing time are needed,” Kim added. The precision with which the device can map the subtle movements of a finger is one of the key features of this innovation.
The researchers created a prototype that can recognize simple objects by touch and even perform predictive two-handed typing on an invisible keyboard. The algorithm was able to type out “No legacy is so rich as honesty” from Shakespeare and “I am the master of my fate, I am the captain of my soul” from William Ernest Henley’s poem “Invictus.”
Original: Spray-on smart skins use AI to quickly understand manual work
Source: Stanford University | Korea Advanced Institute of Science and Technology