Top 10 AI Tools for People with Disabilities

  • NavCog for the Blind: Developed by IBM Research and Carnegie Mellon University, this software guides blind users through voice and vibration. It processes the environment into a 3D spatial model, transmitted to the user’s ears via ultrasound, offering precise positioning and guidance. It also has a facial-scanning feature that can inform the user about the emotions of people nearby.
  • AI Mobility Aids: These include smart canes, electric wheelchairs, and exoskeletons that help people with disabilities walk and move more easily. Smart canes, for instance, combine motion sensors and navigation chips to detect uneven ground, help visually impaired users avoid obstacles, and provide navigation directions.
  • AI Visual Aids: Devices such as smart glasses and braille displays help visually impaired users better perceive their surroundings and live more independently.
  • Smart Voice Assistants: Assistants such as Siri (Apple), Google Assistant (Google), Alexa (Amazon), and Cortana (Microsoft) can complete everyday tasks such as sending text messages, checking the weather, and setting reminders, which is very convenient for people with limited mobility.
  • AI Hearing Aids: Smart hearing aids and speech recognition systems help people with hearing impairments understand and communicate better.
  • AI Health Monitoring: Through wearable devices and sensors, AI can monitor the physiological parameters of people with disabilities in real time, such as heart rate and blood pressure, and provide timely health advice and alerts.
  • WheelNav (Wheelchair Navigation): A navigation tool designed specifically for people with limb disabilities, helping them travel more conveniently.
  • Environmental Recognition and Prompting Systems: AI identifies obstacles, stairs, elevators, and other features of the environment and alerts people with disabilities by voice or vibration, helping them avoid danger.
  • Smart Readers: OCR technology recognizes text content and speech synthesis reads it aloud, helping visually impaired people read books, newspapers, and more.
  • AI Social Assistants: Natural language processing helps people with disabilities communicate more smoothly in social situations, improving their social skills.
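The health-monitoring idea above can be sketched as a minimal threshold-alert loop. The safe ranges and sensor readings below are hypothetical placeholders for illustration, not values from any real medical device:

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int   # beats per minute
    systolic_bp: int  # mmHg

def check_vitals(v: Vitals) -> list[str]:
    """Return alert messages for readings outside (hypothetical) safe ranges."""
    alerts = []
    if not 50 <= v.heart_rate <= 110:
        alerts.append(f"Heart rate out of range: {v.heart_rate} bpm")
    if v.systolic_bp >= 140:
        alerts.append(f"High systolic blood pressure: {v.systolic_bp} mmHg")
    return alerts

# Simulated readings streamed from a wearable sensor
readings = [Vitals(72, 118), Vitals(130, 145)]
for r in readings:
    for msg in check_vitals(r):
        print(msg)
```

A real system would add trend analysis and route alerts to a caregiver rather than printing them, but the core pattern is the same: compare each reading against a range and act on the outliers.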

Tsinghua Develops Artificial Larynx to Help Patients 'Speak'

  • According to “FUTURE Vision,” on February 23rd of this year, Professor Ren Tianling of Tsinghua University’s School of Integrated Circuits and his team made significant progress in intelligent speech interaction. The wearable artificial larynx they developed perceives multimodal mechanical signals related to vocalization in the throat for speech recognition, and relies on the thermoacoustic effect to play back corresponding sounds, providing a new technical approach for speech recognition and interaction systems. The results were published under the title “Hybrid Modality Speech Recognition and Interaction Using Wearable Artificial Larynx” in Nature Machine Intelligence, the Nature family’s AI journal.

    Compared with commercial microphones and piezoelectric films, the graphene-based smart wearable artificial larynx developed by Ren Tianling’s team is highly sensitive to low-frequency muscle movements, mid-frequency esophageal vibrations, and high-frequency sound-wave information. It also offers noise-resistant speech perception and can play sounds through the thermoacoustic effect.

    Notably, this device uses artificial intelligence models to recognize and resynthesize the signals perceived by the artificial larynx, achieving high-precision recognition of basic speech elements (phonemes, tones, and words) as well as recognition and reproduction of unclear speech from laryngeal cancer patients. Ren Tianling mentioned in an interview with Interface News that the most prominent feature of the third-generation intelligent graphene artificial larynx, compared with the previous two generations, is its ability to restore the user’s original voice as faithfully as possible.
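The sense-recognize-resynthesize pipeline described above can be illustrated with a toy sketch. The feature vectors, word centroids, and nearest-centroid classifier here are illustrative stand-ins, not the team’s actual model or data:

```python
import math

# Hypothetical per-word centroids of mixed-modality features
# (e.g., muscle-movement and vibration energies), purely illustrative.
CENTROIDS = {
    "hello": (0.9, 0.2),
    "water": (0.3, 0.8),
}

def classify(features):
    """Nearest-centroid classification of a sensed feature vector."""
    return min(CENTROIDS, key=lambda w: math.dist(features, CENTROIDS[w]))

def synthesize(word):
    """Stand-in for thermoacoustic playback: return the text to be spoken."""
    return f"[playing synthesized audio for '{word}']"

features = (0.85, 0.25)  # simulated reading from the throat sensor
word = classify(features)
print(synthesize(word))  # pipeline: sense -> recognize -> resynthesize
```

The real device replaces each stage with learned models and a thermoacoustic speaker, but the flow is the same: mechanical signals in, recognized speech elements, resynthesized audio out.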

    Experimental results show that the hybrid-modality speech signals collected by the artificial larynx support recognition of basic speech elements (phonemes, tones, and words) with an average accuracy of 99.05%. The artificial larynx’s noise resistance is also significantly better than that of microphones, maintaining recognition even under environmental noise above 60 dB. By integrating AI models, the artificial larynx can recognize everyday vocabulary spoken by laryngectomy patients with an accuracy exceeding 90%. The recognized content is synthesized into speech and played on the artificial larynx, preliminarily restoring the patient’s ability to communicate verbally.

    However, Ren Tianling’s team also stated that there is still significant room for optimization and expansion of the artificial larynx, such as improving the quality and volume of sound, increasing the diversity and expression of speech, and combining other physiological signals and environmental information to achieve more natural and intelligent speech interaction.

    Regarding the mass production of the latest generation of artificial larynx, Ren Tianling stated, “We have already solved the core technical issues, and what remains is how to create an optimized product form for the artificial larynx. There is still a way to go from samples to commercialized products.”

    Ren Tianling also told Interface News that the current samples are mainly used by volunteers participating in experimental research, and products worn for long periods still need to undergo medical product certification and other processes.

    In fact, this wearable intelligent artificial larynx device has undergone two iterations before. In 2017, Ren Tianling’s team innovatively proposed a graphene-based transceiver integrated acoustic device: using the piezoresistive effect to receive signals and emitting sound based on thermoacoustic effects, cleverly realizing the integration of sound transmission and reception in a single device. In terms of device fabrication technology, the team adopted a unique laser direct writing technique, which can directly convert low-cost large-area polyimide films into graphic porous graphene materials.

    Graphene is a two-dimensional crystal of carbon atoms, only one atomic layer thick, isolated from graphite. It has extremely high electron mobility, excellent mechanical strength, and outstanding thermal conductivity. These properties underlie the device’s high sensitivity to low-frequency muscle movements, mid-frequency esophageal vibrations, and high-frequency sound-wave information, as well as its noise-resistant speech perception.

    In August 2020, the team’s research on the second-generation graphene intelligent artificial larynx (WAGT) made significant breakthroughs in device flexibility, integrated sound transmission and reception systems, motion monitoring systems, and lightweight wearability.

    According to the official website of Tsinghua University, the second-generation intelligent graphene artificial larynx integrates sound collection and emission in a single device, can be attached directly to the throat of laryngectomy patients, and converts different throat movements into corresponding sounds, potentially helping laryngectomy patients “talk” normally with others. It was the first version to make the graphene artificial larynx wearable. In the future, combined with technologies such as speaker recognition and machine learning, the device will have broad prospects in fields such as speech recognition and home healthcare.
