Facilitating language learning of deaf people
Goal: Improved Customer Experience
Problem addressed: An avatar and a social robot interact with deaf babies to facilitate language learning.
Scope of use case
Use of advanced, multimodal sensing abilities to facilitate a complex task.
Description
The RAVE system is designed as a dual agent that uses a physical robot and a virtual human to engage 6- to 12-month-old deaf infants in linguistic interactions. The system is supported by a perception system capable of estimating infant attention and engagement through thermal imaging and eye tracking. RAVE was designed for, and experienced by, a unique population (deaf infants) over three years of observation and the development of three case studies.
The system has been successful at soliciting infant attention, directing that attention to the linguistic content, and keeping the infant engaged for developmentally appropriate lengths of time. Infants have also been observed copying robot behaviour, producing signs displayed by the avatar, and producing signs that they had observed the virtual human perform to the non-signing robot agent. These initial results give hope that longer-term exposure to a system based on this work may influence long-term learning in this unique population.
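The perception loop described above (eye tracking plus thermal imaging driving attention-soliciting robot behaviour) could be sketched as below. The data class, the nasal-temperature mapping, the weights, and the threshold are all illustrative assumptions for this sketch, not the published RAVE implementation.

```python
from dataclasses import dataclass


@dataclass
class PerceptionFrame:
    """One fused sample from a hypothetical sensor pipeline."""
    gaze_on_agent: bool      # eye tracker: is the infant looking at the robot/avatar?
    nasal_temp_delta: float  # thermal imaging: change in nasal-tip temperature (deg C)


def engagement_score(frames: list[PerceptionFrame]) -> float:
    """Combine gaze and thermal cues into a 0..1 engagement score.

    The 0.7/0.3 weighting and the 0.5 deg C normalisation are
    illustrative choices, not values from the RAVE system.
    """
    if not frames:
        return 0.0
    # Fraction of samples in which the infant attends to the agent.
    gaze_frac = sum(f.gaze_on_agent for f in frames) / len(frames)
    # A drop in nasal-tip temperature is often read as increased arousal;
    # map the mean drop onto 0..1 with a soft cap at 0.5 deg C.
    mean_drop = max(0.0, -sum(f.nasal_temp_delta for f in frames) / len(frames))
    arousal = min(1.0, mean_drop / 0.5)
    return 0.7 * gaze_frac + 0.3 * arousal


def should_solicit_attention(frames: list[PerceptionFrame],
                             threshold: float = 0.4) -> bool:
    """Trigger an attention-soliciting robot behaviour when engagement is low."""
    return engagement_score(frames) < threshold
```

A controller would call `should_solicit_attention` on a sliding window of recent frames and, when it returns `True`, cue the robot to perform an attention-getting gesture before the avatar resumes signing.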
Data types: Live Video, Audio
AI functions: Perceive (pattern recognition), Understand