Robotic prehension of objects
For: Customers, 3rd parties, end users, community
Goal: Improve operation efficiency
Problem addressed:
Use reinforcement learning to train the robot to grasp misc. objects in simulation
and transfer this learning to real-life robots.
Scope of use case
Outputting the end effector velocity and rotation vector in response to the view
from a red green blue depth (RGB-D) camera located on a robot's wrist.
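The scoped input/output mapping above can be sketched as a toy policy: a function from a wrist-camera RGB-D observation to a 6-D action (end-effector velocity plus rotation vector). This is a minimal illustrative stand-in, not the system described; the linear weights, image size, and class name are assumptions.

```python
import numpy as np

def flatten_rgbd(rgb, depth):
    """Stack an RGB image (H, W, 3) and a depth map (H, W) into one
    4-channel observation and flatten it for a linear policy."""
    obs = np.concatenate([rgb, depth[..., None]], axis=-1)
    return obs.reshape(-1)

class LinearGraspPolicy:
    """Toy stand-in for the learned policy: maps the RGB-D view from
    the wrist camera to a 3-D end-effector velocity and a 3-D rotation
    vector, matching the use case's stated outputs. A real system
    would use a trained deep network instead of random linear weights."""
    def __init__(self, obs_dim, rng):
        self.W = rng.standard_normal((6, obs_dim)) * 0.01  # placeholder weights

    def act(self, obs):
        a = self.W @ obs
        return a[:3], a[3:]  # (velocity, rotation vector)

# Usage with a synthetic 8x8 RGB-D frame:
rng = np.random.default_rng(0)
rgb = rng.random((8, 8, 3))
depth = rng.random((8, 8))
obs = flatten_rgbd(rgb, depth)
policy = LinearGraspPolicy(obs.size, rng)
vel, rot = policy.act(obs)
```

In a reinforcement-learning setup, these weights would be optimized in simulation against a grasp-success reward before transfer to the real robot.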
Description
It can be very difficult and time-consuming for users to
perform fine movements with a robot arm, like grasping
various household objects. To mitigate this problem,
attempts are made to give users the ability to control the arm
at a higher level of abstraction; thus, rather than specifying
each translation and rotation of the arm, we would like them
to be able to select an object to grasp, and have the arm grasp
it automatically. This requires some degree of computer
vision to be able to detect objects in the robot's field of view
(a camera would be affixed to its wrist).
With that achieved, we would be able to focus on grasping an
object selected from the detections. Based on current
literature on robotic grasping, one might be tempted to start
from a heuristic, geometric approach; that is, to use a set of
pre-established rules for picking up objects: for example,
executing pincer grasps from the top along the thinnest
dimension of the object that is not too narrow to be grasped.
Such approaches work reasonably well in conditions that
match the restrictive assumptions on which the rules are
built, but fail when encountering even small deviations from
those conditions (for example, they do not adapt well to
clutter). Attempting to list and plan a proper response to all
such failure cases heuristically would be an exercise in
futility.
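The pincer-grasp rule mentioned above can be sketched in a few lines, which also makes its brittleness visible: the logic only covers isolated, axis-aligned objects. The width limits and function name are illustrative assumptions, not values from the source.

```python
def top_down_pincer_axis(dims_mm, min_width_mm=5.0, max_width_mm=85.0):
    """Heuristic from the text: grasp from the top along the thinnest
    dimension of the object that is not too narrow (or too wide) for
    the gripper. dims_mm is the (x, y) footprint of an axis-aligned
    object seen from above; width limits are illustrative placeholders.
    Returns the axis index to close the gripper along, or None if no
    dimension is graspable. Clutter, occlusion, and odd shapes are
    simply outside this rule's assumptions, which is the failure mode
    the text describes."""
    candidates = [(w, i) for i, w in enumerate(dims_mm)
                  if min_width_mm <= w <= max_width_mm]
    if not candidates:
        return None
    return min(candidates)[1]  # thinnest graspable dimension

# A 30 mm x 120 mm box: close along x (axis 0), the thinnest valid side.
axis = top_down_pincer_axis((30.0, 120.0))
# A 120 mm x 200 mm box: both sides exceed the gripper opening.
no_axis = top_down_pincer_axis((120.0, 200.0))
```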
In contrast, approaches based on machine learning can
generalize to unforeseen or novel situations, and, as in the
case of object detection, generally perform better than
heuristic solutions. Machine learning-based approaches to
grasping and object manipulation vary widely. At the
simplest level, we can predict the likelihood of grasp success
based on an image patch of an object and a given angle of
approach. Robot control, in such cases, is beyond the scope
of the machine learning model. However, methods can scale
up to end-to-end systems that learn to control the robot at
the level of its joint actuators in response to a visual stimulus
consisting of a bird's-eye view of the arm and several objects
placed in a bin.
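The simplest approach in the spectrum above, scoring an image patch plus a candidate approach angle for grasp-success likelihood, can be sketched as a logistic model. The weights here are untrained placeholders and all names are assumptions; in practice the image features would come from a convolutional network and the weights from training on grasp outcomes.

```python
import numpy as np

def grasp_success_prob(patch, angle_rad, w_img, w_angle, b):
    """Toy grasp-success predictor: a logistic score over flattened
    image-patch pixels plus the sine/cosine of the approach angle.
    Robot control is outside the model, as the text notes."""
    features = np.append(patch.ravel(), [np.sin(angle_rad), np.cos(angle_rad)])
    weights = np.append(w_img, w_angle)
    logit = features @ weights + b
    return 1.0 / (1.0 + np.exp(-logit))

def best_angle(patch, angles, w_img, w_angle, b):
    """Score several candidate approach angles and keep the best one;
    a planner would then execute the grasp at that angle."""
    return max(angles, key=lambda a: grasp_success_prob(patch, a, w_img, w_angle, b))

# Usage with a synthetic 16x16 patch and placeholder weights:
rng = np.random.default_rng(1)
patch = rng.random((16, 16))
w_img = rng.standard_normal(patch.size) * 0.01
w_angle = np.array([0.5, -0.2])
angles = np.linspace(0.0, np.pi, 8)
p = grasp_success_prob(patch, angles[0], w_img, w_angle, b=0.0)
a_star = best_angle(patch, angles, w_img, w_angle, b=0.0)
```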
Data: Live Video, Audio
AI functions: Perceive, Understand
AI method: Reinforcement learning