
Large Companies
NLP - Text summarization
For:
Media Intelligence Analysts
Scope:
Using Abstractive Text Summarization to reduce analysis costs for media monitoring.
Goal:
Improve Operation Efficiency
Robotic prehension of objects
For:
Customers, 3rd parties, end users, community
Scope:
Outputting the end effector velocity and rotation vector in response to the view from a red-green-blue-depth (RGB-D) camera located on a robot's wrist.
Goal:
Improve Operation Efficiency
Adaptable factory
For:
Component suppliers (sensors, actuators), machine builders, system integrators, plant operators (manufacturer)
Scope:
(Semi-)automatic change of a production system's capacities and capabilities from a behavioural and physical point of view.
Goal:
Improve Operation Efficiency
Empowering autonomous flow meter control - reducing time taken for proving of meters
For:
Process industries; humans
Scope:
Calibration of control devices
Goal:
Other
Device control using AI consisting of cloud computing and embedded system
For:
Equipment users, manufacturers, distributors
Scope:
Learn the user's preferred temperature in each situation for the control of home appliances (air conditioning equipment).
Goal:
Other
Next Century Workforce: Partnering humans & robots to drive efficiency & growth
For:
Financial advisors, bank employees
Goal:
Improved Employee Efficiency
Powering remote drilling command centre
For:
Oil and gas upstream sector; environment, humans
Scope:
Oil and gas upstream (deployed on 150 oil rigs, with 2.5 billion+ data points each)
Goal:
Other
Order-controlled production
For:
Customer, producing companies, broker
Scope:
Automatic distribution of production jobs across dynamic supplier networks
Goal:
Other
Robotic task automation: insertion
For:
Incorrect AI system use; new security threats
Scope:
Robotic assembly
Goal:
Other
Robotic vision scene awareness
For:
Customers, 3rd parties, end users, community
Scope:
Determining the environment the robot is in and which actions are available to it.
Goal:
Improve Operation Efficiency
Value-based service
For:
Customer (product user), platform provider, service provider, product provider
Scope:
Process and status data from production and product use sources are the raw materials for future business models and services.
Goal:
Other

Robotic prehension of objects

For:
Customers, 3rd parties, end users, community
Goal:
Improve Operation Efficiency
Problem addressed
Use reinforcement learning to train the robot to grasp miscellaneous objects in simulation, and transfer this learning to real-life robots.
Scope of use case
Outputting the end effector velocity and rotation vector in response to the view from a red-green-blue-depth (RGB-D) camera located on a robot's wrist.
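The input/output interface described in this scope can be sketched as a policy function mapping a single wrist-camera frame to an end-effector command. This is a minimal illustrative sketch, not the actual system: the function name is hypothetical and the zero-action body is a placeholder where a trained model would go.

```python
import numpy as np

# Minimal sketch of the control interface described in the scope: a policy
# that maps one RGB-D frame from the wrist camera to an end-effector command
# (linear velocity plus a rotation vector). Placeholder logic only.

def policy(rgbd_frame):
    """rgbd_frame: H x W x 4 array (red, green, blue, depth channels)."""
    assert rgbd_frame.ndim == 3 and rgbd_frame.shape[2] == 4
    velocity = np.zeros(3)         # (vx, vy, vz) command in the camera frame
    rotation_vector = np.zeros(3)  # axis-angle rotation command
    return velocity, rotation_vector

v, r = policy(np.zeros((64, 64, 4)))
```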
Description
It can be very difficult and time-consuming for users to perform fine movements with a robot arm, such as grasping various household objects. To mitigate this problem, attempts are made to give users the ability to control the arm at a higher level of abstraction: rather than specifying each translation and rotation of the arm, we would like them to be able to select an object to grasp and have the arm grasp it automatically. This requires some degree of computer vision to detect objects in the robot's field of view (a camera would be affixed to its wrist).
With that achieved, we would be able to focus on grasping an object selected from the detections. Based on the current literature on robotic grasping, one may be tempted to start from a heuristic, geometric approach, that is, to use a set of pre-established rules for picking up objects: for example, executing pincer grasps from the top along the thinnest dimension of the object that is not too narrow to be grasped. Such approaches work reasonably well in conditions that match the restrictive assumptions on which the rules are built, but fail when encountering even small deviations from those conditions (for example, they do not adapt well to clutter). Attempting to list and plan a proper response to all such failure cases heuristically would be an exercise in futility.
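A heuristic rule of the kind described above can be sketched in a few lines. This is an illustrative assumption, not the document's actual method: the function name and gripper width limits are made up for the example.

```python
# Hypothetical sketch of a heuristic pincer-grasp rule: grasp from the top
# along the thinnest object dimension that still fits the gripper's jaws.
# The width thresholds are illustrative, not from any real gripper spec.

def choose_grasp_axis(dims, min_width=0.01, max_width=0.08):
    """Return the axis index of the thinnest graspable dimension, or None.

    dims      -- (x, y, z) extents of the object's bounding box in metres
    min_width -- narrowest width the gripper can reliably pinch
    max_width -- widest opening of the gripper
    """
    candidates = [(w, i) for i, w in enumerate(dims)
                  if min_width <= w <= max_width]
    if not candidates:
        return None  # the rule fails: no dimension fits the gripper
    return min(candidates)[1]
```

The failure mode discussed in the text shows up directly: any object whose every dimension falls outside the assumed width range returns `None`, and the rule offers no recovery.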
In contrast, approaches based on machine learning can generalize to unforeseen or novel situations and, as in the case of object detection, generally perform better than heuristic solutions. Machine-learning-based approaches to grasping and object manipulation vary widely. At the simplest level, we can predict the likelihood of grasp success based on an image patch of an object and a given angle of approach; robot control, in such cases, is beyond the scope of the machine learning model. However, methods can scale up to end-to-end systems that learn to control the robot at the level of its joint actuators in response to a visual stimulus consisting of a bird's-eye view of the arm and several objects placed in a bin.
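The "simplest level" mentioned above, scoring grasp success from an image patch and an approach angle, can be sketched as a small logistic model. Everything here is a placeholder assumption: the toy features, the random weights, and the function names stand in for a model that a real system would learn from grasp trials.

```python
import numpy as np

# Illustrative sketch: score the likelihood of grasp success from an image
# patch of the object plus a candidate approach angle, then pick the best
# angle. Features and weights are placeholders, not a trained model.

rng = np.random.default_rng(0)

def patch_features(patch):
    # Toy featurization: mean intensity plus mean gradient magnitudes.
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), np.abs(gx).mean(), np.abs(gy).mean()])

def grasp_success_probability(patch, angle, weights, bias):
    # Logistic model over patch features and the sin/cos of the angle.
    x = np.concatenate([patch_features(patch), [np.sin(angle), np.cos(angle)]])
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

# Score several candidate approach angles and keep the highest-scoring one.
patch = rng.random((32, 32))
weights = rng.normal(size=5)
angles = np.linspace(0, np.pi, 8)
best = max(angles, key=lambda a: grasp_success_probability(patch, a, weights, 0.0))
```

As the text notes, a model at this level only ranks candidate grasps; actually moving the arm is left to a separate controller.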
[Figure: perceive (live video, audio) → understand (reinforcement learning)]