
Robotic prehension of objects
For:
Customers, third parties, end users, community
Scope:
Outputting the end-effector velocity and rotation vector in response to the view from a red-green-blue-depth (RGB-D) camera mounted on the robot's wrist.
Goal:
Improve Operation Efficiency
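The scope above maps a wrist-camera RGB-D frame to an end-effector velocity and rotation vector. The catalog does not specify how; as a minimal sketch, here is a toy geometric visual-servoing controller (the function name, the proportional gain, and the nearest-point heuristic are all illustrative assumptions; a deployed system would use a learned policy consuming the RGB channels as well).

```python
import numpy as np

def end_effector_command(rgb: np.ndarray, depth: np.ndarray,
                         gain: float = 0.5) -> tuple[np.ndarray, np.ndarray]:
    """Toy visual servoing: steer the wrist toward the nearest visible point.

    Returns (linear_velocity_xyz, rotation_vector_xyz) in the camera frame.
    `rgb` is unused in this purely geometric sketch; a learned policy would
    consume it alongside the depth map.
    """
    h, w = depth.shape
    valid = depth > 0                       # zero depth = no sensor reading
    if not valid.any():
        return np.zeros(3), np.zeros(3)     # nothing visible: stay put
    # Pixel of the closest valid depth reading.
    idx = int(np.argmin(np.where(valid, depth, np.inf)))
    py, px = divmod(idx, w)
    # Normalised image offset from the optical centre, in [-1, 1].
    dx, dy = (px - w / 2) / (w / 2), (py - h / 2) / (h / 2)
    # Proportional control: translate toward the point, rotate to centre it.
    velocity = gain * np.array([dx, dy, depth[py, px]])
    rotation = gain * np.array([-dy, dx, 0.0])
    return velocity, rotation
```

The interface (RGB-D in, twist-like command out) is the part grounded in the use case; everything inside the function body is a placeholder for the actual perception-to-control model.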
Adaptable factory
For:
Component suppliers (sensors, actuators), machine builders, system integrators, plant operators (manufacturer)
Scope:
(Semi-)automatic change of a production system's capacities and capabilities, from both a behavioural and a physical point of view.
Goal:
Improve Operation Efficiency
Empowering autonomous flow meter control: reducing the time taken for meter proving
For:
Process industries; humans
Scope:
Calibration of control devices
Goal:
Other
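Meter proving calibrates a flow meter against a reference prover volume; the core quantity is the meter factor, the ratio of the prover's reference volume to the volume the meter indicated over the same run. A minimal sketch follows (the repeatability tolerance is an assumed figure, and the function names are illustrative, not from the source):

```python
def meter_factor(prover_volume: float, meter_volume: float) -> float:
    """Ratio of the prover's reference volume to the volume the
    flow meter indicated over the same proving run."""
    if meter_volume <= 0:
        raise ValueError("meter reading must be positive")
    return prover_volume / meter_volume

def proving_passes(factors: list[float], tolerance: float = 0.0005) -> bool:
    """A common acceptance check: meter factors from consecutive runs
    must agree within a small repeatability band (tolerance assumed)."""
    return (max(factors) - min(factors)) <= tolerance
```

Automating this loop, rather than running provings on a fixed manual schedule, is what the use case means by reducing proving time.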
Powering remote drilling command centre
For:
Oil and gas upstream sector; environment, humans
Scope:
Oil and gas upstream (deployed on 150 oil rigs, with 2.5 billion+ data points from each)
Goal:
Other
Order-controlled production
For:
Customer, producing companies, broker
Scope:
Automatic distribution of production jobs across dynamic supplier networks
Goal:
Other
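The scope above, automatic distribution of production jobs across dynamic supplier networks, can be sketched as a broker that assigns each incoming job to the cheapest supplier with spare capacity. This greedy rule, the class fields, and all names are illustrative assumptions; real order-controlled production would weigh capability matching, lead times, and logistics.

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    capacity: int        # jobs this supplier can still accept
    cost: float          # assumed flat per-job cost

def distribute(jobs: list[str], suppliers: list[Supplier]) -> dict[str, str]:
    """Greedy broker: each job goes to the cheapest supplier with
    spare capacity. Returns a job -> supplier-name assignment."""
    assignment: dict[str, str] = {}
    for job in jobs:
        open_suppliers = [s for s in suppliers if s.capacity > 0]
        if not open_suppliers:
            break                        # network saturated; job stays queued
        best = min(open_suppliers, key=lambda s: s.cost)
        best.capacity -= 1
        assignment[job] = best.name
    return assignment
```

The "dynamic" part of the network would appear as suppliers entering and leaving the list between broker runs.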
Robotic task automation: insertion
For:
Incorrect AI system use; new security threats
Scope:
Robotic assembly
Goal:
Other
Robotic vision scene awareness
For:
Customers, third parties, end users, community
Scope:
Determining the environment the robot is in and which actions are available to it.
Goal:
Improve Operation Efficiency
Value-based service
For:
Customer (product user), platform provider, service provider, product provider
Scope:
Process and status data from production and product use sources are the raw materials for future business models and services.
Goal:
Other

Robotic vision scene awareness

For:
Customers, third parties, end users, community
Goal:
Improve Operation Efficiency
Problem addressed
Robustly identifying the scene from video and depth sensors, and, from the scene and the objects seen, proposing actions to the human collaborator.
Scope of use case
Determining the environment the robot is in and which actions are available to
it.
Description
Household robots are expected to navigate a very diverse set
of environments and to accomplish different tasks depending
on their position and action set. To meet these goals, they
must quickly and accurately identify the visual context in
which they operate and derive the set of possible actions
from that context. They can then propose relevant actions to
end users, who are spared having to define the context
themselves and then sift through a long list of irrelevant
actions.
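The description above reduces to two steps: classify the scene from RGB-D input, then offer only the actions valid in that scene. A minimal sketch of the second step, assuming a scene classifier exists upstream (the scene labels, action strings, and confidence threshold here are all illustrative, not from the source):

```python
# Assumed scene -> action catalogue; labels and actions are illustrative.
SCENE_ACTIONS = {
    "kitchen":     ["load dishwasher", "wipe counter"],
    "living_room": ["vacuum floor", "fetch remote"],
    "hallway":     ["dock and charge"],
}

def propose_actions(scene_label: str, confidence: float,
                    threshold: float = 0.7) -> list[str]:
    """Offer context-relevant actions only when the upstream scene
    classifier (not shown; any RGB-D model would do) is confident
    enough. An unknown or low-confidence scene yields no proposals,
    leaving the decision to the user."""
    if confidence < threshold:
        return []
    return SCENE_ACTIONS.get(scene_label, [])
```

Gating on classifier confidence is what keeps the robot from presenting the user with actions drawn from the wrong context.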
[Figure: processing pipeline — Perceive (Sensor Network / IoT, Live Video) → Understand (Computer Vision, Decision Tree)]