Core components
Manufacturing Process Management System (MPMS)
Description
The MPMS is the collection of subsystems responsible for orchestrating the tasks of agents in the manufacturing processes. Orchestration depends on the design of the processes and agents. The MPMS therefore includes the functionality to design processes, describe agents, and execute the processes by deciding on the next activities to be executed and assigning activities to agents. The figure below shows the process management functionality, embodied by the MPMS, as a function of horizontal and vertical integration. Horizontal integration refers to the interoperability between the manufacturing processes and other management or support processes in the enterprise. Vertical integration refers to the link between the process management and the resources located on the factory floor.
Conceptual illustration of the MPMS in relation to the work cells
An important scoping dimension to mention is the distinction between global and local functions. The software aspect of the architecture contains layers corresponding to the notions of global and local: local includes all activities and objects within a single work cell, while anything that crosses work cells is considered global. This distinction is used as a starting point to establish a scoping statement that does not depend on the physical hierarchy of the manufacturing system.
A manufacturing process consists of activities, events, gateways and connectors. Activities may be sub-processes or tasks. A single process may contain multiple tasks, located and performed in multiple work cells. A task is assigned to and performed by a team of one or more agents; this team may be a virtual team that only exists for the duration of the task execution. A single task is entirely contained within a single work cell for the duration of the task. For this reason, a task is considered the smallest unit of work that appears in the global layer.
The MPMS functionality can therefore be placed in the global layer of the HORSE logical architecture. It consists of three system modules and a single data store within the larger HORSE System (see the figure above).
On the HORSE Design Global level, the functionality is roughly divided between the process design and agent design sub-systems. These sub-systems can be used independently, as needed for the task at hand.
The Process Design module contains the functionality to (re-)design manufacturing processes. Results of design activities are stored in the Process/Agent Definitions data store; in case of redesign, the input is retrieved from this database. The module enables the visual modelling of a manufacturing process, consisting of tasks, events, gateways and connectors, and links the tasks to the task definitions and agent definitions.
The Agent Design module contains the functionality to design manufacturing agents, i.e., to describe their relevant characteristics, including their competences, authorisations and performance indicators.
A more detailed design of the functionalities of the MPMS Global Design modules can be found in the figure below.
The product definitions data store is maintained and populated by external information systems, typically PLM or computer-aided design software. Task/step and cell data is populated in the local layer of the HORSE System.
The HORSE Exec Global sub-system contains the modules involved in execution of manufacturing processes. The figure below shows an elaboration of the MPMS Global Execution modules.
These modules provide the functions used to enact a sequence of tasks, assign agents to those tasks and provide the agents with necessary information. Exception handling and performance tracking modules are also included. Finally, the Production Execution Monitoring module supports real-time monitoring of manufacturing execution in terms of processes, orders, and agents (human and automated).
Together, the MPMS modules in the Design Global and Execution Global layers support the automated orchestration of the manufacturing process while ensuring horizontal integration of the process activities and vertical control of all agents in the process.
The Hybrid Task Supervisor is the component responsible for the local execution of a task in a work cell by both the human operators and the robots. It receives task execution requests from the MPMS and keeps track of the progress of the task execution. Tasks are defined through the user-friendly graphical interface available in the HORSE framework.
When a request is received, the Hybrid Task Supervisor retrieves the information related to the requested task in order to activate the autonomous agents in the work cell. Furthermore, after processing a request, this component sends a message to the MPMS global level to notify it of the start time of the execution of the task involved. A similar notification is sent after the completion of the task, allowing the workflow of the entire process to continue.
In addition, the Hybrid Task Supervisor keeps track of the progress of the task during execution and also receives information about anomalies, such as obstacles or unexpected humans blocking robot trajectories. In this case the component is responsible for sending an alert to the global level.
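To make the notification flow concrete, the sketch below walks through the handling of one task execution request in Python. The message names and fields (task_started, anomaly, task_completed) are illustrative assumptions; the actual messages follow the HORSE middleware specification introduced in the next section.

```python
# A minimal sketch of the notification flow: start, optional anomaly alerts,
# completion. Message contents are assumed for illustration only.
import time


def notify(message):
    """Placeholder for publishing a message to the MPMS global level via the middleware."""
    print("to MPMS:", message)


def execute_task(request, run_step):
    """Handle one task execution request from the MPMS inside the work cell."""
    notify({"event": "task_started", "taskId": request["taskId"], "time": time.time()})

    for step in request["steps"]:
        anomaly = run_step(step)   # activates the local agents for this step
        if anomaly:                # e.g. an obstacle blocking a robot trajectory
            notify({"event": "anomaly", "taskId": request["taskId"], "detail": anomaly})

    notify({"event": "task_completed", "taskId": request["taskId"], "time": time.time()})


# Example usage with a stubbed step runner that reports one anomaly.
execute_task({"taskId": "T-42", "steps": ["approach", "grasp", "place"]},
             run_step=lambda step: "obstacle in trajectory" if step == "grasp" else None)
```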
The HORSE middleware is a software solution that overcomes the heterogeneity of the HORSE software components by adopting widely used standards. It is realised through a messaging infrastructure with a star topology, in which the individual components (nodes) communicate with each other through a local broker. The components can be organised into functional domains, each represented by a broker, with all brokers communicating with each other through a dispatcher. JSON-formatted messages are exchanged over the WebSockets low-level communication protocol. This allows the HORSE Message Node specification to be implemented as part of every HORSE module, with no additional constraints on the programming language or the execution environment.
Figure 2: HORSE Messaging Middleware components
The message-driven collaboration between the major HORSE components decouples their implementations from the agreed interfaces. This in turn promotes the continuous development and testing of all components as the maturity of the implemented functionality increases. The biggest benefit of such an approach is that integrating a new component into the framework requires only the development of a WebSockets-based communication client and the processing of the messages exchanged between the new component and the rest of the HORSE framework.
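As an illustration of this integration pattern, the following Python sketch shows a minimal WebSockets client that registers with a local broker and exchanges JSON messages. The broker URL, message fields and topic names are assumptions for illustration only; they are not taken from the actual HORSE Message Node specification.

```python
# Minimal sketch of a middleware client: connect to a (hypothetical) local broker
# over WebSockets and exchange JSON-formatted messages.
import asyncio
import json

import websockets  # pip install websockets


async def run_client():
    async with websockets.connect("ws://localhost:9000") as socket:  # assumed broker endpoint
        # Announce the new component to its local broker (assumed registration message).
        await socket.send(json.dumps({
            "type": "register",
            "node": "example-component",
        }))

        # Publish an application event; other nodes receive it via their brokers.
        await socket.send(json.dumps({
            "type": "event",
            "topic": "task/started",
            "payload": {"taskId": "T-42", "workCell": "cell-1"},
        }))

        # Process messages forwarded by the broker.
        async for raw in socket:
            message = json.loads(raw)
            print("received:", message.get("type"), message.get("payload"))


if __name__ == "__main__":
    asyncio.run(run_client())
```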
The HORSE-ROS bridge interface allows easy communication between native ROS nodes (using the open-source framework "Robot Operating System") and nodes using the HORSE middleware.
This interface permits middleware clients to use the full ROS functionality available to native ROS nodes. The forwarding of HORSE events originating at native ROS nodes to middleware nodes is supported as well, and the bridge offers a ROS service interface to forward arbitrarily complex messages.
The HORSE-ROS bridge is a useful interface to connect ROS-based components to nodes using the HORSE middleware. For example, it allows the user to use ROS hardware interfaces to communicate with the other HORSE components, and it can easily be used to connect software and hardware components already integrated with ROS to the HORSE framework.
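The sketch below illustrates the ROS side of such a connection: a small rospy node that subscribes to a ROS topic and forwards the received data to a middleware broker as a JSON event. The topic name, broker endpoint and message envelope are assumptions; the actual HORSE-ROS bridge provides its own service interface for this purpose.

```python
# Illustrative sketch: forward data published by a native ROS node to a HORSE
# middleware broker as JSON. Topic name, broker URL and message layout are assumed.
import json

import rospy
from std_msgs.msg import String
import websocket  # pip install websocket-client (synchronous client)

broker = websocket.create_connection("ws://localhost:9000")  # assumed broker endpoint


def forward_to_middleware(msg):
    # Wrap the ROS message payload in an (assumed) HORSE event envelope.
    broker.send(json.dumps({
        "type": "event",
        "topic": "ros/status",
        "payload": msg.data,
    }))


if __name__ == "__main__":
    rospy.init_node("ros_to_horse_forwarder")
    rospy.Subscriber("/cell_status", String, forward_to_middleware)
    rospy.spin()
```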
Interface to industrial equipment: HORSE-BOSCH adapter
The HORSE-BOSCH adapter (Figure 3) was developed as a bridge between the HORSE Message Broker and the corresponding Bosch industrial equipment: the Visual Control system, the conveyor belt and a beacon. The module provides support for EtherCAT, PLC and OPC UA; additional protocols can easily be integrated.
The Bosch Adapter is a set of OSGi components deployable on a networked PC equipped with an EtherCAT Master Card and Java (for the OSGi framework).
Although the component is not necessarily applicable in every use case, it is a working example of integrating the HORSE framework with the existing infrastructure and control software of a factory. Thus, it can be used as a basis for developing similar interfaces for different applications.
Figure 3: The Bosch adapter and Bosch machines
Augmented Reality for assembly
Description
The Augmented Reality (AR) for assembly component displays information which improves, on the one hand, the efficiency and quality of work (e.g. assembly instructions) and, on the other, the safety and working conditions (e.g. safety zones). The information is projected directly onto the assembly table where parts are worked on, and the component also processes input from the user (e.g. his or her gestures) and displays the corresponding information on the table.
Using the component requires setting up a workcell consisting of an overhead projector and an RGB-D sensor (e.g. Kinect) used to track the motion of the operator. Proper operation of the component requires calibration of the relative positions of the workcell components, as well as defining the overlays to be displayed and the reactions to user actions (e.g. using virtual buttons displayed on the assembly table). The component is fully integrated with the HORSE middleware messaging system.
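One calibration step that such a setup typically requires is estimating the mapping between the camera view of the table and the projector image, so that overlays and virtual buttons land where the operator sees them. The sketch below shows this with an OpenCV homography; the point correspondences are placeholder values that would in practice come from detecting projected calibration markers in the RGB-D image.

```python
# Minimal sketch of a camera-to-projector calibration for a projected AR table.
import numpy as np
import cv2

# Corresponding points: where calibration markers were projected (projector pixels)
# and where the camera saw them (camera pixels). Placeholder values.
projector_pts = np.array([[100, 100], [1820, 100], [1820, 980], [100, 980]], dtype=np.float32)
camera_pts = np.array([[212, 158], [1065, 149], [1081, 655], [220, 668]], dtype=np.float32)

# Homography mapping camera coordinates to projector coordinates.
H, _ = cv2.findHomography(camera_pts, projector_pts)

# Example: a fingertip detected at a camera pixel is mapped into projector space
# to test whether it hits a virtual button drawn there.
fingertip_cam = np.array([[[640.0, 360.0]]], dtype=np.float32)
fingertip_proj = cv2.perspectiveTransform(fingertip_cam, H)
print("fingertip in projector coordinates:", fingertip_proj.ravel())
```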
Demonstration
This component was initially developed for the TRI use case. However, it has already been successfully transferred to other applications.
Figure 4: The AR for assembly
Augmented Reality for quality inspection
The Augmented Reality for quality inspection component assists the human operators of a factory in performing efficient visual quality checks of the handled part. The component is responsible for projecting additional information (e.g. highlighting the inspection points) directly onto the part held by the robot or placed in a known position. The functionalities of the component are provided as a set of ROS actions triggered via the HORSE-ROS bridge described above. In case a robot is used to manipulate the part, the robot control and the AR are synchronised by the Hybrid Task Supervisor.
Figure 5: An exemplary part with a control point (label) highlighted
In order to use the component in a different use case, it is necessary to set up a workcell with an overhead projector, a camera and, optionally, a robot arm. This needs to be followed by an optical and spatial calibration of the elements of the workcell and setting up the overlays to be projected and, again optionally, the robot arm positions.
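The sketch below shows how such a ROS action could be triggered from a client node using actionlib. The action name, package and goal fields (HighlightInspectionPointAction, part_id, point_id) are hypothetical stand-ins for the component's real action definitions, which are not reproduced here.

```python
# Hedged sketch of an actionlib client triggering a quality-inspection overlay.
import rospy
import actionlib

# Hypothetical action type generated from a .action file of the AR component.
from ar_quality_inspection.msg import (  # assumed package and message names
    HighlightInspectionPointAction,
    HighlightInspectionPointGoal,
)

if __name__ == "__main__":
    rospy.init_node("quality_inspection_client")

    client = actionlib.SimpleActionClient(
        "highlight_inspection_point", HighlightInspectionPointAction)
    client.wait_for_server()

    # Ask the AR component to highlight one control point on the inspected part.
    goal = HighlightInspectionPointGoal(part_id="shroud-A", point_id="label-3")
    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("inspection overlay result: %s", client.get_result())
```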
Demonstration
This component was demonstrated in the BOSCH use case.
Collision detection and avoidance
Description
The Collision Detection and Prevention component ensures safety during any human-robot collaboration in a shared workspace.
This component can be used in every use case that requires a human operator to enter the robot workspace: it identifies and avoids upcoming collisions and improves efficiency by steering the robot towards areas away from obstacles.
Factory automation has revolutionized manufacturing over the last years, but there is still a large set of manufacturing tasks that are tedious or strenuous for humans to perform. Some of these tasks, such as electronics or aircraft assembly, are difficult to automate because they require workers to collaborate in close proximity and adapt to each other's decisions and motions, which robots cannot currently do. Rather than automating such tasks fully (which may not be possible and/or cost-effective), the HORSE consortium believes that human-robot collaboration enables safe and effective task execution while reducing tedium and strain for the human.
For example, mobile manipulators can supply different work stations with parts and perform standard assembly tasks, while human workers perform more complex tasks in the same workspace.
To allow for such shared human-robot workspaces in cluttered environments, robots have to be able to avoid collisions with static and dynamic obstacles while executing their original tasks. This involves both monitoring the robot environment to detect obstacles and motion control that avoids collisions while moving the robot along reference trajectories determined by a high-level planning layer in order to fulfil the robot task.
At the basis of the HORSE Collision Detection and Prevention component is the GPU-Voxels framework, which can be used for monitoring and planning applications in 3D and performs all computationally expensive calculations on the GPU. GPU-Voxels is a novel approach to live environment representation; most similar approaches are not voxel-based and cannot offer a comparable level of detail and response time. The component allows the robot to automatically switch from its currently executed plan to a new one when dynamic changes in the environment prohibit further progress towards the current goal, avoiding idle waits until the blocked path clears.
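The following self-contained toy sketch illustrates this replan-on-blockage behaviour on a small 2D occupancy grid instead of a GPU voxel map. It does not use the GPU-Voxels API; it only shows the principle of switching to a new plan as soon as a dynamic obstacle blocks the current one, instead of stopping and waiting for the path to clear.

```python
# Toy sketch: replan around a dynamic obstacle on a 2D occupancy grid.
from collections import deque


def plan_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid (1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None


grid = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
pose, goal, step = (0, 0), (2, 3), 0
path = plan_path(grid, pose, goal)

while pose != goal:
    # Simulate the environment monitor: a person steps onto a cell further along the plan.
    if step == 1:
        r, c = path[path.index(pose) + 2]
        grid[r][c] = 1

    # If the remaining plan crosses an occupied cell, switch to a new plan immediately.
    if any(grid[r][c] for r, c in path[path.index(pose):]):
        path = plan_path(grid, pose, goal)

    pose = path[path.index(pose) + 1]  # execute the next step of the current plan
    step += 1
    print("robot at", pose)
```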
Demonstration
This component has been demonstrated in the FZI Competence Center. To request a demo, please contact FZI. Contact information can be found in the contacts section.
Situation Awareness
Description
Smart factories could significantly increase productivity and improve operators' working conditions in the manufacturing industry. They involve fenceless collaboration between robots and humans, whose safety needs to be ensured. Specifically, safety stops must be avoided because they may considerably slow down production (safety protocol verification, re-launching the production line, etc.).
The HORSE project provides a solution through a situation awareness mechanism that prevents safety stops and adapts the agents' behaviour when a critical situation is detected.
The situation awareness mechanism of the HORSE framework takes into account all the data related to the agents to predict a hazard, warn the operator and revise the robot's task accordingly. This module is hardware independent and is configured with the agents and the sensors participating in the process.
Example of application scenario
In a use case deploying a mobile base (AGV), one essential issue is to guarantee the safety of the operators who share the same space with the robot. As shown in the figure below (left side), a collision may occur between a human agent leaving a workcell and a mobile base entering the same workcell. The mobile base is able to detect collisions, but this leads to an emergency stop which slows down the task. The situation awareness mechanism gathers all the data in the environment, including the operator and robot positions, and adapts the robot behaviour to avoid the collision (scenario B, on the right side).
Figure 6: Example of application scenario
How does it work?
The situation awareness module (shown in the figure below) is decomposed into two HORSE components: Event Processing and Global Safety Guard. The Event Processing component detects critical events, and the Global Safety Guard relies on a reasoning system and a planner to generate a new action plan for the appropriate agents.
Figure 7: Situation awareness mechanism.
- Data are gathered from the devices and the agents participating in the workcell;
- A critical event is raised when an anomaly may occur;
- Relevant information from the environment is collected by the Global Safety Guard, which reasons about the environment;
- An action plan is generated and sent to the concerned agents (sketched below).
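The sketch below illustrates these steps with assumed message contents and a deliberately simple distance rule: the Event Processing part raises a critical event when the AGV's next move would bring it too close to the operator, and the Global Safety Guard part turns that event into a new action plan for the concerned agent. Field names and thresholds are illustrative, not the actual HORSE reasoning system.

```python
# Hedged sketch of the Event Processing -> Global Safety Guard pipeline.
import math

SAFETY_DISTANCE = 1.5  # metres, assumed threshold


def detect_critical_event(operator_pos, agv_pos, agv_target):
    """Event Processing: raise a critical event if the AGV's next move closes in on the operator."""
    next_distance = math.dist(operator_pos, agv_target)
    if next_distance < SAFETY_DISTANCE:
        return {"event": "predicted_collision", "agent": "agv-1",
                "operator": operator_pos, "distance": round(next_distance, 2)}
    return None


def global_safety_guard(event, agv_pos):
    """Global Safety Guard: generate an alternative plan for the agent named in the event."""
    if event is None:
        return {"agent": "agv-1", "plan": "continue"}
    # Toy "planner": slow down and take a detour waypoint away from the operator.
    ox, oy = event["operator"]
    detour = (agv_pos[0], oy + SAFETY_DISTANCE * 2)
    return {"agent": event["agent"], "plan": "detour", "waypoint": detour, "speed": 0.2}


# Example: operator leaving the workcell while the AGV heads for the same doorway.
event = detect_critical_event(operator_pos=(2.0, 1.0), agv_pos=(0.0, 0.0), agv_target=(1.8, 0.5))
print(global_safety_guard(event, agv_pos=(0.0, 0.0)))
```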
For further information, please contact CEA. Contact information can be found in the contacts section.
Visual inspection with deep neural networks (DIANNE)
Description
Visual quality inspection of manufactured parts is a strenuous and error-prone task, as it is very repetitive while requiring the constant attention of the operator. DIANNE is a framework that deploys deep neural networks to automate this process. The system is constantly trained and updated using feedback from the operators. Better quality control means fewer faulty products shipped to customers and fewer good products incorrectly discarded. Moreover, an AI system is more consistent in its decisions, independent of concentration level or time of day.
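As a hedged illustration of the kind of model such a system deploys, the sketch below defines a small convolutional network that labels an inspection image as "ok" or "defect" and runs one training step. The architecture, image size and training loop are illustrative only; DIANNE's actual models and operator-feedback pipeline are not shown.

```python
# Illustrative binary defect classifier in PyTorch (not DIANNE's actual network).
import torch
import torch.nn as nn


class DefectClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, 2)  # assumes 128x128 input images

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))


model = DefectClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch; in practice the batch would
# come from inspection images labelled (and later corrected) by the operators.
images = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```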
Demonstration
The system was demonstrated at Ophardt Belgien, which produces aluminium shrouds for soap dispensers. We demonstrated the deracking of a shroud, capturing the quality control images and showing the quality result, as seen in the pictures below. Our vision system for detecting black lines achieved an accuracy of 97% on an unseen test set.
The following functions were demonstrated:
- Cobot to derack shrouds from a rack, and put those on an evaluation conveyor belt.
- Cobot to fetch shrouds from the evaluation conveyor, hold them under a light, and capture images with a machine vision camera.
- Evaluate the camera images, and show the results on screen to the operator so he/she can approve or correct.
The demonstration consisted of the following integrations:
- Kuka iiwa program for deracking the shrouds
- UR3 and camera controlled via ROS for capturing the images
- DIANNE, a framework for deploying deep neural networks for quality control.
6D MIMIC Framework (programming by demonstration)
Description
The 6D MIMIC Framework is a programming-by-demonstration component developed in C#. The 6D MIMIC programming solution ultimately makes it possible to transfer human expertise to industrial robots. The framework introduces a new means of human motion tracking based on an innovative marker, together with a collection of routines that provide a real-time interface with the industrial robot. No knowledge of programming concepts is needed to program the robot.
The process of transferring human know-how to the robot consists of the coating expert, with the robot stopped, spraying a prototype object while using the developed teaching solution: a non-intrusive marker coupled to the coating gun. The system, through a set of stereoscopic perception sensors, acquires the pose of the marker and computes the corresponding robot trajectories, thereby transferring the coating skill to the industrial robot.
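The simplified sketch below illustrates the pose-to-trajectory step described above: recorded 6D marker poses are downsampled into robot waypoints. The actual 6D MIMIC framework is implemented in C# and includes calibration, filtering and robot-specific trajectory generation that are not shown here; the data and threshold are placeholders.

```python
# Simplified sketch: downsample recorded 6D marker poses into robot waypoints.
from dataclasses import dataclass
from typing import List


@dataclass
class Pose6D:
    t: float           # timestamp [s]
    position: tuple    # (x, y, z) in metres
    orientation: tuple # quaternion (qx, qy, qz, qw)


def to_waypoints(samples: List[Pose6D], min_step: float = 0.01) -> List[Pose6D]:
    """Keep a pose only if the gun tip moved at least `min_step` metres since the last waypoint."""
    waypoints = [samples[0]]
    for pose in samples[1:]:
        last = waypoints[-1].position
        dist = sum((a - b) ** 2 for a, b in zip(pose.position, last)) ** 0.5
        if dist >= min_step:
            waypoints.append(pose)
    return waypoints


# Placeholder recording of the marker while the expert sprays the prototype part.
recording = [Pose6D(t=i * 0.02, position=(0.5, 0.001 * i, 0.3), orientation=(0, 0, 0, 1))
             for i in range(500)]
trajectory = to_waypoints(recording)
print(f"{len(recording)} samples reduced to {len(trajectory)} robot waypoints")
```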
Demonstration
Under the HORSE experiment FLEXCoating, the system was developed to address the set of functional requirements associated with a collaborative coating operation between an operator and a robot. The achievement of these functional requirements was validated in a realistic industrial environment according to a list of KPIs, namely:
- Production Quality: reflecting a visual qualitative evaluation, taking into consideration the uniform distribution of the coating material as well as its thickness. It was verified in the industrial demonstration that the production quality of the collaborative coating cell was improved by up to 30%, depending on the part, in comparison with the fully manual/fully automated approach, as the human operator could correct mistakes produced by the robot.
- Throughput Increase: the overall throughput increase was estimated at around 15%, due to the increased throughput achieved on complex parts, which means a significant increase in the competitiveness of the end-user.
- Optimisation of operator's usage: the resulting collaborative coating cell allows operators to focus where they are most effective. Specifically, by letting operators concentrate on operations which cannot be effectively automated, the operator's time allocation is optimised, estimated by the end-user at around 10% on average for the overall process.
Dissemination Video: https://www.youtube.com/watch?v=MWhkrRbPB_o
Technical Video: https://www.youtube.com/watch?v=3IZLhLpHyHE
Object Localisation Pipeline
Description
The Object Localization Pipeline is a software component developed in C++ with a Robot Operating System (ROS) wrapper that exposes the functionality of localizing an object as a ROS action. The module communicates with the Photoneo PhoXi 3D perception system through a modified ROS package that acts as the interface between the perception system and ROS. This ROS package in turn communicates with the PhoXi Control application through shared memory; PhoXi Control is the main application (driver) used to operate the PhoXi 3D scanner.
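The sketch below shows, in simplified form, how localizing an object can be wrapped as a ROS action. The action name, its goal/result fields and the scan and matching helpers are assumed placeholders; the actual component is implemented in C++ against the PhoXi driver.

```python
# Hedged sketch of exposing object localization as a ROS action server.
import rospy
import actionlib
from object_localization.msg import LocalizeObjectAction, LocalizeObjectResult  # assumed package


def acquire_point_cloud():
    """Placeholder: trigger a PhoXi scan and return the captured point cloud."""
    return None


def match_object(cloud, object_id):
    """Placeholder: match the object model in the cloud, returning a 6D pose or None."""
    return None


class LocalizationServer:
    def __init__(self):
        self.server = actionlib.SimpleActionServer(
            "localize_object", LocalizeObjectAction, self.execute, auto_start=False)
        self.server.start()

    def execute(self, goal):
        cloud = acquire_point_cloud()
        pose = match_object(cloud, goal.object_id)
        if pose is None:
            self.server.set_aborted()
        else:
            self.server.set_succeeded(LocalizeObjectResult(pose=pose))


if __name__ == "__main__":
    rospy.init_node("object_localization_pipeline")
    LocalizationServer()
    rospy.spin()
```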
Demonstration
The Object Localisation Pipeline was developed and validated in the same HORSE experiment, FLEXCoating, and the same industrial demonstration as the 6D MIMIC Framework described above, against the same functional requirements and KPIs (Production Quality, Throughput Increase and Optimisation of operator's usage).
Dissemination Video: https://www.youtube.com/watch?v=MWhkrRbPB_o
Technical Video: https://www.youtube.com/watch?v=3IZLhLpHyHE
Shift Manager
Description
The Shift Manager is a plugin for the Camunda Engine that allows you to assign tasks in a manual station to a human worker (registered in the system) or to select a robot program to be executed in an automated station.
System delineation
- Context: In a flexible production system, many different variants are produced on the same line, each requiring a different set of tasks to be executed either manually or automatically via a robot and tool.
- Scope: The plugin allows the shift manager to select the product variant to be produced, assign manual tasks to the workers assigned to the shift, and select the corresponding robot programs for the robot stations.
- Operational overview: The system requires an interface to the database where the robot programs are stored.
System functions
- Select worker from list of workers
- Select robot program from list of robot programs
- Bypass station, e.g. if it is down for repairs
The plugin itself is actually a composition of two plugins.
Plugin 'Assembly' is responsible for:
- Fetching the horse_station table
- Fetching dnb programs from each station using jrosbridge and persisting them to the horse_program table
- Parsing the tasks from the deployed process model and persisting them to the horse_task table; human/robot tasks are identified by checking against the database (horse_station.auto)
- Executing the process model according to the configuration provided by the plugin; delegating tasks to the Hybrid Task Supervisor on each station is done via the HORSE middleware
Plugin 'task-assignment-plugin' is responsible for:
- Fetching workers from the Camunda database
- Fetching tasks, programs and stations from the PostgreSQL database
- Providing a GUI to change the task assignments (worker for manual tasks and program for automatic tasks) or skip a station
- Persisting user inputs back into the PostgreSQL database, which is read by the process application to determine which worker/program is associated with a certain task (see the sketch below)
- Providing tables for historical data, including start/end time as well as duration for each executed task during the current process execution
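As a hedged illustration of the database round trip described above, the sketch below persists a worker assignment and a program selection and reads them back with psycopg2. The horse_task table name comes from the description above, but the column names (assigned_worker, program_id, task_id) and the connection details are assumptions; the actual plugin is implemented in Java inside the Camunda engine.

```python
# Hedged sketch of persisting and reading back shift manager decisions.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("dbname=horse user=horse password=horse host=localhost")  # assumed credentials

with conn, conn.cursor() as cur:
    # Persist a shift manager decision: assign a worker to a manual task.
    cur.execute(
        "UPDATE horse_task SET assigned_worker = %s WHERE task_id = %s",
        ("worker-07", "mount-housing"),
    )

    # Persist the robot program chosen for an automated station.
    cur.execute(
        "UPDATE horse_task SET program_id = %s WHERE task_id = %s",
        ("pick-place-variant-B", "fasten-screws"),
    )

    # What the process application later reads to dispatch each task.
    cur.execute("SELECT task_id, assigned_worker, program_id FROM horse_task")
    for task_id, worker, program in cur.fetchall():
        print(task_id, "->", worker or program)

conn.close()
```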
Demonstration
The Shift Manager was demonstrated in the context of the HORSE experiment GuidedSafety. The following scenarios were tested successfully:
- Initialization of a subassembly line consisting of 1 manual station and 2 automated stations
- Reconfiguring the subassembly line for a different product variant
- Bypassing the 1st automated station due to an error. Prerequisite: the 2nd automated station is capable of performing all operations of the 1st automated station. Note: the cycle time increased accordingly.