The MPMS is the collection of subsystems responsible for orchestrating the tasks of agents in the manufacturing processes. Orchestration depends on the design of the processes and agents. The MPMS therefore includes the functionality to design processes, describe agents, and execute processes by deciding on the next activities to be executed and assigning activities to agents. The figure below shows the process management functionality, embodied by the MPMS, as a function of horizontal and vertical integration. Horizontal integration refers to the interoperability between the manufacturing processes and other management or support processes in the enterprise. Vertical integration refers to the link between process management and the resources located on the factory floor.
Conceptual illustration of the MPMS in relation to the work cells
An important scoping dimension to mention is the distinction between global and local functions. The software aspect of the architecture clearly contains layers corresponding to notions of global and local. Local includes all activities and objects within a single work cell, while anything that crosses work cells is considered global. This is used as a starting point to establish a scoping statement that is not dependent on the physical hierarchy of the manufacturing system.
A manufacturing process consists of activities, events, gateways and connectors. Activities may be sub-processes or tasks. A single process may contain multiple tasks, located and performed in multiple work cells. A task is assigned to and performed by a team of one or more agents. This team may be a virtual team that only exists for the duration of the task execution. A single task is entirely contained within a single work cell, for the duration of the task. For this reason, task is considered the smallest unit of work that appears in the global layer.
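The containment rules above (a process spans work cells, a task stays in one cell and is performed by a team) can be sketched as a small data model. This is an illustrative sketch only; the class and field names are assumptions, not the actual HORSE data model.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Smallest unit of work in the global layer; bound to one work cell."""
    name: str
    work_cell: str
    team: list = field(default_factory=list)  # one or more agents, possibly a virtual team

@dataclass
class Process:
    """A manufacturing process whose tasks may span multiple work cells."""
    name: str
    tasks: list = field(default_factory=list)

    def work_cells(self):
        # A process can span several cells, but each task stays in exactly one.
        return {t.work_cell for t in self.tasks}

process = Process("assemble-unit", [
    Task("pick-parts", work_cell="cell-1"),
    Task("weld-frame", work_cell="cell-2"),
])
```

Modelling the cell as an attribute of the task (not of the process) captures the scoping statement: only tasks appear in the global layer, while everything below them stays local.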
The MPMS functionality therefore can be placed in the global layer of the HORSE logical architecture. It consists of three system modules and a single data store within the larger HORSE System (see the figure above).
On the HORSE Design Global level the functionality is roughly divided between the process design and agent design sub-systems. These sub-systems can be used independently as needed for the task at hand.
The Process Design module contains the functionality to (re-)design manufacturing processes. The results of design activities are stored in the Process/Agent Definitions data store; in case of redesign, the input is retrieved from this data store. The module enables the visual modelling of a manufacturing process, comprising tasks, events, gateways and connectors, and links the tasks to the task definitions and agent definitions.
The Agent Design module contains the functionality to design manufacturing agents, i.e., describe their relevant characteristics including their competences, authorisations and performance indicators.
A more detailed design of the functionalities of the MPMS Global Design modules can be found in the figure below.
The product definitions data store is contained and populated in external information systems, typically PLM or computer-aided design software. Task / step and cell data is populated in the local layer of the HORSE System.
The HORSE Exec Global sub-system contains the modules involved in execution of manufacturing processes. The figure below shows an elaboration of the MPMS Global Execution modules.
These modules provide the functions used to enact a sequence of tasks, assign agents to those tasks and provide the agents with necessary information. Exception handling and performance tracking modules are also included. Finally, the Production Execution Monitoring module supports real-time monitoring of manufacturing execution in terms of processes, orders, and agents (human and automated).
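Since the Agent Design module describes agents by their competences, assigning agents to a task can be thought of as a competence-matching step. The sketch below illustrates one simple greedy way such an assignment could work; the function name, data shapes and agent names are assumptions for illustration, not the HORSE implementation.

```python
def assign_team(task_competences, agents):
    """Greedily pick agents whose combined competences cover the task requirements."""
    team, covered = [], set()
    for name, competences in agents.items():
        gain = (task_competences - covered) & competences
        if gain:
            team.append(name)
            covered |= gain
        if covered >= task_competences:
            return team
    raise ValueError("no team covers the required competences")

# Illustrative agent registry (human and automated agents alike).
agents = {
    "operator-1": {"visual-inspection"},
    "robot-arm-2": {"pick-and-place", "welding"},
}
team = assign_team({"welding", "visual-inspection"}, agents)
```

A real scheduler would also weigh authorisations, availability and performance indicators, which the Agent Design module records alongside competences.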
Together, the MPMS modules in the Design Global and Execution Global layers support the automated orchestration of the manufacturing process, while ensuring horizontal integration of the process activities and vertical control of all agents in the process.
The Hybrid Task Supervisor is the component related to the local execution of a task in a work-cell by both the human operators and the robots. It receives the task execution requests from the MPMS and it keeps track of the progress of the task execution. Tasks are defined through the user-friendly graphical interface available in the HORSE framework.
When a request is received, the Hybrid Task Supervisor retrieves the information related to the matching task in order to activate the autonomous agents in the work-cell. Furthermore, after processing a request, this component sends a message to the MPMS global level to notify it of the start time of the execution of the task involved. A similar notification is sent after the completion of the task, allowing the workflow of the entire process to continue.
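The start and completion notifications described above amount to small status messages sent up to the global level. A minimal sketch of how such a notification could be built as JSON follows; the field names are illustrative assumptions, not the actual HORSE message schema.

```python
import json
import time

def notification(task_id, event, timestamp=None):
    """Build a JSON status notification for the MPMS global level.

    `event` would be e.g. "started" or "completed"; field names are
    hypothetical, chosen only to illustrate the exchange.
    """
    return json.dumps({
        "task_id": task_id,
        "event": event,
        "timestamp": timestamp if timestamp is not None else time.time(),
    })

started = notification("weld-frame-42", "started", timestamp=1000.0)
completed = notification("weld-frame-42", "completed", timestamp=1037.5)
```

On receiving the "completed" message, the global execution layer can advance the workflow to the next task in the process.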
In addition, the Hybrid Task Supervisor keeps track of the progress of the task during execution, also receiving information about anomalies, such as obstacles or unexpected humans blocking robot trajectories. In such cases the component is responsible for sending an alert to the global level.
The HORSE middleware is a software solution that overcomes the heterogeneity of the HORSE software components by adopting widely used standards. It is realised as a messaging infrastructure with a star topology, in which the individual components (nodes) communicate with each other through a local broker. The components can be organised in functional domains, each represented by a broker, with all brokers communicating with each other through a dispatcher. JSON-formatted messages are exchanged over the WebSockets low-level communication protocol. This allows the HORSE Message Node specification to be implemented as part of every HORSE module, with no additional constraints on the programming language or the execution environment.
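The star topology means a node never talks to another node directly: it only publishes to and subscribes at its broker. The in-process sketch below illustrates that routing pattern with JSON payloads; in HORSE the transport is WebSockets, for which plain function calls stand in here, and all names are illustrative assumptions.

```python
import json
from collections import defaultdict

class Broker:
    """Toy stand-in for a HORSE local broker: routes JSON messages by topic."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Serialise and re-parse to mimic the JSON wire format the nodes see.
        wire = json.dumps({"topic": topic, "payload": payload})
        for callback in self.subscribers[topic]:
            callback(json.loads(wire))

broker = Broker()
received = []
broker.subscribe("task/status", received.append)
broker.publish("task/status", {"task_id": "t-1", "state": "running"})
```

Because each node only needs to speak this JSON-over-a-socket pattern, a new component can be written in any language that has a WebSockets client.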
Figure 2: HORSE Messaging Middleware components
The message-driven collaboration between the major HORSE components decouples their implementations from the agreed interfaces. This in turn promotes the continuous development and testing of all components as the implemented functionality matures. The biggest benefit of this approach is that integrating a new component into the framework requires only the development of a WebSockets-based communication client and the processing of the messages exchanged between the new component and the rest of the HORSE framework.
The HORSE-ROS bridge interface allows easy communication between native ROS nodes (ROS being the open-source Robot Operating System framework) and nodes using the HORSE middleware.
This interface permits middleware clients to use the full ROS functionality available to native ROS nodes. The forwarding of HORSE events originating at native ROS nodes to middleware nodes is supported as well, and the bridge offers a ROS service interface to forward arbitrarily complex messages.
The HORSE-ROS bridge is a useful interface to connect ROS-based components to nodes using the HORSE middleware. For example, users can employ ROS hardware interfaces to communicate with the other HORSE components. The bridge can easily be used to connect software and hardware components already integrated with ROS to the HORSE framework.
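Conceptually, the bridge translates between the middleware's JSON events and ROS-style topics and messages. The sketch below illustrates that translation step only; the topic convention, field names and message shape are assumptions for illustration and do not reflect the bridge's actual schema.

```python
import json

def horse_event_to_ros(raw_json):
    """Map a HORSE middleware JSON event onto a ROS-like (topic, msg) pair.

    The "/horse/<type>" topic convention and the msg layout are hypothetical,
    chosen only to show the direction of the translation.
    """
    event = json.loads(raw_json)
    topic = "/horse/" + event["type"]          # e.g. /horse/task_started
    msg = {
        "header": {"frame_id": event.get("source", "")},
        "data": event["payload"],
    }
    return topic, msg

topic, msg = horse_event_to_ros(
    '{"type": "task_started", "source": "cell-1", "payload": {"task": "t-7"}}'
)
```

The reverse direction (ROS message to HORSE JSON event) follows the same pattern, which is what lets existing ROS hardware drivers participate in the framework unchanged.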
The HORSE-BOSCH adapter (Figure 3) was developed as a bridge between the HORSE Message Broker and the corresponding Bosch industrial equipment: the Visual Control system, the conveyor belt and a beacon. The module supports the EtherCAT, PLC and OPC UA protocols; additional protocols can easily be integrated.
The Bosch Adapter is a set of OSGi components deployable on a networked PC equipped with an EtherCAT Master Card and Java (for the OSGi framework).
Although the component is not necessarily applicable in every use case, it is a working example of the integration of the HORSE framework with the existing infrastructure and control software of a factory. Thus, it can serve as a base for the development of similar interfaces for other applications.
Figure 3: The Bosch adapter and Bosch machines
The Augmented Reality (AR) for assembly component displays information that improves, on the one hand, the efficiency and quality of work (e.g. assembly instructions) and, on the other, safety and working conditions (e.g. safety zones). The information is projected directly on the assembly table where parts are worked on, and the component processes input from the user (e.g. his or her gestures) and displays the corresponding information on the table.
Using the component requires setting up a workcell consisting of an overhead projector and an RGB-D sensor (e.g. Kinect) used to track the motion of the operator. Proper operation of the component requires calibrating the relative positions of the workcell components and defining the overlays to be displayed, as well as the reactions to user actions (e.g. via virtual buttons displayed on the assembly table). The component is fully integrated with the HORSE middleware messaging system.
This component was initially developed for the TRI use case. However, it has already been successfully transferred to other applications.
Figure 4: The AR for assembly
The Augmented Reality for quality inspection component was likewise developed to assist the human operators of a factory in the efficient visual quality check of the handled part. The component is responsible for projecting additional information (e.g. highlighting the inspection points) directly on the part, which is either held by the robot or placed in a known position. The functionalities of the component are provided as a set of ROS actions triggered via the HORSE-ROS bridge described above. In case a robot is used to manipulate the part, the robot control and the AR are synchronised by the Hybrid Task Supervisor.
Figure 5: An exemplary part with a control point (label) highlighted
In order to use the component in a different use case it is necessary to set up a workcell with an overhead projector, a camera and, optionally, a robot arm. This needs to be followed by an optical and spatial calibration of the elements of the workcell and by setting up the overlays to be projected and, again optionally, the robot arm positions.
This component was demonstrated in the BOSCH use case.
The Collision Detection and Prevention component ensures safety during any human-robot collaboration in a shared workspace.
This component can be used in every use case that requires a human operator to enter the robot workspace, in order to identify and avoid upcoming collisions and to improve efficiency by steering the robot toward areas away from obstacles.
Factory automation has revolutionised manufacturing in recent years, but there is still a large set of manufacturing tasks that are tedious or strenuous for humans to perform. Some of these tasks, such as electronics or aircraft assembly, are difficult to automate because they require workers to collaborate in close proximity and adapt to each other's decisions and motions, which robots cannot currently do. Rather than automating such tasks fully (which may not be possible and/or cost-effective), the HORSE consortium believes that human-robot collaboration enables safe and effective task execution while reducing the tedium and strain on the human.
For example, mobile manipulators can supply different work stations with parts and perform standard assembly tasks, while human workers perform more complex tasks in the same workspace.
To allow for such shared human-robot workspaces in cluttered environments, robots have to be able to avoid collisions with static and dynamic obstacles while they are executing their original tasks. This involves both the monitoring of the robot environment to detect obstacles and the motion control that has to be able to avoid collisions while moving the robot along reference trajectories determined in a high level planning layer in order to fulfil the robot task.
At the basis of the HORSE Collision Detection and Prevention component is the GPU-Voxels framework, which can be used for monitoring and planning applications in 3D and performs all computationally expensive calculations on the GPU. GPU-Voxels is a novel approach to live environment representations: most similar approaches are not voxel-based and cannot offer a comparable level of detail and response time. The component allows the robot to automatically switch from its currently executed plan to a new one when dynamic changes in the environment prohibit further progress towards the current goal, avoiding idle waits until the obstruction clears.
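The core voxel-based idea can be illustrated in miniature: both the environment and the robot's planned poses are discretised into voxels, and a collision is an overlap of the two occupied sets. The real framework performs this on the GPU at much finer resolution and in real time; the toy sketch below (function names and values are assumptions) only conveys the concept.

```python
def voxelise(points, size=0.1):
    """Map 3-D points (metres) to integer voxel coordinates at the given voxel size."""
    return {tuple(int(c // size) for c in p) for p in points}

def in_collision(robot_points, obstacle_points, size=0.1):
    """A collision is a non-empty intersection of the two occupied voxel sets."""
    return bool(voxelise(robot_points, size) & voxelise(obstacle_points, size))

# Two nearby points fall into the same 10 cm voxel -> collision detected.
blocked = in_collision([(0.53, 0.27, 0.0)], [(0.57, 0.23, 0.0)])
# A distant obstacle occupies a different voxel -> path is clear.
clear = in_collision([(0.53, 0.27, 0.0)], [(1.33, 0.27, 0.0)])
```

Because set intersection is embarrassingly parallel over voxels, this representation is what makes offloading the check to a GPU so effective.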
This component has been demonstrated in the FZI Competence Center. To request a demo, please contact FZI. Contact information can be found in the contacts section.
Smart factories could significantly increase productivity and improve operators' working conditions in the manufacturing industry. They involve the fenceless collaboration of robots and humans, whose safety needs to be ensured. Specifically, safety stops must be avoided because they may considerably slow down production (safety protocol verification, re-launching the production line, etc.).
The HORSE project provides a solution through a situation awareness mechanism that prevents safety stops and adapts the agents' behaviours when a critical situation is detected.
The situation awareness mechanism of the HORSE framework takes into account all the data related to the agents in order to predict a hazard, warn the operator and revise the robot's task accordingly. This module is hardware-independent and is configured with the agents and the sensors participating in the process.
Example of application scenario
In a use case deploying a mobile base (AGV), one essential issue is to guarantee the safety of the operators who share the space with the robot. As shown in the figure below (left side), there is a situation where a collision may occur between a human agent leaving a workcell and a mobile base entering the same workcell. The mobile base is able to detect collisions, but this would lead to an emergency stop, which slows down the task. The situation awareness mechanism gathers all the data in the environment, including the operator and robot positions, and adapts the robot's behaviour to avoid a collision (scenario B, on the right side).
Figure 6: Example of application scenario
How does it work?
The situation awareness module (shown in the figure below) is decomposed into two HORSE components: Event Processing and the Global Safety Guard. Event Processing detects critical events, and the Global Safety Guard relies on a reasoning system and a planner to generate a new action plan for the appropriate agents.
Figure 7: Situation awareness mechanism.
- Data are gathered from the devices and the agents participating in the workcell;
- A critical event is raised when an anomaly may occur;
- Relevant information from the environment is collected by the Global Safety Guard, which reasons about the environment;
- An action plan is generated and sent to the concerned agents.
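The steps above can be sketched as a simple pipeline: an event-processing function detects a critical situation, and a safety-guard function reasons about it and produces a plan. The distance threshold, event names and plan format below are illustrative assumptions, not CEA's actual design.

```python
def detect_event(operator_pos, robot_pos, threshold=2.0):
    """Event Processing (steps 1-2): raise a critical event when agents get too close.

    Positions are 1-D here purely for simplicity of the sketch.
    """
    distance = abs(operator_pos - robot_pos)
    if distance < threshold:
        return {"event": "collision-risk", "distance": distance}
    return None

def global_safety_guard(event):
    """Global Safety Guard (steps 3-4): reason about the event and emit an action plan."""
    if event is None:
        return {"robot": "continue"}
    # Re-route the robot rather than trigger an emergency stop,
    # as in the AGV scenario described above.
    return {"robot": "replan-trajectory", "operator": "warn"}

plan = global_safety_guard(detect_event(operator_pos=1.0, robot_pos=2.5))
```

The key design point is that the guard's output is a revised plan per agent, so production continues instead of halting on a safety stop.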
For further information, please contact CEA. Contact information can be found in the contacts section.