Manufacturing Capstone Project

The relationships between the various projects contributing to a manufacturing capstone are captured in Table 1. The different projects at the NTU IoX center can be viewed as different modules of a Smart Manufacturing framework; they are listed in the first column and first row of Table 1. Each cell represents the metadata inferred by one module that flows to another: each row acts as a source module, while each column represents a destination module. The data flows are a consolidation of the Project-Relationship diagrams proposed by the NTU PIs in collaboration with the Intel Champions.
 
<Manufacturing Capstone: Real-time Task Assistance>
Real-time task assistance can be accomplished in two phases, as described below:
1.    Training Phase:
During this phase, an expert worker demonstrates the task under consideration to a novice employee. Data from the various sensing modalities, such as vision, audio, speech, RFID and IMU signals, are collected during this time, and the various inference engines and models are built from that data. A sketch of what one such multimodal training record might look like appears after this list.
(i)      The expert wears the Midas Touch glove, and the subtle gestures made while performing the task are captured through the glove's IMU signals. The major steps are also captured through the Action Summarization block, which models the video data.
(ii)     The objects she interacts with are identified through the RFID signals from the Midas Touch glove and through object recognition on the video data captured by the shoulder camera, the fixed roof camera and a moving drone’s camera (DynaCollect). The object recognition is performed through Distributed Intelligence (cascaded DNN).
(iii)     The dialogue between the expert and the novice is modelled through the Interactive Dialogue Learning module.
(iv)      Social computing is used for understanding the “knowledge units” which represent the most critical steps for performing the task. These knowledge units can be represented by video and haptics data.
(v)     The recorded action states, object states and haptics, along with the insights provided by the knowledge keeper, are used to train the sequence prediction engine of ‘Learning to Act from Demonstration’, which learns both the policy and the reward function (an illustrative sketch of this learning step appears after this list).
(vi)     The device states which affect the task under consideration are seamlessly communicated to the worker through the visual communications module of the Natural and Seamless Interaction (NSI) project. The configuration of device and ambient states while performing specific steps of the task is captured through the Interactive Intentional Programming (IIP) module of the NSI project. The specific questions and answers regarding the interpreted device states are also used for Interactive Dialogue Learning.
(vii)    The task is repeated multiple times by several novice workers under the supervision of an expert, and a large dataset is collected for training the different models: the dialogue model, the object recognition model, the knowledge model and the action recognition models. These models are improved through Interactive Machine Learning, in which the expert provides additional feedback (one common feedback pattern is sketched after this list).
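
To make the multimodal data collection above concrete, the sketch below shows one possible way to represent a single time-aligned training record. This is a minimal illustration; the field names (e.g. imu_window, rfid_tags) are assumptions and not the actual schema used by the projects.

    from dataclasses import dataclass, field
    from typing import Dict, List

    # Hypothetical schema for one time-aligned record collected during an
    # expert demonstration. Field names are illustrative only.
    @dataclass
    class TrainingRecord:
        timestamp: float                 # seconds since the demonstration started
        video_frame_id: int              # frame from the shoulder/roof/drone camera
        imu_window: List[float]          # IMU samples from the Midas Touch glove
        rfid_tags: List[str] = field(default_factory=list)      # tags read by the glove
        audio_transcript: str = ""       # speech exchanged between expert and novice
        device_states: Dict[str, float] = field(default_factory=dict)  # states reported via NSI
        action_label: str = ""           # label from the Action Summarization block
        object_labels: List[str] = field(default_factory=list)  # labels from object recognition

    # Example: one record captured while the expert tightens a bolt.
    record = TrainingRecord(
        timestamp=12.4,
        video_frame_id=372,
        imu_window=[0.01, -0.02, 0.98],
        rfid_tags=["torque_wrench_07"],
        audio_transcript="now tighten the bolt",
        device_states={"press_temperature_c": 45.0},
        action_label="tighten_bolt",
        object_labels=["torque_wrench", "bolt"],
    )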
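
Step (v) learns both a policy and a reward function from the expert demonstrations. The sketch below is a deliberately simple stand-in for that idea, assuming the demonstrations are available as (state, action) trajectories: the policy is taken as the expert's most frequent action in each state, and the reward is a crude estimate based on how often the expert visits a state. It is not the actual ‘Learning to Act from Demonstration’ algorithm.

    from collections import Counter, defaultdict

    def learn_from_demonstrations(demos):
        """demos: list of expert trajectories, each a list of (state, action)
        pairs. Returns (policy, reward): policy maps a state to the most
        frequent expert action, and reward scores a state by how often the
        expert visited it (a simple proxy for a learned reward function)."""
        action_counts = defaultdict(Counter)
        state_visits = Counter()
        for trajectory in demos:
            for state, action in trajectory:
                action_counts[state][action] += 1
                state_visits[state] += 1
        total = sum(state_visits.values())
        policy = {s: c.most_common(1)[0][0] for s, c in action_counts.items()}
        reward = {s: v / total for s, v in state_visits.items()}
        return policy, reward

    # Example: two short demonstrations of a bolt-tightening step.
    demos = [
        [("bolt_loose", "pick_wrench"), ("wrench_in_hand", "tighten_bolt")],
        [("bolt_loose", "pick_wrench"), ("wrench_in_hand", "tighten_bolt")],
    ]
    policy, reward = learn_from_demonstrations(demos)
    # policy["bolt_loose"] -> "pick_wrench"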
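
Step (vii) improves the models through Interactive Machine Learning with expert feedback. One common pattern for this is an uncertainty-based query loop, sketched below, in which predictions with low confidence are routed to the expert for correction before retraining. The model interface (predict_with_confidence, retrain) and the threshold value are assumptions made for illustration.

    def interactive_learning_round(model, new_samples, ask_expert, threshold=0.7):
        """One round of expert-in-the-loop improvement: the model labels the
        new samples, any prediction below the confidence threshold is sent to
        the expert for a corrected label, and the model is retrained on the
        resulting labels."""
        labelled = []
        for sample in new_samples:
            label, confidence = model.predict_with_confidence(sample)
            if confidence < threshold:
                label = ask_expert(sample, label)   # expert confirms or corrects
            labelled.append((sample, label))
        model.retrain(labelled)
        return model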
 
 
2.    Deployment Phase:
During this phase, all the technologies developed at the NTU IoX center work in collaboration to provide real-time assistance to the novice worker. Imagine a fabrication process that consists of a series of tasks. Each task may involve (i) interacting with a set of movable objects and tools, (ii) interacting with fixed equipment and gadgets, (iii) specific hand movements and actions such as lifting, pushing and rotating, and (iv) controlling the environment for conditions such as temperature and pressure. The various technologies help in closely monitoring the states of these entities, making smart decisions and guiding the worker as follows:
(i)      The objects and tools are recognized through the cascaded DNN project of Distributed Intelligence (a sketch of such a cascade appears after this list).
(ii)      In order to search for objects across a larger set of remote regions, the DynaCollect project helps in deploying drones and streaming video from these locations to the edge sensor.
(iii)      Each atomic task of a complex procedure is recognized through the action recognition module.
(iv)      The subtle hand movements and hand-operated objects are recognized through the Midas Touch glove.
(v)      The current object and action states are input to the Learning to Act from Demonstration module, which uses reinforcement learning to map the current state to a task guidance that maximizes a set of rewards. The expected set of object and action states for correctly performing the process was captured during the Training Phase and acts as the reward function. The task guidance could be a specific hand movement, fetching a specific object or setting specific environmental conditions.
(vi)      Once the next guidance step is inferred, it can be communicated to the worker in several ways (steps (v) and (vi) are sketched together after this list):
            a. Through vibro-tactile mode in Midas Touch
            b. Through dialogue mode
            c.  Through visual display
            d.  Through guided light blobs from drones
(vii)     The inferred next step can also be used for programming device states through the IIP module of the NSI block.
(viii)     Finally, the worker can also continuously interact with the smart agent to get guidance through the Interactive Dialogue Learning module.
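
For step (i), the cascaded DNN idea can be pictured as a confidence-gated cascade: a small on-device model handles easy cases locally and defers hard cases to a larger model at the edge. The two-stage structure, the threshold and the function names below are illustrative assumptions, not the actual Distributed Intelligence implementation.

    def cascaded_recognize(frame, small_model, large_model, confidence_threshold=0.8):
        """Run a lightweight detector first; escalate to the heavier model only
        when the first stage is not confident enough. Both models are assumed
        to return a (label, confidence) pair for a given frame."""
        label, confidence = small_model(frame)    # cheap, on-device inference
        if confidence >= confidence_threshold:
            return label, "on-device"
        label, confidence = large_model(frame)    # heavier model running at the edge
        return label, "edge"

Raising the threshold sends more frames to the edge model, trading extra latency and bandwidth for higher accuracy.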
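
Steps (v) and (vi) can be pictured together as in the sketch below: the policy learned during the Training Phase maps the currently recognized object/action state to the next guidance step, which is then dispatched over one of the available channels (vibro-tactile, dialogue, visual display or drone light). The channel names, selection rule and function names are illustrative assumptions.

    def next_guidance(policy, current_state, default="ask_expert"):
        """Look up the next guidance step for the recognized state using the
        policy learned during the Training Phase."""
        return policy.get(current_state, default)

    def dispatch_guidance(guidance, channels, preferred="dialogue"):
        """Send the guidance to the worker over the preferred channel, falling
        back to any other available channel. `channels` maps a channel name
        ('vibrotactile', 'dialogue', 'display', 'drone_light') to a callable
        that delivers the guidance."""
        send = channels.get(preferred) or next(iter(channels.values()))
        send(guidance)

    # Example wiring with placeholder channel functions.
    channels = {
        "vibrotactile": lambda g: print("glove vibration pattern for:", g),
        "display": lambda g: print("showing on display:", g),
    }
    guidance = next_guidance({"bolt_loose": "pick_wrench"}, "bolt_loose")
    dispatch_guidance(guidance, channels, preferred="display")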