SIG Learning

SIG Chair: Yuh-Jye Lee
 -Professor, Dept. of Applied Mathematics, National Chiao Tung University
Three sub-projects pertain to SIG Learning:
(1)      InterActive Machine Learning
-PI: Hsing-Kuo Kenneth Pao, CoPI: Yuh-Jye Lee, Lin-Lin Chen
We focus on a special kind of ACB research for smart factory applications. In the smart factory environment, machines interact with two kinds of humans: masters and novices. Masters teach machines so that the machines become smarter; in turn, the machines can automatically teach novices, which we call machine tutoring, saving effort on the masters' part. In this study, we examine two kinds of machine tutoring: skill transfer, realized in a Taiko-drum playing task, and knowledge transfer, drawn from a robot assembly case. IMU sensing is adopted to detect and recognize human actions as well as personal styles.
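The IMU sensing step above can be sketched as a windowed feature-extraction and classification flow. This is a minimal illustration under stated assumptions, not the project's actual pipeline: the window size, the mean/std features, and the nearest-centroid classifier are all illustrative choices.

```python
import numpy as np

def window_features(signal, win=50, step=25):
    """Slide a window over a (T, 6) accel+gyro IMU stream and
    extract simple per-axis statistics (mean, std) as features."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

class NearestCentroid:
    """Toy action classifier: assign each window to the closest
    class centroid in feature space."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # Distance from every window to every class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]
```

In practice richer features (frequency-domain statistics, orientation estimates) and a stronger classifier would be used; the sketch only shows the sensing-to-label flow.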
(2)      Learning in Compressed Domain: Development of a Light Learning Engine in Fog Computing Environments
-PI: An-Yeu (Andy) Wu
Light-Weight Compressed Biometric Worker Identification through Compressive Analysis:
     - New application: compressed-domain biometric worker identification in place of conventional face or fingerprint features (e.g., in clean rooms and the iFactory)
     - The first compressed-domain alignment framework, aiming to 1) reduce the amount of training data required while 2) increasing accuracy.
     - Create a multimodal user identification system that uses both ECG and PPG to improve performance.
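The compressive-analysis idea behind these items, classifying directly on compressed measurements rather than reconstructed signals, can be sketched as follows. The sensing-matrix shape, signal length, and nearest-template matcher are illustrative assumptions, not the project's implementation:

```python
import numpy as np

def compress(signals, phi):
    """Project raw biometric windows (e.g., one-second ECG/PPG
    segments, one per row) into the compressed domain with a
    random sensing matrix phi of shape (M, N), M << N."""
    return signals @ phi.T

def identify(y_probe, y_templates):
    """Nearest-template matching performed directly on compressed
    measurements; no signal reconstruction is required, since
    random projections approximately preserve pairwise distances."""
    d = np.linalg.norm(y_probe[:, None, :] - y_templates[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Because matching happens on M-dimensional measurements instead of N-dimensional raw signals, both storage and per-query compute shrink by roughly N/M, which is what makes the engine light enough for fog nodes.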
(3)      Interactive Dialogue Learning
-PI: Richard Tzong-Han Tsai, CoPI: Hung-Yi Lee
We design an interactive dialogue system that helps users through the Meccanoid 2.0 robot assembly task. When users encounter a problem during the task, they can get answers from our system simply by asking questions. Our system correctly classifies a user's intent when the question is step-independent, but errors can occur when the correct answer depends on the current step of the assembly task. We therefore incorporate visual data to address this problem: given the text of the user's question and an image of the user's current assembly state, our system classifies the user's intent and outputs the correct answer.
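The text-plus-image answering step can be sketched as a late-fusion lookup: a text model scores intents, a vision model scores assembly steps, and the answer is chosen for the most likely pair. The intent labels, step count, and answer table below are hypothetical stand-ins, not names from the project:

```python
import numpy as np

# Hypothetical labels: the project's real intent set and number of
# assembly steps are not specified in this summary.
INTENTS = ["which_part", "how_to_attach"]
NUM_STEPS = 3

def fuse(text_logits, step_probs, answer_table):
    """Late fusion: pick the most likely intent from the text model
    and the most likely step from the vision model, then look up the
    step-dependent answer for that (intent, step) pair."""
    intent = INTENTS[int(np.argmax(text_logits))]
    step = int(np.argmax(step_probs))
    return answer_table[(intent, step)]
```

The table lookup makes the step-dependence explicit: the same question text maps to different answers once the image evidence points at a different step.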