Introduction
This project proposes a learning from demonstration (LfD) system that allows robots not only to be taught by humans via demonstration but also to adjust their behavior on their own. Moreover, a multimodal perception approach is provided so that the robotic arm can perform flexible automation without complicated coding by professionals. In addition, an edge AI chip is incorporated to improve overall performance and flexibility, expanding the system's potential for integration with other applications. In the future, we will not only focus on vision-based tasks but also consider scenarios in which voice commands must be learned by the LfD system.