Human-Machine Interaction (HMI) is the study of interactions between humans and machines. It is a multidisciplinary field with contributions from Human-Computer Interaction (HCI), Human-Robot Interaction (HRI), robotics, Artificial Intelligence (AI), humanoid robotics and exoskeleton control.
Inertial Measurement Units (IMUs) from Xsens provide accurate and reliable 3D orientation, 3D acceleration, 3D rate of turn and 3D magnetic field data. For HMI purposes, an IMU can be used stand-alone or as part of a set of multiple synchronized IMUs. The MTw Awinda system consists of a number of 3D motion trackers of your choice plus a Software Development Kit (SDK). The SDK gives access to the 3D orientation, 3D acceleration, 3D rate of turn and 3D magnetic field data of every motion tracker. Using a dedicated sensor fusion algorithm, Xsens calculates a highly accurate 3D orientation for each motion tracker. A unique wireless radio protocol keeps the trackers time-synchronized to within 10 microseconds. This makes the MTw Awinda a flexible solution: you choose how many MTws to integrate into your application.
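Orientation data from motion trackers is commonly delivered as a unit quaternion, and a typical first step in an HMI pipeline is converting it to roll, pitch and yaw angles. A minimal sketch of that conversion follows; the (w, x, y, z) component order and the sample values are illustrative assumptions, not the SDK's actual API:

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to roll, pitch, yaw in degrees."""
    # Roll: rotation about the x-axis
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Pitch: rotation about the y-axis, clamped to avoid domain errors near +/-90 degrees
    s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(s)
    # Yaw: rotation about the z-axis
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (roll, pitch, yaw))

# Identity quaternion: no rotation
print(quaternion_to_euler(1.0, 0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0)
```

The pitch term is clamped because floating-point round-off can push the asin argument slightly outside [-1, 1] near gimbal-lock poses.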
Xsens MVN Analyze provides motion capture of human movement without constraining the freedom of movement and without occlusions. Fast set-up and calibration allow the user to start data acquisition within 10 minutes. The MVN Analyze software can output and live-stream both human kinematic data and raw motion tracker data, allowing the user to use this data for real-time control of robots, exoskeletons and humanoids. Low-latency live streaming of joint positions, accelerations, angular velocities, 3D orientation and center of mass is directly available within the software.
The MTw Awinda and Xsens MVN Analyze allow the user to (live-)stream 3D kinematic or IMU data to their own application. This data supports typical HMI techniques such as posture recognition, motion classification, motion segmentation, motion learning models, motion synchronization, command recognition and activity level recognition.
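A common way to consume such a live stream in a custom application is a small UDP listener. The sketch below is a generic skeleton only: the port number and the payload handling are placeholder assumptions, and the actual datagram layout is defined by the streaming protocol of the sending software, so consult its documentation before parsing the packets.

```python
import socket

def receive_stream(host="0.0.0.0", port=9763, max_packets=10):
    """Generic UDP listener for a motion-capture live stream.

    The port and the payload handling are placeholders: the real
    datagram layout is defined by the sender's streaming protocol.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.settimeout(1.0)  # stop waiting if the stream goes quiet
    packets = []
    try:
        for _ in range(max_packets):
            try:
                data, _addr = sock.recvfrom(4096)
            except socket.timeout:
                break  # no more data within the timeout window
            packets.append(data)  # parse here according to the stream's packet spec
    finally:
        sock.close()
    return packets
```

UDP fits this use case because a lost frame is simply superseded by the next one; for real-time control, low latency matters more than guaranteed delivery.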