![](https://static.wixstatic.com/media/3fa244_1913c8a7980d4ba8a9997325b6c93fa3~mv2.jpg/v1/fill/w_110,h_110,al_c,q_80,usm_0.66_1.00_0.01,blur_2,enc_auto/3fa244_1913c8a7980d4ba8a9997325b6c93fa3~mv2.jpg)
What is Intelligent Human Motion?
Intelligent Human Motion™ (IHM) is a targeted application of Edge ML for real-time motion and activity recognition on edge devices. IHM works similarly to image and speech recognition: sensor signal patterns are analyzed and compared in real time against trained models to determine a user's context, such as motion and posture. IHM consumes sensor data from commercially available smartphones, wearables, and AR/MR/VR headsets, and has also been cross-compiled for custom hardware platforms.
![ihm_model_signals copy.png](https://static.wixstatic.com/media/3fa244_78d38c38bbfa4bcaa8c2639101be19c7~mv2.png/v1/fill/w_274,h_192,al_c,q_85,usm_0.66_1.00_0.01,enc_auto/ihm_model_signals%20copy.png)
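The windowed pattern-matching idea above can be sketched in a few lines. This is a toy illustration, not the IHM algorithm: the features (mean and variance of acceleration magnitude), the nearest-signature classifier, and the model values are all assumptions chosen for clarity.

```python
import math

def features(window):
    """Extract simple statistics from a window of (x, y, z) accelerometer samples."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return (mean, var)

# Toy trained "models": a (mean, variance) signature per motion class.
MODELS = {"still": (1.0, 0.0005), "walking": (1.1, 0.05)}

def classify(window):
    """Return the motion class whose signature is closest to the window's features."""
    f = features(window)
    return min(MODELS, key=lambda c: sum((a - b) ** 2 for a, b in zip(f, MODELS[c])))

still = [(0.0, 0.0, 1.0)] * 50  # gravity only: constant magnitude, low variance
walking = [(0.0, 0.0, 1.0 + 0.5 * math.sin(i / 3)) for i in range(50)]
print(classify(still), classify(walking))
```

In a real pipeline the window would slide over a continuous sensor stream, classifying each new window as it arrives.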
How is IHM different?
IHM algorithms maximize classification accuracy while minimizing computational complexity. IHM ships with a database of pre-trained motion models ready to use in your application. Additionally, with MLOps you can quickly create your own motion models. Training can occur offline or directly on your edge device.
![](https://static.wixstatic.com/media/3fa244_78026fa7720a4a528ac577e4a6f32ede~mv2.png/v1/fill/w_170,h_353,al_c,q_85,enc_auto/3fa244_78026fa7720a4a528ac577e4a6f32ede~mv2.png)
What are some features of IHM?
· Specialized Models: IHM runs multiple models simultaneously. Each model represents a particular motion type, and each model can be computed in parallel. Because each model is responsible for only a single classifier, it is small in both memory footprint and computational load.
![ihm_feature_models 2.png](https://static.wixstatic.com/media/3fa244_e0bb5e9df9474ad090403ba7cf3d61a2~mv2.png/v1/fill/w_180,h_141,al_c,q_85,usm_0.66_1.00_0.01,blur_3,enc_auto/ihm_feature_models%202.png)
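The one-small-model-per-motion-type design can be sketched as a set of independent binary detectors evaluated in parallel. The threshold rules below are illustrative stand-ins, not the actual IHM models.

```python
from concurrent.futures import ThreadPoolExecutor

def make_detector(lo, hi):
    """Binary detector: fires when the window's mean magnitude falls in [lo, hi)."""
    return lambda window: lo <= sum(window) / len(window) < hi

# Each motion type gets its own tiny, independent model.
DETECTORS = {
    "still":   make_detector(0.0, 1.02),
    "walking": make_detector(1.02, 1.5),
    "running": make_detector(1.5, 5.0),
}

def classify_all(window):
    """Run every per-motion detector on the same window, in parallel."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(det, window) for name, det in DETECTORS.items()}
    return {name: fut.result() for name, fut in futures.items()}

print(classify_all([1.1, 1.2, 1.05]))  # only the "walking" detector fires
```

Because each detector is independent, the set scales by adding models rather than by growing one monolithic network.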
· Stacked Models: Unlike monolithic deep neural networks, a new motion classifier can be added to the model set in real time without affecting the existing models or requiring the network to be retrained for the added motion type.
![ihm_feature_enable 2.png](https://static.wixstatic.com/media/3fa244_3b55766ed9f9471088ca8bed3fe427c6~mv2.png/v1/fill/w_75,h_104,al_c,q_85,usm_0.66_1.00_0.01,blur_3,enc_auto/ihm_feature_enable%202.png)
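A minimal sketch of that runtime extensibility, assuming each motion type is an independent classifier (the class and detector rules here are hypothetical): registering a new model leaves the existing ones untouched.

```python
class ModelSet:
    """Holds independent per-motion classifiers that can be added at any time."""

    def __init__(self):
        self.models = {}

    def add(self, name, detector):
        """Register a new motion classifier; existing models are not retrained."""
        self.models[name] = detector

    def classify(self, window):
        """Evaluate every registered classifier on the window."""
        return {name: det(window) for name, det in self.models.items()}

models = ModelSet()
models.add("still", lambda w: max(w) - min(w) < 0.05)
# Added later, at runtime, with no retraining of the "still" model:
models.add("walking", lambda w: max(w) - min(w) >= 0.05)
print(models.classify([1.0, 1.3, 0.9]))
```

Contrast this with a single multi-class network, where adding an output class typically means retraining the whole model.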
· Minimal Data Requirements: Training a motion model requires very little data. Often, less than a minute of sensor data is enough to train a model capable of 95% accuracy.
![](https://static.wixstatic.com/media/3fa244_7c07cb487c564a7facc7e949cd5a65b6~mv2.png/v1/fill/w_158,h_99,al_c,q_85,usm_0.66_1.00_0.01,blur_3,enc_auto/3fa244_7c07cb487c564a7facc7e949cd5a65b6~mv2.png)
· Attitude Independent: The IHM algorithm does not require the user to hold or wear the device in a particular location or orientation. Specialized filters are used to pre-process the data to reduce the effects of device orientation.
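One common orientation-invariant pre-processing step is to use the Euclidean magnitude of the 3-axis accelerometer vector, which is the same regardless of how the device is rotated. IHM's actual filters are not described here; this sketch only illustrates the general idea.

```python
import math

def magnitude(sample):
    """Rotation-invariant magnitude of a 3-axis accelerometer sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

# The same physical state (1 g of gravity) measured with the device
# held in two different orientations:
upright = (0.0, 0.0, 1.0)   # gravity along z
sideways = (1.0, 0.0, 0.0)  # gravity along x
print(magnitude(upright), magnitude(sideways))  # both 1.0
```

Downstream classifiers that consume the magnitude signal therefore see the same input however the device is held or worn.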
How can I try IHM?
IHM is available today and ready for integration into your product. We have a mobile application in closed beta for demonstration. Please get in touch with our business team to apply for a product demonstration.
![ihm_models_screenshots copy 2.png](https://static.wixstatic.com/media/3fa244_ccc4e3fc2bb2405099be30f2396293cd~mv2.png/v1/fill/w_68,h_102,al_c,q_85,usm_0.66_1.00_0.01,blur_2,enc_auto/ihm_models_screenshots%20copy%202.png)