04
HaptMR
Blending Tactile Realities for Semantic-Aware Mixed Reality



Description
HaptMR is a second-order haptic interaction framework that integrates embedded edge AI and real-time semantic analysis to enhance tactile experiences in Mixed Reality.
Keywords
Haptic Feedback
Human–Computer Interaction
Mixed Reality
Edge AI
Multimodal Interaction
Year
2020






Technical Details
HaptMR (presented at HCII 2021) advances the conventional Mixed Reality (MR) experience by introducing a semantic-aware haptic feedback system. Unlike traditional solutions that simulate touch only when users interact with virtual objects, HaptMR introduces “second-order” haptic sensations: users feel different tactile feedback when a virtual object collides with different real-world materials. This capability is powered by a distributed architecture: a HoloLens 2 for spatial mapping and gesture tracking, an NVIDIA Jetson Nano for deep-learning-based material recognition, and a network of fine-tuned vibrotactile actuators embedded in the user’s gloves.
By combining real-time computer vision with robust data exchange over 5 GHz Wi-Fi, HaptMR dynamically adjusts vibration intensity and pattern based on both the user’s gestures and the rigidity of the contacted physical surface. The result is an immersive, semantically rich interaction that runs consistently at 60 fps with minimal latency. In a user study with 18 participants, 77% reported high satisfaction and 61% found MR interactions more intuitive. These results suggest that HaptMR can enrich multimodal experiences by bridging the virtual and physical realms through nuanced tactile feedback.
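As a concrete illustration, the hardness-to-vibration mapping could look like the following minimal sketch. The material classes, hardness scores, and drive parameters here are assumptions for illustration only; they are not the mapping used in the paper.

```python
# Hypothetical sketch: turn a recognized material class plus gesture speed
# into a vibration command for the glove actuators.

from dataclasses import dataclass


@dataclass
class VibrationCommand:
    amplitude: float   # 0.0-1.0 drive strength
    frequency_hz: int  # actuator frequency
    pulse_ms: int      # pulse duration


# Illustrative hardness scores per recognized material (0 = soft, 1 = rigid).
MATERIAL_HARDNESS = {
    "foam": 0.2,
    "wood": 0.6,
    "metal": 0.9,
}


def command_for_contact(material: str, gesture_speed: float) -> VibrationCommand:
    """Scale vibration with surface hardness and how fast the hand moves."""
    hardness = MATERIAL_HARDNESS.get(material, 0.5)  # default: medium
    amplitude = min(1.0, hardness * (0.5 + gesture_speed))
    # Harder surfaces -> sharper, higher-frequency, shorter pulses;
    # softer surfaces -> weaker, lower-frequency, longer rumble.
    frequency = int(80 + 170 * hardness)
    pulse = int(120 - 80 * hardness)
    return VibrationCommand(amplitude, frequency, pulse)
```

The design choice being illustrated is simply that one recognition result fans out into several actuator parameters at once, which is what lets collisions with different real materials feel distinct.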
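The recognition pipeline ends in a per-frame classification decision; a minimal, dependency-free sketch of that decision logic follows. The class list, logits, and confidence threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: convert raw classifier outputs (logits) into a
# (material, confidence) decision, or no decision when the model is unsure.

import math

MATERIAL_CLASSES = ["foam", "plastic", "wood", "metal"]
CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off before a haptic event fires


def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def classify(logits):
    """Return (material, confidence), or None if no class is confident enough."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < CONFIDENCE_THRESHOLD:
        # Too uncertain: better to skip the haptic event than mislead the user.
        return None
    return MATERIAL_CLASSES[best], probs[best]
```

Gating on confidence is one plausible way to keep misclassified surfaces from producing contradictory tactile cues.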






Highlights
Second-Order Haptic Experience: Goes beyond simple virtual-object feedback to deliver tactile variations when virtual items interact with diverse real-world surfaces.
Robust Architecture: Integrates a HoloLens 2 for spatial awareness, a Jetson Nano for edge-AI inference, and a multi-channel haptic driver system for precise vibration control.
Real-Time Material Recognition: Employs deep learning to identify object rigidity, achieving an overall accuracy of 77.5% in hardness identification.
Enhanced User Engagement: 77% of participants rated the system highly, citing more natural and immersive MR interaction.
Adaptive Performance: Maintains 60 fps for stable rendering and haptic updates, ensuring minimal latency and a seamless user experience.
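The 5 GHz Wi-Fi link between the recognition node and the haptic side can be pictured as a small datagram exchange. The message schema, port number, and field names below are assumptions for illustration, not the system's actual protocol.

```python
# Hypothetical sketch: one recognition result sent as a compact UDP datagram
# from the material-recognition node (e.g. the Jetson) to the haptic renderer.

import json
import socket

HAPTIC_PORT = 9099  # assumed port


def encode_material_event(material, confidence, contact_point):
    """Pack one recognition result into a UDP payload."""
    msg = {
        "material": material,
        "confidence": round(confidence, 3),
        "contact": list(contact_point),  # (x, y, z) in headset space
    }
    return json.dumps(msg).encode("utf-8")


def send_material_event(sock, addr, material, confidence, contact_point):
    payload = encode_material_event(material, confidence, contact_point)
    sock.sendto(payload, addr)


if __name__ == "__main__":
    # Loopback demo: receiver and sender on the same machine.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", HAPTIC_PORT))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_material_event(tx, ("127.0.0.1", HAPTIC_PORT), "wood", 0.82, (0.1, 1.2, 0.4))
    data, _ = rx.recvfrom(1024)
    print(json.loads(data)["material"])
```

JSON over UDP is used here purely for readability; a latency-sensitive deployment would more likely use a compact binary encoding.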

Credits
Advisors
Prof. Gudrun Klinker
Dr. Maximilian Koch
Appendix
Read our paper presented at HCII 2021
@inproceedings{10.1007/978-3-030-77599-5_18,
  author    = {Zhang, Yueze and Liang, Ruoxin and Sun, Zhanglei and Koch, Maximilian},
  title     = {HaptMR: Smart Haptic Feedback for Mixed Reality Based on Computer Vision Semantic},
  year      = {2021},
  isbn      = {978-3-030-77598-8},
  publisher = {Springer-Verlag},
  address   = {Berlin, Heidelberg},
  url       = {https://doi.org/10.1007/978-3-030-77599-5_18},
  doi       = {10.1007/978-3-030-77599-5_18},
  booktitle = {Virtual, Augmented and Mixed Reality: 13th International Conference, VAMR 2021, Held as Part of the 23rd HCI International Conference, HCII 2021, Virtual Event, July 24--29, 2021, Proceedings},
  pages     = {242--258},
  numpages  = {17},
  keywords  = {Mixed reality, Interacted force feedback, Deep learning for computer vision}
}