IntBot Introduces Hardware-Agnostic Social Intelligence Engine for Humanoid Robots at GTC 2026

18 March 2026 | News

Multi-platform deployment and NVIDIA-powered vision-language models enable real-time, socially intelligent human-robot interaction across diverse environments.
Image Courtesy: Public Domain

IntBot Inc., a developer of socially intelligent robotics systems, today announced that its General Social Intelligence Engine – IntEng – now supports multiple humanoid and service robot platforms from different hardware vendors, marking a significant step toward hardware-agnostic deployment of socially intelligent robots in real-world environments.

Live Demonstrations at NVIDIA GTC 2026

At NVIDIA GTC 2026, IntBot will deploy three robots, each built on a different hardware platform and all powered by the IntBot Social Intelligence Engine. The robots will provide unscripted information assistance and demonstrate real-time human-robot interaction with conference participants, showcasing IntBot's approach to embedding social intelligence into physical AI systems.

The robots will operate throughout the conference venue, serving as interactive information assistants for attendees and demonstrating several real-world roles:

  • Front Desk Information Concierge assisting visitors with navigation and event information
  • Mobile Engagement Robot roaming the conference floor and interacting with attendees
  • Training Assistance Robot helping attendees navigate training sessions and answering questions

Hardware-Agnostic Social Intelligence

IntBot's General Social Intelligence Engine is designed as a hardware-platform-agnostic software stack, enabling robotics manufacturers and system integrators to embed socially intelligent capabilities into a wide range of robot form factors.

The system integrates several core capabilities:

  • Multimodal perception of speech, visual cues, and human behavior
  • Social scene understanding in dynamic environments
  • Context-aware conversational interaction
  • Embodied behavior and expression control

By separating social intelligence software from robot hardware, IntBot aims to accelerate the adoption of robots in environments where human interaction is essential, including hospitality, transportation hubs, healthcare facilities, and public venues.

First Edge Deployment of Cosmos Reason-2 Vision-Language Model

At GTC 2026, IntBot will also showcase the first edge deployment of the NVIDIA Cosmos Reason-2 Vision-Language Model (VLM) within its robotics stack.

Running directly on robot edge compute systems, the model enables robots to perform real-time scene understanding, allowing them to interpret complex human environments such as crowded conference spaces.

This capability allows robots to:

  • Identify human activities and social cues
  • Understand spatial context within dynamic scenes
  • Support situational awareness for human-robot interaction

By running advanced VLMs directly on edge hardware, IntBot demonstrates how large multimodal models can power real-world robotics applications with low latency and improved privacy.
