Inbolt and FANUC Redefine Automation for Moving Assembly Lines with AI-Driven 3D Vision

26 May 2025 | Expert Insight

Albane Dersey, Co-founder & COO of Inbolt, shares how real-time 3D guidance is conquering the last frontier in automotive automation—unlocking precision, adaptability, and scalability.

 

 

In an exclusive conversation with Robotics Business News, Albane Dersey, Co-founder and COO of Inbolt, discusses how the integration of Inbolt’s AI-powered 3D vision system with FANUC’s advanced robotics is transforming automation on moving assembly lines. Tackling long-standing challenges like line vibrations, speed inconsistencies, and dynamic part positioning, this solution is already gaining traction with industry giants like Stellantis and General Motors. Dersey sheds light on the breakthrough’s real-time precision, adaptability across industries, and how it’s setting a new standard for scalable, intelligent manufacturing.


 

What specific challenges in automating moving assembly lines does the integration of Inbolt's AI-powered 3D vision with FANUC robots address, and how does it overcome them?

The integration of Inbolt's AI-powered 3D vision with FANUC robots effectively addresses several challenges in automating moving assembly lines. Traditionally, systems relied on encoders and 2D cameras to target parts, but this approach is problematic due to the unpredictable speed and movement of the assembly line. The lack of a consistent line speed, occasional stops, and vibrations all contribute to poor precision and repeatability. Furthermore, the parts on the line may not always move in a straight line, adding complexity. Inbolt overcomes these issues by providing advanced 3D vision, which allows the robot to adapt to the dynamic environment of a moving line. This technology enhances accuracy and precision, eliminating the need for costly, complex solutions like mounting robots onto stationary machines or indexing the entire system. By streamlining the process, Inbolt makes the automation of moving assembly lines not only feasible but cost-effective.

 

Can you elaborate on how Inbolt's real-time 3D guidance and FANUC's streaming motion capabilities work together to enable precise operations on continuously moving parts?

Inbolt’s real-time 3D guidance system works in conjunction with FANUC's Stream Motion capabilities to enable precise operations on continuously moving parts. Our system continuously estimates the position and orientation of a part relative to the robot. By integrating with FANUC's streaming motion, we take control of the robot in real time, sending precise coordinates to the robot's tool center point (TCP). This high-frequency communication allows for rapid adjustments, enabling the robot to respond seamlessly to any unplanned events, such as changes in line speed, stops, or resumption of movement, as well as any swinging or shifting of parts. This ensures consistent accuracy and adaptability, even in dynamic environments.
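The control loop Dersey describes, continuously estimating a part's pose and streaming updated TCP targets at high frequency, can be sketched in simplified form below. This is an illustrative simulation only: the function names, the 125 Hz rate, and the simulated conveyor motion are assumptions for the sketch, and neither FANUC's actual Stream Motion protocol (a UDP-based interface) nor Inbolt's vision API is shown.

```python
import math

CONTROL_HZ = 125          # assumed streaming rate; real rates depend on the controller
DT = 1.0 / CONTROL_HZ

def estimate_part_pose(t):
    """Stand-in for the vision system: the part advances along the
    conveyor while swaying laterally. Returns (x, y, z) in metres."""
    x = 0.5 * t                    # conveyor advance at an assumed 0.5 m/s
    y = 0.02 * math.sin(2.0 * t)   # small lateral sway of the hanging part
    return (x, y, 0.0)

def follow_part(duration_s):
    """Re-target the robot TCP to the latest estimated part pose on
    every control tick; in a real cell each pose would be streamed to
    the controller instead of collected in a list."""
    targets = []
    for i in range(int(duration_s * CONTROL_HZ)):
        t = i * DT
        targets.append(estimate_part_pose(t))
    return targets

targets = follow_part(1.0)   # one second of tracking: 125 TCP setpoints
```

The key property this loop illustrates is that the robot never assumes a fixed line speed: every setpoint comes from a fresh pose estimate, so stops, restarts, and part swing are absorbed automatically.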

 

How does this new solution impact the cycle time and overall efficiency of tasks such as screw insertion, bolt rundown, and glue application compared to traditional methods?

Our solution does not impact the cycle time or overall efficiency of tasks like screw insertion, bolt rundown, or glue application compared to traditional methods: the vision system introduces no additional processing time, which is central to our technology. Its main advantage is that it is fully transparent, integrating smoothly with existing systems without altering line speed or takt time. However, it's worth noting that unlike human workers, who can quickly switch between multiple tasks, robots are typically optimized for a single operation at a time. As a result, tasks like bolt rundown or gluing are performed efficiently, but multitasking between operations is not typically done by the robot.

 

What feedback have early adopters like General Motors and Stellantis provided regarding the implementation and performance of this integrated system?

Early adopters like General Motors and Stellantis have expressed that our integrated system is exactly what they’ve been searching for over the past two decades. They consider it a breakthrough, describing it as the 'last frontier' of automation. These companies have emphasized the necessity of automating their lines without sacrificing productivity. Stellantis, in particular, has validated the technology, confirming that it performs exceptionally well in a real-world production environment.

 

In terms of scalability, how adaptable is this solution for manufacturers with varying production needs and existing infrastructure?

In terms of scalability, our solution is highly adaptable to manufacturers with varying production needs and existing infrastructure. One of our main unique selling points is that anyone can program our vision system, and it is not limited to moving assembly lines. The only input we require is a 3D model, making it flexible for different production setups. Additionally, the robot base can be mobile, allowing it to be positioned wherever it’s needed most. Our vision system is capable of working with a wide range of parts, not just a single model, and can switch between different CAD models, ensuring seamless operation across diverse production environments.

 

What measures have been taken to ensure the system's reliability and accuracy in environments with part variations or imperfect conditions?

To ensure the system's reliability and accuracy in environments with part variations or imperfect conditions, we have implemented several key measures. First, our system is agnostic to lighting conditions, meaning it performs consistently regardless of the environment. Additionally, we can absorb part variations compared to the CAD model, ensuring adaptability to real-world discrepancies. Our AI can also be fine-tuned based on the specific application and system needs. Importantly, our mean time between failures (MTBF) data demonstrates 100% reliability in real-world operations, further validating the system's robustness.

 

Looking ahead, what are the plans for expanding this technology to other industries or applications beyond automotive manufacturing?

Looking ahead, our plans for expanding this technology focus on both new industries and global markets. While we are currently focused on discrete manufacturing, including automotive, aerospace, and household goods, we are also exploring opportunities in electronics assembly. Geographically, we are targeting expansion into the United States and Asia, with offices planned for Japan and, soon, Korea. Additionally, we aim to evolve vertically within industries, continuously increasing the value we provide through our technology.

 

 

 

 
