Tesla's unveiling of its Optimus robots in October was met with a rightful dose of skepticism, as many of the capabilities the robots demonstrated turned out to be powered by human teleoperation rather than large language models (LLMs). However, despite the human assistance in this case, the scenes on display aren't that far away from being produced by AI-powered humanoid robots.
AI is already reinventing robotics intelligence and the entire machine lifecycle, ushering in a third generation of the robotics industry. These strides will only accelerate in 2025 as NVIDIA rolls out its widely anticipated Jetson Thor, a new AI computing platform built for humanoid robots that can perform complex tasks and interact safely and naturally with people and other robots.
From turbocharging production to finally cracking some of robotics' biggest challenges, such as dexterous hands, the latest AI breakthroughs put a tenfold leap in robotics innovation within reach. Meanwhile, AI-driven simulations of the countless scenarios robots encounter are slashing development hours and costs while moving humanlike machines out of science fiction and into reality.
However, to understand how this innovation will finally enable us to unlock the true promise of robots in the near future, we must first understand where the robotics industry is coming from.
Computing Breakthroughs Also Underpinned the First Periods of Robot Proliferation
What we know as industrial robots have been around for half a century. With the dawn of the first computer age in the 1970s, all-electric machines and robotic arms were introduced to the market. The following 40 years of robotics focused almost exclusively on high-cost, high-value industrial robots: precise but inflexible machines.
While these robots were fairly limited in scope, industrial robot use in the U.S. went from just over 200 in 1970 to 4,000 by 1980, and was on its way to a million as we entered the next millennium. This generation was made up of pick-and-place machines from the likes of FANUC America Corporation, KUKA, and ABB, which revolutionized assembly lines with tasks like spot welding. Due to their million-dollar-plus price tags and rigid programming, they were also confined to structured environments.
Then, around 2010, the next generation of robots arrived. The introduction of the iPhone in 2007 and the mass production of smartphones drove down the cost of computing, and as cheaper computing made its way into robotics production, cheaper robots followed. Manufacturers could now build a decent robot for tens to hundreds of thousands of dollars instead of a million-plus.
Second, with more advanced GPUs on board, robotics manufacturers began incorporating neural networks for better computer vision and object recognition. This opened the door to collaborative robots (or cobots) that could work more closely alongside humans and, perhaps more importantly, to mobility in the form of Automated Guided Vehicles (AGVs) and Autonomous Mobile Robots (AMRs).
In 2012, Amazon saw the writing on the wall about robots' mobile future and acquired Kiva Systems, the maker of AGVs for warehouses, for $775M, which is still the largest acquisition in the robotics industry. Since then, industrial AGVs and AMRs have taken off to become a $4B industry. The accelerating pace of AMR adoption was on display last month when Locus Robotics, a manufacturer of AMRs for warehouse automation, surpassed four billion units picked by its AMRs worldwide. It took the company seven years to reach its first billion picks in 2022. Just 11 months later, in August 2023, it hit the two-billion mark, followed by three billion a little over seven months later in April 2024. Now, just six months after that, it has passed four billion.
AI Isn’t Just a Step, It’s a Giant Leap for Robotics Intelligence
While unlocking human collaboration and mobility were significant steps forward for robotics, the next computing paradigm – AI – presents the opportunity for a tenfold leap in robotics innovation and intelligence. LLMs are fundamentally transforming everything from how robots are designed to how they learn and adapt. Traditional robotics followed a linear path: design, test, deploy. Now, AI-first development transforms this paradigm through:
- Real-time learning and adaptation, eliminating the need for constant reprogramming as environments change.
- Rapid simulation of countless scenarios, dramatically reducing development time and cost.
- Enhanced processing of sensory inputs (vision, audio, and touch), enabling more sophisticated responses to complex situations; a minimal fusion sketch follows this list.
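To make that third point concrete, here is a minimal sketch of multimodal sensor fusion: vision, audio, and touch readings projected into a shared space and combined into a single action decision. It's an illustrative toy in PyTorch; every dimension and module name is a placeholder, not drawn from any particular robot stack.

```python
import torch
import torch.nn as nn

class SensorFusion(nn.Module):
    """Toy multimodal fusion: project each sensor stream into a shared
    embedding space, concatenate, and map to action logits. All sizes
    are illustrative placeholders, not from a real robot."""

    def __init__(self, vision_dim=512, audio_dim=128, touch_dim=32,
                 hidden=256, num_actions=8):
        super().__init__()
        self.vision_proj = nn.Linear(vision_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.touch_proj = nn.Linear(touch_dim, hidden)
        self.policy = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, vision, audio, touch):
        fused = torch.cat([self.vision_proj(vision),
                           self.audio_proj(audio),
                           self.touch_proj(touch)], dim=-1)
        return self.policy(fused)  # one decision from three senses

# A single fused decision from three simultaneous sensor readings.
model = SensorFusion()
logits = model(torch.randn(1, 512), torch.randn(1, 128), torch.randn(1, 32))
print(logits.shape)  # torch.Size([1, 8])
```

In practice the fusion step is often attention-based, which is where transformers enter the picture.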
This isn’t just theoretical – we’re seeing it in action. Leading robotics companies such as Agility Robotics are now using transformer networks, the same neural network architecture that powers LLMs. By learning context and meaning through the relationships in its training data, this architecture lets robots respond to their environments in ways that were impossible with traditional robotic programming.
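At the heart of a transformer is self-attention: every element of a sequence gets re-weighted by its learned relevance to every other element, which is what "tracking relationships in data" means mechanically. Here is a bare-bones NumPy sketch of a single attention head; it is illustrative only and makes no claim about Agility Robotics' actual models.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.
    x: (seq_len, d_model) sequence, e.g. a robot's recent observations.
    Each output row is a mixture of all inputs, weighted by relevance."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v          # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])      # pairwise relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)           # softmax over the sequence
    return w @ v                                 # relationship-aware output

rng = np.random.default_rng(0)
d = 16
tokens = rng.normal(size=(10, d))                # 10 "observation tokens"
out = self_attention(tokens, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)  # (10, 16)
```

The same mechanism that lets an LLM relate words across a sentence lets a robot policy relate observations across time and across sensors.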
Meanwhile, “simulation to reality,” or Sim2Real, techniques let robotics manufacturers train AI models in simulated environments and then carry those skills and that knowledge over to real-world robotics applications. The ability to simulate, teach, and repeat enables companies to build very complex robots incredibly fast. Furthermore, tools such as NVIDIA’s Isaac Lab, which provides a GPU-accelerated simulation environment, mean manufacturers can run far more simulations than they ever could on a traditional CPU.
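A common ingredient in Sim2Real training is domain randomization: physics parameters such as friction and sensor noise are re-sampled every episode so the policy can't overfit to one idealized world. The self-contained toy below illustrates that loop; ToyGraspEnv and its one-parameter "policy" are invented stand-ins, not Isaac Lab's API.

```python
import random

def randomized_physics():
    """Sample a new 'world' for each episode: the heart of domain
    randomization. Ranges are arbitrary, not Isaac Lab defaults."""
    return {"friction": random.uniform(0.4, 1.2),
            "sensor_noise": random.uniform(0.0, 0.02)}

class ToyGraspEnv:
    """Stand-in for a simulated grasping task: the 'right' grip force
    happens to equal the episode's randomized friction."""
    def __init__(self, physics):
        self.physics, self.steps = physics, 0

    def observe(self):
        p = self.physics
        return p["friction"] + random.gauss(0, p["sensor_noise"])

    def step(self, grip):
        self.steps += 1
        reward = -abs(grip - self.physics["friction"])  # closer grip, more reward
        return reward, self.steps >= 20                 # (reward, done)

def train(episodes=2000, grip=0.5, lr=0.1):
    """Crude hill-climbing 'policy': one grip-force parameter, nudged
    toward random perturbations that score better than the current grip."""
    for _ in range(episodes):
        env, done = ToyGraspEnv(randomized_physics()), False
        while not done:
            trial = grip + random.gauss(0, 0.05)     # explore around current grip
            reward, done = env.step(trial)
            if reward > -abs(grip - env.observe()):  # did the perturbation help?
                grip += lr * (trial - grip)
    return grip  # settles near the median randomized friction (~0.8)

print(round(train(), 2))
```

Because every rollout sees different physics, the learned grip works across the whole randomized range rather than in one tuned world; GPU-accelerated simulators get their leverage by running thousands of such randomized rollouts in parallel.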
Unlocking the Holy Grail of Robotics Engineering
In addition to driving widespread robotics adoption in industrial environments, these advancements could also assist robots in crossing the chasm into residential environments. That’s right, AI-powered robots — from the likes of Physical Intelligence, which just raised $400M from Jeff Bezos and others — will likely enter homes over the next decade with the ability to help out with a wide range of household chores.
However, even more groundbreaking from a robotics engineering standpoint is that theories such as Moravec’s paradox – which states that tasks easy for humans are hard for robots, and vice versa – are being upended. That’s because these complex AI systems provide a pathway to solving robotics’ most challenging problems, including giving robots humanlike eyesight, dexterous manipulation, and the ability to pick up heterogeneous objects.
Today, more than 4 million industrial robots operate globally within factories, yet robotic hands and manipulators that can reliably grasp objects have seen remarkably little progress. A two- or three-year-old child can still manipulate a bottle and cap better than most robots, and handing a robot a glass, followed by an egg, will still most likely end with yolk on the floor. Google DeepMind is one group tackling dexterous manipulation with AI through its DemoStart solution. Specifically, it uses reinforcement learning methods that feed the AI agents powering a robot simulated demonstrations, helping the robot gain a sense of its hand, finger joints, and fingertip capabilities.
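DeepMind describes DemoStart as an auto-curriculum method: training episodes begin from states sampled along a handful of demonstrations, close to success at first and progressively earlier as the agent improves. The full implementation isn't public, so the sketch below is a deliberately simplified, self-contained toy of that demonstration-seeded curriculum; the 1-D "hand," the demo trajectory, and the stride-length "policy" are all invented for illustration.

```python
import random

# A recorded "demonstration": hand positions from start (0.0) to a
# successful grasp at 1.0. Real demos would be joint-angle trajectories.
DEMO = [i / 20 for i in range(21)]

def run_episode(start, stride, max_steps=30, tol=0.05):
    """Toy task: drive a 1-D 'hand' from `start` to the target at 1.0."""
    pos = start
    for _ in range(max_steps):
        pos += stride + random.gauss(0, 0.005)  # noisy learned motion
        if abs(pos - 1.0) < tol:
            return True, pos
    return False, pos

def demostart_style_training(episodes=3000):
    """Simplified demonstration-seeded curriculum: start episodes near the
    demo's successful end state, then move the start point earlier (harder)
    whenever the recent success rate climbs above 80%."""
    progress, stride, recent = 0.9, 0.02, []
    for _ in range(episodes):
        start = DEMO[int(progress * (len(DEMO) - 1))]
        ok, final = run_episode(start, stride)
        recent.append(ok)
        if not ok:  # crude policy update: lengthen or shorten strides
            stride += 0.002 if final < 1.0 else -0.002
        if len(recent) >= 50 and sum(recent[-50:]) > 40:
            progress, recent = max(0.0, progress - 0.1), []  # harder starts
    return progress  # 0.0 means the task now succeeds from scratch

print(demostart_style_training())
```

The real system layers this curriculum on top of a full reinforcement learning algorithm inside a physics simulator, but the starting-point trick is the essence: the robot earns easy rewards near the demonstrated grasp first, then works backward to mastering the whole motion.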
As this kind of AI helps us crack these long-unsolved robotics problems, it is finally becoming possible to unlock the promise robotics has held since the 1970s: machines that can do all the things a human can do, and potentially even more. With AI-powered, humanlike capabilities, robots will finally be able to work seamlessly alongside human coworkers to expand our capabilities while vastly improving productivity in the process.