Nvidia’s Jensen Huang made the biggest headlines at CES earlier this month with two strikingly different forecasts. First, he declared that robotics is ready for its ChatGPT moment in 2025, powered by humanoid robotics, which he called a ‘multitrillion-dollar’ opportunity. Second, he played down the near-term opportunity for quantum computing, saying “truly useful” quantum computing is 15 to 30 years away. The result was a public selloff of quantum computing stocks. As an active investor in both the robotics and quantum computing markets today, I agree with much of what Jensen had to say about robotics, but I think he’s wrong about his quantum computing projections.
In 2025, we’re entering the dawn of a third generation of robotics, driven by AI and perhaps best represented by advances in humanoid robotics. AI’s ability to provide real-time learning, rapid simulation, and enhanced sensory inputs will drive significant strides over the next year on one of the last unsolved robotics problems: dexterous manipulation. In recent years, humanoid robots have advanced from arms that can squeeze and lift objects to, today, a few robots with rudimentary fingers. The next step is for AI and reinforcement learning methods to use simulated demonstrations to teach tomorrow’s robots to operate fully functional hands with articulated joints and even fingertip-level control. Mastering touch will be one of the final pieces of the robotics puzzle, and thanks to AI developments – many of which have been driven by Nvidia – we are now ready to address it in the year ahead.
However, while robotics is having its ChatGPT moment, I believe quantum computing is entering a phase in 2025 that mirrors where AI was roughly five years ago. For a long time before its breakthrough, many questioned AI’s feasibility. Yet, in the background, researchers and companies on the front lines were making exponential year-over-year progress on algorithms. Eventually, hardware and computing power caught up with that progress, and everything aligned for takeoff. We’re seeing that same exponential growth pattern in quantum computing as we address the most significant barriers to large-scale quantum computing: errors and instability.
Each year, we’re achieving higher qubit counts on quantum computers in development while simultaneously finding ways, through approaches like photonic qubits and quantum error correction, to make these systems more resilient. Small-scale quantum computing is already showing practical use cases, and riding the momentum from Google’s recent breakthrough, over the next 12 to 24 months we’ll enter the early innings of practical quantum computing applications. This progress couldn’t come at a more crucial time, as we approach the end of Moore’s Law in classical computing. Quantum computing represents one of the few viable paths to meeting our skyrocketing computational needs.
When Jensen said ‘truly useful,’ he was most likely referring to large-scale quantum computing becoming a major piece of the overall computing market and crossing the chasm to mainstream adoption. I’d still put that at a decade away, not decades, and at that point, quantum computing will be transformative rather than merely ‘useful’ across multiple industries. Take drug development, for instance. Currently, simulating molecular interactions for new therapeutics is practically impossible, even with our most powerful supercomputers. What would take thousands of years of classical computing time could potentially be reduced to hours on quantum computers. That capability would dramatically increase the success rate of drug trials and significantly accelerate the development of new medicines.
Similar revolutionary advances are possible in materials science and chemistry. Whether we’re developing new battery technologies, protective coatings, or other chemical innovations, quantum computing could enable us to simulate and predict outcomes that today are discoverable only through lengthy trial-and-error processes. These applications aren’t just incrementally better – they represent improvements of several orders of magnitude over our current capabilities.
While skeptics question whether we’re overhyping quantum computing’s potential, their skepticism typically centers not on whether quantum computers can solve these problems but rather – as Jensen and even Mark Zuckerberg recently did – on the timeline to practical implementation. I understand some of that trepidation about quantum’s actual arrival date, even if I don’t expect decades-long delays in reaching large-scale quantum computing. The current transitional period for quantum computing is a big one, as the field moves from physics to practical engineering. I would even draw a parallel to the transition from analog to digital in the 1950s and ’60s that gave birth to Silicon Valley. Just as that era saw the shift from physics to engineering with the development of CMOS and semiconductor chips, we’re now witnessing the same evolution in quantum computing.
If we get that evolution right, the return on investment for quantum computing is not just compelling; it’s world-changing, given our growing computational needs. While the first quantum computers will solve a specific set of problems, their capabilities will continuously expand as the technology matures. DARPA has already outlined the progressive sets of applications that will become feasible as quantum computers advance. Looking ahead 50 years, we may be solving problems we can’t even conceptualize today.
As we hit the physical limits of classical computing while our computational needs continue to accelerate – driven by AI, climate modeling, and the explosion of sensor data – quantum computing isn’t just an interesting scientific pursuit; it’s an essential next step in the evolution of computing. In fact, BCG projects that quantum computing could create up to $850 billion in economic value by 2040, and I believe even that might be conservative given the technology’s transformative potential.