Unity Software, the Underdog Poised to Power the Physical AI Revolution

Wall Street loves writing obituaries too early.

Back in 2020, for example, Unity Software (U) was one of the hottest IPOs in tech. Investors couldn’t get enough of the pitch: a rendering engine that would power the future of 3D content – most notably, the “metaverse” and other immersive worlds. Almost overnight, the company was valued in the tens of billions, reaching a $40 billion-plus market cap within three months of going public.

But the hype burned out just as quickly. The tech required for a good experience – like Meta's (META) VR headsets – was expensive, not to mention cumbersome. Enthusiasts were let down by clunky interfaces, poor integration, and underwhelming virtual worlds. And the metaverse became the butt of the joke.

Meta lost billions on the endeavor. And Unity got tossed into the bin of “yesterday’s fads.”

But here’s the thing: Wall Street may have buried Unity too soon… Because as it turns out, Unity wasn’t built for the metaverse but for what comes after. And that next stage is only just now coming into view.

Let me explain.

The Three Waves of AI – and Why Unity Software Hits the Third

To see why Unity may be on the cusp of a comeback, you need to understand the three waves of AI.

Wave One: Large Language Models (LLMs) 

This was the “ChatGPT moment”: when AI went mainstream and was cemented into the daily lives of workers and students around the world. LLMs proved AI could read, write, converse – and improve.

This wave’s big winners were companies like OpenAI, Anthropic, and Microsoft (MSFT).

Wave Two: Agentic AI 

Now, if the first wave taught AI to talk, the second wave is teaching it to do. Agentic systems are designed with the ability to perceive, reason, and act autonomously to achieve a goal, rather than just responding to direct commands or questions.

In other words, this wave is all about having AI execute tasks across software, websites, and workflows, all on its own. 

The winners here are shaping up to be the companies building the most capable AI agents and orchestration layers. In the enterprise space, think Salesforce (CRM) and ServiceNow (NOW).

Wave Three: Physical AI 

This is the one Wall Street is barely paying attention to yet. Physical AI involves just what it says on the tin: bringing intelligence into the physical world via humanoid robots, augmented reality (AR) glasses, holographic displays, and ambient assistants embedded into our environments. It’s where AI stops being something you look at on a screen and becomes something you live with in the real world.

And if you believe – as we firmly do – that Physical AI will define the next decade of computing, then Unity has a shot at becoming its operating system…

Why Physical AI Needs a Real-Time 3D Operating Engine

Think about your phone for a moment. The whole design behind iOS and Android – the icons, app grids, touch inputs, etc. – was built for the mobile internet era. That was an age when you needed screens to click and tap your way around the web.

But increasingly, AI doesn’t work that way. It’s moving toward ambient intelligence. Now AI listens, observes, and anticipates. You don’t need to tap any icons to unlock value; you just talk or gesture and let AI act.

That means the old smartphone form factor is mismatched with the AI age. And in our view, the natural replacement is spatial computing: glasses that project holographic displays into the real world, robots that perceive environments in 3D, immersive overlays that blend digital and physical.

Imagine something akin to “Iron Man” tech: Jarvis in your ear, holograms floating in midair, interfaces that live in the physical space, not on screens.

To build this world, you need a rendering engine that can:

  • Create lifelike 3D environments in real time.
  • Handle physics, light, and motion in a spatially accurate way.
  • Work across devices, platforms, and hardware ecosystems.

That’s Unity.
