By now you've probably heard that Facebook is changing its name to Meta, a hard pivot toward a tech trend that's coming in hot. But the metaverse "in the real" is being built by many powerful players enabling fascinating new technologies that will advance not only the human experience, but also cross-team collaboration and simulation for industries that stand to benefit greatly from cutting-edge development tools and rich virtual environments. The emerging enterprise metaverse promises a better way to visualize the future, and what could be one of mankind's next great achievements.

One of the heaviest hitters in AI, autonomous machines and what the company calls the "Omniverse" is NVIDIA. The Silicon Valley GPU powerhouse has transformed over the years into a full-stack hardware, software and platforms company maniacally focused on AI, simulation and the metaverse. In fact, some say the metaverse will be realized as the next "Zoom meeting" platform, or a rebirth of the Internet itself: a place where a digital presence is more than mere telepresence, and where we can interact with our own powerful native tools, free of the boundaries of time zones, office spaces and other limitations of the physical world. When lofty goals like these become reality, it really is the stuff of science fiction. And that's exactly the kind of stuff NVIDIA rolled out in its Fall GTC keynote this morning.

NVIDIA Omniverse Gets Personal With AI Avatars And Digital Twins

NVIDIA Omniverse Avatar – CEO Jensen Huang (Image: NVIDIA)

NVIDIA's Omniverse platform is essentially a simulation environment for connecting native software tools and team-based collaboration in physically accurate virtual worlds. Think of mainstream creative tools like Blender, Cinema 4D and Wacom, plus the 3D modeling app SketchUp, connected in an environment where design teams can visualize their creations in real time and across physical boundaries. That's the big-iron incarnation anyway, but NVIDIA's Omniverse also applies to a myriad of mainstream consumer use cases in restaurants, banking, hospitality and more, and for these markets a relatable avatar can really come in handy.

In that regard, NVIDIA just announced that the "dawn of intelligent virtual assistants has arrived." "Omniverse Avatar combines NVIDIA's foundational graphics, simulation and AI technologies to make some of the most complex real-time applications ever created," noted NVIDIA CEO Jensen Huang in the company's Fall GTC keynote. Huang then demonstrated a number of Omniverse Avatar examples, from customer support applications and video conferencing to a restaurant kiosk, where customers interacted with an avatar that understood an order for a burger, fries and drinks.

The demo was powered by NVIDIA AI software and its Megatron 530B natural language model, which the company claims is the world's largest of its kind. In addition, Omniverse Avatar incorporates NVIDIA's Riva speech recognition SDK, which can process speech in multiple languages including English, German, French and Spanish, as well as its Merlin recommendation engine, Metropolis computer vision analytics and Audio2Face, an AI engine that renders 2D and 3D facial animation.
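Conceptually, the avatar demo chains several AI stages together. The sketch below is a purely hypothetical illustration of that flow, with invented stand-in functions (none of these are NVIDIA's actual Riva, Megatron, Merlin or Audio2Face APIs), showing how a kiosk turn might compose speech recognition, intent understanding, recommendation and a spoken reply:

```python
# Hypothetical sketch of an avatar kiosk interaction pipeline.
# The stages mirror the components NVIDIA describes (ASR, language
# understanding, recommendations, animated response), but every
# function here is an invented stand-in, not a real NVIDIA API.

def recognize_speech(audio: str) -> str:
    # Stand-in for a speech-recognition service: audio -> transcript.
    return audio.lower().strip()

def understand(transcript: str) -> dict:
    # Stand-in for a large language model extracting an order intent.
    items = [w for w in ("burger", "fries", "drink") if w in transcript]
    return {"intent": "order" if items else "chitchat", "items": items}

def recommend(order: dict) -> list:
    # Stand-in for a recommender suggesting add-ons to the order.
    add_ons = {"burger": "cheese", "fries": "ketchup", "drink": "a large size"}
    return [add_ons[i] for i in order["items"] if i in add_ons]

def respond(order: dict, suggestions: list) -> str:
    # In a real system this reply text would drive speech synthesis
    # and an Audio2Face-style animated avatar.
    if order["intent"] != "order":
        return "How can I help you today?"
    return (f"Got it: {', '.join(order['items'])}. "
            f"May I suggest {', '.join(suggestions)}?")

def kiosk_turn(audio: str) -> str:
    order = understand(recognize_speech(audio))
    return respond(order, recommend(order))
```

The point of the sketch is the composition: each stage is an independent service, which is how NVIDIA can mix and match Riva, Megatron, Merlin and Metropolis behind one avatar front end.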
All told, the demo was an impressive, dare I say cute, and fascinating forward-looking vision of what a real metaverse construct will some day bring, not only to the company's enterprise early adopters but also to mainstream consumers, who will be able to experience it in the not-so-distant future.

Ericsson's 5G Network Digital Twin Simulation In NVIDIA Omniverse (Image: NVIDIA)

In yet another application example of Omniverse, Huang highlighted how the platform can also produce "digital twins" of factories, warehouses, robots and even things like 5G edge telecom networks. Ericsson, in fact, announced a digital twin simulation of its 5G radio and antenna networks, in which the locations of buildings, trees and other structures affect network performance and can be simulated for optimal results before actual deployment in the real world.
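To see why simulating obstructions matters, consider a deliberately toy model, far simpler than Ericsson's actual propagation models and entirely my own illustration: score candidate antenna sites using free-space path loss plus a flat penalty per obstruction (building, tree) in the signal path, then pick the best site before anything is physically deployed:

```python
import math

# Toy line-of-sight signal model: standard free-space path loss plus a
# fixed dB penalty per obstruction between antenna and receiver.
# A didactic simplification, NOT Ericsson's or NVIDIA's actual model.

def path_loss_db(distance_m: float, freq_ghz: float, obstructions: int,
                 penalty_db: float = 20.0) -> float:
    # Free-space path loss (dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45
    fspl = 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_ghz) + 92.45
    return fspl + obstructions * penalty_db

def best_site(candidates: dict, freq_ghz: float = 28.0) -> str:
    # candidates maps a site name -> (distance_m, obstruction_count).
    # Picking the lowest-loss site is exactly the kind of what-if a
    # digital twin lets planners run before real-world deployment.
    return min(candidates,
               key=lambda s: path_loss_db(candidates[s][0], freq_ghz,
                                          candidates[s][1]))
```

For example, a rooftop antenna 200m away with clear line of sight beats a street-level pole 100m away that is blocked by three buildings, a result that is obvious in simulation but expensive to discover after installation.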

DRIVE Replicator Sim, Concierge And Chauffeur Aim To Make Self-Driving Cars Even More Convenient

NVIDIA DRIVE Concierge Parking Animation (Image: NVIDIA)

Another natural application for NVIDIA's Omniverse is its DRIVE Sim for autonomous vehicle training. Self-driving models need to be trained on millions of miles of real-world roads and traffic conditions, which of course is time-consuming and expensive. In fact, road training tends to stymie autonomous vehicle AI model development. Augmenting real-world data collection with synthetic data helps autonomous vehicle engineers accelerate development and produce more accurate results. However, the synthetic data that self-driving systems train on needs to faithfully match what's experienced in the real world, otherwise things can get Grand Theft Auto kinds of ugly, obviously.

A good example of this is NVIDIA's PathNet DNN for its DRIVE autonomous vehicle platform, which detects lane spacing between cars, a tricky task when a vehicle is not centered in its lane. Collecting real training data for this model can also be dangerous, and is actually against self-driving data collection policies in some jurisdictions, because deliberately driving off-center in a lane endangers other drivers on the road. However, as NVIDIA notes, Omniverse DRIVE Replicator allows its AI models to "see what humans can't" by training the PathNet neural network on millions of synthetic scenarios of off-center driving paths, the kind we flawed humans occasionally trace when we're not paying attention.
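The core idea, sampling unsafe-to-collect scenarios synthetically and labeling each with the correction the model should learn, can be sketched in a few lines. This is an illustrative toy of the general technique, not DRIVE Replicator's API; the names and parameters are invented:

```python
import random

# Toy sketch of synthetic-data generation for off-center lane driving,
# in the spirit of what DRIVE Replicator does: sample lateral offsets a
# real data-collection car could never safely drive, then label each
# sample with the correction a lane-keeping model should predict.
# All names and numbers here are illustrative, not NVIDIA's API.

LANE_WIDTH_M = 3.7  # typical highway lane width

def sample_scenario(rng: random.Random) -> dict:
    # Offset from lane center, up to 90% of the way to the lane edge --
    # positions too dangerous to collect with a real vehicle.
    offset = rng.uniform(-LANE_WIDTH_M / 2 * 0.9, LANE_WIDTH_M / 2 * 0.9)
    return {
        "lateral_offset_m": offset,
        # Supervision signal: steer back toward the lane center.
        "target_correction_m": -offset,
    }

def make_dataset(n: int, seed: int = 42) -> list:
    rng = random.Random(seed)  # seeded for reproducible scenario sets
    return [sample_scenario(rng) for _ in range(n)]
```

A real simulator would of course render full camera frames for each scenario; the key point is that the label comes for free from the simulation state, with no human annotation and no dangerous driving.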

Finally, NVIDIA's vision for autonomous vehicle experiences now also includes AI assistants, both for in-cabin interaction and to aid autonomous driving itself. Today the company announced DRIVE Concierge and DRIVE Chauffeur, built on its Omniverse Avatar platform, in an effort to evolve the experience inside self-driving vehicles. Think of Concierge as an Omniverse Avatar-powered digital assistant (pictured above) that helps with hands-off guidance for anything from phone calls and dinner reservations to infotainment services and safety and security alerts. NVIDIA notes you'll be able to have a natural conversation with the Concierge avatar, and it will be personalized to the driver and all occupants, recognizing anyone in the vehicle. Concierge also provides valet parking services and a guardian feature that uses in-vehicle cameras to make sure driver attention is at the ready when required.

Couple Concierge with NVIDIA's Chauffeur, an AI-powered driving platform based on the company's DRIVE SDK for full Level 4 autonomy in both highway and city driving, and NVIDIA's autonomous vehicle technologies are shaping up to be some of the most powerful tools for the self-driving cars and robotaxi services of the future. In fact, the company also announced that its DRIVE Orin platform has chalked up numerous new design wins, not only with names like Mercedes-Benz and Volvo, but also with bus and truck manufacturers and even high-end sports car makers like Lotus.

Jetson AGX Orin Advances AI Robotics And Autonomous Machines

NVIDIA Jetson AGX Orin Processor Module And Dev Kit (Image: NVIDIA)

Beyond metaverse simulation environments and autonomous vehicles, however, live the intelligent edge and AI-powered robotics. NVIDIA also announced its next-gen Jetson AGX Orin platform, which the company claims is "the world's most powerful and energy-efficient AI supercomputer for robotics, autonomous machines, medical devices and other forms of embedded computing at the edge." Jetson AGX Orin is an evolution of the company's Jetson platform, powered by its Ampere GPU architecture along with a 12-core Arm Cortex-A78 CPU complex and 32GB of LPDDR5 memory, good for up to 200 TOPS (tera operations per second, i.e. trillions of operations per second) of INT8-precision AI compute. NVIDIA claims Jetson AGX Orin offers six times the processing power of its previous-gen Jetson AGX Xavier platform for robotics, and equates it to having a GPU-powered AI server in the palm of your hand.
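That 6x claim is easy to sanity-check against published specs. Assuming the widely cited 32 INT8 TOPS figure for Jetson AGX Xavier (not stated in the keynote itself), Orin's 200 TOPS works out to roughly a 6.25x uplift:

```python
# Quick sanity check on NVIDIA's "six times Xavier" performance claim,
# using peak INT8 throughput figures. Xavier's 32 TOPS number is the
# commonly published spec, assumed here rather than quoted from GTC.
ORIN_TOPS = 200    # Jetson AGX Orin, per NVIDIA's announcement
XAVIER_TOPS = 32   # Jetson AGX Xavier, published spec (assumption)

speedup = ORIN_TOPS / XAVIER_TOPS
print(f"Orin vs. Xavier: {speedup:.2f}x peak INT8 throughput")
```

Peak TOPS is of course a best-case figure; real-world robotics workloads also depend on memory bandwidth and software optimization, which is why NVIDIA's own "six times" framing is slightly more conservative than the raw ratio.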

Applications for the Jetson AGX Orin platform range from manufacturing, to warehousing, healthcare, agriculture, autonomous vehicles, smart cities and more – basically any number of device types and industries that need machine vision and robotic assistance with connected intelligence on board.

NVIDIA expects to ship DRIVE AGX Orin in the first quarter of 2022, and it will eventually power the new DRIVE Concierge and Chauffeur features in future autonomous vehicles as well. NVIDIA didn't offer a time frame for when Concierge-enabled vehicles will hit the market, though Chauffeur technology is expected to arrive in 2024. Further, the company's various new Omniverse products and services are already available to developers in an early access program, along with an Omniverse Enterprise subscription service that moved to general availability today, starting at $9K per year.

My high-level takeaway from this dizzying array of announcements is that NVIDIA is advancing machine learning, artificial intelligence, robotics and autonomous vehicle technologies at a rate currently unmatched in the markets it serves. In fact, it's creating new market opportunities as it pioneers use cases that simply weren't possible before these enabling advancements, delivered not only by its powerful hardware solutions, but by its full-stack software and services portfolio as well.