According to Jensen Huang, Nvidia's CEO, Digits is built around the new GB10 Grace Blackwell superchip, which delivers a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models. One petaflop, it should be noted, is equivalent to one quadrillion operations per second, enough to handle complex calculations and large data sets.
With Digits, users can develop and run inference on models using their own desktop system, and then deploy those models to the accelerated cloud or data center infrastructure.
For Huang, this milestone matters because AI will be present in applications across every industry, and Digits will put the company's next-generation technology in the hands of millions of developers. “Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to participate in and shape the AI era,” the executive commented.
In design, the system is quite similar to a Mac Mini, but it is powerful enough to run AI models of up to 200 billion parameters. It will be priced at $3,000 and will be available starting in May of this year.
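To put those two figures in rough perspective, here is a minimal back-of-the-envelope sketch. It assumes roughly two floating-point operations per parameter per generated token, a common rule of thumb that is our assumption rather than anything Nvidia has stated, and it ignores precision, memory bandwidth and software overhead entirely:

```python
# Rough, illustrative arithmetic only: real throughput depends on precision,
# memory bandwidth and the software stack, none of which are modeled here.
PETAFLOP = 1e15          # operations per second, as quoted for the GB10
PARAMS = 200e9           # 200 billion parameters, the stated model ceiling

flops_per_token = 2 * PARAMS             # ~2 ops per parameter per token (rule of thumb)
tokens_per_second = PETAFLOP / flops_per_token

print(f"Theoretical upper bound: ~{tokens_per_second:.0f} tokens/s")  # ~2500 tokens/s
```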
Grace Blackwell, AI supercomputing for everyone
With the Grace Blackwell architecture, the executive specified, companies and researchers can prototype, tune and test models on local Project Digits systems running the Linux-based Nvidia DGX operating system, and then deploy them seamlessly on Nvidia DGX Cloud, accelerated cloud instances or data center infrastructure.
This allows developers to prototype AI and then scale it across cloud or data center infrastructure, using the same Grace Blackwell architecture and Nvidia AI Enterprise software platform.
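As a loose illustration of that prototype-locally, deploy-later pattern, the sketch below uses the Hugging Face transformers library to run inference on a small placeholder model and then save the resulting artifact for later large-scale serving. The model name and the save step are illustrative assumptions, not Nvidia's actual DGX workflow:

```python
from transformers import pipeline

# Prototype locally: run inference on a small model on the desktop machine.
# "gpt2" is only a placeholder; on a Digits-class box one would load a much
# larger checkpoint.
generator = pipeline("text-generation", model="gpt2")
print(generator("AI supercomputing on the desktop means", max_new_tokens=30)[0]["generated_text"])

# Save the (possibly fine-tuned) model so the same artifact can later be
# uploaded to cloud or data center infrastructure for large-scale serving.
generator.model.save_pretrained("./my_model")
generator.tokenizer.save_pretrained("./my_model")
```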
Additionally, Digits users can access a library of software for experimentation and prototyping, including software development kits, orchestration tools, frameworks and models available in the Nvidia NGC catalog and the Nvidia Developer portal.
It is a tool aimed at a fairly specific audience: people who want to develop their own AI models without depending on large technology companies and their massive infrastructure. For universities, for example, it could prove significant for processing the large data sets that research projects require.
One of the main trends in the AI industry is AI agents, which can also be built with Grace Blackwell, as well as with Nvidia NIM microservices, which are available for research, development and testing through the company's Developer Program.
The company, which at times last year was the most valuable in the world, also presented its open Llama Nemotron large language models and Cosmos Nemotron vision language models, which can supercharge AI agents on any accelerated system.
This technology could help usher in a new era of autonomous agents: specialized digital assistants that help people solve complex problems and automate repetitive tasks.
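As a rough illustration of how such an agent-style assistant might be driven from code, the sketch below sends a chat request to an OpenAI-compatible inference endpoint using the openai Python client. The local URL and model name are hypothetical placeholders, and this is a generic pattern, not a specific Nvidia NIM example:

```python
from openai import OpenAI

# Assumption: an inference microservice exposing an OpenAI-compatible API is
# running locally at this hypothetical address; adjust the URL and model name
# to whatever service is actually deployed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="example/llama-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize today's open support tickets."}],
)
print(response.choices[0].message.content)
```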
Nvidia accelerates development of humanoid robotics and physical AI
To address the demand for robotics in the industrial and manufacturing sectors, Nvidia launched a collection of robot foundation models, data pipelines and simulation frameworks to accelerate development efforts for next-generation humanoid robots.
This collection was named Isaac GR00T Blueprint and is intended to generate exponentially large amounts of synthetic motion data for training humanoid robots through imitation learning.
Imitation learning, a subset of robot learning, allows robots to acquire new skills by observing and imitating expert human demonstrations. Collecting such large, high-quality data sets in the real world is tedious, expensive and time-consuming.
Therefore, implementing the Isaac GR00T blueprint for synthetic motion generation allows developers to easily generate exponentially large synthetic data sets from only a small number of human demonstrations.
It should be noted that in this initiative, Apple Vision Pro can be used to capture human actions in a digital twin. These human actions are imitated by a robot in the simulation and recorded for use as ground truth.
The GR00T-Mimic workflow then multiplies the captured human demonstration into a larger synthetic motion data set. The workflow, built on Nvidia's Omniverse and Cosmos platforms, expands this data set through domain randomization and 3D upscaling.
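To give a flavor of how a handful of demonstrations can be multiplied into a much larger synthetic set, the sketch below applies simple domain randomization (time warping plus joint-angle noise) to one recorded trajectory. It is a generic NumPy example with placeholder data, not the actual GR00T-Mimic or Omniverse pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# One captured human demonstration: T timesteps x J joint angles (placeholder data).
demo = np.sin(np.linspace(0, 2 * np.pi, 100))[:, None] * np.ones((1, 12))

def randomize(trajectory, rng):
    """Create one synthetic variant via simple domain randomization."""
    t, j = trajectory.shape
    # Time warping: stretch or compress the motion by up to +/-20%.
    new_t = int(t * rng.uniform(0.8, 1.2))
    idx = np.linspace(0, t - 1, new_t)
    warped = np.stack(
        [np.interp(idx, np.arange(t), trajectory[:, k]) for k in range(j)], axis=1
    )
    # Joint-angle noise: small Gaussian perturbation on every joint.
    return warped + rng.normal(0, 0.02, warped.shape)

# Multiply one demonstration into a much larger synthetic data set.
synthetic = [randomize(demo, rng) for _ in range(1000)]
print(len(synthetic), synthetic[0].shape)
```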
Finally, the company also announced Cosmos, a platform featuring a family of open, pre-trained world foundation models designed specifically to generate physics-aware videos and world states for physical AI development. Together, the company concluded, Isaac GR00T, Omniverse and Cosmos will help physical AI and humanoid innovation take a leap forward.