Jan. 8 (Portaltic/EP) –
Nvidia CEO Jensen Huang has asserted that the company's Artificial Intelligence (AI) chips are evolving and improving in performance faster than Moore's Law contemplates, which he says will reduce the cost of using AI models. He has also advocated for a future in which superintelligent AI carries out all kinds of tasks and drives robotics forward.
Moore’s Law, devised by Intel co-founder Gordon Moore in 1965, describes the evolution of computing power over time. Specifically, based on advances observed in the technology sector, it predicted that the number of transistors on a chip would double approximately every two years, with a minimal increase in cost.
That is, as Intel itself explains on its website, the more transistors or components a device contains, the lower the cost per device becomes while performance increases. This makes the law a significant principle for the advancement of the technology sector, since it has driven repeated improvements in the capacity of computing devices as well as reductions in their cost.
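As a rough illustration of the doubling described above (a simple sketch with hypothetical starting figures, not an exact industry model), the transistor count after a number of two-year periods can be projected like this:

```python
# Toy projection of Moore's Law: the transistor count doubles
# roughly every two years at near-constant chip cost.

def projected_transistors(initial: int, years: int) -> int:
    """Transistor count after `years`, assuming one doubling every 2 years."""
    doublings = years // 2
    return initial * (2 ** doublings)

# Hypothetical example: a chip with 1 million transistors, projected 10 years out.
start = 1_000_000
after_10_years = projected_transistors(start, 10)  # 5 doublings -> 32x
print(after_10_years)  # 32000000
```

Under this simple model, cost per transistor falls by the same factor that the count rises, which is the dynamic Huang says Nvidia's chips are now outpacing.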
Taking this into account, Nvidia's CEO has stated that the company's AI chips are "progressing much faster" than this prediction and that, as a result, AI systems are advancing at their own pace.
Huang made these remarks in statements to TechCrunch at the CES 2025 technology fair, where he indicated that, for example, the company's latest processor designed for data centers, the GB200 NVL72 superchip, delivers 30x more performance on LLM inference workloads than the previous-generation H100 chip.
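The link between throughput and cost that Huang is pointing to can be sketched with simple arithmetic (the dollar figures below are hypothetical placeholders, not Nvidia pricing): if a chip serves 30x more queries at a comparable operating cost, each query costs 30x less.

```python
# Toy cost-per-query arithmetic with hypothetical numbers: a 30x
# throughput gain at similar hourly operating cost cuts the cost
# of each inference by the same factor.

def cost_per_query(hourly_cost: float, queries_per_hour: float) -> float:
    """Dollar cost of serving one query at a given throughput."""
    return hourly_cost / queries_per_hour

baseline = cost_per_query(hourly_cost=40.0, queries_per_hour=1_000)    # $0.04/query
faster = cost_per_query(hourly_cost=40.0, queries_per_hour=30_000)     # ~$0.0013/query
print(baseline / faster)  # 30.0
```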
As Huang explained, this is possible because Nvidia can now build the architecture, chip, system, libraries and algorithms at the same time. "If we do that, we can move faster than Moore's Law, because we can innovate across the board," he declared.
In this context, the chief executive has championed a dedicated law for the advancement of AI systems, based on three scaling stages: pre-training, post-training and test-time computing.
Pre-training refers to the initial phase in which AI models learn patterns from large amounts of data. Post-training, in turn, involves refining the model's responses through human feedback. Test-time computing, finally, covers the model's reasoning: the inference phase in which it decides what response to return to users.
The latter allows the AI model to think through the answer to each user question in more detail; however, it is also the most expensive part of using AI models.
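Why test-time computing is the expensive stage can be illustrated with a toy token-count calculation (the per-token price and token counts are hypothetical, chosen only to show the shape of the cost): every extra reasoning step adds billable tokens to each answer.

```python
# Toy sketch with hypothetical prices and token counts: test-time
# "thinking" adds reasoning tokens to every answer, so per-answer
# inference cost grows with how long the model deliberates.

PRICE_PER_1K_TOKENS = 0.01  # hypothetical price in dollars

def answer_cost(answer_tokens: int, reasoning_tokens: int) -> float:
    """Dollar cost of one answer, including hidden reasoning tokens."""
    total_tokens = answer_tokens + reasoning_tokens
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

quick = answer_cost(answer_tokens=200, reasoning_tokens=0)          # $0.002
deliberate = answer_cost(answer_tokens=200, reasoning_tokens=4000)  # $0.042
print(deliberate / quick)  # 21.0
```

This is the cost that Huang argues faster chips will drive down.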
According to Huang, just as Moore's Law "was very important in the history of computing because it reduced computing costs," the evolution of inference will have the same effect as Nvidia increases the performance of its processors. "The same will happen with inference, where we increase performance, and as a result the cost of inference will be lower," said the executive.
With this approach, models such as OpenAI's recently presented o3, which uses more computing power during inference to think harder about its answers, may become less expensive over time thanks to chips like Nvidia's aforementioned GB200 NVL72.
In other words, Nvidia's goal is to continue developing higher-performance chips to power AI tasks, so that using this technology day to day and in technological innovations becomes more affordable. "The direct and immediate solution to test-time computing, both in terms of performance and affordability, is to increase our computing capacity," he noted.
A FUTURE WITH SUPERINTELLIGENT AI
In addition, during a press meeting at CES to which Engadget had access, Nvidia's CEO also predicted a future with "superintelligent AI" for carrying out various everyday tasks.
Specifically, Huang said that users will have superintelligent AI services capable of writing, analyzing problems, managing supply chain planning, writing software or designing chips. He also pointed out that this will boost the robotics industry, with advances such as those recently announced in Omniverse and its new Cosmos World Foundation platform.
In this regard, asked whether intelligent robots could end up harming humans given these advances, Huang stressed that this technology "can be used in many ways" and does not have to be harmful; rather, "it is humans" who can use it in harmful ways.
"I believe that machines are machines," he stated, pointing out that AI and intelligent robots will be on the side of humans because "they are going to be built that way."