Oct. 21 (Portaltic/EP) –
IBM has announced its most advanced large language model (LLM) family to date, Granite 3.0, accompanied by an update to Watsonx Code Assistant and to its Granite Time Series pre-trained models.
Granite 3.0 brings together a set of LLMs developed for different purposes, which the technology company groups into general-purpose language models, models with a mixture-of-experts (MoE) architecture, and others focused on safety and guardrails, that is, with capabilities to detect risks and harms.
All of these models are “compact and versatile,” designed to “precisely fit enterprise data and integrate seamlessly into various business environments or workflows.”
These models have been trained on more than 12 trillion tokens of data spanning twelve natural languages and 116 programming languages, using a two-stage training method, as reported by the company in a press release.
With 1 billion (1B), 2 billion (2B), and 8 billion (8B) parameter variants available, IBM has clarified that the 8B and 2B LLMs will gain an extended 128K context window and multimodal document understanding capabilities by the end of the year.
According to IBM, it is its “most advanced family of AI models to date,” with the ability to “outperform or match similarly sized models from leading model vendors in many academic and industry benchmarks, demonstrating strong performance, transparency and safety.”
The company has accompanied these models with an update to its Granite Time Series pre-trained models, which have been trained on three times more data and triple the performance of their predecessors in benchmark tests.
All Granite 3.0 and Granite Time Series models are available on Hugging Face under the permissive Apache 2.0 open source license, and selected variants will also be available through partners such as Qualcomm, AWS and Nvidia.
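For reference, models published on Hugging Face can typically be loaded with the standard transformers library. The sketch below is illustrative only; the checkpoint name is an assumption based on IBM's ibm-granite organization, so the exact model IDs should be checked on the hub.

    # Minimal sketch: loading a Granite 3.0 instruct model from Hugging Face.
    # The repo ID below is an assumption; verify it on the ibm-granite hub page.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ibm-granite/granite-3.0-2b-instruct"  # assumed checkpoint name

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Build a chat-style prompt using the tokenizer's chat template.
    messages = [{"role": "user", "content": "Summarize what a mixture-of-experts model is."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Generate a short completion and print only the newly generated tokens.
    outputs = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))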
IBM has also announced the next generation of Watsonx Code Assistant, powered by Granite models, which provides programming assistance for languages such as C, C++, Go, Java, and Python.