Make no mistake about it, enterprise AI is big business, especially for IBM.
IBM already has a $2 billion book of business related to generative AI, and it's now looking to accelerate that growth. Today IBM is expanding its enterprise AI business with the launch of the third generation of its Granite large language models (LLMs). A core element of the new generation is a continued focus on truly open-source enterprise AI. Going a step further, IBM is ensuring that the models can be fine-tuned for enterprise use with its InstructLab capabilities.
The new models announced today include general-purpose options in 2-billion- and 8-billion-parameter sizes (Granite 3.0 2B and Granite 3.0 8B). There are also Mixture-of-Experts (MoE) models: Granite 3.0 3B A800M Instruct, Granite 3.0 1B A400M Instruct, Granite 3.0 3B A800M Base and Granite 3.0 1B A400M Base. Rounding out the update, IBM also has a new group of optimized guardrail and safety options that includes the Granite Guardian 3.0 8B and Granite Guardian 3.0 2B models. The new models will be available on IBM's watsonx service, as well as on Amazon Bedrock, Amazon SageMaker and Hugging Face.
“As we mentioned on our last earnings call, the book of business that we’ve built on generative AI is now $2 billion plus across technology and consulting,” Rob Thomas, senior vice president and chief commercial officer at IBM, said during a briefing with press and analysts. “As I think about my 25 years in IBM, I’m not sure we’ve ever had a business that has scaled at this pace.”
How IBM is looking to advance enterprise AI with Granite 3.0
Granite 3.0 introduces a range of sophisticated AI models tailored for enterprise applications.
IBM expects that the new models will help to support a range of enterprise use cases, including customer service, IT automation, business process outsourcing (BPO), application development and cybersecurity.
The new Granite 3.0 models were trained by IBM’s centralized data model factory team that is responsible for sourcing and curating the data used for training.
Dario Gil, senior vice president and director of IBM Research, explained that the training process involved 12 trillion tokens of data, including both language data across multiple languages as well as code data. He emphasized that the key differences from previous generations were the quality of the data and the architectural innovations used in the training process.
Thomas added that what’s also important to recognize is where the data comes from.
“Part of our advantage in building models is data sets that we have that are unique to IBM,” Thomas said. “We have a unique, I’d say, vantage point in the industry, where we become the first customer for everything that we build that also gives us an advantage in terms of how we construct the models.”
IBM claims high performance benchmarks for Granite 3.0
According to Gil, the Granite models have achieved remarkable results on a wide range of tasks, outperforming the latest versions of models from Google, Anthropic and others.
“What you’re seeing here is incredibly highly performant models, absolutely state of the art, and we’re very proud of that,” Gil said.
But it’s not just raw performance that sets Granite apart. IBM has also placed a strong emphasis on safety and trust, developing advanced “Guardian” models that can be used to prevent the core models from being jailbroken or producing harmful content. The various model size options are also a critical element.
“We care so deeply, and we’ve learned a lesson from scaling AI, that inference cost is essential,” Gil noted. “That is the reason why we’re so focused on the size of the category of models, because it has the blend of performance and inference cost that is very attractive to scale use cases in the enterprise.”
Why real open source matters for enterprise AI
A key differentiator for Granite 3.0 is IBM’s decision to release the models under the Apache 2.0 license, which is approved by the Open Source Initiative (OSI).
There are many other open models on the market, such as Meta’s Llama, that are not in fact available under an OSI-approved license. That’s a distinction that matters to some enterprises.
“We decided that we’re going to be absolutely squeaky clean on that, and decided to do an Apache 2 license, so that we give maximum flexibility to our enterprise partners to do what they need to do with the technology,” Gil explained.
The permissive Apache 2.0 license allows IBM’s partners to build their own brands and intellectual property on top of the Granite models. This helps foster a robust ecosystem of solutions and applications powered by the Granite technology.
“It’s completely changing the notion of how quickly businesses can adopt AI when you have a permissive license that enables contribution, enables community and ultimately, enables wide distribution,” Thomas said.
Looking beyond generative AI to generative computing
Looking forward, IBM is already thinking about the next major paradigm shift, something that Gil referred to as generative computing.
In essence, generative computing refers to the ability to program computers by providing examples or prompts, rather than explicitly writing out step-by-step instructions. This aligns with the capabilities of LLMs like Granite, which can generate text, code, and other outputs based on the input they receive.
“This paradigm where we don’t write the instructions, but we program the computer, by example, is fundamental, and we’re just beginning to touch what that feels like by interacting with LLMs,” Gil said. “You are going to see us invest and go very aggressively in a direction where with this paradigm of generative computing, we’re going to be able to implement the next generation of models, agentic frameworks and much more than that, it’s a fundamental new way to program computers as a consequence of the Gen AI revolution.”
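The programming-by-example idea Gil describes can be illustrated with few-shot prompting, where the "program" is a handful of worked input/output pairs rather than explicit instructions. A minimal sketch (the date-conversion task and examples here are hypothetical, and the assembled prompt would be sent to an LLM such as Granite 3.0 through whatever inference API is in use):

```python
# Programming "by example": instead of writing parsing logic,
# we show the model a few input -> output pairs and let it
# generalize to a new input (few-shot prompting).

def build_few_shot_prompt(examples, query):
    """Assemble a prompt from worked examples plus a new query."""
    lines = ["Convert each date to ISO 8601 format."]
    for raw, iso in examples:
        lines.append(f"Input: {raw}\nOutput: {iso}")
    # The trailing "Output:" invites the model to complete the pattern.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("March 5, 2024", "2024-03-05"),
    ("12 Jan 1999", "1999-01-12"),
]
prompt = build_few_shot_prompt(examples, "July 4, 1776")
print(prompt)
```

No step-by-step date-parsing code is ever written; the examples themselves carry the intent, which is the shift Gil is pointing to.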