Dell has expanded its hardware offerings by adding support for Meta's Llama 2 models to its Dell Validated Design for Generative AI and its on-premises generative AI solutions. Meta introduced Llama 2 in July, garnering support from several cloud providers, including Microsoft Azure, AWS, and Google Cloud. Dell's initiative stands out, however, because it enables deployment of the open-source large language model (LLM) on on-premises infrastructure.
The company’s embrace of Llama 2 extends beyond just customer support; Dell is integrating the model into its own operations as well.
The move also benefits Meta, giving the company a deeper view into how enterprises use Llama and feeding back into improvements to the model's capabilities.
Bringing Llama 2 to the Enterprise
Dell’s senior vice president of AI strategy, Matt Baker, views the adoption of Llama 2 as a critical step toward realizing Dell’s ambition of integrating AI with enterprise data management.
Baker told VentureBeat that this integration lets customers build sophisticated applications alongside their existing data using the open-source model, which is available in sizes of up to 70 billion parameters.
Dell has also been a proponent of Nvidia’s NeMo framework, assisting organizations in crafting generative AI applications. Now, with Llama 2, Dell is not only providing another choice for clients but also advising on the necessary hardware for Llama 2 deployment and aiding in the development of applications leveraging the open-source LLM.
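What an on-premises deployment looks like in practice will depend on the hardware Dell specifies for a given customer, but as a rough illustration, the sketch below loads a Llama 2 chat model onto local GPUs with the Hugging Face transformers library. The model ID, precision, and prompt are illustrative assumptions, and downloading the weights requires accepting Meta's Llama 2 license.

```python
# Minimal sketch: loading a Llama 2 chat model on local GPU hardware
# via Hugging Face transformers. Model ID and generation settings are
# illustrative; access to the weights requires accepting Meta's license.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # 7B chat variant; 13B and 70B also exist

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU
    device_map="auto",          # spread layers across available GPUs
)

prompt = "[INST] Summarize the benefits of on-premises LLM deployment. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```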
Dell's own use of Llama 2 spans experimental and production deployments, most notably a chatbot interface over its knowledge base articles that streamlines information retrieval.
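Dell has not disclosed how that chatbot is built. A common pattern for this kind of use case is retrieval-augmented generation: find the most relevant article, then ask the model to answer from it. The sketch below is purely illustrative, assuming a simple TF-IDF retriever and a hypothetical ask_llama helper that wraps a locally hosted Llama 2 model.

```python
# Hypothetical sketch of a retrieval-augmented knowledge-base chatbot.
# Dell has not published its implementation; this only illustrates the
# general pattern of pairing a retriever with a locally hosted Llama 2 model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb_articles = [
    "Placeholder article 1: server password reset steps ...",
    "Placeholder article 2: firmware update procedure ...",
    # ... a real knowledge base would hold many more articles
]

vectorizer = TfidfVectorizer()
kb_vectors = vectorizer.fit_transform(kb_articles)

def retrieve(question: str) -> str:
    """Return the knowledge base article most similar to the question."""
    q_vec = vectorizer.transform([question])
    best = cosine_similarity(q_vec, kb_vectors).argmax()
    return kb_articles[best]

def answer(question: str, ask_llama) -> str:
    """Build a grounded prompt and let the model answer from the article."""
    context = retrieve(question)
    prompt = (
        "[INST] Answer the question using only the article below.\n\n"
        f"Article: {context}\n\nQuestion: {question} [/INST]"
    )
    # ask_llama: hypothetical wrapper around a local Llama 2 endpoint
    return ask_llama(prompt)
```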
While Dell profits from the hardware and professional services associated with generative AI, Baker emphasizes that Dell is not commercializing Llama 2, which remains accessible as open-source software.
Meta's Thoughts
For its part, Meta, with Joe Spisak leading its open-source generative AI efforts, is optimistic about the partnership with Dell. Llama 2's traction is evident, with a reported 30 million downloads in the past month. The LLM isn't a standalone offering but a core component of a broader generative AI stack that also includes PyTorch, the machine learning framework Meta developed.
Spisak highlighted that Llama 2’s versatility is being recognized across various sectors, from cloud services optimizing LLM benchmarks to hardware manufacturers integrating the model into new devices. He stressed the significance of on-premises deployment capabilities, which cater to data privacy concerns.
According to Spisak, partnerships like the one with Dell are invaluable because they give the Llama development community insight into enterprise needs and scale, informing future Llama models and fostering a more secure, open ecosystem.