
HPE & NVIDIA expand AI Factory portfolio with new enterprise tools


Hewlett Packard Enterprise has expanded its collaboration with NVIDIA to strengthen its AI Factory portfolio, aiming to support the entire AI lifecycle for enterprises, service providers, and public sector organisations.

The enhanced portfolio includes HPE Private Cloud AI, a turnkey solution that combines HPE infrastructure with NVIDIA AI technologies to provide a comprehensive platform for AI development and deployment.

The solution now integrates with NVIDIA AI Enterprise, adding support for feature branch model updates and for the NVIDIA Enterprise AI Factory validated design.

"Our strong collaboration with NVIDIA continues to drive transformative outcomes for our shared customers," said Antonio Neri, President and Chief Executive Officer, HPE. "By co-engineering cutting-edge AI technologies elevated by HPE's robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organisation, no matter where they are on their AI journey."

"Together, we are meeting the demands of today, while paving the way for an AI-driven future."

Jensen Huang, Founder and Chief Executive Officer of NVIDIA, commented: "Enterprises can build the most advanced NVIDIA AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI."

"Together, NVIDIA and HPE are laying the foundation for businesses to harness intelligence as a new industrial resource that scales from the data centre to the cloud and the edge."

The updated HPE Private Cloud AI solution now supports feature branch updates from NVIDIA AI Enterprise, which include AI frameworks, microservices for pre-trained models, and software development kits (SDKs). This feature aims to allow developers to test and validate software features and optimisations for AI workloads, alongside existing support for production branch models with built-in guardrails.
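NVIDIA's pre-trained model microservices are typically exposed through an OpenAI-compatible HTTP API, so a minimal sketch of how a developer might exercise one of these deployments for testing could look like the following; the endpoint, port, and model identifier are placeholders rather than details confirmed by HPE or NVIDIA.

```python
# Illustrative only: queries a locally deployed NVIDIA NIM-style microservice
# through its OpenAI-compatible HTTP endpoint. The URL, port, and model name
# are placeholders for whatever a given deployment actually exposes.
import requests

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarise last night's firewall alerts."}
    ],
    "max_tokens": 256,
}

response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```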

The objective is to give businesses of all sizes the ability to develop and deploy agentic and generative AI applications while adopting a multi-layered approach to safety and security.

Additionally, HPE Private Cloud AI will support the NVIDIA Enterprise AI Factory validated design, extending its use for both agentic and generative AI workloads.

The partnership has also produced advances in storage for AI workloads. The HPE Alletra Storage MP X10000 introduces an SDK designed to integrate with the NVIDIA AI Data Platform reference design.

This integration aims to deliver accelerated data performance and intelligent orchestration for agentic AI, enabling streamlined data pipelines for ingestion, inference, training, and continuous learning across NVIDIA-accelerated infrastructure.

The main advantages of this integration are expected to include flexible inline data processing, vector indexing, metadata enrichment, and remote direct memory access (RDMA) transfers between GPU memory, system memory, and HPE storage. This setup allows customers to scale capacity and performance independently to match workload needs and unifies the storage and intelligence layers for real-time data access.
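The X10000 SDK itself is not documented in this article, so the following is a purely conceptual sketch of what ingest-time metadata enrichment and vector indexing involve in principle; the embedding function and in-memory index are stand-ins, not the actual SDK or storage layer.

```python
# Conceptual sketch only: metadata enrichment plus vector indexing at ingest
# time. This does not use the HPE Alletra X10000 SDK; the embedding function
# is a stand-in, so search results here are not semantically meaningful.
import hashlib
from datetime import datetime, timezone

import numpy as np

def embed(text: str, dim: int = 384) -> np.ndarray:
    """Stand-in embedding: a deterministic pseudo-random vector per text chunk."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    return np.random.default_rng(seed).standard_normal(dim).astype(np.float32)

def ingest(doc_id: str, text: str, index: list) -> None:
    """Attach enrichment metadata and a vector to each chunk as it is ingested."""
    index.append({
        "doc_id": doc_id,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "length": len(text),
        "vector": embed(text),
        "text": text,
    })

def search(query: str, index: list, k: int = 3) -> list:
    """Cosine-similarity lookup; meaningful only with a real embedding model."""
    q = embed(query)
    def score(entry):
        v = entry["vector"]
        return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q)))
    return [e["doc_id"] for e in sorted(index, key=score, reverse=True)[:k]]

index = []
ingest("alert-42", "Suspicious lateral movement detected on subnet 10.0.3.0/24", index)
ingest("alert-43", "Routine certificate rotation completed for edge gateways", index)
print(search("lateral movement", index, k=1))
```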

Advancements have also been made in server offerings, with the HPE ProLiant Compute DL380a Gen12 server available to order with up to ten NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

This server, which previously ranked highly in several MLPerf Inference: Datacenter v5.0 benchmarks, is aimed at workloads such as agentic multimodal AI inference, physical AI tasks, model fine-tuning, and design and graphics applications. Features include advanced cooling options, enhanced security with post-quantum cryptography readiness and FIPS 140-3 Level 3 certification, and automated operations management across the server lifecycle.

Additional benchmark results were highlighted, including strong performances by the HPE ProLiant Compute DL384 Gen12 server and the HPE Cray XD670 server, both of which achieved leading rankings in several industry-standard AI and computer vision tests.

On the software side, HPE OpsRamp Software is being updated to support the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

This solution is intended to provide IT teams with tools for full-stack AI infrastructure observability, workflow automation, analytics, and event management. Deep integration with NVIDIA infrastructure will enable detailed monitoring and optimisation of AI workloads, with metrics for GPU performance, job scheduling, and resource usage, as well as predictive analytics for resource allocation and cost optimisation.
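OpsRamp's own integration points are not detailed here, but the per-GPU metrics it is described as surfacing are the kind exposed by NVIDIA's management library. The sketch below, using the nvidia-ml-py (pynvml) bindings, is purely an illustration of that underlying telemetry and not OpsRamp code.

```python
# Illustrative only: samples the kind of per-GPU telemetry that full-stack
# observability tools aggregate. Requires an NVIDIA driver on the host and
# the nvidia-ml-py package; this is not OpsRamp's implementation.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older bindings return bytes
            name = name.decode()
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # GPU/memory utilisation (%)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # bytes used / total
        print(f"GPU {i} ({name}): util={util.gpu}% mem={mem.used / mem.total:.0%}")
finally:
    pynvml.nvmlShutdown()
```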

The availability of these features will roll out over the coming months, with HPE Private Cloud AI's enhanced support, the HPE Alletra Storage MP X10000 SDK, and new server configurations being introduced by Summer 2025.
