MSI Enterprise Platform Solutions stands as a leading global hardware provider.

All of MSI's server products are developed in-house, reflecting a deep commitment to addressing customer needs and aligning with market demands, underscored by a strong emphasis on design and manufacturing excellence.

NEWS & EVENTS

Product News
March 16, 2026

MSI Launches XpertStation WS300 on NVIDIA DGX Station Architecture

San Jose, California – March 16, 2026 – MSI today announced the launch of XpertStation WS300 on NVIDIA DGX Station Architecture, a next-generation deskside AI supercomputer built to support the accelerating demands of large language models (LLMs), generative AI, and advanced data science workflows. Powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, supporting up to 784GB of large coherent memory and dual 400GbE networking, the platform extends advanced AI infrastructure capabilities into a compact deskside deployment model and is available for order starting today.

“MSI has a strategic vision to advance AI-first computing,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions. “With NVIDIA, we are defining the next era of AI infrastructure, bridging centralized performance and distributed innovation, and enabling organizations to move from experimentation to production with greater speed, scale, and confidence.”

Bringing Data-Center AI to the Desktop

XpertStation WS300 integrates up to 784GB of large coherent memory, combining high-bandwidth HBM3e GPU memory and LPDDR5X CPU memory into a unified domain to enable efficient CPU-GPU data sharing for large-scale model training and fine-tuning. With dual 400GbE connectivity powered by the NVIDIA ConnectX-8 SuperNIC, the platform delivers up to 800Gb/s of aggregate networking bandwidth to support distributed AI workloads and multi-node scalability. High-speed PCIe Gen5 and Gen6 NVMe storage accelerates dataset ingestion and AI data pipelines, ensuring sustained compute utilization during intensive training and inference operations. Combined with full support for the NVIDIA AI Software Stack, the platform provides an integrated hardware-software foundation for seamless AI development and deployment from desktop to data center.
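To see why a coherent memory pool of this size matters for large-scale fine-tuning, a rough back-of-envelope sketch helps. The function below is illustrative only (its name, the 4x optimizer multiplier, and the example model sizes are assumptions, not MSI or NVIDIA specifications):

```python
def finetune_footprint_gb(params_billion: float, bytes_per_param: int = 2,
                          state_multiplier: int = 4) -> float:
    """Rough memory estimate for full fine-tuning of a model.

    Weights + gradients + Adam moment states are commonly approximated
    as ~4x the weight footprint at the same precision. Activations and
    framework overhead are ignored, so this is an optimistic lower bound.
    """
    weight_gb = params_billion * bytes_per_param  # 1e9 params * bytes / 1e9
    return weight_gb * state_multiplier

# A 70B-parameter model in FP16 (2 bytes/param) needs roughly:
print(finetune_footprint_gb(70))   # 560.0 GB -- within the coherent pool
# A 180B-parameter model at the same precision would not fit:
print(finetune_footprint_gb(180))  # 1440.0 GB
```

Estimates like this are why unifying GPU HBM3e and CPU LPDDR5X into one coherent domain is significant: models whose training state exceeds GPU memory alone can still be fine-tuned locally without manual offload management.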
Expanding AI Workflows from Development to Deployment

XpertStation WS300 supports the full AI lifecycle, from large-scale model training and data-intensive analytics to real-time inference and emerging physical AI and robotics workloads. The platform enables organizations to accelerate deep learning models, process massive datasets efficiently, and execute complex AI workloads locally with high-throughput performance. The system can also function as a centralized AI compute node for collaborative fine-tuning and on-demand deployment, giving teams greater operational flexibility while maintaining control over proprietary data and intellectual property. By extending data-center-class performance to the deskside, XpertStation WS300 allows organizations to move AI initiatives from experimentation to production with infrastructure-level consistency and reliability.

Supporting Autonomous AI Agents

NVIDIA NemoClaw is an open-source stack that installs the OpenShell runtime with a policy-controlled sandbox, enabling autonomous AI agents to operate continuously and more safely. Running OpenShell on XpertStation WS300, developers can run trillion-parameter models locally with up to 20 petaFLOPS of AI compute and 784GB of memory, enabling always-on AI agents at the deskside without relying on cloud infrastructure.

Product News
March 2, 2026

MSI Unveils Scalable AI-RAN with NVIDIA AI Aerial Solutions to Accelerate 5G and Beyond at MWC 2026

One Unified Architecture for O-RAN, Private 5G, and vRAN

BARCELONA, Spain – March 2, 2026 – MSI, a leading global provider of high-performance server solutions, is showcasing its latest AI-vRAN solutions at MWC 2026 (Booth #5A61) to address the evolving requirements of next-generation telecom networks. MSI’s AI-vRAN stack, integrated with GPU server solutions, is designed to bring AI capabilities into the core of network operations, supporting growing telecommunications workloads and accelerating the transition toward AI-powered mobile networks.

“AI is rapidly reshaping the telecommunications landscape, and MSI is focused on helping operators build scalable network architectures that support diverse applications, including voice, data, video streaming, and AI workloads, through a unified infrastructure,” said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. “This AI-powered vRAN architecture delivers greater flexibility for 5G and future 6G deployments while accelerating time-to-market for new services.”

To support AI-driven telecom networks, MSI delivers a unified AI-vRAN platform designed to simplify deployment across O-RAN, private 5G, and virtual RAN environments. This unified architecture enables operators to adopt AI-powered network functions while maintaining consistency across distributed and centralized network infrastructures.

At MWC 2026, MSI highlights its CG480-S6053 and CG290-S3063 platforms, purpose-built to address diverse AI-vRAN deployment requirements. The CG480-S6053 is designed to support dense GPU configurations for compute-intensive AI inference and acceleration workloads. Meanwhile, the CG290-S3063 is optimized for space- and power-efficient deployments in a 2U chassis, making it well suited for edge and distributed network environments. Both platforms offer flexible configurations of GPUs, NICs, DPUs, and storage to help operators optimize performance, scalability, and cost efficiency across a consistent infrastructure.
Alongside these systems, the 2U CX271-S4056 platform (HE SKU) is designed to support balanced data processing for AI workloads, addressing scenarios where compute efficiency and system balance are critical.

A key capability of MSI’s AI-vRAN solution is dynamic GPU allocation, which enables resources to be flexibly assigned between 5G communication workloads and AI workloads based on real-time demand. By supporting both base station and edge AI functions within the same infrastructure, the platform enables simultaneous processing of RAN and AI workloads while improving overall compute utilization. Integrated software support further ensures AI services can be efficiently deployed and operated at the network edge.

MSI offers a scalable range of GPU-accelerated server configurations, from 2 to 8 GPUs, to address diverse deployment requirements across wireless access, metro edge, core network, and centralized cloud environments. This flexibility allows telecom operators to align system configurations with specific workload demands while optimizing performance and operational efficiency.

Built on the NVIDIA MGX architecture, MSI’s AI platforms have demonstrated reliability and scalability through widespread adoption in AI data centers worldwide. MSI is now extending this proven architectural approach into the telecom domain, applying data center–class computing principles to support the evolving performance, scalability, and efficiency requirements of next-generation mobile networks.

Supporting Resources: Watch MSI’s 4U & 2U NVIDIA MGX AI platforms, built on NVIDIA accelerated computing to deliver the performance for tomorrow’s AI and telecom workloads.
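MSI does not publish the scheduling algorithm behind dynamic GPU allocation, but the idea of shifting a fixed GPU pool between RAN and AI workloads on real-time demand can be sketched as a simple proportional allocator. Everything below (class name, demand inputs, the RAN floor of one GPU) is an illustrative assumption, not MSI's implementation:

```python
from dataclasses import dataclass

@dataclass
class GpuPool:
    """Toy model of splitting a fixed GPU pool between RAN and AI work."""
    total: int
    ran_gpus: int = 0
    ai_gpus: int = 0

    def rebalance(self, ran_demand: float, ai_demand: float,
                  min_ran: int = 1) -> None:
        """Assign GPUs proportionally to demand, keeping a RAN floor.

        ran_demand / ai_demand are abstract load figures; a real
        scheduler would read live telemetry (PRB utilization, AI
        queue depth) instead of taking them as arguments.
        """
        total_demand = ran_demand + ai_demand
        if total_demand == 0:
            # Idle network: keep baseline 5G service, free the rest for AI.
            self.ran_gpus, self.ai_gpus = min_ran, self.total - min_ran
            return
        ran = round(self.total * ran_demand / total_demand)
        ran = max(min_ran, min(ran, self.total))  # never starve the base station
        self.ran_gpus, self.ai_gpus = ran, self.total - ran

pool = GpuPool(total=8)                          # e.g. an 8-GPU configuration
pool.rebalance(ran_demand=3.0, ai_demand=1.0)    # busy hour: favor 5G traffic
print(pool.ran_gpus, pool.ai_gpus)               # 6 2
pool.rebalance(ran_demand=0.5, ai_demand=3.5)    # off-peak: favor AI inference
print(pool.ran_gpus, pool.ai_gpus)               # 1 7
```

The point of the sketch is the claimed utilization benefit: instead of dedicating hardware separately to RAN and AI, one pool serves both, with idle RAN capacity reclaimed for edge AI work.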