MSI Enterprise Platform Solutions stands as a leading global hardware provider.

All of MSI's server products are developed in-house, reflecting a deep commitment to addressing customer needs and aligning with market demands, underpinned by a strong emphasis on design and manufacturing excellence.

NEWS & EVENTS

Product News
March 24, 2026

MSI Highlights Cloud, AI, and Enterprise Server Platforms at CloudFest 2026

Rust, Germany – March 24, 2026 – At CloudFest 2026 (Booth #H08+H09), MSI showcases a server portfolio spanning hyperscale cloud platforms, NVIDIA-accelerated AI systems, and enterprise servers for hybrid cloud environments. The lineup features multi-node and ORv3 architectures for hyperscale deployments, GPU-dense platforms for AI training and inference, and high-memory enterprise systems for virtualization and data-intensive workloads, enabling cloud service providers and enterprises to deploy scalable, high-density infrastructure.

“Cloud infrastructure is evolving rapidly as AI services and data-intensive workloads continue to expand,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions. “Service providers and enterprises need platforms that can scale efficiently while maintaining performance and operational flexibility across next-generation data center environments.”

Scalable Compute Platforms for Cloud Infrastructure

The cloud platform portfolio supports hyperscale and service provider deployments, offering deployment flexibility across 2U 4-node, 2U 2-node, and ORv3 21-inch server architectures powered by AMD EPYC™ and Intel® Xeon® 6 processors. These platforms optimize compute density and rack-level efficiency for large-scale cloud workloads. Built on the DC-MHS (Data Center Modular Hardware System) architecture, they improve serviceability and hardware interoperability, allowing operators to streamline maintenance and reduce operational complexity across large-scale deployments. For hyperscale environments, MSI also provides an ORv3-compliant 21-inch platform designed for Open Compute racks and 48V rack power architectures, enabling improved compute density, power efficiency, and seamless rack-scale integration.

The CD270-S3071-X4 is a 2U 4-node platform designed for high-density cloud deployments. Each node supports a single Intel Xeon 6 processor (up to 400W TDP) with 12 DDR5 DIMM slots across 12 memory channels and 3 front U.2 NVMe bays, enabling high compute density and efficient scaling for cloud-scale workloads.

AI Platforms for Scalable Cloud AI Infrastructure

NVIDIA-accelerated AI platforms extend the portfolio from large-scale training clusters to local AI development environments, spanning a 4U GPU server, a 2U GPU server, and an AI development workstation. Designed for cloud service providers and AI operators, the systems combine NVIDIA GPUs, high-bandwidth memory architectures, PCIe 5.0/6.0 connectivity, and high-speed networking to deliver the throughput and scalability required for data center-scale AI workloads.

The CG481-S6053 is a 4U NVIDIA MGX server powered by dual AMD EPYC™ 9005 processors, supporting up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition or NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs and 8 400Gb QSFP112 networking ports. With PCIe 6.0 GPU connectivity and high-bandwidth networking, the platform is optimized for large-scale AI training clusters and model fine-tuning environments.

The CG290-S3063 is a 2U NVIDIA MGX server built on Intel Xeon 6 processors, supporting up to 4 NVIDIA RTX PRO 6000 Blackwell Server Edition or NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. Its balanced GPU density and compact footprint make it well suited for scalable inference and distributed AI deployments across cloud and edge infrastructure.

Completing the lineup, the XpertStation WS300, based on the NVIDIA DGX Station architecture, integrates an NVIDIA Grace CPU paired with an NVIDIA Blackwell Ultra GPU, featuring a 748GB coherent memory pool and 2×400GbE networking powered by NVIDIA ConnectX®-8 SuperNICs. The system enables developers and AI teams to develop, prototype, fine-tune, and validate AI models locally before deploying workloads to production-scale AI infrastructure.

Enterprise Servers for Hybrid Cloud Infrastructure

Enterprise platforms designed for hybrid cloud environments round out the portfolio for enterprise data center deployments. The lineup spans 1U and 2U systems powered by AMD EPYC 9005 and Intel Xeon 6 processors, combining high core-count CPUs, DDR5 memory, PCIe 5.0 expansion, and high-density NVMe storage to support virtualization, databases, and cloud-native workloads.

The CX270-S5062 is a 2U dual-socket Intel Xeon 6 platform supporting 32 DDR5 DIMMs, flexible NVMe storage configurations of up to 24 front U.2 NVMe drives, and 2 PCIe 5.0 double-wide GPUs, delivering strong compute performance and storage density for virtualization and enterprise database workloads. The CX271-S4056 (-HE SKU) is a 2U single-socket AMD EPYC 9005 platform with 24 DDR5 DIMMs, 2 PCIe 5.0 double-wide GPUs, and NVMe storage, delivering high memory bandwidth and balanced performance for enterprise applications and cloud infrastructure services. The CX171-S4056 is a 1U AMD EPYC 9005 platform supporting 24 DDR5 DIMMs and up to 12 front NVMe drives, enabling high compute density for space-efficient data center deployments.

Product News
March 18, 2026

MSI Accelerates Enterprise AI with NVIDIA MGX Servers and DGX Workstations at GTC 2026

From liquid-cooled AI training platforms to desktop AI supercomputing, MSI expands its portfolio to power next-gen generative AI, HPC, and real-time video analytics.

San Jose, CA – March 17, 2026 – MSI, a global leader in high-performance server solutions, today unveils its latest AI infrastructure portfolio built on NVIDIA’s modular architectures, including the NVIDIA MGX platform and NVIDIA DGX Station technology. Designed to accelerate AI training, large-scale inference, HPC, edge, and next-generation data center workloads, MSI’s expanded lineup delivers exceptional scalability, performance density, and deployment flexibility.

Scalable AI Infrastructure Built on NVIDIA MGX Architecture

Leveraging the modular design of the NVIDIA MGX architecture, MSI has developed a comprehensive portfolio of 4U and 6U liquid-cooled servers supporting NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. The NVIDIA MGX architecture enables flexible CPU selection, high-capacity memory configurations, and seamless integration of high-speed networking, empowering enterprises to deploy infrastructure tailored to diverse workload requirements, from data center deployments to edge applications. MSI’s NVIDIA MGX-based platforms support both Intel and AMD CPU options, up to 32 DDR5 DIMM slots, and as many as eight 400G Ethernet ports powered by NVIDIA ConnectX-8 SuperNICs. With advanced thermal engineering, including both liquid cooling and optimized air-cooling designs, these systems are purpose-built to sustain peak performance under the most demanding AI workloads.

The CG480-S5063 is MSI’s flagship 4U server based on NVIDIA MGX, featuring dual Intel® Xeon® 6 processors, eight dual-width PCIe GPU slots, and 32 DDR5 DIMM slots for maximum memory scalability. Supporting up to twenty PCIe Gen5 E1.S NVMe bays, the system delivers ultra-fast data throughput and expansive storage capacity.

Expanding the NVIDIA MGX family, the CG481-S6053 integrates dual AMD EPYC™ 9005 Series processors to maximize core density and I/O bandwidth. Featuring eight PCIe 5.0 GPU slots, 24 DDR5 DIMM slots, and eight high-bandwidth 400G Ethernet ports via NVIDIA ConnectX-8 SuperNICs, this platform is designed for compute-intensive AI clusters, HPC simulations, and multi-tenant enterprise AI deployments.

The CG681-S6093 is a 6U liquid-cooled AI platform built on a dual-socket architecture, supporting eight dual-width PCIe GPU slots and eight 400G Ethernet ports powered by NVIDIA ConnectX-8 SuperNICs. It delivers exceptional performance and efficiency for high-density AI data center deployments. To reinforce its commitment to next-generation AI computing, MSI is also showcasing NVIDIA’s Vera CPU option, highlighting its ongoing innovation in data-driven, AI-powered infrastructure solutions.

Accelerating Computer Vision and Video Analytics

The NVIDIA MGX architecture is particularly well suited for computer vision and real-time video analytics applications. With high GPU throughput and ultra-fast networking, MSI’s AI platforms enable real-time multi-camera processing for smart cities, industrial inspection, and advanced surveillance systems. At GTC 2026, MSI is demonstrating AI-powered smart video search and automated summarization, showcasing how enterprises can extract actionable intelligence from massive volumes of live and archived video data.

Data Center AI Performance at the Desk: MSI XpertStation WS300

For AI developers, researchers, and data scientists requiring data center-class performance in a workstation form factor, MSI announces the availability of the XpertStation WS300 beginning March 16. Built on the NVIDIA DGX Station architecture, the WS300 is powered by the NVIDIA Grace Blackwell Ultra Desktop Superchip and features 748GB of coherent memory. Equipped with dual 400GbE networking ports via NVIDIA ConnectX-8 SuperNICs and a robust 1600W ATX power supply, the system delivers plug-and-play supercomputing capability directly at the deskside. The XpertStation WS300 enables advanced AI model development, fine-tuning, inference, data science workflows, and complex simulations, bringing data center-level acceleration to enterprise developers and research teams.

With its expanded NVIDIA MGX server portfolio and NVIDIA DGX-powered workstation platform, MSI continues to deliver scalable, high-performance AI infrastructure, empowering organizations to accelerate innovation from the data center to the edge and the desktop.