Elevate Success with Every Operation!

MSI Enterprise Platform Solutions stands as a leading global hardware provider.

All of MSI's server products are developed in-house, reflecting a deep commitment to addressing customer needs and market demands, underscored by a strong emphasis on design and manufacturing excellence.

NEWS & EVENTS

Product News
November 18, 2025

MSI Accelerates AI and Data Center Innovation with Next-Gen Server Solutions at SC25

ST. LOUIS, Missouri – November 18, 2025 – MSI, a leading global provider of high-performance server solutions, showcases its next-generation computing and AI innovations at SuperComputing 2025 (SC25), Booth #205. MSI introduces its ORv3 rack solution and a comprehensive portfolio of power-efficient, multi-node, and AI-optimized platforms built on NVIDIA MGX and desktop NVIDIA DGX architectures, designed for high-density environments and mission-critical workloads. These modular, scalable, rack-scale solutions are engineered for maximum performance, energy efficiency, and flexibility, enabling modern and next-generation data centers to accelerate deployment and scale with ease.

"Through close collaboration with industry leaders AMD, Intel, and NVIDIA, MSI continues to drive innovation across the data center ecosystem," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "Our goal is to deliver scalable, energy-efficient infrastructure that empowers customers to accelerate AI development and next-generation computing with performance, reliability, and flexibility at scale."

Scaling Data Center Performance — From DC-MHS Architecture to Rack Solutions

MSI's data center building blocks are developed on the DC-MHS (Datacenter Modular Hardware System) architecture, spanning host processor modules, Core Compute servers, Open Compute servers, and AI computing servers. This modular design standardizes hardware components, BMC architecture, and form factors, simplifying operations and reducing deployment complexity. With EVAC CPU heatsink support, data centers can maintain thermal efficiency while rapidly adapting to the growing demands of AI, analytics, and compute-intensive workloads. MSI's modular approach empowers operators to deploy next-generation infrastructure faster and achieve time-to-market value.
ORv3 Rack — Designed for Next-Generation Data Centers

MSI's ORv3 21" 44OU rack is a fully validated, integrated solution that combines power, thermal, and networking systems to streamline engineering and accelerate deployment in hyperscale environments. Featuring sixteen CD281-S4051-X2 2OU DC-MHS servers, the rack utilizes centralized 48V power shelves and front-facing I/O, maximizing space for CPUs, memory, and storage while maintaining optimal airflow and simplifying maintenance.

Single-Socket AMD EPYC™ 9005 Server in ORv3 Architecture:
- CD281-S4051-X2: 2OU 2-node server with 12 DDR5 DIMM slots and 12 E3.S 1T PCIe 5.0 x4 NVMe bays per node

DC-MHS Core Compute Servers — High-Density, Scalable Data Center Solutions

MSI's Core Compute platforms maximize rack density and resource efficiency by integrating multiple compute nodes into a single high-density chassis. Each node is powered by either AMD EPYC 9005 Series processors (up to 500W TDP) or Intel® Xeon® 6 processors (up to 500W/350W TDP). Available in 2U 4-node and 2U 2-node configurations, these platforms deliver exceptional thermal performance and scalability for today's data centers.

Single-Socket AMD EPYC 9005 Servers:
- CD270-S4051-X4: 2U 4-node server with 12 DDR5 DIMM slots and 3 PCIe 5.0 x4 U.2 NVMe bays per node
- CD270-S4051-X2: 2U 2-node server with 12 DDR5 DIMM slots and 6 PCIe 5.0 x4 U.2 NVMe bays per node

Single-Socket Intel Xeon 6 Servers:
- CD270-S3061-X4: 2U 4-node server with 16 DDR5 DIMM slots and 3 PCIe 5.0 x4 U.2 NVMe bays per node
- CD270-S3071-X2: 2U 2-node server with 12 DDR5 DIMM slots and 6 PCIe 5.0 x4 U.2 NVMe bays per node

DC-MHS Enterprise Servers — High-Efficiency Platforms for Cloud Workloads

Built on the DC-MHS architecture, MSI's enterprise server platforms deliver exceptional memory capacity, extensive I/O options, and high-TDP CPU compatibility to handle demanding cloud, virtualization, and storage applications.
Supporting both AMD EPYC 9005 Series and Intel Xeon 6 processors, these modular solutions provide flexible performance for diverse data center workloads.

Single-Socket AMD EPYC 9005 Servers:
- CX271-S4056: 2U server with 24 DDR5 DIMM slots and configurations of 8 or 24 PCIe 5.0 U.2 NVMe bays
- CX171-S4056: 1U server with 24 DDR5 DIMM slots and 12 PCIe 5.0 U.2 NVMe bays

Dual-Socket Intel Xeon 6 Servers:
- CX270-S5062: 2U server with 32 DDR5 DIMM slots and configurations of 8 or 24 PCIe 5.0 U.2 NVMe bays
- CX170-S5062: 1U server with 32 DDR5 DIMM slots and 12 PCIe 5.0 U.2 NVMe bays

Next-Generation AI Solutions Accelerated by NVIDIA

MSI introduces a new era of AI computing solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures. The lineup includes AI servers and an AI station supporting the latest NVIDIA Hopper GPUs, NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, and NVIDIA Blackwell Ultra GPUs, engineered to meet diverse deployment needs, from large-scale data center training to edge inferencing and AI development on the desktop. MSI's AI servers are purpose-built for high-performance computing and AI workloads. The 4U AI platforms offer flexible configurations with both Intel Xeon and AMD EPYC processors, supporting up to 600W GPUs for maximum performance. These platforms are ideal for large language models (LLMs), deep learning training, and NVIDIA Omniverse workloads.

AI Servers:
- CG481-S6053: Dual AMD EPYC 9005 CPUs, eight PCIe 5.0 x16 FHFL dual-width GPU slots, 24 DDR5 DIMMs, eight 2.5-inch U.2 NVMe bays, and eight 400G Ethernet ports powered by NVIDIA ConnectX-8 SuperNICs
- CG480-S5063: Dual Intel Xeon 6 CPUs, eight PCIe 5.0 x16 FHFL dual-width GPU slots, 32 DDR5 DIMMs, and twenty PCIe 5.0 E1.S NVMe bays
- CG290-S3063: 2U AI server powered by a single Intel Xeon 6 CPU with 16 DDR5 DIMMs and four FHFL dual-width GPU slots (up to 600W each), ideal for edge computing and small-scale inference deployments
AI Station

For developers demanding data-center-level performance in a workstation form factor, the MSI AI Station CT60-S8060 brings the power of the NVIDIA DGX Station to the desktop. Built with the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip and up to 784GB of unified memory, it delivers unprecedented compute performance for developing, training, and deploying large-scale AI models, all from the deskside.

Supporting Resources:
- Discover how MSI's OCP ORv3-compatible nodes deliver optimized performance for hyperscale cloud deployments.
- Watch MSI's 4U & 2U NVIDIA MGX AI platforms, built on NVIDIA accelerated computing to deliver the performance for tomorrow's AI workloads.

Product News
November 17, 2025

MSI Unveils Next-Gen Systems Based on NVIDIA MGX and Desktop NVIDIA DGX AI Platforms at Supercomputing 2025

Expanding AI Innovation from Data Center to Desktop — Showcasing New NVIDIA MGX Servers, AI Station, and EdgeXpert Supercomputer

MSI, a global leader in high-performance computing, will unveil its next generation of AI computing solutions at Supercomputing 2025 (SC25), held at the America's Center in St. Louis, Missouri. Built on NVIDIA MGX™, NVIDIA DGX Station™, and NVIDIA DGX Spark™ reference architectures, the lineup is powered by NVIDIA Hopper GPUs, NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, and NVIDIA Blackwell Ultra GPUs, as well as the Grace Blackwell GB10 Superchip. Together, these platforms deliver data-center-class performance across diverse AI workloads, from large-scale model training and simulation to edge inferencing and desktop AI development.

AI Servers: Scalable Performance for Every AI Workload

MSI's AI servers are purpose-built for large language models (LLMs), deep learning, and NVIDIA Omniverse workloads, offering flexible configurations with both Intel® Xeon® and AMD EPYC™ processors and supporting up to 600 W GPUs for maximum performance and scalability.

- CG481-S6053 (4U, AMD platform): Dual AMD EPYC 9005 CPUs, eight FHFL PCIe 5.0 dual-width GPU slots, 24 DDR5 DIMMs, eight U.2 NVMe bays, and eight 400 GbE ports powered by NVIDIA ConnectX-8 SuperNICs, enabling high-bandwidth AI clusters.
- CG480-S5063 (4U, Intel platform): Dual Intel Xeon 6 CPUs, eight FHFL dual-width GPU slots, 32 DDR5 DIMMs, and twenty PCIe 5.0 E1.S NVMe bays, optimized for deep learning training and fine-tuning workloads.
- CG290-S3063 (2U): Single Intel Xeon 6 CPU, 16 DDR5 DIMMs, and four dual-width GPU slots (up to 600 W each), ideal for compact edge computing and small-scale inferencing deployments.
Danny Hsu, General Manager of MSI's Enterprise Platform Solutions, stated: "As AI workloads grow in complexity, MSI's latest platforms based on NVIDIA MGX enable customers to scale efficiently while maximizing GPU bandwidth and computational density to meet the demands of the AI era."

AI Station CT60-S8060: Data-Center Power at the Desktop

For developers and researchers demanding data-center-level compute performance in a workstation form factor, MSI introduces the AI Station CT60-S8060, built on the NVIDIA DGX Station™ architecture and powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip. With up to 784 GB of unified memory, this platform delivers DGX-class performance for developing, training, and deploying large-scale AI models directly from the desktop.

Danny Hsu added: "The AI Station brings data-center performance to individual creators and researchers, empowering next-generation AI innovation at every level."

EdgeXpert Personal AI Supercomputer Now Available

The MSI EdgeXpert Personal AI Supercomputer became officially available on October 15, 2025, through MSI's official website and authorized distributors. Powered by the NVIDIA GB10 Grace Blackwell Superchip with 128 GB of unified memory, EdgeXpert delivers professional-grade AI performance in a compact 1.2-liter form factor, designed for education, research, and enterprise AI labs.

David Wu, General Manager of MSI's Customized Product Solutions BU, commented: "EdgeXpert bridges the gap between AI research and real-world deployment, empowering developers and educators with secure, high-performance, and affordable AI computing right on their desktop."

Event Details
Event: Supercomputing 2025 (SC25)
Date: November 17–20, 2025
Location: America's Center, St. Louis, MO, USA
Booth: #205

MSI AIoT: https://www.msi.com/to/aiot
MSI AIoT Facebook: https://www.facebook.com/MSIAIoT
MSI AIoT LinkedIn: https://www.linkedin.com/showcase/msi-aiot
MSI Global YouTube: https://www.youtube.com/user/MSI

Product News
October 14, 2025

Scaling Cloud and AI: MSI Highlights ORv3, DC-MHS, and MGX Solutions at 2025 OCP Global Summit

San Jose, California – October 14, 2025 – At the 2025 OCP Global Summit (Booth #A55), MSI, a leading global provider of high-performance server solutions, highlights the ORv3 21" 44OU rack, OCP DC-MHS platforms, and GPU servers built on the NVIDIA MGX architecture, accelerated by the latest NVIDIA Hopper GPUs and NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. These solutions target hyperscale, colocation, and AI deployments, delivering the scalability and efficiency required for next-generation data centers. On October 16, MSI Product Marketing Manager Chris Andrada will also present an Expo Hall session titled "Pioneering the Modern Datacenter with DC-MHS Architecture."

"Our focus is on helping datacenter operators bridge the gap between rapidly advancing compute technologies and real-world deployment at scale. By integrating rack-level design with open standards and GPU acceleration, we aim to simplify adoption, reduce complexity, and give the industry a stronger foundation to support the next wave of AI and data-driven applications," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions.

ORv3 Rack-Scale Integration

MSI's ORv3 21" 44OU rack comes fully validated with integrated power, thermal, and networking, reducing engineering effort and deployment time for hyperscale environments. With 16 dual-node servers, centralized 48V power shelves, and all front-facing I/O, operators gain more space for CPUs, memory, and storage while keeping airflow clear for efficient cooling. The CD281-S4051-X2 2OU 2-node DC-MHS server supports a single AMD EPYC™ 9005 CPU (up to 500W TDP) per node, with each node offering 12 DDR5 DIMM slots, 12 front E3.S PCIe 5.0 NVMe drives, and 2 PCIe 5.0 x16 slots for balanced compute, storage, and expansion. This combination provides dense performance for cloud and analytics, delivered in a rack system that can be deployed faster and serviced entirely from the cold aisle.

Standardization with OCP DC-MHS Servers & Motherboards

MSI's DC-MHS portfolio offers standardized server and HPM designs across Intel® Xeon® 6 and AMD EPYC 9005 processors for CSPs and hyperscale data centers. With standardized DC-SCM modules, these platforms reduce firmware effort and enable cross-vendor interoperability. Available in M-FLW, DNO-2, and DNO-4 form factors, they provide a consistent path to deploying next-gen CPUs without redesigning entire systems. With support for DDR5 high-bandwidth memory, PCIe 5.0 for accelerators and I/O, and front-service NVMe bays, DC-MHS systems include options such as the CX270-S5062 2U Intel Xeon 6 platform or modular HPMs, which let customers align CPU power, memory density, and drive configurations with workload needs, from cloud clusters to hyperscale data centers.

Intel HPMs:
- D3071: DNO-2 single-socket, 12 DIMM slots
- D3061: DNO-2 single-socket, 16 DIMM slots
- D3066: DNO-4 single-socket, 16 DIMM slots

AMD HPMs:
- D4051: DNO-2 single-socket, 12 DIMM slots
- D4056: DNO-4 single-socket, 24 DIMM slots for higher capacity

GPU Density with NVIDIA MGX

Built on the NVIDIA MGX modular architecture, MSI's GPU servers accelerate AI workloads across training, inference, and simulation with support for the latest NVIDIA Hopper GPUs and NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. The CG481-S6053 (4U) integrates dual AMD EPYC 9005 CPUs, 8 FHFL PCIe 6.0 GPU slots, 24 DDR5 DIMM slots, and 8×400G Ethernet networking via NVIDIA ConnectX-8 SuperNICs, ideal for large-scale AI training clusters requiring maximum GPU density and bandwidth. The CG290-S3063 (2U) features a single Intel Xeon 6 CPU, 4 FHFL PCIe 5.0 GPU slots, and 16 DDR5 DIMM slots, providing a compact, efficient system optimized for AI inference and fine-tuning in space-sensitive environments.

Supporting Resources:
- Discover how MSI's OCP ORv3-compatible nodes deliver optimized performance for hyperscale cloud deployments.
- Watch MSI's 4U & 2U NVIDIA MGX AI platforms, built on NVIDIA accelerated computing to deliver the performance for tomorrow's AI workloads.

Subscribe Now

Subscribe to our newsletter to receive the latest news and updates.

Please check the box if you would like to receive our latest news and updates. By clicking here, you consent to the processing of your personal data by Micro-Star International Co., Ltd. to send you information about MSI's products, services, and upcoming events. Please note that you can unsubscribe from the MSI Newsletters here at any time. Further details of our data processing activities are available in the MSI Privacy Policy.