MSI Enterprise Platform Solutions is a leading global provider of server hardware.

All of MSI's server products are developed in-house, reflecting a deep commitment to addressing customer needs and aligning with market demands, underpinned by a strong emphasis on design and manufacturing excellence.

NEWS & EVENTS

Product News
June 11, 2025

MSI Powers AI’s Next Leap for Enterprises at ISC 2025

Hamburg, Germany – June 10, 2025 – MSI, a global leader in high-performance server solutions, is showcasing its enterprise-grade, high-performance server platforms at ISC 2025, taking place June 10–12 at booth #E12. Built on standardized and modular architectures, MSI’s AI servers are designed to power next-generation AI and accelerated computing workloads, enabling enterprises to rapidly advance their AI innovations.

“As AI workloads continue to grow and evolve toward inference-driven applications, we’re seeing a significant shift in how enterprises approach AI deployment,” said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. “With modular and standards-based architectures, enterprise data centers can now adopt AI technologies more quickly and cost-effectively than ever before. This marks a new era where AI is not only powerful but also increasingly accessible to businesses of all sizes.”

Built on the NVIDIA MGX modular architecture, MSI’s AI servers deliver a powerful and flexible foundation for accelerated computing, tailored to meet the evolving needs of diverse AI workloads.

The CG480-S5063, MSI’s latest 4U AI server, is purpose-built for high-performance tasks such as large language model (LLM) training, deep learning, and fine-tuning. It supports dual Intel® Xeon® 6 processors and features eight FHFL dual-width GPU slots, compatible with the NVIDIA H200 NVL and NVIDIA RTX PRO 6000 Blackwell Server Edition, with support for GPUs up to 600W. Equipped with 32 DDR5 DIMM slots and 20 PCIe 5.0 E1.S NVMe bays, the CG480-S5063 ensures exceptional memory bandwidth and lightning-fast data throughput. Its modular architecture and expansive storage design make it a future-ready platform, ideal for next-generation AI deployments that demand unmatched performance and scalability.

The CG290-S3063 is a 2U AI server platform powered by the NVIDIA MGX modular architecture, designed to meet the growing demands of AI workloads in enterprise data centers. It supports a single-socket Intel Xeon 6 processor, up to 16 DDR5 DIMM slots, and four FHFL dual-width GPU slots with power support up to 600W, ideal for small-scale inference and lightweight AI workloads. With PCIe 5.0 expansion, four rear 2.5-inch NVMe drive bays, and dual M.2 NVMe slots, the CG290-S3063 offers fast data throughput, flexible storage, and a scalable design for next-generation AI applications.

The CX270-S5062, built on the DC-MHS (Datacenter Modular Hardware System) standard, is a 2U server featuring dual Intel Xeon 6 processors designed for demanding enterprise compute workloads. Equipped with 32 DDR5 DIMM slots and up to 24 PCIe 5.0 U.2 NVMe bays, it delivers exceptional memory bandwidth and high-speed storage performance, making it well suited for virtualization, database management, and other high-performance applications.

For hyperscale cloud environments, MSI offers the Open Compute CD281-S4051-X2, a 21” ORv3-compliant 2OU, 2-node server optimized for large-scale deployments. Each node is powered by a single AMD EPYC™ 9005 Series processor supporting up to 500W TDP, equipped with twelve DDR5 DIMM slots and up to twelve PCIe 5.0 E3.S NVMe bays. This configuration delivers outstanding memory bandwidth, dense storage capacity, and fast data transfer. Featuring Extended Volume Air Cooling (EVAC) CPU heatsinks and compatibility with the ORv3 48VDC power architecture, the platform offers energy-efficient operation and scalable performance, making it an ideal choice for next-generation cloud data centers.

Supporting Resources: Watch the MSI 4U MGX AI platform, built on NVIDIA accelerated computing, deliver the performance you need for tomorrow’s AI workloads. Discover how MSI’s OCP ORv3-compatible nodes deliver optimized performance for hyperscale cloud deployments.
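As a rough capacity-planning aid, the GPU slot count and 600W per-slot limit quoted for the CG480-S5063 imply an upper bound on GPU power draw per chassis. The sketch below is our own back-of-the-envelope estimate, not an MSI sizing tool:

```python
# Hypothetical sizing sketch: GPU-only power ceiling for the CG480-S5063.
# Assumes all 8 FHFL slots are populated at the 600W per-GPU maximum;
# actual system draw also includes CPUs, DRAM, storage, and cooling.
gpu_slots = 8        # eight FHFL dual-width GPU slots
max_gpu_watts = 600  # per-GPU power support quoted in the release

gpu_power_ceiling_w = gpu_slots * max_gpu_watts
print(gpu_power_ceiling_w)  # 4800 W from GPUs alone
```

A figure like this is a starting point for rack power and cooling budgets, not a measurement of real workload draw.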

Product News
May 20, 2025

From Rack Integration to AI and Cloud Systems: MSI Debuts Full-Spectrum Server Portfolio at COMPUTEX 2025

Taipei, Taiwan – May 20, 2025 – MSI, a global leader in high-performance server solutions, returns to COMPUTEX 2025 (Booth #J0506) with its most comprehensive lineup yet. Showcasing rack-level integration, modular cloud infrastructure, AI-optimized GPU systems, and enterprise server platforms, MSI presents fully integrated EIA, OCP ORv3, and NVIDIA MGX racks, DC-MHS-based Core Compute servers, and the new NVIDIA DGX Station. Together, these systems underscore MSI’s growing capability to deliver deployment-ready, workload-tuned infrastructure across hyperscale, cloud, and enterprise environments.

“The future of data infrastructure is modular, open, and workload-optimized,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions. “At COMPUTEX 2025, we’re showing how MSI is evolving into a full-stack server provider, delivering integrated platforms that help our customers scale AI, cloud, and enterprise deployments with greater efficiency and flexibility.”

Full-Rack Integration from Cloud to AI Data Centers

MSI demonstrates its rack-level integration expertise with fully configured EIA 19”, OCP ORv3 21”, and NVIDIA MGX-powered AI racks, engineered to power the full range of modern infrastructure, from cloud-native compute to AI-optimized deployments. Pre-integrated and thermally optimized, each rack is deployment-ready and tuned for specific workloads. Together, they highlight MSI’s capability to deliver complete, workload-optimized infrastructure from design to deployment.

The EIA rack delivers dense compute for private cloud and virtualization environments, integrating core infrastructure in a standard 19” format. The OCP ORv3 rack features a 21” open chassis that enables higher compute and storage density, efficient 48V power delivery, and OpenBMC-compatible management, ideal for hyperscale and software-defined data centers.
The enterprise AI rack with NVIDIA MGX, built on the NVIDIA Enterprise Reference Architecture, enables scalable GPU infrastructure for AI and HPC. Featuring modular units and high-throughput networking powered by NVIDIA Spectrum™-X, it supports multi-node scalable unit deployments optimized for large-scale training, inference, and hybrid workloads.

Core Compute and Open Compute Servers for Modular Cloud Infrastructure

MSI expands its Core Compute lineup with six DC-MHS servers powered by AMD EPYC 9005 Series processors and Intel Xeon 6 processors in 2U4N and 2U2N configurations. Designed for scalable cloud deployments, the portfolio includes high-density nodes with liquid or air cooling and compact systems optimized for power and space efficiency. With support for OCP DC-SCM, PCIe 5.0, and DDR5 DRAM, these servers enable modular, cross-platform integration and simplified management across private, hybrid, and edge cloud environments.

To further enhance Open Compute deployment flexibility, MSI introduces the CD281-S4051-X2, a 2OU 2-node ORv3 Open Compute server based on the DC-MHS architecture. Optimized for hyperscale cloud infrastructure, it supports a single AMD EPYC 9005 processor per node, offers high storage density with twelve E3.S NVMe bays per node, and integrates efficient 48V power delivery and OpenBMC-compatible management, making it ideal for software-defined and power-conscious cloud environments.

AMD EPYC 9005 Series Processor-Based Platform for Dense Virtualization and Scale-Out Workloads

CD270-S4051-X4 (Liquid Cooling): A liquid-cooled 2U 4-node server supporting up to 500W TDP. Each node features 12 DDR5 DIMM slots and 2 U.2 NVMe drive bays, ideal for dense compute in thermally constrained cloud deployments.

CD270-S4051-X4 (Air Cooling): This air-cooled 2U 4-node system supports up to 400W TDP and delivers energy-efficient compute, with 12 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Designed for virtualization, container hosting, and private cloud clusters.

CD270-S4051-X2: A 2U 2-node server optimized for space efficiency and compute density. Each node includes 12 DDR5 DIMM slots and 6 U.2 NVMe bays, making it suitable for general-purpose virtualization and edge cloud nodes.

Intel Xeon 6 Processor-Based Platform for Containerized and General-Purpose Cloud Services

CD270-S3061-X4: A 2U 4-node Intel Xeon 6700/6500 server supporting 16 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Ideal for containerized services and mixed cloud workloads requiring balanced compute density.

CD270-S3061-X2: This compact 2U 2-node Intel Xeon 6700/6500 system features 16 DDR5 DIMM slots and 6 U.2 NVMe bays per node, delivering strong compute and storage capabilities for core infrastructure and scalable cloud services.

CD270-S3071-X2: A 2U 2-node Intel Xeon 6900 system designed for I/O-heavy workloads, with 12 DDR5 DIMM slots and 6 U.2 bays per node. Suitable for storage-centric and data-intensive applications in the cloud.

AI Platforms with NVIDIA MGX & DGX Station for AI Deployment

MSI presents a comprehensive lineup of AI-ready platforms, including NVIDIA MGX-based servers and the DGX Station built on the NVIDIA Grace and Blackwell architectures. The MGX lineup spans 4U and 2U form factors optimized for high-density AI training and inference, while the DGX Station delivers datacenter-class performance in a desktop chassis for on-premises model development and edge AI deployment.

AI Platforms with NVIDIA MGX

CG480-S5063 (Intel) / CG480-S6053 (AMD): The 4U MGX GPU server is available in two CPU configurations, the CG480-S5063 with dual Intel Xeon 6700/6500 processors and the CG480-S6053 with dual AMD EPYC 9005 Series processors, offering flexibility across CPU ecosystems. Both systems support up to 8 FHFL dual-width PCIe 5.0 GPUs in air-cooled datacenter environments, making them ideal for deep learning training, generative AI, and high-throughput inferencing.
The Intel-based CG480-S5063 features 32 DDR5 DIMM slots and supports up to 20 front E1.S NVMe bays, ideal for memory- and I/O-intensive deep learning pipelines, including large-scale LLM workloads, NVIDIA OVX™, and digital twin simulations.

CG290-S3063: A compact 2U MGX server powered by a single Intel Xeon 6700/6500 processor, supporting 16 DDR5 DIMM slots and 4 FHFL dual-width GPU slots. Designed for edge inferencing and lightweight AI training, it suits space-constrained deployments where inference latency and power efficiency are key.

DGX Station

The CT60-S8060 is a high-performance AI station built on the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, delivering up to 20 PFLOPS of AI performance and 784GB of unified system memory. It also features the NVIDIA ConnectX-8 SuperNIC, enabling up to 800Gb/s networking for high-speed data transfer and multi-node scaling. Designed for on-prem model training and inferencing, the system supports multi-user workloads and can operate as either a standalone AI workstation or a centralized compute resource for research and development teams.

Enterprise Platforms Across DC-MHS and Standard Architectures

MSI delivers a complete portfolio of enterprise servers and motherboards across both DC-MHS modular infrastructure and standard general-purpose platforms. Supporting the latest Intel Xeon 6 and AMD EPYC 9005/8004/4005 Series processors, these solutions address the full range of compute, storage, and virtualization demands across cloud-native, private cloud, and on-premises environments.

DC-MHS Server Systems and HPMs

Built on the Open Compute Project’s DC-MHS (Datacenter Modular Hardware System) standard, MSI’s DC-MHS platforms offer cross-vendor interoperability, simplified maintenance, and next-gen datacenter scalability in hyperscale and cloud-native environments.
Dual-Socket Intel Xeon 6 Servers: CX270-S5062 (2U) / CX170-S5062 (1U)
Single-Socket Intel Xeon 6 Servers: CX271-S3066 (2U) / CX171-S3066 (1U)
Single-Socket AMD EPYC 9005 Servers: CX271-S4056 (2U) / CX171-S4056 (1U)
Intel DC-MHS HPMs: D5062, D3071, D3061, D3066
AMD DC-MHS HPMs: D4051, D4056

Standard Server Systems and Mainstream Boards

MSI’s general-purpose server offerings support a wide range of traditional IT workloads, such as storage consolidation, backup, and business-critical databases, offering reliable performance and flexible deployment outside of DC-MHS-defined infrastructures.

Dual-Socket AMD EPYC 9005 Servers: S2206 (2U) / S1206 (1U)
Single-Socket Intel Xeon 6 Storage Server: CS280-S3065 (2U, 24 SATA bays)
Intel Xeon 6 Motherboards: D3065 (E-ATX), D3060 (CEB)
Intel Xeon 6300 Motherboards: D1500/D1505 (uATX)
AMD EPYC 8004 Motherboard: D4040 (uATX)
AMD EPYC 4005 Motherboards: D3051, D3052 (uATX)

Product News
May 19, 2025

MSI Unveils Next-Level AI Solutions Using NVIDIA MGX and DGX Station at COMPUTEX 2025

Taipei, Taiwan – May 19, 2025 – MSI, a leading global provider of high-performance server solutions, unveils its latest AI innovations built on the NVIDIA MGX and NVIDIA DGX Station reference architectures at COMPUTEX 2025, held from May 20–23 at booth J0506. Purpose-built to address the growing demands of AI, HPC, and accelerated computing workloads, MSI’s AI solutions feature modular, scalable building blocks designed to deliver next-level AI performance for enterprises and cloud data center environments.

“AI adoption is transforming enterprise data centers as organizations move quickly to integrate advanced AI capabilities,” said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. “With the explosive growth of generative AI and increasingly diverse workloads, traditional servers can no longer keep pace. MSI’s AI solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures, deliver the scalability, flexibility, and performance enterprises need to future-proof their infrastructure and accelerate their AI innovation.”

MSI showcases a rack solution built on the NVIDIA Enterprise Reference Architecture, featuring a four-node scalable unit of MSI AI servers built on NVIDIA MGX. Each server is equipped with eight NVIDIA H200 NVL GPUs and enhanced by the NVIDIA Spectrum™-X networking platform to provide leading scalability for AI workloads. This modular architecture can scale up to 32 server systems (eight scalable units), supporting a total of 256 NVIDIA H200 NVL GPUs. MSI’s Enterprise Reference Architecture is optimized for multi-node AI and hybrid applications, designed to tackle today’s most demanding computational challenges and drive the next generation of data center innovation.

Leveraging the NVIDIA MGX modular architecture, MSI’s AI servers provide a foundation for accelerated computing and meet the diverse demands of AI, HPC, and NVIDIA Omniverse workloads.
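The scalable-unit figures quoted in this release can be sanity-checked with a short sketch; the constants come from the text above, while the variable names are illustrative, not an MSI API:

```python
# Sanity check of the NVIDIA Enterprise Reference Architecture scaling
# figures: four-node scalable units, eight GPUs per server, up to eight units.
SERVERS_PER_SCALABLE_UNIT = 4  # four-node scalable unit
GPUS_PER_SERVER = 8            # eight NVIDIA H200 NVL GPUs per server
MAX_SCALABLE_UNITS = 8         # architecture scales to eight units

max_servers = SERVERS_PER_SCALABLE_UNIT * MAX_SCALABLE_UNITS  # 32 systems
max_gpus = max_servers * GPUS_PER_SERVER                      # 256 GPUs
print(max_servers, max_gpus)  # 32 256
```

The arithmetic confirms the 32-system, 256-GPU ceiling stated in the release.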
MSI’s 4U AI server offers configuration options with both Intel and AMD CPUs and is a cutting-edge solution designed for large language models (LLMs), deep learning training, and fine-tuning. The CG480-S5063 AI server platform features dual Intel® Xeon® 6 processors and eight FHFL dual-width GPU slots, supporting both the NVIDIA H200 NVL and NVIDIA RTX PRO 6000 Blackwell Server Edition with power capacities of up to 600W. Featuring 32 DDR5 DIMM slots and twenty PCIe 5.0 E1.S NVMe bays, the CG480-S5063 ensures exceptional memory bandwidth and rapid data access. Its modular design and robust storage capabilities make it the perfect choice for next-generation AI and HPC applications, delivering unparalleled performance and scalability.

The CG290-S3063 is a 2U AI server platform built on the NVIDIA MGX architecture. It features a single-socket Intel Xeon 6 processor, 16 DDR5 DIMM slots, and four FHFL dual-width GPU slots supporting power capacities of up to 600W. With PCIe 5.0 expansion slots, four rear 2.5-inch NVMe drive bays, and two M.2 NVMe slots, this server delivers an ideal solution for a wide range of AI workloads, from small-scale inferencing to large-scale AI training.

MSI’s platforms support deployment in enterprise-ready AI environments with the NVIDIA Enterprise AI Factory validated design, which provides guidance for developing, deploying, and managing agentic AI, physical AI, and HPC workloads on the NVIDIA Blackwell platform with on-premises infrastructure. Designed for enterprise IT, the validated design includes accelerated computing, networking, storage, and software to help deliver faster time-to-value for AI factory deployments while mitigating deployment risks.
Also on display is MSI’s AI Station CT60-S8060, built with the components of the NVIDIA DGX Station to deliver data center-level performance on a personal desktop for AI development. It features the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip and up to 784GB of coherent memory to accelerate large-scale training and inferencing workloads. Combining state-of-the-art system capabilities with the NVIDIA AI Enterprise software stack, the DGX Station is purpose-built for teams that demand the best desktop AI development solution.

Supporting Resources: Watch the MSI 4U MGX AI platform, built on NVIDIA accelerated computing, deliver the performance you need for tomorrow’s AI workloads.
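For readers budgeting data movement, the 800Gb/s ConnectX-8 networking figure converts to byte throughput as below. This is our own line-rate-only estimate; it ignores encoding and protocol overhead, so sustained application throughput will be lower:

```python
# Back-of-the-envelope conversion of the ConnectX-8 SuperNIC line rate
# from gigabits to gigabytes per second (8 bits per byte). Line rate only;
# real-world throughput is reduced by encoding and protocol overhead.
link_gbps = 800                     # quoted networking rate, gigabits/s
line_rate_gBps = link_gbps / 8      # gigabytes/s of raw line rate
print(line_rate_gBps)  # 100.0
```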
