MSI Enterprise Platform Solutions is a leading global provider of server hardware.

All of MSI's server products are developed in-house, reflecting a deep commitment to addressing customer needs and aligning with market demands. This commitment is underscored by a strong emphasis on design and manufacturing excellence.

NEWS & EVENTS

Product News
April 8, 2026

MSI Advances AI and Enterprise Infrastructure with NVIDIA MGX and DC-MHS Architectures at Japan IT Week Spring 2026

Tokyo, Japan – April 8, 2026 – At Japan IT Week Spring 2026 (Booth #W21-22), MSI presents a comprehensive portfolio of AI and enterprise infrastructure designed to support data-intensive workloads across modern IT environments. From AI training and inference to enterprise applications, MSI enables businesses to deploy right-sized compute with GPU-dense systems, modular NVIDIA MGX platforms, and OCP DC-MHS servers, helping simplify deployment, improve resource utilization, and accelerate AI adoption.

“AI is moving from experimentation into core enterprise infrastructure,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions. “This shift is changing how organizations design, deploy, and scale IT. MSI is focused on helping customers build infrastructure that can support AI as a long-term capability across their operations.”

Full-Spectrum AI Infrastructure for Every Workload

The full-spectrum AI infrastructure portfolio spans 4U GPU-dense systems, a 2U GPU platform, and a deskside AI workstation, enabling organizations to match compute resources to workloads from large-scale training to inference and local AI development. The CG480-S5063 and CG290-S3063, built on NVIDIA MGX architecture, use a modular, standardized design that simplifies system integration and accelerates deployment, helping organizations reduce validation effort and reach revenue faster. For customers seeking an alternative to NVIDIA MGX, the G4201(-HE) offers an entry-level enterprise platform with flexible PCIe expansion options for compatibility with existing IT environments, making it suitable for organizations starting to integrate AI and data-intensive workloads. Together with the XpertStation WS300, built on NVIDIA DGX Station architecture, MSI delivers a flexible, multi-architecture portfolio that supports efficient AI scaling from development through deployment.
The CG480-S5063 4U GPU server, based on NVIDIA MGX architecture and powered by dual Intel® Xeon® 6 processors, supports up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition or NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs with high-bandwidth networking, enabling scalable performance for LLM training and large-scale generative AI workloads while accelerating deployment through the modular MGX design.

The CG290-S3063 2U GPU server, built on NVIDIA MGX architecture with a single Intel Xeon 6 processor, supports up to 4 NVIDIA RTX PRO 6000 Blackwell Server Edition or NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs, delivering a compact and efficient platform for inference and distributed AI deployments, especially in space-constrained data centers and edge scenarios.

The XpertStation WS300, built on NVIDIA DGX Station architecture and powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, features a 748GB coherent memory pool and dual 400GbE networking, enabling developers to train, fine-tune, and run advanced AI models locally with data-center-class performance.

The G4201(-HE) 4U server, powered by dual 5th Gen Intel Xeon Scalable processors, supports up to 32 DDR5 DIMM slots and 8 PCIe cards, adopting an industry-standard architecture for compatibility with existing IT environments. It serves as an entry-level enterprise platform for organizations beginning to integrate AI and data-driven applications into established environments.

Modular Enterprise Platforms for Scalable IT Infrastructure

Built on DC-MHS (Data Center Modular Hardware System) with DC-SCM, MSI's enterprise servers are designed for AI-enabled enterprise, virtualization, and cloud workloads. By separating system management from the host, DC-SCM enables faster CPU transitions and reduces firmware development effort, while the modular DC-MHS design allows independent upgrades and shorter validation cycles.
Built on Intel Xeon 6 and AMD EPYC™ 9005 processor platforms, with high core density and DDR5 memory and NVMe storage scaling up to 32 DIMMs and 12 drives, these servers handle data-intensive workloads more efficiently, helping organizations shorten deployment cycles, improve system uptime, and scale AI-enabled services within existing IT environments.

The CX270-S5062(-HE) 2U server, powered by dual Intel Xeon 6 processors, supports up to 32 DDR5 DIMMs, 8 U.2 NVMe drives, and GPU expansion, enabling balanced performance for virtualization, AI inference, and mixed workloads.

The CX271-S3066(-HE) 2U server, based on a single Intel Xeon 6 processor, supports up to 16 DDR5 DIMMs, 8 U.2 NVMe drives, and GPU expansion, delivering balanced compute and storage performance for enterprise and data-driven workloads.

The CX171-S4056 1U server, powered by a single AMD EPYC 9005 processor, supports up to 24 DDR5 DIMMs and 12 U.2 NVMe drives, enabling space-efficient enterprise deployments with high memory capacity.

Product News
March 24, 2026

MSI Highlights Cloud, AI, and Enterprise Server Platforms at CloudFest 2026

Rust, Germany – March 24, 2026 – At CloudFest 2026 (Booth #H08+H09), MSI showcases a server portfolio spanning hyperscale cloud platforms, NVIDIA-accelerated AI systems, and enterprise servers for hybrid cloud environments. The lineup features multi-node and ORv3 architectures for hyperscale deployments, GPU-dense platforms for AI training and inference, and high-memory enterprise systems for virtualization and data-intensive workloads, enabling cloud service providers and enterprises to deploy scalable, high-density infrastructure.

“Cloud infrastructure is evolving rapidly as AI services and data-intensive workloads continue to expand,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions. “Service providers and enterprises need platforms that can scale efficiently while maintaining performance and operational flexibility across next-generation data center environments.”

Scalable Compute Platforms for Cloud Infrastructure

The cloud platform portfolio supports hyperscale and service provider deployments, offering flexibility across 2U 4-node, 2U 2-node, and ORv3 21-inch server architectures, with platforms powered by AMD EPYC™ and Intel® Xeon® 6 processors. These platforms optimize compute density and rack-level efficiency for large-scale cloud workloads. Built on DC-MHS (Data Center Modular Hardware System) architecture, they improve serviceability and hardware interoperability, allowing operators to streamline maintenance and reduce operational complexity across large-scale deployments. For hyperscale environments, MSI also provides an ORv3-compliant 21-inch platform designed for Open Compute racks and 48V rack power architectures, enabling improved compute density, power efficiency, and seamless rack-scale integration.

The CD270-S3071-X4 is a 2U 4-node platform designed for high-density cloud deployments.
Each node supports a single Intel Xeon 6 processor (up to 400W TDP) with 12 DDR5 DIMM slots across 12 memory channels and 3 front U.2 NVMe bays, enabling high compute density and efficient scaling for cloud-scale workloads.

AI Platforms for Scalable Cloud AI Infrastructure

The NVIDIA-accelerated AI platforms extend the portfolio from large-scale training clusters to local AI development environments, spanning a 4U GPU server, a 2U GPU server, and an AI development workstation. Designed for cloud service providers and AI operators, the systems combine NVIDIA GPUs, high-bandwidth memory architectures, PCIe 5.0/6.0 connectivity, and high-speed networking to deliver the throughput and scalability required for data-center-scale AI workloads.

The CG481-S6053 is a 4U NVIDIA MGX server powered by dual AMD EPYC™ 9005 processors, supporting up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition or NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs and 8 400Gb QSFP112 networking ports. With PCIe 6.0 GPU connectivity and high-bandwidth networking, the platform is optimized for large-scale AI training clusters and model fine-tuning environments.

The CG290-S3063 is a 2U NVIDIA MGX server built on Intel Xeon 6 processors, supporting up to 4 NVIDIA RTX PRO 6000 Blackwell Server Edition or NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. Its balanced GPU density and compact footprint make it well suited for scalable inference and distributed AI deployments across cloud and edge infrastructure.

Completing the lineup, the XpertStation WS300, based on NVIDIA DGX Station architecture, integrates an NVIDIA Grace CPU paired with an NVIDIA Blackwell Ultra GPU, featuring a 748GB coherent memory pool and 2×400GbE networking powered by NVIDIA ConnectX®-8 SuperNICs. The system enables developers and AI teams to prototype, fine-tune, and validate AI models locally before deploying workloads to production-scale AI infrastructure.
Enterprise Servers for Hybrid Cloud Infrastructure

Enterprise platforms designed for hybrid cloud environments support enterprise data center deployments. The lineup spans 1U and 2U systems powered by AMD EPYC 9005 and Intel Xeon 6 processors, combining high core-count CPUs, DDR5 memory, PCIe 5.0 expansion, and high-density NVMe storage to support virtualization, databases, and cloud-native workloads.

The CX270-S5062 is a 2U dual-socket Intel Xeon 6 platform supporting 32 DDR5 DIMMs, flexible NVMe storage configurations of up to 24 front U.2 NVMe drives, and 2 PCIe 5.0 double-wide GPUs, delivering strong compute performance and storage density for virtualization and enterprise database workloads.

The CX271-S4056 (-HE SKU) is a 2U single-socket AMD EPYC 9005 platform with 24 DDR5 DIMMs, 2 PCIe 5.0 double-wide GPUs, and NVMe storage, delivering high memory bandwidth and balanced performance for enterprise applications and cloud infrastructure services.

The CX171-S4056 is a 1U AMD EPYC 9005 platform supporting 24 DDR5 DIMMs and up to 12 front NVMe drives, enabling high compute density for space-efficient data center deployments.