MSI Enterprise Platform Solutions stands as a leading global hardware provider.

All of MSI's server products are developed in-house, reflecting a deep commitment to customer needs and market demands, underpinned by design and manufacturing excellence.

NEWS & EVENTS

Product News / March 24, 2026

MSI Highlights Cloud, AI, and Enterprise Server Platforms at CloudFest 2026

Rust, Germany – March 24, 2026 – At CloudFest 2026 (Booth #H08+H09), MSI showcases a server portfolio spanning hyperscale cloud platforms, NVIDIA-accelerated AI systems, and enterprise servers for hybrid cloud environments. The lineup features multi-node and ORv3 architectures for hyperscale deployments, GPU-dense platforms for AI training and inference, and high-memory enterprise systems for virtualization and data-intensive workloads, enabling cloud service providers and enterprises to deploy scalable, high-density infrastructure.

“Cloud infrastructure is evolving rapidly as AI services and data-intensive workloads continue to expand,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions. “Service providers and enterprises need platforms that can scale efficiently while maintaining performance and operational flexibility across next-generation data center environments.”

Scalable Compute Platforms for Cloud Infrastructure

The cloud platform portfolio supports hyperscale and service provider deployments, offering flexibility across 2U 4-node, 2U 2-node, and ORv3 21-inch server architectures, with platforms powered by AMD EPYC™ and Intel® Xeon® 6 processors. These platforms optimize compute density and rack-level efficiency for large-scale cloud workloads. Built on DC-MHS (Data Center Modular Hardware System) architecture, they improve serviceability and hardware interoperability, allowing operators to streamline maintenance and reduce operational complexity across large-scale deployments. For hyperscale environments, MSI also provides an ORv3-compliant 21-inch platform designed for Open Compute racks and 48V rack power architectures, enabling improved compute density, power efficiency, and seamless rack-scale integration.

The CD270-S3071-X4 is a 2U 4-node platform designed for high-density cloud deployments. Each node supports a single Intel Xeon 6 processor (up to 400W TDP) with 12 DDR5 DIMM slots across 12 memory channels and 3 front U.2 NVMe bays, enabling high compute density and efficient scaling for cloud-scale workloads.

AI Platforms for Scalable Cloud AI Infrastructure

NVIDIA-accelerated AI platforms extend the portfolio from large-scale training clusters to local AI development environments, spanning a 4U GPU server, a 2U GPU server, and an AI development workstation. Designed for cloud service providers and AI operators, the systems combine NVIDIA GPUs, high-bandwidth memory architectures, PCIe 5.0/6.0 connectivity, and high-speed networking to deliver the throughput and scalability required for data center-scale AI workloads.

The CG481-S6053 is a 4U NVIDIA MGX server powered by dual AMD EPYC™ 9005 processors, supporting up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition or NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs and 8 400Gb QSFP112 networking ports. With PCIe 6.0 GPU connectivity and high-bandwidth networking, the platform is optimized for large-scale AI training clusters and model fine-tuning environments.

The CG290-S3063 is a 2U NVIDIA MGX server built on Intel Xeon 6 processors, supporting up to 4 NVIDIA RTX PRO 6000 Blackwell Server Edition or NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. Its balanced GPU density and compact footprint make it well suited for scalable inference and distributed AI deployments across cloud and edge infrastructure.

Completing the lineup, the XpertStation WS300, based on the NVIDIA DGX Station architecture, integrates an NVIDIA Grace CPU paired with an NVIDIA Blackwell Ultra GPU, featuring a 748GB coherent memory pool and 2×400GbE networking powered by NVIDIA ConnectX®-8 SuperNICs. The system enables developers and AI teams to develop, prototype, fine-tune, and validate AI models locally before deploying workloads to production-scale AI infrastructure.

Enterprise Servers for Hybrid Cloud Infrastructure

Enterprise platforms designed for hybrid cloud environments support enterprise data center deployments. The lineup spans 1U and 2U systems powered by AMD EPYC 9005 and Intel Xeon 6 processors, combining high core-count CPUs, DDR5 memory, PCIe 5.0 expansion, and high-density NVMe storage to support virtualization, databases, and cloud-native workloads.

The CX270-S5062 is a 2U dual-socket Intel Xeon 6 platform supporting 32 DDR5 DIMMs, flexible NVMe storage configurations of up to 24 front U.2 NVMe drives, and 2 PCIe 5.0 double-wide GPUs, delivering strong compute performance and storage density for virtualization and enterprise database workloads. The CX271-S4056 (-HE SKU) is a 2U single-socket AMD EPYC 9005 platform with 24 DDR5 DIMMs, 2 PCIe 5.0 double-wide GPUs, and NVMe storage, delivering high memory bandwidth and balanced performance for enterprise applications and cloud infrastructure services. The CX171-S4056 is a 1U AMD EPYC 9005 platform supporting 24 DDR5 DIMMs and up to 12 front NVMe drives, enabling high compute density for space-efficient data center deployments.

Product News / March 18, 2026

MSI Accelerates Enterprise AI with NVIDIA MGX Servers and DGX Workstations at GTC 2026

From liquid-cooled AI training platforms to desktop AI supercomputing, MSI expands its portfolio to power next-gen generative AI, HPC, and real-time video analytics.

San Jose, CA – March 17, 2026 – MSI, a global leader in high-performance server solutions, today unveils its latest AI infrastructure portfolio built on NVIDIA’s modular architectures, including the NVIDIA MGX platform and NVIDIA DGX Station technology. Designed to accelerate AI training, large-scale inference, HPC, edge, and next-generation data center workloads, MSI’s expanded lineup delivers exceptional scalability, performance density, and deployment flexibility.

Scalable AI Infrastructure Built on NVIDIA MGX Architecture

Leveraging the modular design of the NVIDIA MGX architecture, MSI has developed a comprehensive portfolio of 4U and 6U liquid-cooled servers supporting NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. The NVIDIA MGX architecture enables flexible CPU selection, high-capacity memory configurations, and seamless integration of high-speed networking, empowering enterprises to deploy infrastructure tailored to diverse workload requirements, from data center deployments to edge applications. MSI’s platforms based on NVIDIA MGX support both Intel and AMD CPU options, up to 32 DDR5 DIMM slots, and as many as eight 400G Ethernet ports powered by NVIDIA ConnectX-8 SuperNICs. With advanced thermal engineering, including both liquid cooling and optimized air-cooling designs, these systems are purpose-built to sustain peak performance under the most demanding AI workloads.

The CG480-S5063 is MSI’s flagship 4U server based on NVIDIA MGX, featuring dual Intel® Xeon® 6 processors, eight dual-width PCIe GPU slots, and 32 DDR5 DIMM slots for maximum memory scalability. Supporting up to twenty PCIe Gen5 E1.S NVMe bays, the system delivers ultra-fast data throughput and expansive storage capacity.

Expanding the NVIDIA MGX family, the CG481-S6053 integrates dual AMD EPYC™ 9005 Series processors to maximize core density and I/O bandwidth. Featuring eight PCIe 5.0 GPU slots, 24 DDR5 DIMM slots, and eight high-bandwidth 400G Ethernet ports via NVIDIA ConnectX-8 SuperNICs, this platform is designed for compute-intensive AI clusters, HPC simulations, and multi-tenant enterprise AI deployments.

The CG681-S6093 is a 6U liquid-cooled AI platform built on a dual-socket architecture, supporting eight dual-width PCIe GPU slots and eight 400G Ethernet ports powered by NVIDIA ConnectX-8 SuperNICs. It delivers exceptional performance and efficiency for high-density AI data center deployments. To reinforce its commitment to next-generation AI computing, MSI is showcasing NVIDIA’s Vera CPU option, highlighting its ongoing innovation in data-driven, AI-powered infrastructure solutions.

Accelerating Computer Vision and Video Analytics

The NVIDIA MGX architecture is particularly well suited for computer vision and real-time video analytics applications. With high GPU throughput and ultra-fast networking, MSI’s AI platforms enable real-time multi-camera processing for smart cities, industrial inspection, and advanced surveillance systems. At GTC 2026, MSI is demonstrating AI-powered smart video search and automated summarization capabilities, showcasing how enterprises can extract actionable intelligence from massive volumes of live and archived video data.

Data Center AI Performance at the Desk: MSI XpertStation WS300

For AI developers, researchers, and data scientists requiring data center-class performance in a workstation form factor, MSI announces the availability of the XpertStation WS300 beginning March 16. Built on the NVIDIA DGX Station architecture, the WS300 is powered by the NVIDIA Grace Blackwell Ultra Desktop Superchip and features 748GB of coherent memory. Equipped with dual 400GbE networking ports via NVIDIA ConnectX-8 SuperNICs and a robust 1600W ATX power supply, the system delivers plug-and-play, data center-class AI compute directly at the deskside. The XpertStation WS300 enables advanced AI model development, fine-tuning, inference, data science workflows, and complex simulations, bringing data center-level acceleration to enterprise developers and research teams.

With its expanded NVIDIA MGX server portfolio and NVIDIA DGX-powered workstation platform, MSI continues to deliver scalable, high-performance AI infrastructure, empowering organizations to accelerate innovation from the data center to the edge and the desktop.

Press Release / March 17, 2026

GTC 2026: MSI Drives End-to-End AI Implementation by Bridging Cloud Computing and Autonomous Edge Inspection

Comprehensive Showcase Features NVIDIA MGX Servers, NVIDIA DGX Workstations, and Digital Twin Validation

MSI, a global leader in high-performance server solutions and Edge AI, unveils its comprehensive AI ecosystem at NVIDIA GTC 2026. In addition to launching servers based on the NVIDIA MGX architecture and powered by NVIDIA Blackwell GPUs, MSI introduces the XpertStation WS300, built on the NVIDIA DGX Station architecture. MSI also showcases the OmniGuard smart patrol vehicle, integrated with the NVIDIA Alpamayo-R1 Vision-Language-Action (VLA) inference model, demonstrating a complete workflow from AI infrastructure and digital twin validation to real-world deployment.

High-Performance Foundations: MSI Servers Based on NVIDIA MGX

Based on the modular design of the NVIDIA MGX architecture, MSI has engineered a robust portfolio of 4U and 6U liquid-cooled servers supporting NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. These platforms are optimized to accelerate a wide range of AI workloads, from data center deployments to edge applications. This flexible architecture empowers enterprises to tailor CPU, memory, and networking configurations to meet specific performance, scalability, and workload demands.

CG480-S5063: A flagship 4U server based on NVIDIA MGX, featuring dual Intel® Xeon® 6 processors, eight dual-width PCIe GPU slots, and 32 DDR5 DIMM slots for exceptional memory scalability. It supports up to 20 PCIe Gen5 E1.S NVMe drives for ultra-high data throughput.

CG481-S6053: Powered by dual AMD EPYC™ 9005 Series processors to maximize core density and I/O bandwidth. It integrates eight PCIe 5.0 GPU slots, 24 DDR5 DIMM slots, and eight 400G Ethernet ports via NVIDIA ConnectX-8 SuperNICs, designed for compute-intensive AI clusters and HPC simulations.

CG681-S6093: A 6U liquid-cooled AI platform with a dual-socket architecture and eight dual-width PCIe GPU slots, integrating eight 400G Ethernet ports via NVIDIA ConnectX-8 SuperNICs to deliver exceptional performance and efficiency for high-density AI data center deployments.

Advanced Thermal Engineering and Future-Ready AI Innovation

MSI’s platforms based on NVIDIA MGX integrate liquid cooling and optimized air-cooling designs to sustain peak performance under the most rigorous AI workloads. Leveraging exceptional GPU throughput and high-speed connectivity, MSI is demonstrating AI-powered intelligent video search and automated summarization. These capabilities optimize multi-camera analytics for smart cities and industrial inspection, empowering enterprises to rapidly extract actionable intelligence from massive datasets to enhance decision-making efficiency. To reinforce its commitment to next-generation AI computing, MSI is also showcasing the NVIDIA Vera CPU option, highlighting its ongoing innovation in data-driven, AI-powered infrastructure solutions.

MSI XpertStation WS300: Data Center-Class AI Power at the Desk

For researchers requiring massive compute power in a deskside form factor, MSI announced the XpertStation WS300, available starting March 16.

Core Architecture: Built on the NVIDIA DGX Station architecture, the WS300 is powered by the NVIDIA Grace Blackwell Ultra Desktop Superchip and features 748GB of coherent memory.

Connectivity and Power: Equipped with dual 400GbE ports via NVIDIA ConnectX-8 and a 1600W ATX power supply, the system delivers plug-and-play, data center-class AI acceleration directly at the desktop.

"The growth of Generative AI and LLMs has driven extreme demand for underlying infrastructure," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "Through our platforms based on NVIDIA MGX and XpertStation WS300, MSI is extending data center-level momentum to the developer’s desk, accelerating innovation across the data center, the edge, and the desktop."

Real-World Impact: Reducing Deployment Risk with Digital Twins and EdgeXpert

MSI uses NVIDIA Omniverse libraries and the NVIDIA Isaac Sim open simulation framework to create high-precision virtual environments that eliminate uncertainties in physical deployments. Through Sim2Real (simulation-to-reality) technology, the OmniGuard patrol vehicle underwent rigorous virtual testing of patrol routes and pedestrian interactions to ensure functional validation before real-world implementation.

Core Intelligence: NVIDIA Alpamayo-R1 Autonomous Decision System

OmniGuard deeply integrates the Alpamayo-R1 model, granting the vehicle "thought-and-action" perception to master complex long-tail scenarios. Testing confirms a 12% increase in navigation planning accuracy and a 35% reduction in off-road rates.

Scalable Edge AI Implementation: Models are deployed and monitored via the MSI EdgeXpert platform. The vehicle is powered by the NVIDIA Jetson platform for real-time edge processing, ensuring reliable performance in dynamic environments. This architecture can be rapidly extended to smart factories, logistics parks, and public infrastructure.

David Wu, General Manager of Customized Product Solutions at MSI, commented: "The value of AI lies in solving real-world pain points. Powered by NVIDIA accelerated computing, Digital Twins, and Alpamayo inference technology, MSI has established a seamless workflow from virtual validation to physical deployment. This not only shortens development cycles but also demonstrates MSI’s strength in driving the comprehensive implementation of Edge AI applications."
MSI: https://www.msi.com
MSI YouTube: https://www.youtube.com/@MSI
MSI Facebook: https://www.facebook.com/MSIAIoT
MSI LinkedIn: https://www.linkedin.com/showcase/msi-aiot/
MSI Instagram: https://www.instagram.com/MSI
MSI X: https://x.com/msigaming