MSI Enterprise Platform Solutions stands as a leading global hardware provider.

All of MSI's server products are developed in-house, reflecting a deep commitment to addressing customer needs and aligning with market demands, underscored by a strong emphasis on design and manufacturing excellence.

NEWS & EVENTS

Product News
March 18, 2026

MSI Accelerates Enterprise AI with NVIDIA MGX Servers and DGX Workstations at GTC 2026

From liquid-cooled AI training platforms to desktop AI supercomputing, MSI expands its portfolio to power next-gen generative AI, HPC, and real-time video analytics.

San Jose, CA, March 17, 2026 – MSI, a global leader in high-performance server solutions, today unveils its latest AI infrastructure portfolio built on NVIDIA's modular architectures, including the NVIDIA MGX platform and NVIDIA DGX Station technology. Designed to accelerate AI training, large-scale inference, HPC, edge, and next-generation data center workloads, MSI's expanded lineup delivers exceptional scalability, performance density, and deployment flexibility.

Scalable AI Infrastructure Built on NVIDIA MGX Architecture

Leveraging the modular design of the NVIDIA MGX architecture, MSI has developed a comprehensive portfolio of 4U and 6U liquid-cooled servers supporting NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. The NVIDIA MGX architecture enables flexible CPU selection, high-capacity memory configurations, and seamless integration of high-speed networking, empowering enterprises to deploy infrastructure tailored to diverse workload requirements, from data center deployments to edge applications.

MSI's NVIDIA MGX-based platforms support both Intel and AMD CPU options, up to 32 DDR5 DIMM slots, and as many as eight 400G Ethernet ports powered by NVIDIA ConnectX-8 SuperNICs. With advanced thermal engineering, including both liquid-cooling and optimized air-cooling designs, these systems are purpose-built to sustain peak performance under the most demanding AI workloads.

The CG480-S5063 is MSI's flagship 4U server based on NVIDIA MGX, featuring dual Intel® Xeon® 6 processors, eight dual-width PCIe GPU slots, and 32 DDR5 DIMM slots for maximum memory scalability. Supporting up to twenty PCIe Gen5 E1.S NVMe bays, the system delivers ultra-fast data throughput and expansive storage capacity.

Expanding the NVIDIA MGX family, the CG481-S6053 integrates dual AMD EPYC™ 9005 Series processors to maximize core density and I/O bandwidth. Featuring eight PCIe 5.0 GPU slots, 24 DDR5 DIMM slots, and eight high-bandwidth 400G Ethernet ports via NVIDIA ConnectX-8 SuperNICs, this platform is designed for compute-intensive AI clusters, HPC simulations, and multi-tenant enterprise AI deployments.

The CG681-S6093 is a 6U liquid-cooled AI platform built on a dual-socket architecture, supporting eight dual-width PCIe GPU slots and eight 400G Ethernet ports powered by NVIDIA ConnectX-8 SuperNICs. It delivers exceptional performance and efficiency for high-density AI data center deployments.

To reinforce its commitment to next-generation AI computing, MSI is showcasing NVIDIA's Vera CPU option, highlighting its ongoing innovation in data-driven, AI-powered infrastructure solutions.

Accelerating Computer Vision and Video Analytics

The NVIDIA MGX architecture is particularly well suited to computer vision and real-time video analytics applications. With high GPU throughput and ultra-fast networking, MSI's AI platforms enable real-time multi-camera processing for smart cities, industrial inspection, and advanced surveillance systems. At GTC 2026, MSI is demonstrating AI-powered smart video search and automated summarization capabilities, showcasing how enterprises can extract actionable intelligence from massive volumes of live and archived video data.

Data Center AI Performance at the Desk: MSI XpertStation WS300

For AI developers, researchers, and data scientists requiring data center-class performance in a workstation form factor, MSI announces the availability of the XpertStation WS300 beginning March 16. Built on the NVIDIA DGX Station architecture, the WS300 is powered by the NVIDIA Grace Blackwell Ultra Desktop Superchip and features 748GB of coherent memory.

Equipped with dual 400GbE networking ports via NVIDIA ConnectX-8 SuperNICs and a robust 1600W ATX power supply, the system delivers unprecedented AI compute capability directly at the deskside with plug-and-play supercomputing. The XpertStation WS300 enables advanced AI model development, fine-tuning, inference, data science workflows, and complex simulations, bringing data center-level acceleration to enterprise developers and research teams.

With its expanded NVIDIA MGX server portfolio and NVIDIA DGX-powered workstation platform, MSI continues to deliver scalable, high-performance AI infrastructure, empowering organizations to accelerate innovation from the data center to the edge and the desktop.

Press Release
March 17, 2026

GTC 2026: MSI Drives End-to-End AI Implementation by Bridging Cloud Computing and Autonomous Edge Inspection

Comprehensive Showcase Features NVIDIA MGX Servers, NVIDIA DGX Workstations, and Digital Twin Validation

MSI, a global leader in high-performance server solutions and Edge AI, unveils its comprehensive AI ecosystem at NVIDIA GTC 2026. In addition to launching servers based on the NVIDIA MGX architecture and powered by NVIDIA Blackwell GPUs, MSI introduces the XpertStation WS300, built on the NVIDIA DGX Station architecture. MSI also showcases the OmniGuard smart patrol vehicle, integrated with the NVIDIA Alpamayo-R1 Vision-Language-Action (VLA) inference model, demonstrating a complete workflow from AI infrastructure and digital twin validation to real-world deployment.

High-Performance Foundations: MSI Servers Based on NVIDIA MGX

Based on the modular design of the NVIDIA MGX architecture, MSI has engineered a robust portfolio of 4U and 6U liquid-cooled servers supporting NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. These platforms are optimized to accelerate a wide range of AI workloads, from data center deployments to edge applications. This flexible architecture empowers enterprises to tailor CPU, memory, and networking configurations to meet specific performance, scalability, and workload demands.

CG480-S5063: A flagship 4U server based on NVIDIA MGX, featuring dual Intel® Xeon® 6 processors, eight dual-width PCIe GPU slots, and 32 DDR5 DIMM slots for exceptional memory scalability. It supports up to 20 PCIe Gen5 E1.S NVMe drives for ultra-high data throughput.

CG481-S6053: Powered by dual AMD EPYC™ 9005 Series processors to maximize core density and I/O bandwidth. It integrates eight PCIe 5.0 GPU slots, 24 DDR5 DIMM slots, and eight 400G Ethernet ports via NVIDIA ConnectX-8 SuperNICs, and is designed for compute-intensive AI clusters and HPC simulations.

CG681-S6093: A 6U liquid-cooled AI platform with a dual-socket architecture and eight dual-width PCIe GPU slots. It integrates eight 400G Ethernet ports via NVIDIA ConnectX-8 SuperNICs to deliver exceptional performance and efficiency for high-density AI data center deployments.

Advanced Thermal Engineering and Future-Ready AI Innovation

MSI's NVIDIA MGX-based platforms integrate liquid-cooling and optimized air-cooling designs to sustain peak performance under the most rigorous AI workloads. Leveraging exceptional GPU throughput and high-speed connectivity, MSI is demonstrating AI-powered intelligent video search and automated summarization. These capabilities optimize multi-camera analytics for smart cities and industrial inspection, empowering enterprises to rapidly extract actionable intelligence from massive datasets and enhance decision-making efficiency. To reinforce its commitment to next-generation AI computing, MSI is also showcasing the NVIDIA Vera CPU option, highlighting its ongoing innovation in data-driven, AI-powered infrastructure solutions.

MSI XpertStation WS300: Data Center-Class AI Power at the Desk

For researchers requiring massive compute power in a deskside form factor, MSI announced the XpertStation WS300, available starting March 16.

Core Architecture: Built on the NVIDIA DGX Station architecture, the WS300 is powered by the NVIDIA Grace Blackwell Ultra Desktop Superchip and features 748GB of coherent memory.

Connectivity and Power: Equipped with dual 400GbE ports via NVIDIA ConnectX-8 and a 1600W ATX power supply, the system delivers unprecedented AI acceleration directly at the desktop with plug-and-play supercomputing.

"The growth of generative AI and LLMs has driven extreme demand for underlying infrastructure," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "Through our NVIDIA MGX-based platforms and the XpertStation WS300, MSI is extending data center-level momentum to the developer's desk, accelerating innovation across the data center, the edge, and the desktop."

Real-World Impact: Reducing Deployment Risk with Digital Twins & EdgeXpert

MSI uses NVIDIA Omniverse libraries and the NVIDIA Isaac Sim open simulation framework to create high-precision virtual environments that eliminate uncertainties in physical deployments. Through Sim2Real (simulation-to-reality) technology, the OmniGuard patrol vehicle underwent rigorous virtual testing of patrol routes and pedestrian interactions to ensure functional validation before real-world implementation.

Core Intelligence: NVIDIA Alpamayo-R1 Autonomous Decision System

OmniGuard deeply integrates Alpamayo-R1, granting the vehicle "thought-and-action" perception to master complex long-tail scenarios. Testing confirms a 12% increase in navigation-planning accuracy and a 35% reduction in off-road rates.

Scalable Edge AI Implementation: Models are deployed and monitored via the MSI EdgeXpert platform. The vehicle is powered by the NVIDIA Jetson platform for real-time edge processing, ensuring reliable performance in dynamic environments. This architecture can be rapidly extended to smart factories, logistics parks, and public infrastructure.

David Wu, General Manager of Customized Product Solutions at MSI, commented: "The value of AI lies in solving real-world pain points. Powered by NVIDIA accelerated computing, digital twins, and Alpamayo inference technology, MSI has established a seamless workflow from virtual validation to physical deployment. This not only shortens development cycles but also demonstrates MSI's strength in driving the comprehensive implementation of Edge AI applications."
MSI: https://www.msi.com
MSI YouTube: https://www.youtube.com/@MSI
MSI Facebook: https://www.facebook.com/MSIAIoT
MSI LinkedIn: https://www.linkedin.com/showcase/msi-aiot/
MSI Instagram: https://www.instagram.com/MSI
MSI X: https://x.com/msigaming

Product News
March 17, 2026

MSI Accelerates Autonomous AI Agents with NVIDIA AI Software and Models

MSI EdgeXpert and XpertStation platforms enable developers and enterprises to build and deploy next-generation AI agents

San Jose, California, March 16, 2026 – MSI, a global leader in high-performance server solutions and Edge AI, is collaborating with NVIDIA to accelerate the development of next-generation AI agents using the open source NVIDIA OpenShell runtime. The solution will be supported on MSI's EdgeXpert and XpertStation platforms, which are built on the NVIDIA DGX Spark and NVIDIA DGX Station architectures.

MSI and NVIDIA are also working together on NVIDIA NemoClaw, an open source stack that simplifies running OpenClaw always-on assistants more safely, with a single command. As part of the NVIDIA Agent Toolkit, it installs the NVIDIA OpenShell runtime, a secure environment for running autonomous agents, along with open source models such as NVIDIA Nemotron. OpenShell is an open-source runtime for building and deploying autonomous, self-evolving agents more safely. Optimized for dedicated AI systems, OpenShell can be deployed on platforms such as DGX Spark and DGX Station to deliver reliable performance for advanced AI agent workloads.

MSI EdgeXpert marks the arrival of a pioneering class of systems engineered specifically for the development and execution of AI. Powered by the NVIDIA GB10 Grace Blackwell Superchip, the EdgeXpert delivers up to 1 petaFLOP of AI compute, optimized for persistent agents requiring massive token throughput for inter-model communication. With its 128GB of coherent unified memory, autonomous agents can natively host models with up to 200 billion parameters, reducing reliance on external APIs and slashing per-token cloud expenditures. Through integrated NVIDIA ConnectX-7 NIC technology, users can cluster up to four EdgeXpert units to tackle even more sophisticated agentic workflows. By packing data center-grade power into a compact, efficient footprint, MSI EdgeXpert and OpenShell stand as the ultimate desktop foundation for always-on AI.

The MSI XpertStation WS300, built on the DGX Station architecture, serves as the ultimate developer platform for OpenShell, enabling developers to build and run large-scale AI models and autonomous, self-learning agents locally and securely at the deskside. Designed as a desktop supercomputer, it includes the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, featuring a massive 748GB of coherent memory, 20 petaFLOPS of FP4 performance, and dual 400GbE connectivity powered by the NVIDIA ConnectX-8 SuperNIC, and can support 1T-parameter models. Developers can build, run, and optimize new solutions around the clock, powered by long-running, autonomous agents with frontier-level intelligence.

OpenShell allows any AI agent, including widely used coding agents such as Claude Code, Codex, Cursor, and OpenCode, to run inside a secure development sandbox without requiring code changes. By combining AI productivity with privacy and safety, OpenShell provides critical capabilities for developers building personal AI agents and enterprises deploying AI across their organizations.

Visitors can experience live OpenShell demos at the MSI booth (#730), explore the MSI XpertStation at the NVIDIA Zone and the Exxact Corporation booth (#3202), and learn more about EdgeXpert at the Phison Electronics booth (#119) during NVIDIA GTC 2026.