Lenovo ThinkStation PGX Review: A Purpose-Built AI Workstation

The Lenovo ThinkStation PGX is designed for local AI workloads, featuring the Nvidia GB10 Grace Blackwell Superchip and 128GB of unified memory, making it a compelling choice for developers.

The Lenovo ThinkStation PGX is a compact workstation tailored for local artificial intelligence work. Built around the Nvidia GB10 Grace Blackwell Superchip and 128GB of unified memory, it positions itself as a practical option for developers prototyping and building AI models.

Specifications and Design

Measuring just 150mm in each dimension and weighing 1.2kg, the PGX is built for efficiency and portability. It runs DGX OS, Nvidia's Ubuntu-based operating system, and comes with essential tools such as CUDA 13, cuDNN, and TensorRT preinstalled. Storage is a single NVMe M.2 slot, available in 1TB or 4TB configurations, and the system draws a maximum of 240 watts through a USB-C power supply.
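
If you want to confirm that the stack is visible from Python, a minimal check like the sketch below will do. It assumes PyTorch has been installed on top of DGX OS (it is not among the preinstalled tools listed above) and simply reports the CUDA runtime, cuDNN version, and GPU memory the system exposes.

```python
# Sanity check of the CUDA stack from Python.
# Assumes PyTorch has been installed on top of DGX OS; it is not preinstalled.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)
    print("Memory (GB):", round(props.total_memory / 1024**3, 1))
```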

Performance and Capabilities

While the PGX is not designed to outperform high-end GPUs like the Nvidia H100 or RTX 5090, it excels at smaller AI models and workloads. The GB10 chip combines a 20-core Arm CPU with a Blackwell GPU that provides 6,144 CUDA cores and 192 Tensor Cores. Because the memory is unified, the CPU and GPU address the same pool directly, eliminating data transfers across a PCIe bus.
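
The practical upshot is easiest to see in code. The sketch below, which assumes PyTorch and uses an arbitrary tensor size purely for illustration, allocates a working set on the GPU that would overflow the VRAM of most discrete cards; on the PGX the only limit is the shared 128GB pool.

```python
# Illustration only: on a unified-memory system, "GPU memory" and "CPU memory"
# draw from the same 128GB LPDDR5x pool, so allocations that would not fit in a
# typical discrete card's VRAM can still be placed on the GPU.
# Assumes PyTorch; the tensor size is an arbitrary example.
import torch

weights = torch.empty(20_000_000_000, dtype=torch.bfloat16, device="cuda")  # ~40 GB

print(f"{torch.cuda.memory_allocated() / 1024**3:.1f} GB allocated on the GPU")
```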

Networking and Scalability

The PGX includes dual QSFP ports for high-speed networking, enabling a direct 200 GbE connection between two units. This allows workloads to be distributed across both machines, but it does not create a single unified memory pool spanning the two systems. Instead, developers rely on communication libraries such as NCCL to move data efficiently between nodes.
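
In practice that usually means something like PyTorch's torch.distributed with the NCCL backend. The sketch below is a minimal, hypothetical two-node example; the launch command, addresses, and tensor are placeholders for illustration, not a recipe from Lenovo or Nvidia.

```python
# Minimal NCCL sketch for two PGX units linked over the 200 GbE QSFP connection.
# Hypothetical example: addresses, port, and launch command are placeholders.
# Launch on each node with torchrun, e.g.:
#   torchrun --nnodes=2 --nproc_per_node=1 --node_rank=<0|1> \
#            --master_addr=<node 0 address on the QSFP link> --master_port=29500 allreduce.py
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")  # reads rank/world size from torchrun's env vars
    rank = dist.get_rank()
    torch.cuda.set_device(0)  # one GPU per PGX

    # Each node contributes its own tensor; NCCL sums them across the link.
    t = torch.ones(4, device="cuda") * (rank + 1)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: {t.tolist()}")  # both nodes print [3.0, 3.0, 3.0, 3.0]

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```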

Use Cases and Limitations

One notable application of the PGX is serving large AI models such as Qwen3-Coder-Next, which contains 80 billion parameters. The PGX's memory capacity lets it handle models of that size, making it suitable for tasks that demand significant computational resources. That said, its 273 GB/s of memory bandwidth is lower than that of competing systems such as Apple's M4 Max, which may hold back performance in bandwidth-intensive scenarios.
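
A rough back-of-envelope calculation, not a benchmark, shows why both numbers matter: the 128GB pool determines which precisions of an 80-billion-parameter model fit at all, and the 273 GB/s of bandwidth caps how fast tokens can be generated once one does. The figures below assume a dense model whose full weights are streamed for every token; real throughput also depends on the quantization scheme, KV cache, batch size, and serving stack.

```python
# Back-of-envelope sketch, not a measurement: weight footprint of an
# 80B-parameter model at several precisions, and the decode-speed ceiling
# implied by 273 GB/s if all weights are streamed once per generated token.
PARAMS = 80e9
MEMORY_GB = 128
BANDWIDTH_GBPS = 273

for name, bytes_per_param in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    weights_gb = PARAMS * bytes_per_param / 1e9
    fits = "fits" if weights_gb <= MEMORY_GB else "does not fit"
    ceiling = BANDWIDTH_GBPS / weights_gb
    print(f"{name}: ~{weights_gb:.0f} GB of weights ({fits}), ceiling ~{ceiling:.1f} tokens/s")
```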

This article was produced by NeonPulse.today using human and AI-assisted editorial processes, based on publicly available information. Content may be edited for clarity and style.
