We recently got the Dell Pro Max GB10, Dell’s version of the Nvidia DGX Spark mini supercomputer (yes, we are official Nvidia partners, so come talk to us about setting up a full Nvidia stack). The specs are identical: both machines have the same GB10 Grace Blackwell superchip, 128 GB of unified memory and 4 TB of storage, and you can still link two of them together. We went with the Dell because I love Dell monitors, I really love my XPS Tablet (a full-fledged Windows machine), and I am excited to try out my new XPS laptop with an ARM processor (I do love Windows). It also doesn’t hurt that Dell has really good promotions for Amex Platinum cards, though you should be reading about that at The Points Guy.
The box for this computer was smaller than the box for my new laptop.

Unpacked, it is evident just how tiny it is.

Here is a comparison to my six-year-old son for scale. And yes, it is resting on a laptop stand atop a mini fridge.


Physically setting it up was mostly easy. I plugged in Ethernet (though you can use WiFi), HDMI (just for the initial setup; I plan to run this mostly headless as a server), the included USB-C power supply running at 240 watts, and a USB hub. The tricky part is that the DGX only has USB-C ports, while my keyboard and mouse both use USB-A, as do all of the USB hubs I had sitting around. Luckily I had an A-to-C adapter from my phone I could use. More frustrating was that I could not run the peripherals and monitor through the cheap KVM switch I use with our other servers, which led to a lot of disconnecting and re-running cables. The KVM did work after the initial setup, though.
There is a way to treat the DGX as a WiFi hotspot so that it can be set up remotely, but having an attached keyboard, mouse and monitor seemed easier.
Since this is a pricey, small computer, I connected it to a UPS. I spent quite a while discussing proper sizing with ChatGPT given the DGX’s 280-watt draw. ChatGPT kept recommending UPSs that were much, much larger than the DGX itself, which felt ridiculous, so I eventually found this compact APC unit that can support 540 watts.
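For anyone doing the same sizing exercise, the rule of thumb is that a UPS’s VA rating must be at least the load’s watts divided by the unit’s power factor. A quick sketch of that arithmetic (assumptions: a 280 W peak draw and a 0.6 power factor, which is typical for small line-interactive units; check the VA and watt ratings of your specific model):

```shell
# Minimum UPS VA rating = load watts / power factor (both values are estimates)
LOAD_WATTS=280
POWER_FACTOR=0.6
awk -v w="$LOAD_WATTS" -v pf="$POWER_FACTOR" 'BEGIN { printf "%.0f VA\n", w/pf }'
# prints "467 VA"
```

So a unit rated around 500 VA or more, with a watt rating above 280 W, clears the bar without being comically oversized.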

Once I powered on the unit, it asked me to choose a language and time zone, followed by a keyboard language.

It then asked me to set a username (with root access) and password.

Then it asks you to allow telemetry, though you can opt out.

Then you wait.

And wait.

And wait.

All in, I’d say it was about twenty minutes. So I spent that time building this boat Lego set with my son and a Lego Space Shuttle set with my daughter.


Once it was ready I was greeted with the familiar Ubuntu desktop.

I quickly installed Tailscale so my entire team could access the machine.
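For anyone following along, installing Tailscale on Ubuntu is a one-liner using Tailscale’s documented convenience script (the `--ssh` flag, which enables Tailscale SSH, is optional):

```shell
# Install Tailscale via the official install script
curl -fsSL https://tailscale.com/install.sh | sh
# Authenticate and join the tailnet; --ssh also enables Tailscale SSH
sudo tailscale up --ssh
```

After that, anyone on the tailnet can reach the machine by its Tailscale hostname, no port forwarding required.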
It comes with a lot of software preinstalled, most notably CUDA. I mostly cared that Docker was installed, as we will be running everything through containers.
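A quick sanity check that containers can actually see the GPU (this assumes the NVIDIA Container Toolkit is already configured, as it is out of the box on these machines; the CUDA image tag is just illustrative):

```shell
# Run nvidia-smi inside a CUDA container; if the GPU is listed,
# the container runtime is wired up correctly
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```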
My team will be putting this machine through its paces, experimenting with a range of GPU software: fitting XGBoost and deep learning models, and hosting our own LLMs with Ollama and OpenWebUI. We are particularly excited to distill large language models into small language models to use as agents in a much larger agentic workflow.
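As a sketch of where we’re headed, Ollama’s documented Docker invocation looks like this (the port, volume name, and model are Ollama’s defaults and examples, not something we’ve settled on):

```shell
# Start Ollama with GPU access, persisting models in a named volume
docker run -d --gpus all -v ollama:/root/.ollama \
  -p 11434:11434 --name ollama ollama/ollama
# Pull and chat with a small model inside the running container
docker exec -it ollama ollama run llama3.2
```

OpenWebUI can then be pointed at the Ollama API on port 11434 to get a browser chat interface.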
We’ll be writing more about our experiences in upcoming blog posts.
Jared Lander is the Chief Data Scientist of Lander Analytics, a New York data science and AI firm; Adjunct Professor at Columbia University; organizer of the New York Open Statistical Programming Meetup and the New York and Government Data Science and AI Conferences; and author of R for Everyone.