My homelab consists of two machines: a Minisforum MS-01 mini PC and an Aoostar WTR PRO 5825U NAS box. In terms of GPUs, the MS-01 has an Intel Xe integrated GPU, and the WTR has a Radeon Vega integrated GPU.

[Image: Minisforum MS-01 and Aoostar WTR PRO]

As I understand it, AMD is basically out of the race when it comes to ML workloads. Everyone just assumes Nvidia, and ROCm is only supported by a handful of tools. Both of these GPUs are decent at video transcoding, so running things like Frigate and Immich is just fine. However, trying to run Ollama basically falls back to the CPU, with horrendous performance.

The MS-01 has one available PCI Express slot and can fit a single-slot card. After looking for possible candidates online, I purchased an Nvidia RTX 2000E Ada Generation GPU. It is a single-slot card with 16 GB of VRAM and a maximum power draw of 50 W.

[Image: RTX 2000E Ada Generation from PNY]

It fit just fine into the MS-01, but getting it going took some doing.

Attempt with vGPU

So, datacenter GPUs can present as…