AI | Why My Mac Mini M4 Outperforms Dual RTX 3090s for LLM Inference
I built a dual RTX 3090 server for local LLM inference. A Mac Mini M4 turned out to be 27% faster and 22× more efficient. Here's why memory bandwidth beats raw GPU power.
Stephane Thirion · 16 Feb 2026 · 2 min read
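The "memory bandwidth beats raw GPU power" claim follows from a common rule of thumb: single-stream LLM decoding must stream every model weight once per generated token, so the ceiling is roughly bandwidth divided by model size. The sketch below illustrates that arithmetic with assumed numbers (the ~273 GB/s unified-memory figure, the 40 GB quantized model, the PCIe spillover fraction, and the `tokens_per_second` helper are all illustrative assumptions, not the article's benchmark results):

```python
# Back-of-the-envelope: single-batch LLM decoding is memory-bandwidth bound.
# Each generated token streams all weights once, so roughly:
#     tokens/sec ≈ effective memory bandwidth (GB/s) / model size (GB)
# All figures below are assumptions for illustration only.

def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode speed for a weights-streaming-bound workload."""
    return bandwidth_gb_s / model_size_gb

model_gb = 40.0  # assume a ~70B-parameter model at 4-bit quantization

# Unified memory (assume ~273 GB/s) holds the whole model:
mac = tokens_per_second(273, model_gb)

# Dual RTX 3090s offer ~936 GB/s each but only 48 GB total VRAM; if part of
# the model spills to system RAM over PCIe (assume ~32 GB/s), the slow link
# dominates the effective bandwidth (harmonic weighting by bytes read):
spilled = 0.2  # assume 20% of weights cross PCIe on every token
gpu_effective = 1 / ((1 - spilled) / 936 + spilled / 32)
gpu = tokens_per_second(gpu_effective, model_gb)

print(f"Mac unified memory : {mac:.1f} tok/s")
print(f"GPUs with spillover: {gpu:.1f} tok/s")
```

Under these assumed numbers the unified-memory machine comes out ahead despite far lower peak GPU bandwidth, because a small spilled fraction drags the effective bandwidth toward the PCIe rate.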
Benchmark | VDI Project – Hypervisor war (part.3)
In this series: VDI Project – Not only a XenDesktop project (part.1) · VDI Project – The framework (part.2) · VDI Project – Desktops and applications delivery (part.4)
Stephane Thirion · 3 Jun 2011 · 2 min read