Regular readers will know that I’ve spent most of the past two years shoehorning LLMs into single-board computers, partly as a learning exercise and partly because there are plenty of local/“edge” applications where semantic reasoning (however limited) and “interpretation” of sensor data are genuinely useful. But we’ve now reached the point where running a decently useful open-weights model on your laptop is entirely feasible.

This comes at what is possibly the worst point in our timeline. Having started my own inference library and tried hacking away at @antirez’s brilliant hack within my meagre resources, I feel a serious rift developing between the “haves” who were lucky enough to get hardware in time (or can splurge multiple K of European Pesos on it) and the “have nots”. The societal impact of the whole thing on the always hype-driven geek community is, of course, fascinating (especially since a very small number of people have a disproportionate amount of influence in…