I never thought it would be so challenging to run a local LLM on Windows. Even when everything seemed fine, I later discovered that the model was running entirely on my CPU. Configuring drivers and environment variables, and eventually setting up WSL2 after I had already pulled an Ollama model, felt like a project of its own. And even once it worked, maintaining the stack was exhausting.