What's a good local and free LLM model for Windows?

cheese_greater@lemmy.world to Ask Lemmy@lemmy.world · edited · 8 hours ago
Toes♀@ani.social · 7 hours ago

The OS isn't as important as the hardware being used. AMD, Nvidia, or Intel GPU? How much RAM and VRAM are you working with? What's your CPU?

Generally speaking, I would suggest koboldcpp with Gemma 3.

https://github.com/LostRuins/koboldcpp?tab=readme-ov-file#windows-usage-precompiled-binary-recommended

https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated-GGUF/blob/main/gemma-3-27b-it-abliterated.q6_k.gguf
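For reference, launching the precompiled Windows binary with that GGUF might look roughly like the following; the flag names come from koboldcpp's CLI, but the context size and GPU layer count here are illustrative values you would tune to your own hardware:

```shell
:: Sketch of a koboldcpp launch on Windows (paths and numbers are examples).
:: --gpulayers offloads that many model layers to the GPU; lower it if you
:: run out of VRAM, or set it to 0 for CPU-only inference.
koboldcpp.exe --model gemma-3-27b-it-abliterated.q6_k.gguf --contextsize 8192 --gpulayers 24
```

Once it starts, koboldcpp serves a local web UI (by default on port 5001) that you can open in a browser.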
occultist8128@infosec.pub · 2 hours ago

What are the minimum requirements for running it?
Toes♀@ani.social · 5 minutes ago

Lots of RAM and a good CPU (it benefits from more cores), if you're comfortable with it being on the slow side. There are other versions of that model optimized for lower-VRAM setups too, but for better performance, 8 GB of VRAM is the minimum.