Running Ollama Locally
JetBrains IDEs added the ability to hook up to a local LLM, and Ollama is one of the supported runners. I've been wanting to run an LLM locally, and Ollama made that incredibly easy.
Downloading and installing it is straightforward.
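On Linux the official install script is a one-liner, and Homebrew packages it for macOS; roughly (commands and URLs as of this writing):

```shell
# Linux: official install script
curl -fsSL https://ollama.com/install.sh | sh

# macOS: Homebrew also works if you prefer the command line
brew install ollama

# Confirm the install
ollama --version
```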
Once you do that, you can choose which model you want to run, and there are many to choose from. The models behave very differently depending not only on which model you pick but also on which size.
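The basic model management commands look like this (llama3 is just an example; browse the Ollama library to see what's available):

```shell
# Download a model without starting a chat
ollama pull llama3

# List the models installed locally
ollama list

# Remove one you no longer want
ollama rm llama3
```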
Some models have tags on the end of the name, like ":2b" or ":27b", indicating the number of parameters.
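For example, the same model family at two sizes (the exact tags vary by model, so treat these as illustrative):

```shell
# Small variant: quick to load and run, less capable
ollama run gemma2:2b

# Large variant: much heavier without a big GPU, but noticeably better answers
ollama run gemma2:27b
```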
It looks like you can set a system message to go along with your inputs. This seems to be a way to more strongly steer the model toward behaving a particular way.
I can force a more serious tone with a different system message.
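Inside an interactive session, /set system swaps the system message on the fly. A quick sketch (the messages themselves are made up):

```shell
ollama run gemma2:2b
>>> /set system "You are a cheerful assistant who answers everything in limericks."
>>> /set system "You are a terse, formal assistant. Answer in one sentence."
```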
These can be saved and restored as well.
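Within the session, /save stores the current settings (system message included) under a new model name, and /load brings it back later; the name here is made up:

```shell
>>> /save serious-gemma
>>> /load serious-gemma
```

The same thing can be done non-interactively with a Modelfile containing a SYSTEM line and ollama create, which is handy if you want the customization under version control.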
The model can be started directly from the shell, for example with the small Gemma tag from earlier:
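```shell
# Downloads the model if needed, then drops into an interactive chat
ollama run gemma2:2b
```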
Alternatively, a model can be loaded from within the interactive prompt:
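```shell
# Switch to another installed model without leaving the session (name is an example)
>>> /load gemma2:27b
```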
All of this is an entirely new world I'm excited to explore. I've really wanted to try out different models to see what they can do without worrying about "credits."