Print a nice list of the local Ollama models with their size, quantisation and context length.
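A minimal sketch of what that listing could look like, assuming the official `ollama` Python client and a running local Ollama server; the field names used here (`model`, `details.parameter_size`, `details.quantization_level`, `show().modelinfo`) are assumptions and may differ between client versions.

```python
# Sketch of the listing step. Assumes a local Ollama server and the official
# `ollama` Python client; field names may differ in other client versions.
import ollama

def context_length(name: str):
    # The context window is reported by show() under an architecture-specific
    # key such as "llama.context_length", so match on the suffix.
    info = ollama.show(name).modelinfo or {}
    return next((v for k, v in info.items() if k.endswith(".context_length")), None)

for i, m in enumerate(ollama.list().models, start=1):
    d = m.details
    print(f"{i}. {m.model}, {d.parameter_size}, {d.quantization_level}, "
          f"Context: {context_length(m.model)}")
```

Run against a local server, this prints one numbered line per model, in the same shape as the example output at the end of this page.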
Then pull every model in that list to update it; a pull only actually downloads data when the remote version differs from the local copy.
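A hedged sketch of that update loop, under the same assumption about the `ollama` Python client; the `status` field on the pull result is also an assumption.

```python
# Sketch of the update step: re-pull every locally installed model.
# Ollama compares layer digests with the registry, so a pull only downloads
# data when the remote model actually differs from what is on disk.
import ollama

for m in ollama.list().models:
    print(f"Updating {m.model} ...")
    result = ollama.pull(m.model)   # blocking; returns a final status object
    print(f"  {result.status}")
```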
Then update the local Ollama Python module for Python 3.10, because why not. Change the code if you want :)
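A small sketch of that module update, simply shelling out to pip for whatever interpreter runs the script; launch it with `python3.10` (an assumption about how Python 3.10 is named on your system) to upgrade that specific installation.

```python
# Sketch of the module-update step: upgrade the `ollama` package for the
# interpreter running this script. Invoke with python3.10 to target 3.10.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "pip", "install", "--upgrade", "ollama"],
    check=True,
)
```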
Example output of the listing step:

1. HammerAI/neuraldaredevil-abliterated:latest, 8.0B, Q4_0, Context: 8192
2. cas/ministral-8b-instruct-2410_q4km:latest, 8.0B, Q4_K_M, Context: 32768
3. deepseek-coder-v2:latest, 15.7B, Q4_0, Context: 163840
4. dolphin-mistral:latest, 7.2B, Q4_0, Context: 32768
5. llama3.1:8b, 8.0B, Q4_K_M, Context: 131072
6. llama3.2-vision:11b-instruct-q4_K_M, 9.8B
...
Licence: CC0