Print a nice list of local Ollama models with size, quantisation, and context length. Then pull every model in the list to update it; Ollama only actually downloads if the remote version differs. Finally, update the local `ollama` Python module for Python 3.10, because why not. Change the code if you want :)
- HammerAI/neuraldaredevil-abliterated:latest, 8.0B, Q4_0, Context: 8192
- cas/ministral-8b-instruct-2410_q4km:latest, 8.0B, Q4_K_M, Context: 32768
- deepseek-coder-v2:latest, 15.7B, Q4_0, Context: 163840
- dolphin-mistral:latest, 7.2B, Q4_0, Context: 32768
- llama3.1:8b, 8.0B, Q4_K_M, Context: 131072
- llama3.2-vision:11b-instruct-q4_K_M, 9.8B ...
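A minimal sketch of the script described above, assuming the official `ollama` Python client. The field names used (`details.parameter_size`, `details.quantization_level`, and the architecture-specific `*.context_length` key in `modelinfo`) match recent client versions but may differ in older ones.

```python
# Sketch of the described script, assuming the official `ollama` Python client.
import subprocess
import sys


def format_model_line(name: str, size: str, quant: str, ctx) -> str:
    """Render one model as a '- name, size, quant, Context: N' bullet."""
    return f"- {name}, {size}, {quant}, Context: {ctx}"


def main() -> None:
    import ollama  # imported here so the formatter works without a running server

    for m in ollama.list().models:
        # Context length lives in modelinfo under an architecture-specific key,
        # e.g. 'llama.context_length'.
        info = ollama.show(m.model).modelinfo or {}
        ctx = next((v for k, v in info.items() if k.endswith(".context_length")), "?")
        print(format_model_line(m.model, m.details.parameter_size,
                                m.details.quantization_level, ctx))
        # Pull re-downloads layers only if the remote manifest differs.
        ollama.pull(m.model)

    # Update the ollama Python module for whichever interpreter runs this script.
    subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "ollama"],
                   check=True)


if __name__ == "__main__":
    main()
```

The bullet format mirrors the example output above, e.g. `- llama3.1:8b, 8.0B, Q4_K_M, Context: 131072`.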
Licence: CC0