There is a lot to get excited about in Ollama
AI Summary
- Ollama 0.1.21 and 0.1.22 Release Summary
- Two updates released in one week
- Majority of new features in 0.1.21
- Version 0.1.22 focuses on refinements
- New Features
- Introduction of new models:
- Qwen
- DuckDB-NSQL
- Stable Code
- Nous Hermes 2 Mixtral
- StableLM 2
- Official TypeScript and JavaScript library announced
- New official Python library
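The new official Python library wraps the same chat endpoint the REST API exposes. A minimal sketch of the message payload it expects is below; the model name "llama2" is illustrative, and the actual call (commented out) assumes `pip install ollama` and a locally running Ollama server:

```python
# The chat endpoint and the Python library share the same payload shape:
# a list of role/content dicts, one per conversation turn.
messages = [
    {"role": "user", "content": "Why is the sky blue?"},
    {"role": "assistant", "content": "Because of Rayleigh scattering."},
    {"role": "user", "content": "Explain that in one sentence."},
]

# With the library installed and a server running, the call would look like:
#   import ollama
#   response = ollama.chat(model="llama2", messages=messages)
#   print(response["message"]["content"])
```

Passing prior user and assistant turns this way is what lets a client carry a conversation across requests.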
- CPU and AVX Instructions
- Ollama originally required AVX instruction support
- AVX used for vector calculations, common in CPUs for over a decade
- Support extended to older CPUs without AVX and newer CPUs with AVX2
- GPU Compatibility and Error Handling
- Improved GPU recognition; fallback to CPU if GPU not detected
- Better support for NVIDIA GPUs in WSL
- Resolved issues with Ollama hanging after multiple requests
- Enhanced error messages for internet connectivity issues
- User Experience Improvements
- MESSAGE directive in the Modelfile for the chat endpoint
- Allows seeding the conversation with user and assistant messages containing questions or answers
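The MESSAGE directive can be sketched in a Modelfile like this; a minimal example, assuming a locally pulled llama2 base model (the model name is illustrative):

```
FROM llama2

# Seed the conversation history the chat endpoint will see.
# Each MESSAGE line takes a role (user or assistant) and the text for that turn.
MESSAGE user Is Toronto in Canada?
MESSAGE assistant Yes, Toronto is in Canada.
MESSAGE user What about Sacramento?
```

Building a model from this file (e.g. `ollama create example -f Modelfile`, where `example` is a name you choose) starts every chat with those turns already in context.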
- Configuration settings for models can be adjusted in the Ollama REPL
- New /save command to serialize settings to a new model
- /load command to switch models or clear context
- Correct display of settings with the /show parameters command
- Community Engagement
- Encouragement to join the Discord community at discord.gg/ollama
- Encouragement to share a favorite Technovangelist video
- Closing Thoughts
- The 0.1.21 and 0.1.22 updates considered beneficial for all users
- Specific interest in the MESSAGE-in-the-Modelfile feature
- Call to action: like, subscribe, and thanks for watching
- Farewell with music