Thanks to techniques such as low-rank adaptation (LoRA), available as open source through Hugging Face, you can fine-tune a model for a fraction of the cost and time of other methods. How much of a fraction? How does personalizing a language model in a few hours on consumer hardware sound to you?
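
For a concrete sense of what that looks like, here's a minimal sketch using Hugging Face's PEFT library. The base model and the hyperparameters below are illustrative assumptions, not anything prescribed in the leaked memo:

```python
# A minimal LoRA setup sketch using Hugging Face's PEFT library.
# The base model ("gpt2") and hyperparameters are illustrative choices.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,             # rank of the low-rank update matrices
    lora_alpha=32,   # scaling factor applied to the update
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the base model; only the small adapter matrices are trainable.
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because gradients flow only through those small adapter matrices, the memory and compute footprint stays modest enough for a single consumer GPU, which is where the "few hours on consumer hardware" figure comes from.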

The Google developer added:

“Part of what makes LoRA so effective is that — like other forms of fine-tuning — it’s stackable. Improvements like instruction tuning can be applied and then leveraged as other contributors add on dialogue, or reasoning, or tool use. While the individual fine tunings are low rank, their sum need not be, allowing full-rank updates to the model to accumulate over time. This means that as new and better datasets and tasks become available, the model can be cheaply kept up to date without ever having to pay the cost of a full run.”
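
The rank arithmetic behind that point is easy to verify: each LoRA update is a product of two thin matrices and so has rank at most r, but independent rank-r updates generically add up to a higher-rank matrix. A quick NumPy check, where the dimension and rank are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 2  # illustrative hidden dimension and adapter rank

# Two independent LoRA-style updates, each the product of a (d x r)
# and an (r x d) matrix, so each has rank at most r.
delta1 = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
delta2 = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))

print(np.linalg.matrix_rank(delta1))           # 2
print(np.linalg.matrix_rank(delta2))           # 2
print(np.linalg.matrix_rank(delta1 + delta2))  # 4: the ranks add up
```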

Our mystery programmer concluded, “Directly competing with open source is a losing proposition.… We should not expect to be able to catch up. The modern internet runs on open source for a reason. Open source has some significant advantages that we cannot replicate.”

Thirty years ago, no one dreamed that an open-source operating system could supplant proprietary systems like Unix and Windows. Perhaps it will take far less than three decades for a truly open, soup-to-nuts AI program to overtake the semi-proprietary programs we're using today.