Why I'm avoiding OpenAI's ChatGPT and going local


The latest issues with OpenAI - the Scarlett Johansson "Sky"/Her incident, prominent OpenAI employees leaving the company, and that NDA - along with other previous issues, are red flags for me. Personally, OpenAI now ranks alongside Meta: the "end justifies the means" and "better to ask for forgiveness than permission" style of leadership is difficult for me to accept. The US government shares the blame here - why it lets these billionaires exploit every human being on Earth is unfathomable!

Before I continue, I'd like to say that "AI" is far more than the Large Language Models (LLMs) being hyped up today; LLMs are a subset of "AI". I am not against "AI", nor am I against using LLMs. Personally, I prefer open source LLMs that I can run locally, on device, for two reasons: privacy and responsible use. Privacy is a no-brainer. Cloud-based LLMs get access to your data, and given how opaque most of these companies are, you don't know what they are doing with it. Second, I prefer to be a responsible LLM user, one who does not put an undue burden on the environment. Every LLM query sent to the cloud consumes electricity to power the servers and water to cool them.

Instead of using OpenAI's ChatGPT, I look at models on HuggingFace.co that are available through Ollama.com and PrivateLLM.app. Ollama is an open source application that lets you download different LLMs and run them locally. What I like is that you can also configure it to act as a server - but on your own network.
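To give a flavour of the workflow, here is a quick sketch of pulling a model and exposing Ollama as a server on your own network. The model name and port are Ollama's defaults at the time of writing; the server IP is a placeholder for your own machine.

```shell
# Pull a model from Ollama's library and chat with it locally
ollama pull llama3
ollama run llama3 "Explain what an LLM is in one sentence."

# Act as a server on your own network: bind to all interfaces
# (Ollama listens on port 11434 by default)
OLLAMA_HOST=0.0.0.0 ollama serve

# Any machine on your LAN can now query the REST API
curl http://<server-ip>:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```

Everything stays on your hardware - the only network traffic is between your own devices.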

I tried running Ollama on a Raspberry Pi 5 with 8GB of RAM and a 1TB SSD; running one of the Llama 3 models simply choked the machine. I thought it would be able to handle it.


htop showing the load of running Llama 3 model on Raspberry Pi 5

That experiment was a bust! I deleted Ollama and plan to install it on a better machine - maybe it'd do better on an old Intel-based Mac Mini with 16GB of RAM.

Speaking of Macs, I also use PrivateLLM on my iPhone, iPad and Mac. Yes, it is a US$10 universal application that also uses some of HuggingFace.co's models. Like Ollama, PrivateLLM does it all locally, on-device. Whilst it runs on the iPhone, iPad and Mac, the models you can download depend on the resources available on the device. PrivateLLM supports Shortcuts, which extends access to the LLMs. One caveat, though: PrivateLLM's models are only good for text. If you want multi-modal models, you have to go to Ollama.

You do not need to sacrifice your privacy or the planet - or hand more money to the bros at OpenAI - in order to use LLMs. Go local, go on-device; that is the way to go.