Local AI: Ollama now available for Linux

Get up and running with large language models, locally.
Run Llama 2, Code Llama, and other models. Customize and create your own.

“… Ollama is an AI tool designed to help you run large language models locally. With Ollama, you can easily customize and create language models according to your needs. Whether you’re a developer or a researcher, Ollama lets you harness the power of AI without relying on cloud-based platforms. Available for download on macOS and Linux, Ollama offers an efficient and convenient way to run language models, and it is a good fit for anyone who wants greater control and privacy over their AI models. Support for additional operating systems, including Windows, is planned. Try Ollama today and experience the freedom of running language models on your own terms.”

Possible use cases for Ollama:
Running open-source AI language models locally (see the sketch after this list).
Researching with customizable language models.
Running AI models without a cloud dependency.
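
As a rough illustration of the first two items, here is a minimal session, assuming Ollama is already installed and using the llama2 model. The custom model name "mylinuxhelper" and its settings are hypothetical, not from the announcement:

# Pull and chat with a model locally (weights are downloaded on first run)
ollama run llama2

# Customize: build a variant from a Modelfile.
# Example Modelfile contents (illustrative):
#   FROM llama2
#   PARAMETER temperature 0.7
#   SYSTEM You are a concise assistant for Linux questions.
ollama create mylinuxhelper -f Modelfile
ollama run mylinuxhelper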

Note from the maker:
“… Today marks the day that Ollama for Linux (0.1.0) is available for download!
Out of the box, Ollama on Linux runs with GPU acceleration when an Nvidia GPU is available. If no such GPU is present, it automatically falls back to CPU-only mode.

💯 Ollama will run on cloud servers with multiple GPUs attached
🤖 Ollama will run on WSL 2 with GPU support
😍 Ollama maximizes the number of model layers loaded onto the GPU to increase performance without crashing
🤩 Ollama supports everything from CPU-only machines and small hobby gaming GPUs up to powerful workstation graphics cards like the H100

One-command installation:
curl https://ollama.ai/install.sh | sh
…”
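
After the install script completes, a quick way to verify the setup is to run a model and, on a machine with an Nvidia card, watch the GPU being used. This is a minimal sketch; the model name and prompt are just examples:

ollama run llama2 "Why is the sky blue?"

# In another terminal, confirm GPU acceleration is active (Nvidia only):
nvidia-smi

Ollama also listens on a local HTTP port (11434 by default), so other programs on the same machine can query the model without any cloud service:

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'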

Ollama website
Ollama on GitHub
Ollama for Linux on GitHub