Using Free AI Models and their API with OllamaZone

Hi there! Want to use powerful AI models without spending a fortune? OllamaZone lets you access public Ollama servers hosting various Large Language Models (LLMs) like Llama, DeepSeek, Mistral, and Gemma. Let me show you how to use them in just a few simple steps.

What is OllamaZone?

OllamaZone is a directory of publicly available Ollama servers. Each server hosts different AI models that you can use for free in your projects. Think of it as a community resource where you can find and connect to AI models without needing expensive hardware.

Step 1: Find a Server

  1. Browse through the list of available servers
  2. Use the filters or just browse the model pages to find models that match your needs

What Each Server Listing Shows:

╔════════════════════════════════════════╗
║ Model: llama3.1:latest                 ║
║ IP Port: 78.46.156.183:11434           ║
║ Parameter Size: 4.58 GB                ║
║ [Copy IP] [Copy cURL] [Chat with Model]║
╚════════════════════════════════════════╝
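
If you'd rather check a server's catalogue from code instead of reading the card, Ollama servers expose a /api/tags endpoint that lists the models they host. Here's a minimal Python sketch using the example server from the listing above; swap in whichever server you picked:

python
import requests

# Example server from the listing above -- replace with the one you picked
server_url = "http://78.46.156.183:11434"

# /api/tags lists the models this Ollama server currently hosts
response = requests.get(f"{server_url}/api/tags", timeout=10)
response.raise_for_status()

for model in response.json()["models"]:
    # Each entry includes the model name and its download size in bytes
    print(model["name"], f'{model["size"] / 1e9:.2f} GB')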

Step 2: Test the Server

Before using a server in your project, let's make sure it works. Run this simple test in your terminal. Setting "stream": false tells the server to return one complete JSON object instead of a stream of chunks, and options.num_predict caps the length of the answer:

bash
curl -X POST "http://78.46.156.183:11434/api/generate" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1:latest",
    "prompt": "What is artificial intelligence?",
    "stream": false,
    "options": { "num_predict": 100 }
  }'

If the server is working, you'll get back a JSON object with the AI's answer in its response field. If not, just try another server from the list.
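
You can automate that fallback, too. Here's a rough Python sketch that pings a few candidate servers and keeps the first one that answers. The first address is the example server from above; the other two are placeholders for servers you find in Step 1:

python
import requests

# Candidate servers from Step 1 -- the last two entries are placeholders,
# replace them with servers you actually found on OllamaZone
candidates = [
    "http://78.46.156.183:11434",
    "http://203.0.113.10:11434",   # placeholder
    "http://198.51.100.25:11434",  # placeholder
]

def find_working_server(servers):
    """Return the first server that responds to a quick health check."""
    for url in servers:
        try:
            # The root endpoint of a running Ollama server replies "Ollama is running"
            r = requests.get(url, timeout=5)
            if r.ok:
                return url
        except requests.RequestException:
            continue  # unreachable or timed out -- try the next one
    return None

server_url = find_working_server(candidates)
print(server_url or "No server responded -- pick different ones from the list.")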

Step 3: Use the API in Your Projects

Python Example

python
import requests

# Server details
server_url = "http://78.46.156.183:11434"
model_name = "llama3.1:latest"
prompt = "Explain how to make pasta in 5 steps."

# Send request to the AI model.
# "stream": False returns a single JSON object instead of a stream of chunks,
# and options.num_predict caps the length of the generated answer.
response = requests.post(
    f"{server_url}/api/generate",
    json={
        "model": model_name,
        "prompt": prompt,
        "stream": False,
        "options": {"num_predict": 200},
    },
    timeout=60,
)

# Print the AI's response
if response.status_code == 200:
    print(response.json()["response"])
else:
    print(f"Error: {response.status_code}")
    print(response.text)
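
One note on the request above: it asks for a single, complete answer. Ollama's /api/generate endpoint actually streams by default, sending one small JSON object per chunk, which is handy when you want to show the answer as it's being written. Here's a minimal streaming sketch with the same server and model:

python
import json
import requests

server_url = "http://78.46.156.183:11434"
model_name = "llama3.1:latest"

# With "stream": True (Ollama's default), the server sends one JSON object per
# line, each carrying a small piece of the answer in its "response" field
with requests.post(
    f"{server_url}/api/generate",
    json={
        "model": model_name,
        "prompt": "Explain how to make pasta in 5 steps.",
        "stream": True,
    },
    stream=True,
    timeout=60,
) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            print()
            break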

JavaScript Example

javascript
async function askAI(prompt) {
  const serverUrl = "http://78.46.156.183:11434";
  const modelName = "llama3.1:latest";

  try {
    const response = await fetch(`${serverUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: modelName,
        prompt: prompt,
        stream: false, // return one JSON object instead of a stream of chunks
        options: { num_predict: 200 }, // cap the length of the generated answer
      }),
    });

    if (!response.ok) throw new Error(`HTTP error ${response.status}`);
    const data = await response.json();
    return data.response;
  } catch (error) {
    console.error("Error:", error);
    return null;
  }
}

// Usage example
askAI("What are three ways to improve code readability?").then((answer) =>
  console.log(answer),
);

Pro Tips

  1. Use the Copy Buttons: Each server listing has buttons to copy the IP address and generate a curl command
  2. Try Before You Code: The "Chat with Model" button lets you test a model directly in your browser (note: this is a beta feature; join our beta program to access it)
  3. Keep Backups: Have several server options ready in case some go offline
  4. Be Considerate: These are shared resources, so avoid sending too many requests in a short time (see the sketch after this list)
  5. Check Regularly: New servers with different models are added frequently. I'm also planning to set up a Telegram bot to notify you about new servers and models.
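
On the "be considerate" point, a little client-side throttling goes a long way. Here's a rough Python sketch that simply spaces out requests; the one-second pause is an arbitrary polite default, not a rule from OllamaZone:

python
import time
import requests

server_url = "http://78.46.156.183:11434"
model_name = "llama3.1:latest"

prompts = [
    "Summarize what an LLM is in one sentence.",
    "Give one tip for writing readable code.",
]

for prompt in prompts:
    response = requests.post(
        f"{server_url}/api/generate",
        json={"model": model_name, "prompt": prompt, "stream": False},
        timeout=60,
    )
    print(response.json().get("response", response.text))
    # Pause between requests so a shared server isn't flooded
    time.sleep(1)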

Beta Features Access

The Advanced Filtering and Chat with Model features are currently in beta. To access them:

  1. Visit our beta access page
  2. Sign up for the beta program
  3. Once approved, you'll have access to these advanced features

PS: The beta program is free and open to everyone, but it requires approval so that we can manage server load. Drop an email to info@ollama.zone if you have any questions about the beta program.

What Can You Build?

With these free AI models, you can create:

  • Chatbots
  • Content generators
  • Text summarizers
  • Code assistants
  • Translation tools
  • And much more!

Give it a try and follow us on Twitter to stay tuned for updates and new models.