Set up Ollama (a local Large Language Model framework) on your system


https://ollama.com/download

Choose the installer for your operating system: macOS, Linux, or Windows.


After installation, pull and run a model, e.g. Llama 3.2:

https://ollama.com/library/llama3.2

ollama run llama3.2


Once the model starts, Ollama drops you into an interactive prompt. Useful commands there include:

/show info

/help

From a regular shell, ollama list shows the models installed locally.


Check it with a UI-based interface such as Msty:

https://msty.app/


REST API:

Ollama also serves a local REST API on port 11434. The snippet below streams a generated response:

import requests
import json

url = 'http://localhost:11434/api/generate'

data = {
    "model": "llama3.2:1b",
    "prompt": "tell me a fun story about a dog",
}

response = requests.post(url, json=data, stream=True)

# Check the response status code
if response.status_code == 200:
    print("Generated Text:", end=" ", flush=True)
    # Each streamed line is a JSON object carrying one chunk of the reply
    for line in response.iter_lines():
        if line:
            # Decode the line and parse the JSON data
            decoded_line = line.decode('utf-8')
            result = json.loads(decoded_line)

            # Get the text chunk from this line of the response
            generated_text = result.get("response", "")
            print(generated_text, end="", flush=True)
else:
    print("Error:", response.status_code, response.text)

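Each line of the streamed reply is a standalone JSON object carrying a piece of the text plus a done flag. A minimal sketch of how those chunks assemble into the full response, using hard-coded sample payloads in place of a live server (the payload shape mirrors Ollama's /api/generate stream):

```python
import json

# Simulated stream: each element mimics one NDJSON line from /api/generate.
sample_lines = [
    b'{"response": "Once", "done": false}',
    b'{"response": " upon", "done": false}',
    b'{"response": " a time.", "done": true}',
]

full_text = ""
for line in sample_lines:
    chunk = json.loads(line.decode("utf-8"))   # one JSON object per line
    full_text += chunk.get("response", "")     # append this chunk's text
    if chunk.get("done"):                      # the final chunk sets done=true
        break

print(full_text)  # Once upon a time.
```

If you would rather receive one complete JSON object instead of a stream, add "stream": false to the request body.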