How to use the Ollama API to generate chat completions

Introduction

Generate a chat completion

Chat generation examples

Load a model

Unload a model

Introduction

Ollama provides a RESTful API that lets developers interact with the Ollama service over HTTP. The API covers all of Ollama's core functionality, including model management, running, and monitoring. This article shows how to call Ollama's RESTful API to generate chat completions.
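
A quick way to confirm that the local Ollama service is reachable before calling the chat endpoint is to hit two read-only endpoints; a minimal sketch, assuming the default address http://localhost:11434 and that at least one model has already been pulled:

# Print the server version and list the locally available models.
curl -s http://localhost:11434/api/version
curl -s http://localhost:11434/api/tags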

Generate a chat completion

Endpoint

POST /api/chat

This endpoint generates the next message in a chat using the specified model. Note that it is a streaming endpoint by default, so it returns a series of response objects. Streaming can be disabled by setting stream to false, in which case a single reply is returned that contains the complete generated content. The final response also carries statistics and other metadata about the request.

Basic parameters

  • model (required): the model name
  • messages: the messages of the chat; this is also how the conversation history is kept
  • tools: a JSON list of tools the model may use (requires a model with tool support)

Parameters of the message object

  • role: the role of the message, one of system, user, assistant, or tool
  • content: the content of the message
  • images (optional): a list of images to include in the message (for multimodal models such as LLaVA)
  • tool_calls (optional): a JSON list of tools the model wants to call

Advanced parameters (optional)

  • format: the format of the response. It can be the string json (the model produces well-formed JSON output) or a JSON schema (the reply is constrained to that exact structure)
  • options: additional model parameters from the Modelfile, such as temperature (see the combined sketch after this list)
  • stream: if set to false, the response is returned as a single response object rather than a stream of objects
  • keep_alive: controls how long the model stays loaded in memory after the request completes (default: 5 minutes)
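
A minimal sketch that combines these optional parameters in a single request (the model name and the values are only examples): a non-streaming reply with a lower temperature and the model kept in memory for 10 minutes afterwards.

# One-shot reply with custom sampling options and a longer keep_alive.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false,
  "options": {"temperature": 0.2},
  "keep_alive": "10m"
}'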

Chat generation examples

1. Chat request (streaming)

Request

Send a chat message; the reply is returned as a stream.

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ]
}'

Response

One of the JSON objects in the stream looks like this:

{"model": "llama3.2","created_at": "2025-03-03T03:06:32.8915833Z","message": {"role": "assistant","content": "The"},"done": false
}

The final response in the stream includes the complete statistics:

{"model": "llama3.2","created_at": "2025-03-03T03:06:34.7564372Z","message": {"role": "assistant","content": ""},"done_reason": "stop","done": true,"total_duration": 10451492400,"load_duration": 8474040300,"prompt_eval_count": 31,"prompt_eval_duration": 93000000,"eval_count": 298,"eval_duration": 1880000000
}
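
To consume the stream programmatically, read it line by line; each line is a standalone JSON object. A minimal sketch, assuming jq is installed:

# Stream the reply and print only the generated text.
# -N disables curl's buffering; jq's -j flag joins the pieces without newlines.
curl -sN http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "why is the sky blue?"}]
}' | jq -rj '.message.content'
echo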

2. Chat request (non-streaming)

Request

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ],
  "stream": false
}'

Response

{"model": "llama3.2","created_at": "2025-03-03T03:11:03.1902143Z","message": {"role": "assistant","content": "The sky appears blue to our eyes because of a phenomenon called Rayleigh scattering, ..., the sky appears blue because of the scattering of sunlight by tiny molecules in the atmosphere, specifically Rayleigh scattering."},"done_reason": "stop","done": true,"total_duration": 1927411000,"load_duration": 11525300,"prompt_eval_count": 31,"prompt_eval_duration": 2000000,"eval_count": 307,"eval_duration": 1913000000
}

3. Chat request (structured output)

Request

curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "Ollama is 22 years old and busy saving the world. Return a JSON object with the age and availability."
    }
  ],
  "stream": false,
  "format": {
    "type": "object",
    "properties": {
      "age": {
        "type": "integer"
      },
      "available": {
        "type": "boolean"
      }
    },
    "required": ["age", "available"]
  },
  "options": {
    "temperature": 0
  }
}'

Note: -H (--header) adds a key: value pair to the request headers.

Response

{"model": "llama3.2","created_at": "2025-03-03T03:15:53.1135514Z","message": {"role": "assistant","content": "{ \"age\": 22, \"available\": false }"},"done_reason": "stop","done": true,"total_duration": 132875200,"load_duration": 11059800,"prompt_eval_count": 49,"prompt_eval_duration": 6000000,"eval_count": 13,"eval_duration": 113000000
}
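
Note that the structured answer arrives as a JSON string inside message.content, so the client still has to parse it once more. A small sketch with jq, assuming the reply above was saved to response.json:

# Turn the stringified JSON in message.content back into a real JSON object.
jq '.message.content | fromjson' response.json
# => { "age": 22, "available": false }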

4. Chat request (with conversation history)

Send a chat message together with the previous conversation history. Use this approach for multi-turn conversations or chain-of-thought (CoT) prompting.

Request

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    },
    {
      "role": "assistant",
      "content": "due to rayleigh scattering."
    },
    {
      "role": "user",
      "content": "how is that different than mie scattering?"
    }
  ]
}'

Note: this request streams by default; set the stream parameter to false if you do not want a streamed response.

Response

The request streams by default; one of the JSON objects in the stream looks like this:

{"model": "llama3.2","created_at": "2025-03-03T03:32:57.8301713Z","message": {"role": "assistant","content": "Ray"},"done": false
}

The final response in the stream includes the complete statistics:

{"model": "llama3.2","created_at": "2025-03-03T03:32:59.947737Z","message": {"role": "assistant","content": ""},"done_reason": "stop","done": true,"total_duration": 10420930200,"load_duration": 7960990600,"prompt_eval_count": 55,"prompt_eval_duration": 57000000,"eval_count": 339,"eval_duration": 2147000000
}

5. Chat request (with images)

Send a chat message that includes an image. This requires a multimodal model such as LLaVA or BakLLaVA. Multiple images must be provided as an array, and every image must be base64-encoded, as in the sketch below.
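
A small sketch of how a local image might be encoded and sent (the file name pig.png is only a placeholder; base64 -w0 is the GNU coreutils form, on macOS use base64 -i pig.png instead):

# Encode the image to base64 and build the request body with jq.
IMG=$(base64 -w0 pig.png)
jq -n --arg img "$IMG" '{
  model: "llava",
  stream: false,
  messages: [{role: "user", content: "what is in this image?", images: [$img]}]
}' | curl -s http://localhost:11434/api/chat -d @-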

Request

{"model": "llava","stream": false,"messages": [{"role": "user","content": "what is in this image?","images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnT
oZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"]}]
}

Response

{"model": "llava","created_at": "2025-03-03T03:37:28.2035137Z","message": {"role": "assistant","content": " The image shows a cartoon character that resembles a pig with a human-like gesture. It appears to be waving, as indicated by the hand above its head. The style of the character is reminiscent of manga or anime designs, with simple lines and exaggerated expressions. "},"done_reason": "stop","done": true,"total_duration": 538754700,"load_duration": 3500000,"prompt_eval_count": 591,"prompt_eval_duration": 1000000,"eval_count": 62,"eval_duration": 533000000
}

6. Chat request (reproducible output)

Request

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "stream": false,
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "options": {
    "seed": 101,
    "temperature": 0
  }
}'

Response

{"model": "llama3.2","created_at": "2025-03-03T04:04:52.9114515Z","message": {"role": "assistant","content": "How can I assist you today?"},"done_reason": "stop","done": true,"total_duration": 63318300,"load_duration": 10508400,"prompt_eval_count": 27,"prompt_eval_duration": 2000000,"eval_count": 8,"eval_duration": 50000000
}

7. Chat request (with tools)

tools let the model call specific functions to complete more complex tasks. It is an array in which each element describes one tool the model may call. A tool object typically contains a type field and a function field: type is usually "function", and function describes the function in detail, including its name, description, and parameters.

Request

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather today in Paris?"
    }
  ],
  "stream": false,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The location to get the weather for, e.g. San Francisco, CA"
            },
            "format": {
              "type": "string",
              "description": "The format to return the weather in, e.g. 'celsius' or 'fahrenheit'",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["location", "format"]
        }
      }
    }
  ]
}'

Response

{"model": "llama3.2","created_at": "2025-03-03T04:11:33.4754367Z","message": {"role": "assistant","content": "","tool_calls": [{"function": {"name": "get_current_weather","arguments": {"format": "celsius","location": "Paris"}}}]},"done_reason": "stop","done": true,"total_duration": 179741300,"load_duration": 11204800,"prompt_eval_count": 217,"prompt_eval_duration": 5000000,"eval_count": 25,"eval_duration": 163000000
}

When a response containing tool_calls comes back, you execute the corresponding function based on the information in tool_calls. In this example that means calling get_current_weather with the location and format arguments; to obtain real weather data you would typically call an external weather API or scrape a weather site.
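
A minimal sketch of pulling the requested function name and arguments out of that reply with jq (response.json is assumed to hold the JSON shown above):

# Read which tool the model wants to call and with which arguments.
jq -r '.message.tool_calls[0].function.name' response.json       # get_current_weather
jq -c '.message.tool_calls[0].function.arguments' response.json  # {"format":"celsius","location":"Paris"}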

# Replace with your own OpenWeatherMap API key
API_KEY="your_openweathermap_api_key"
LOCATION="Paris"
URL="https://api.openweathermap.org/data/2.5/weather?q=$LOCATION&appid=$API_KEY&units=metric"
RESPONSE=$(curl -s "$URL")
echo "$RESPONSE"

Finally, wrap the result in a JSON object and send it back to the model so it can generate the final reply:

{"model": "llama3.2","messages": [{"role": "user","content": "What is the weather today in Paris?"},{"role": "tool","name": "get_current_weather","parameters": {"location": "Paris","format": "celsius"},"content": "{\"weather\": [\"description\": \"Clear\", \"temperature\": 3]}"}],"stream": false
}
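
A small sketch of sending this follow-up request (assuming the JSON object above was saved to followup.json); the model then produces the final natural-language answer from the tool result:

# Hand the tool result back to the model and print its final answer.
curl -s http://localhost:11434/api/chat -d @followup.json | jq -r '.message.content'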

Load a model

To simply load a model into memory, send a chat request that contains an empty messages array.

Request

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": []
}'

Response

{"model": "llama3.2","created_at": "2025-03-03T04:20:32.7883778Z","message": {"role": "assistant","content": ""},"done_reason": "load","done": true
}

Unload a model

To unload a loaded model, send the same request as for loading it, adding the keep_alive parameter set to 0.

Request

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [],
  "keep_alive": 0
}'

Response

{"model": "llama3.2","created_at": "2025-03-03T04:21:35.8883384Z","message": {"role": "assistant","content": ""},"done_reason": "unload","done": true
}

