I can't use Ollama in LobeChat #1633
fazhang-master posted on GitHub
💻 System environment
Windows
📦 Deployment environment
Official Preview
🌐 Browser
Chrome
🐛 Problem description
🚦 Expected result
First, I can reach 127.17.0.1:11434, but I can't use Ollama in LobeChat. I run Ollama with:
$ OLLAMA_HOST=0.0.0.0 ollama serve
and I start LobeChat with:
$ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://192.168.31.6:11434 lobehub/lobe-chat
📷 Reproduction steps
No response
📝 Additional information
No response
👀 @fazhang-master
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.
Sorry, the system environment is Ubuntu 22.04 and the deployment environment is Docker.
@fazhang-master Try adding a v1 suffix:
$ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://192.168.31.6:11434/v1 lobehub/lobe-chat
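For context, the /v1 suffix points at Ollama's OpenAI-compatible endpoint, which is only present in reasonably recent Ollama builds. A quick way to check that the endpoint answers before wiring it into LobeChat is a manual request; the host, port and model name below are just the values from this thread, adjust them to your setup:
$ curl http://192.168.31.6:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "llama3", "messages": [{"role": "user", "content": "hello"}]}'
If this returns a chat completion rather than a 404, the same URL passed via OLLAMA_PROXY_URL should work, provided the container can actually route to 192.168.31.6.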
@fazhang-master I have the same problem as you: macOS, Ollama deployed locally, LobeChat running in Docker. Although the Ollama check in the language model settings passes, the agent conversation still fails with "Error: connect ECONNREFUSED 127.0.0.1:11434". The LobeChat log shows:
Route: [ollama] OllamaBizError: [TypeError: fetch failed] { cause: [Error: connect ECONNREFUSED 127.0.0.1:11434] { errno: -111, code: 'ECONNREFUSED', syscall: 'connect', address: '127.0.0.1', port: 11434 } }
Route: [openai] NoOpenAIAPIKey: { error: undefined, errorType: 'NoOpenAIAPIKey' }
Same problem here
You need to use "host.docker.internal" as the Ollama address. See https://stackoverflow.com/questions/31324981/how-to-access-host-port-from-docker-container
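A sketch of the corresponding command: host.docker.internal resolves out of the box on Docker Desktop for macOS/Windows, while on plain Linux Docker you typically need the extra --add-host flag shown here (it is harmless where the name already resolves):
$ docker run -d -p 3210:3210 \
    --add-host=host.docker.internal:host-gateway \
    -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 \
    lobehub/lobe-chat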
That didn't work for me. This did: "Use --net="host" in your docker run command, then localhost in your docker container will point to your docker host."
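With that approach the run command would look roughly like this; note that host networking only works on Linux, and with --network=host the -p mapping is ignored, so LobeChat listens directly on port 3210 of the host:
$ docker run -d --network=host \
    -e OLLAMA_PROXY_URL=http://127.0.0.1:11434 \
    lobehub/lobe-chat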
I am experiencing the same issue, but I have noticed something additional on my end.
I have Ollama running locally on Windows and lobe-chat in Docker.
I have been experimenting extensively with several Ollama servers and Lobe-chat. Initially, I had an Ollama server running on port 11434. However, after encountering some issues, I reinstalled Ollama, and it now runs on port 11345. I typically run Ollama using the "ollama serve" command, which allows me to see the server's running status. Regardless of the port I configure in the Lobe-chat settings for Ollama, it always indicates that the Ollama connection is active and automatically populates Ollama models that no longer exist on my system.
Even when the connection check passes, attempting to prompt the model consistently results in the same error:
Error requesting Ollama service, please troubleshoot or retry based on the following information:
{
  "error": {
    "cause": {
      "errno": -111,
      "code": "ECONNREFUSED",
      "syscall": "connect",
      "address": "127.0.0.1",
      "port": 11435
    }
  },
  "endpoint": "http://127.0.***.1:****/v1",
  "provider": "ollama"
}
✅ @fazhang-master
This issue is closed. If you have any questions, you can comment and reply.
🎉 This issue has been resolved in version 0.149.0 🎉
The release is available on:
Your semantic-release bot 📦🚀
What worked for me was running ollama serve and then turning on the "Use Client-Side Fetching Mode" setting. From the Ollama server logs, it appears the URL is different: when the setting is off it is /v1/api/chat, which doesn't work; when the setting is on, the URL is /api/chat, which does work.
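That matches Ollama's documented routes: the native chat endpoint is /api/chat, while the OpenAI-compatible one is /v1/chat/completions, so a /v1/api/chat path is unlikely to be served by anything. You can check which path your server accepts with a direct request (the model name is a placeholder):
$ curl http://127.0.0.1:11434/api/chat \
    -d '{"model": "llama3", "messages": [{"role": "user", "content": "hi"}], "stream": false}'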
When I try
$ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 lobehub/lobe-chat
(without v1) and without Client-Side Fetching Mode, it works. However, when I enable client-side fetching mode, it stops working again.
Ollama is running locally, not in Docker. However, after running the lobe-chat Docker container with the following command:
$ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://127.0.0.1:11434 lobehub/lobe-chat
the check passes, but chatting still fails. The command curl http://127.0.0.1:11434 works successfully, and running ollama run llama3:8b also works correctly.
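One thing worth noting with that setup: inside the container, 127.0.0.1 refers to the container itself, not to the machine where Ollama is listening, which could explain a check that passes while the server-side chat requests fail with ECONNREFUSED. A quick way to confirm from inside the container, where <lobe-chat-container> is a placeholder for your container ID and assuming the image ships curl (wget works the same way if not):
$ docker exec -it <lobe-chat-container> curl http://127.0.0.1:11434
$ docker exec -it <lobe-chat-container> curl http://host.docker.internal:11434
The first call should be refused, while the second should answer "Ollama is running" when the host is reachable under that name (see the host.docker.internal and --network=host suggestions above).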