lobehub/lobe-chat

I don't use ollama in lobechat #1633

fazhang-master posted on GitHub

💻 System environment

Windows

📦 Deployment environment

Official Preview

🌐 Browser

Chrome

🐛 Problem description

[screenshots: WechatIMG174, WechatIMG173]

🚦 Expected result

First, I can reach 127.17.0.1:11434, but I still can't use Ollama in LobeChat. I start Ollama with:

$ OLLAMA_HOST=0.0.0.0 ollama serve

and I start LobeChat with:

$ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://192.168.31.6:11434 lobehub/lobe-chat

📷 Steps to reproduce

No response

📝 Additional information

No response


👀 @fazhang-master

Thank you for raising an issue. We will look into the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.

posted by lobehubbot 12 months ago

Sorry, system environment: Ubuntu 22.04, deployment environment: Docker.

posted by fazhang-master 12 months ago

posted by okwinds 12 months ago

@fazhang-master Try adding /v1:

$ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://192.168.31.6:11434/v1 lobehub/lobe-chat
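Presumably the /v1 suffix matters because Ollama exposes an OpenAI-compatible API under that prefix in addition to its native /api routes. It is also worth confirming the server is reachable from the Docker host at all before changing LobeChat (a quick sketch, assuming the address above and Ollama's default port):

$ curl http://192.168.31.6:11434              # should answer "Ollama is running"
$ curl http://192.168.31.6:11434/api/tags     # should list the installed models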
posted by arvinxx 12 months ago

@fazhang-master Try adding /v1:

$ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://192.168.31.6:11434/v1 lobehub/lobe-chat

[screenshots attached]

posted by fazhang-master 12 months ago



@fazhang-master I have the same problem as you: macOS, Ollama deployed locally, LobeChat running in Docker. Although the Ollama check in the Settings > Language Model section passes, the agent conversation still fails with "Error: connect ECONNREFUSED 127.0.0.1:11434". The LobeChat log shows:

Route: [ollama] OllamaBizError: [TypeError: fetch failed] { cause: [Error: connect ECONNREFUSED 127.0.0.1:11434] { errno: -111, code: 'ECONNREFUSED', syscall: 'connect', address: '127.0.0.1', port: 11434 } }
Route: [openai] NoOpenAIAPIKey: { error: undefined, errorType: 'NoOpenAIAPIKey' }

posted by lyg5597122 12 months ago


Same problem here

posted by oldmanjk 12 months ago

You need to use "host.docker.internal" as the Ollama address. See https://stackoverflow.com/questions/31324981/how-to-access-host-port-from-docker-container
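On Docker Desktop (macOS/Windows) that hostname resolves to the host out of the box; on a Linux host it usually has to be mapped in explicitly. A minimal sketch of the idea, assuming Ollama listens on its default port and is bound to an address the container can reach (e.g. started with OLLAMA_HOST=0.0.0.0):

$ docker run -d -p 3210:3210 \
    --add-host=host.docker.internal:host-gateway \
    -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 \
    lobehub/lobe-chat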

posted by williamchai 12 months ago

That didn't work for me. This did: "Use --net="host" in your docker run command, then localhost in your docker container will point to your docker host."
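A minimal sketch of that variant (Linux only in this form; with --network host the -p mapping is ignored because the container shares the host's network stack, so LobeChat listens on the host's port 3210 and 127.0.0.1 inside the container really is the host):

$ docker run -d --network host -e OLLAMA_PROXY_URL=http://127.0.0.1:11434 lobehub/lobe-chat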

posted by oldmanjk 12 months ago

I am experiencing the same issue, but I have noticed something additional on my end.

I have Ollama running locally on Windows and lobe-chat in Docker.

I have been experimenting extensively with several Ollama servers and Lobe-chat. Initially, I had an Ollama server running on port 11434. However, after encountering some issues, I reinstalled Ollama, and it now runs on port 11345. I typically run Ollama using the "ollama serve" command, which allows me to see the server's running status. Regardless of the port I configure in the Lobe-chat settings for Ollama, it always indicates that the Ollama connection is active and automatically populates Ollama models that no longer exist on my system.

Even when the connection check passes, attempting to prompt the model consistently results in the same error:

Error requesting Ollama service, please troubleshoot or retry based on the following information:

{
  "error": {
    "cause": {
      "errno": -111,
      "code": "ECONNREFUSED",
      "syscall": "connect",
      "address": "127.0.0.1",
      "port": 11435
    }
  },
  "endpoint": "http://127.0.***.1:****/v1",
  "provider": "ollama"
}

posted by clusterpj 11 months ago

✅ @fazhang-master

This issue is closed. If you have any questions, you can comment and reply.

posted by lobehubbot 11 months ago

🎉 This issue has been resolved in version 0.149.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

posted by lobehubbot 11 months ago

What worked for me was running ollama serve and then turning on the "Use Client-Side Fetching Mode" setting. Judging from the Ollama server logs, the request URL differs: with the setting off it is /v1/api/chat, which doesn't work; with the setting on it is /api/chat, which does work.
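A quick way to see which of those paths an Ollama server actually answers (a sketch, assuming the default local port and a pulled model such as llama3:8b):

$ curl -i http://127.0.0.1:11434/v1/api/chat       # not an Ollama route; expect a 404
$ curl http://127.0.0.1:11434/api/chat -H "Content-Type: application/json" -d '{"model":"llama3:8b","messages":[{"role":"user","content":"hi"}]}'   # native chat route; should return a response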

posted by rudolfolah 11 months ago

When I try $ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 lobehub/lobe-chat (without /v1) and leave Client-Side Fetching Mode off, it works. However, when I enable Client-Side Fetching Mode, it stops working again.

posted by lizzzcai 9 months ago

[screenshots attached] Ollama is running locally, not in Docker. However, after running the lobe-chat Docker container with the following command:

$ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://127.0.0.1:11434 lobehub/lobe-chat

the check passes, but chatting still fails. The command curl http://127.0.0.1:11434 works, and running ollama run llama3:8b also works correctly.
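Since Ollama runs on the host rather than in Docker, 127.0.0.1 inside the container points at the container itself, which would explain why curl succeeds on the host while chatting from the container fails. A sketch of the simplest workaround on a Linux host, following the --net=host suggestion above (host networking makes the container share the host's loopback, so the 127.0.0.1 proxy URL then reaches the host's Ollama):

$ docker run -d --network host -e OLLAMA_PROXY_URL=http://127.0.0.1:11434 lobehub/lobe-chat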

posted by fazhang-master 9 months ago
