lobehub/lobe-chat

[Bug] Images are not recognized when calling the llava multimodal model through the ollama integration #1482

yumcc-com posted on GitHub

💻 System Environment

Windows

📦 Deployment Environment

Docker

🌐 Browser

Chrome

🐛 Problem Description

lobe-chat is running in Docker with the ollama integration enabled. After switching to the llava model, plain text prompts work normally, but uploading an image produces the following error (screenshot attached):

{
  "error": {
    "message": "json: cannot unmarshal array into Go struct field Message.messages.content of type string",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  },
  "endpoint": "http://host.do****er.internal:****/v1",
  "provider": "ollama"
}
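For context, the error text suggests a payload-shape mismatch: the request goes to Ollama's OpenAI-compatible /v1 endpoint, whose Go-side Message struct expects `content` to be a plain string, while the OpenAI vision format sends `content` as an array of text and image parts. The sketch below is illustrative only (not LobeChat's actual code); the field layouts follow the OpenAI vision message format and Ollama's native /api/chat format, and the image payloads are placeholders.

```ts
// Hypothetical sketch of the two message shapes involved in this error.

// OpenAI vision-style message: `content` is an ARRAY of parts.
// Sending this to Ollama's /v1 endpoint (at the time of this report)
// triggers "cannot unmarshal array ... of type string".
const openAiStyleMessage = {
  role: "user",
  content: [
    { type: "text", text: "What is in this picture?" },
    { type: "image_url", image_url: { url: "data:image/png;base64,..." } },
  ],
};

// Ollama's native /api/chat message: `content` is a STRING, and images are
// passed separately as base64 strings (no data: URI prefix).
const ollamaNativeMessage = {
  role: "user",
  content: "What is in this picture?",
  images: ["<base64-encoded image>"],
};

console.log(openAiStyleMessage, ollamaNativeMessage);
```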

🚦 Expected Result

After uploading an image, it should be recognized and parsed correctly.

📷 Steps to Reproduce

No response

📝 Additional Information

No response


👀 @yumcc-com

Thank you for raising an issue. We will look into the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.

posted by lobehubbot about 1 year ago

✅ @yumcc-com

This issue is closed. If you have any questions, you can comment and reply.

posted by lobehubbot 12 months ago
