
Suggestion: add support for large language models deployed locally with Ollama #328

Closed
xuexiaojingquan opened this issue Jun 28, 2024 · 2 comments
Labels
wontfix This will not be worked on

Comments

@xuexiaojingquan

Please consider adding support for large language models deployed locally with Ollama.

@Aidengiveup

In fact, you can call the Ollama service directly using the ChatGPT (OpenAI) API format. This is my configuration; I am using a tuned qwen2:7b.
[screenshot of configuration]
Beyond that, you can also look at https://www.53ai.com/news/qianyanjishu/772.html for tuning the local model's data and system prompt.

Drawbacks of a local LLM: it occasionally explains or answers the text instead of translating it; the workaround is to edit the output manually or ignore it.
Also, I still feel a local model's translation is less cost-effective than the Google Translate API in terms of concurrency speed, accuracy, and ease of use.
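The approach described above can be sketched as follows. This is a minimal illustration, assuming Ollama is serving on its default port 11434 with a `qwen2:7b` model pulled, and that the client targets the OpenAI-compatible `/v1/chat/completions` endpoint Ollama exposes; the system prompt and helper name are illustrative, not from the original comment.

```python
# Sketch: calling a local Ollama server through its OpenAI-compatible API.
# Assumptions: Ollama runs on localhost:11434 and qwen2:7b has been pulled.
import json
import urllib.request

OLLAMA_BASE_URL = "http://localhost:11434/v1"  # OpenAI-compatible endpoint

def build_chat_request(model: str, system_prompt: str, text: str) -> urllib.request.Request:
    """Build an OpenAI-format chat-completions request aimed at local Ollama."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": text},
        ],
    }
    return urllib.request.Request(
        f"{OLLAMA_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama",  # a key is required by some clients but ignored by Ollama
        },
    )

req = build_chat_request(
    "qwen2:7b",
    "You are a translator. Reply with the translation only.",
    "Translate to English: 你好",
)
# Sending the request needs a running Ollama server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request body follows the OpenAI format, any client that lets you override the API base URL (as in the configuration screenshot above) can talk to Ollama the same way.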

@xuexiaojingquan
Author

xuexiaojingquan commented Jun 30, 2024 via email

@bookfere bookfere added the wontfix This will not be worked on label Jul 2, 2024
@bookfere bookfere closed this as completed Jul 2, 2024
3 participants