forked from ggerganov/llama.cpp
Issues: OpenBMB/llama.cpp
#18  Why the output of siglip+resampler is different between torch and llama.cpp?
     opened Jul 16, 2024 by Xwmiss

#17  Looking for Vision-8B-MiniCPM-2_5-Uncensored-and-Detailed  [enhancement: new feature or request]
     opened Jul 4, 2024 by mashdragon (4 tasks done)

#16  Bug: minicpmv-convert-image-encoder-to-gguf.py does not work on 4-bit models
     opened Jul 4, 2024 by mashdragon

#12  llama-cpp-python support  [enhancement: new feature or request]
     opened Jun 5, 2024 by cegutica (4 tasks done)

#10  Question: llama.cpp server support  [question: further information is requested]
     opened May 31, 2024 by everyfin-in (2 tasks done)