Feat/Add gemini support #26
Conversation
Hi @Dioprz, thank you for this PR and your email! I will get back to you very soon.
- add new provider method to adjust payload
Hi @Dioprz, feel free to check out this version. There may be some remaining issues with parsing the streaming response, but otherwise, Gemini is ready to use. I had to eliminate all the standard curl body parameters that, for example, OpenAI uses, as the Gemini API operates differently. Indeed, I employ the […]. Right now, we're working on restructuring methods like the […].
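To illustrate why the OpenAI-style body parameters don't carry over, here is a minimal hypothetical sketch in Python (not the plugin's Lua code; the `to_gemini_payload` helper is an assumption, while the `contents`/`parts` JSON shape follows the public Gemini REST docs):

```python
# Hypothetical sketch: mapping an OpenAI-style message list to the
# payload shape the Gemini REST API expects. Gemini uses roles
# "user"/"model", nests text under contents[].parts[].text, and takes
# the system prompt separately rather than as a conversation turn.
def to_gemini_payload(messages):
    contents = []
    system_instruction = None
    for msg in messages:
        if msg["role"] == "system":
            # System prompt goes into its own top-level field.
            system_instruction = {"parts": [{"text": msg["content"]}]}
            continue
        role = "model" if msg["role"] == "assistant" else "user"
        contents.append({"role": role, "parts": [{"text": msg["content"]}]})
    payload = {"contents": contents}
    if system_instruction is not None:
        payload["system_instruction"] = system_instruction
    return payload
```

This is only meant to show the structural mismatch: a flat `messages` list with an inline system role on one side, a nested `contents` list plus a separate system field on the other.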
@Dioprz, let me know if you have any issues or if it works as expected.
Hi @frankroeder, sorry for the delay. I have tested your changes, and everything looks really great. There is just one thing related to parsing the streaming response. When trying to chat with Gemini, I sometimes get incomplete responses like these (see the "\" in all of them):
The most consistent way I have found so far to reproduce the error is this one:
🗨:
Are you working?
🦜:[Gemini-1.5-Flash-Chat - gemini]
As a large language model, I don't \. I'm not employed or paid for my services. I'm constantly learning and evolving, but my primary function is to provide information and complete tasks as instructed.
However, I'm always \ that I'm processing information, generating text, and responding to your requests.
Would you like me to help you with something?

However, I'm not sure what induces this kind of truncation. Please let me know if this information helps you, or how I can... I don't know, maybe intercept the API response before it is processed by Parrot, or something like that?
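For whoever digs into this: one plausible cause of this kind of truncation (purely a hypothetical sketch; `parse_stream` and the chunking are illustrative, not Parrot's actual code) is that a streaming response arrives in network chunks that can split a JSON line in the middle, so parsing each chunk on its own silently drops the partial tail:

```python
import json

# Hypothetical sketch of a buffering stream parser: only complete
# lines are consumed, and any partial tail stays in the buffer until
# the rest of the line arrives in a later chunk.
def parse_stream(chunks):
    buffer = ""
    text = ""
    for chunk in chunks:
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            if line.startswith("data: "):
                event = json.loads(line[len("data: "):])
                text += event["candidates"][0]["content"]["parts"][0]["text"]
    return text
```

With this buffering, a `data:` line that is split across two chunks is still decoded in full; a parser that calls `json.loads` per chunk would lose or garble that piece instead.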
- remove print statements
- add missing message preprocessing
Hey @Dioprz, the parsing issue has been resolved. Please let me know if you encounter any other bugs. I am still looking for missing pieces regarding the API parameters and general improvements.
I've transferred this to PR #29 as it necessitates a restructure of the providers, which I am currently addressing on a different branch. The API parameter integration for Gemini is complete. You're welcome to switch over to that branch. I'll have it merged into the main branch shortly.
Hey @frankroeder! Wow, that was really quick; thanks a lot for all your work, man. The transfer is fine with me. I have already tested your changes and everything works like a charm! Have a great day, and thank you for your openness to the new feature.
Hi @frankroeder.

Here is my draft with some proposed changes. As I was expecting, things became confusing to me when I was trying to write the `gemini.lua` file. So, the things pending to do are:

- The `temperature` and the `top_p`: I really don't know how well those will behave, but I also don't have the knowledge to set them properly. If you can give me a hint or put them at a sane default, that would be great.
- In the `gemini.lua` file:
  - The `curl` commands follow what I saw in the other providers. Please check the `curl_params`, as I added the model directly into that line, and I'm not sure it's OK.
  - The `verify` method looks... pretty standard.
  - I'm unsure about the `preprocess_messages`, `add_system_prompt`, `process` and `check` methods.

For reference, here is the documentation I'm following: Gemini API REST. Btw, I'm working with `generateContent`, should I work with `streamGenerateContent` instead?

Thanks for your help in advance!
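For context on that last question, a minimal sketch of the two calls as I understand them from the public Gemini REST docs (the `endpoint` helper is hypothetical; `?alt=sse` being the server-sent-events flavor of the streaming variant is my reading of the docs):

```python
# Hypothetical sketch: generateContent returns one complete JSON body,
# while streamGenerateContent emits incremental chunks (with ?alt=sse,
# as server-sent events), which is what a chat plugin needs to show
# tokens as they arrive.
BASE = "https://generativelanguage.googleapis.com/v1beta/models"

def endpoint(model, stream=False):
    method = "streamGenerateContent?alt=sse" if stream else "generateContent"
    return f"{BASE}/{model}:{method}"
```

So the trade-off is simply blocking for the full answer versus rendering it incrementally; the request payload is the same either way.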