
Azure OpenAI Integration #690

Open
thickmn opened this issue May 6, 2024 · 7 comments

Comments

@thickmn

thickmn commented May 6, 2024

Azure OpenAI is a common enterprise option for a lot of companies integrating LLMs into their environment. It's our only option for genAI, so any integration would be great!

@doberst
Contributor

doberst commented May 7, 2024

@thickmn - thanks - we are working on it!

@doberst
Contributor

doberst commented May 7, 2024

@thickmn - we have merged a proposed fix into the main branch - could you check it out and confirm whether it meets your needs for configuring the AzureOpenAI client? Please take a look at the example - we appreciate your help and support in testing it to make sure it meets your needs!

@thickmn
Author

thickmn commented May 8, 2024

@doberst - thanks for the quick response here! On my first test pass, it looks like it tries to call OpenAI with a user-managed API key and can't successfully set up the Azure client with the environment variables in the example. I'm going to spend more time on this tomorrow, but just as an FYI, when using LangChain's integration, I had to pass:
OPENAI_API_TYPE=azure
OPENAI_API_BASE
OPENAI_API_KEY
OPENAI_API_VERSION
DEPLOYMENT_NAME
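For what it's worth, a missing variable from that list can be caught before the client is ever constructed. The sketch below is a hypothetical helper (not part of llmware or LangChain) using LangChain's documented Azure variable names:

```python
import os

# Hypothetical helper: reports which Azure-related variables (per LangChain's
# documented names) are missing from an environment mapping, so a
# misconfiguration fails fast with a clear message instead of an opaque
# authentication error later.
REQUIRED_AZURE_VARS = [
    "OPENAI_API_TYPE",
    "OPENAI_API_BASE",
    "OPENAI_API_KEY",
    "OPENAI_API_VERSION",
    "DEPLOYMENT_NAME",
]

def missing_azure_vars(env=None):
    # Defaults to the real process environment; accepts a mapping for testing.
    env = os.environ if env is None else env
    return [name for name in REQUIRED_AZURE_VARS if not env.get(name)]
```

For example, `missing_azure_vars({"OPENAI_API_TYPE": "azure"})` would list the other four names as missing.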

@Clima2024

Hi, I had the same problem as thickmn.
I also have to use GPT through Azure. I tried the sample in the link, but it doesn't work ...

My model is GPT-3.5-turbo.
In the request I use:
AZURE_OPENAI_ENDPOINT
AZURE_OPENAI_API_KEY
api_version = "2023-07-01-preview"

and my model deployment is named test, so I call it as:
response = client.chat.completions.create(
    model="test",
    messages=....

@doberst can you help me, please?

@doberst
Contributor

doberst commented Jun 12, 2024

@Clima2024 - happy to help with this - and sorry that you have run into an issue. Without sharing any confidential information, could you share the key 2-3 line code snippet, including the OpenAI configs? We will also test from our Azure account in parallel.

@Clima2024

Clima2024 commented Jun 12, 2024

Thanks @doberst .. no worries. In my local environment, this is what I do:

import os
from openai import AzureOpenAI

# With the variables registered in my environment, I create the client:
client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-07-01-preview",
)

# Then I call GPT as:
response = client.chat.completions.create(
    model="test",
    messages=[
        {"role": "system", "content": "You are a helpful research assistant."},
        {"role": "user", "content": "Just say hi"},
    ],
)

print(response.choices[0].message.content)

# The model name "test" is the name of the installation, but underneath we
# call gpt-35-turbo (in Azure they don't use the dot).
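To illustrate the naming quirk in the comment above: the Azure model id for GPT-3.5 drops the dot, while the deployment name passed as `model` is whatever the admin chose (here, "test"). A hypothetical one-liner, not an official mapping:

```python
# Hypothetical helper: converts an OpenAI model name to the Azure-style
# model id mentioned above ("gpt-3.5-turbo" appears as "gpt-35-turbo" in
# Azure). The deployment name itself remains arbitrary.
def azure_model_id(openai_name: str) -> str:
    return openai_name.replace("3.5", "35")
```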

@Clima2024

Hi @doberst
I am also trying to use the embedding model (text-embedding-3-small) in my Azure OpenAI account, and it failed.
I'm not sure why my infra named the models this way, but here I use:
model="test-embeddings"
to do the embedding. While you are working on this, could you check the embedding call for Azure too? (If it's not too much to ask.)
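For reference, a minimal sketch of what the Azure embedding call might look like, assuming the same AZURE_OPENAI_ENDPOINT / AZURE_OPENAI_API_KEY environment variables as the chat example earlier in the thread, and that "test-embeddings" is the deployment name wrapping text-embedding-3-small (untested against a live endpoint):

```python
import os

def embed(text, deployment="test-embeddings"):
    # Deferred import so this sketch only needs the `openai` package when
    # actually called against a live Azure endpoint.
    from openai import AzureOpenAI

    # Assumes the same environment variables as the chat example above.
    client = AzureOpenAI(
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version="2023-07-01-preview",
    )
    # `model` takes the Azure *deployment* name (here "test-embeddings"),
    # not the upstream model name text-embedding-3-small.
    response = client.embeddings.create(model=deployment, input=text)
    return response.data[0].embedding
```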
