Fix that annoying openai.ChatCompletion error

Naidn · Nov 14, 2023

If you are studying DeepLearning.AI’s popular short courses in November 2023, you might have noticed that the OpenAI API example scripts no longer work:

import openai
import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())  # read local .env file

openai.api_key = os.getenv('OPENAI_API_KEY')

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]

response = get_completion(prompt)

You’ll encounter this annoying error that puts a roadblock in your study progress:

APIRemovedInV1: 

You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

Don’t worry. This is simply due to the new major version of the OpenAI Python SDK (v1.0.0), released in early November 2023.
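Not sure which SDK version you have installed? One quick way to check (this isn’t part of the course code, just a convenience) is to print the package version:

import openai

# Anything 1.0.0 or later requires the new client-style API shown below.
print(openai.__version__)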

Simply modify your code as below, and you’ll be back on track with your learning journey. Enjoy!

# Save your API key in the local .env file as OPENAI_API_KEY (OPENAI_API_KEY=your-api-key)
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0.5,  # degree of randomness of the model's output
    )
    return response.choices[0].message.content
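
For completeness, here is a minimal usage sketch. The prompt text is just a placeholder, so swap in whatever you are experimenting with:

# Example usage (placeholder prompt)
prompt = "Explain what a large language model is in one sentence."
response = get_completion(prompt)
print(response)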

Naidn (@naid3n)
