
Calling the OpenAI ChatGPT API via SDK and via HTTP [Examples]

The OpenAI ChatGPT API can be called in several ways: through the official SDK or through raw HTTP requests, and in either streaming or non-streaming mode. The sections below give an example of each combination.

The completion model used in these examples is text-davinci-003 and the chat model is gpt-3.5-turbo; more models are described on the OpenAI site: https://platform.openai.com/docs/models/overview.

The temperature parameter in the examples controls the randomness of the answer. It accepts values from 0 to 2 (values between 0 and 1 are typical); the higher the value, the more random each answer becomes.

SDK-based calls

To call the API through the SDK, either set the OPENAI_API_KEY environment variable or set openai.api_key = your_api_key in code. Note that these examples use the pre-1.0 openai Python SDK interface (openai.Completion / openai.ChatCompletion); the 1.x SDK renamed these entry points.
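The key-resolution order described above (explicit key first, environment variable as fallback) can be sketched as a small helper. The function name resolve_api_key is my own illustration, not part of the SDK:

```python
import os

def resolve_api_key(explicit_key=None):
    """Return the explicitly supplied key if given,
    otherwise fall back to the OPENAI_API_KEY environment variable."""
    if explicit_key:
        return explicit_key
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set and no key was supplied")
    return key
```

The SDK performs an equivalent lookup internally when openai.api_key is left unset.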

Completion model - API call

import openai

model = "text-davinci-003"

def openai_sdk_http_api(content):
    response = openai.Completion.create(
        model=model,
        prompt=content,
        temperature=0.8,
    )
    answer = response.choices[0].text.strip()
    return answer

Completion model - streaming API call

import openai

model = "text-davinci-003"

def openai_sdk_stream_http_api(prompt):
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        stream=True
    )
    # Each message carries one incremental text fragment.
    for message in response:
        print(message.choices[0].text, end='', flush=True)

Chat model - API call

import openai

model = "gpt-3.5-turbo"

def openai_sdk_chat_http_api(message):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": message}],
        temperature=0.8,
        top_p=1,
        presence_penalty=1,
    )
    answer = response.choices[0].message.content.strip()
    return answer
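The messages list above carries a single user turn, but the same field can hold a whole conversation. A minimal sketch of assembling a multi-turn payload (the helper name build_messages and its signature are my own; the role names system/user/assistant are the standard ones):

```python
def build_messages(history, user_input, system_prompt=None):
    """Assemble a chat-completions messages payload:
    optional system prompt, prior (user, assistant) turns,
    then the new user message."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_input})
    return messages
```

Passing the accumulated history back on every call is how the stateless chat API keeps conversational context.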

Chat model - streaming API call

import openai

model = "gpt-3.5-turbo"

def openai_sdk_stream_chat_http_api(content):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": content}],
        temperature=0.8,
        stream=True
    )
    for chunk in response:
        delta = chunk.choices[0].delta
        # The first chunk carries only the role and the final chunk is empty,
        # so print only when a content fragment is actually present.
        if 'content' in delta:
            print(delta.content, end='', flush=True)

HTTP-based calls

To call the API through raw HTTP, the OpenAI API key must be set explicitly in the HTTP headers of each request.
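The headers are the same for every endpoint, so they can be built once. A small sketch (the helper name build_openai_headers is my own illustration):

```python
def build_openai_headers(api_key):
    """Standard headers for OpenAI HTTP calls: bearer-token
    authorization plus a JSON content type."""
    return {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json",
    }
```

Every example below constructs this same pair of headers inline.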

Completion model - API call

import json
import requests

api_key = "your api key"
model = "text-davinci-003"

def openai_http_api(message):
    url = 'https://api.openai.com/v1/completions'
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json"
    }
    data = {
        "model": model,
        "prompt": message,
        "temperature": 0.8,
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    return response.json()['choices'][0]['text']

Completion model - streaming API call

import json
import requests

api_key = "your api key"
model = "text-davinci-003"

def openai_stream_http_api(message):
    url = 'https://api.openai.com/v1/completions'
    headers = {
        "Authorization": "Bearer " + api_key,
        "Accept": "text/event-stream",
    }
    data = {
        "model": model,
        "prompt": message,
        "temperature": 0.8,
        "stream": True
    }
    response = requests.post(url, headers=headers, json=data, stream=True)
    # The stream arrives as server-sent events: lines of "data: {...}",
    # terminated by "data: [DONE]".
    for chunk in response.iter_lines():
        response_data = chunk.decode("utf-8").strip()
        if not response_data:
            continue
        if response_data.endswith("data: [DONE]"):
            break
        try:
            json_data = json.loads(response_data.split("data: ")[1])
            msg = json_data["choices"][0]["text"]
            print(msg, end='', flush=True)
        except (IndexError, KeyError, json.JSONDecodeError):
            print('json load error, data:', response_data)
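The line-parsing step in the loop above can be factored out and tested without a network call. A minimal sketch (the helper name parse_sse_line is my own; the "data: ..." framing and the choices[0].text payload shape are as in the stream above):

```python
import json

def parse_sse_line(line):
    """Parse one server-sent-event line from the completions stream.
    Returns the text fragment, or None for blank lines, non-data
    lines, and the terminating [DONE] sentinel."""
    line = line.strip()
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    obj = json.loads(payload)
    return obj["choices"][0].get("text", "")
```

The streaming loop then reduces to printing whatever parse_sse_line returns when it is not None.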

Chat model - API call

import json
import requests

api_key = "your api key"
model = "gpt-3.5-turbo"

def openai_chat_http_api(message):
    url = 'https://api.openai.com/v1/chat/completions'
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json"
    }
    data = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "temperature": 0.8,
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    return response.json()['choices'][0]['message']['content']

Chat model - streaming API call

import json
import requests

api_key = "your api key"
model = "gpt-3.5-turbo"

def openai_stream_chat_http_api(message):
    url = 'https://api.openai.com/v1/chat/completions'
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json"
    }
    data = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "temperature": 0.8,
        "stream": True
    }
    response = requests.post(url, headers=headers, json=data, stream=True)
    for chunk in response.iter_lines():
        response_data = chunk.decode("utf-8").strip()
        if not response_data:
            continue
        if response_data.endswith("data: [DONE]"):
            break
        try:
            # A line may occasionally carry more than one "data: " event,
            # so handle every payload after the first separator.
            for part in response_data.split("data: ")[1:]:
                json_data = json.loads(part)
                if 'content' in json_data["choices"][0]["delta"]:
                    msg = json_data["choices"][0]["delta"]['content']
                    print(msg, end='', flush=True)
        except (IndexError, KeyError, json.JSONDecodeError):
            print('json load error:', response_data)
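As with the completions stream, the chat delta extraction can be isolated into a testable helper. A minimal sketch (the helper name extract_chat_delta is my own; the choices[0].delta.content shape matches the chunks handled above):

```python
import json

def extract_chat_delta(line):
    """Pull the incremental 'content' out of one chat-stream data line.
    Returns '' for the role-only first chunk, the empty final chunk,
    the [DONE] sentinel, and non-data lines."""
    line = line.strip()
    if not line.startswith("data: ") or line == "data: [DONE]":
        return ""
    obj = json.loads(line[len("data: "):])
    return obj["choices"][0]["delta"].get("content", "")
```

Concatenating the non-empty return values reconstructs the full assistant reply.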

