Add reasoning ability to any model by appending the suffix -r1-fusion to the model name, for example: gpt-4o-r1-fusion. Inspired by DeepClaude, the principle is to first send the user's question to the reasoning model (DeepSeek-R1) to obtain its thinking process, then splice that thinking process together with the original question and pass both to the original model. This multi-model fusion method works with any model on the 302 platform. The feature can also be combined with image analysis or web search, for example: gpt-4o-r1-fusion-web-search. Price: the original model's cost plus the DeepSeek-R1-302 model cost.
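For illustration, here is a minimal sketch of such a request. The endpoint URL https://api.302.ai/v1/chat/completions and the Python requests-based call are assumptions; substitute your actual base URL and API KEY.

```python
import requests

API_KEY = "sk-xxxx"  # API KEY generated in the management background - API KEYS
URL = "https://api.302.ai/v1/chat/completions"  # assumed OpenAI-compatible endpoint

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

# The -r1-fusion suffix first obtains DeepSeek-R1's thinking process for the
# question, then feeds that reasoning plus the question to the base model (gpt-4o).
payload = {
    "model": "gpt-4o-r1-fusion",
    "messages": [
        {"role": "user", "content": "Explain why the sky is blue in two sentences."}
    ],
}

resp = requests.post(URL, headers=headers, json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```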
Request
Header Params
Content-Type
string
required
Example:
application/json
Accept
string
required
Example:
application/json
Authorization
string
required
Provide the API KEY generated in the management background under API KEYS, prefixed with Bearer, e.g. Bearer sk-xxxx
Example:
Bearer {{YOUR_API_KEY}}
Body Params application/json
model
string
required
The ID of the model to be used. For detailed information on which models are applicable to the chat API, please view Model endpoint compatibility
temperature
number
optional
What sampling temperature to use, ranging from 0 to 2. Higher values, such as 0.8, will make the output more random, while lower values, such as 0.2, will make it more focused and deterministic. We generally recommend adjusting either this or top_p, but not both simultaneously.
top_p
number
optional
An alternative to temperature sampling is nucleus sampling, where the model considers tokens within the top_p probability mass. For instance, top_p = 0.1 means only tokens within the top 10% probability mass are considered. We recommend adjusting either this or temperature, but not both simultaneously.
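As a sketch of how these sampling settings are passed in the request body (the model name and prompt are placeholders):

```python
# Adjust either temperature or top_p, not both. Here temperature is lowered
# for a more focused, deterministic answer; top_p is left at its default.
payload = {
    "model": "gpt-4o-r1-fusion",
    "messages": [{"role": "user", "content": "Summarize this clause in one line."}],
    "temperature": 0.2,
}
```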
n
integer
optional
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
stream
boolean
optional
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. For example code, please view the OpenAI Cookbook.
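A minimal streaming sketch under the same assumed endpoint, reading the data-only server-sent events until the data: [DONE] terminator:

```python
import json
import requests

URL = "https://api.302.ai/v1/chat/completions"  # assumed endpoint, as above
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "Authorization": "Bearer sk-xxxx",  # your API KEY
}
payload = {
    "model": "gpt-4o-r1-fusion",
    "messages": [{"role": "user", "content": "Write a haiku about reasoning."}],
    "stream": True,
}

with requests.post(URL, headers=headers, json=payload, stream=True) as resp:
    resp.raise_for_status()
    for raw in resp.iter_lines():
        if not raw:
            continue
        line = raw.decode("utf-8")
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":  # terminator sent at the end of the stream
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)
print()
```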
max_tokens
integer
optional
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
frequency_penalty
number
optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. See more information about frequency and presence penalties.
logit_bias
object or null
optional
Modify the likelihood of specific tokens appearing in the completion. This can be done by providing a JSON object that maps tokens (identified by their token IDs in the tokenizer) to a bias value ranging from -100 to 100. Mathematically, this bias is added to the logits generated by the model before sampling. The exact effect varies depending on the model, but values between -1 and 1 will slightly decrease or increase the likelihood of selection, while values like -100 or 100 will either ban or exclusively select the relevant token.
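A sketch of a bias map in the request body; the token ID 50256 is only an illustrative placeholder, since real IDs depend on the model's tokenizer:

```python
payload = {
    "model": "gpt-4o-r1-fusion",
    "messages": [{"role": "user", "content": "List three colors."}],
    # Keys are tokenizer token IDs (as strings), values are biases in [-100, 100].
    # -100 effectively bans the token; +100 selects it whenever it is a candidate.
    "logit_bias": {"50256": -100},
}
```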
user
string
optional
A unique identifier representing your end user, which helps OpenAI monitor and detect abuse. View more
{"id":"chatcmpl-123","object":"chat.completion","created":1677652288,"choices":[{"index":0,"message":{"role":"assistant","content":"\n\nHello there, how may I assist you today?"},"finish_reason":"stop"}],"usage":{"prompt_tokens":9,"completion_tokens":12,"total_tokens":21}}