CustomGPT.ai OpenAI SDK Compatibility

OpenAI SDK Compatibility (Beta)

With a few code changes, you can use the OpenAI SDK to interact with CustomGPT.ai's API. CustomGPT.ai provides a compatibility layer that lets you quickly leverage CustomGPT.ai capabilities with minimal development effort.

Submit feedback or bugs related to the OpenAI SDK compatibility feature through our [Slack Community](NEED LINK).

Before You Begin

This compatibility layer is intended to provide seamless integration with the existing ecosystem of OpenAI-compatible tools, frameworks, and services. It allows developers to switch providers with minimal code changes, but is currently in beta. For the best experience and access to all CustomGPT.ai features, we recommend using the native CustomGPT.ai API.

Getting Started with the OpenAI SDK

To use the OpenAI SDK compatibility feature, you'll need to:

  1. Use an official OpenAI SDK
  2. Change the following:
    • Update your base URL to point to CustomGPT.ai's API: https://customgpt.ai/api/v1/projects/{project_id}/ (replace {project_id} with your project ID from the CustomGPT.ai dashboard)
    • Replace your API key with a CustomGPT.ai API key
    • Specify the model in your request (although this will be ignored, and your project's associated model will be used instead)
  3. Review the documentation below for what features are supported

Quick Start Example

```python
from openai import OpenAI

client = OpenAI(
    api_key="CUSTOMGPT_API_KEY",  # Your CustomGPT.ai API key
    base_url="https://customgpt.ai/api/v1/projects/{project_id}/"  # Replace {project_id} with your project ID
)

response = client.chat.completions.create(
    model="gpt-4",  # This value is ignored; your project's model will be used
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who are you?"}
    ],
)

print(response.choices[0].message.content)
```

Important OpenAI Compatibility Limitations

API Behavior

Here are the most substantial differences from using OpenAI:

  • Only the chat completions endpoint is supported; all other endpoints will return a 404 error
  • The model parameter is ignored; the model associated with your CustomGPT.ai project will be used instead
  • Conversation history must be managed by the developer, just as with the standard OpenAI Chat Completions API
  • Tool-related parameters (tools, tool_choice, parallel_tool_calls) will return a 501 (Not Implemented) error
  • Audio input is not supported and will return a 501 (Not Implemented) error
  • Response format customization is not supported and will return a 501 (Not Implemented) error
  • Web search options are not supported and will return a 501 (Not Implemented) error
  • Modalities are not supported and will return a 501 (Not Implemented) error

Many parameters are silently ignored rather than producing errors. These include temperature, frequency_penalty, presence_penalty, logit_bias, max_completion_tokens, stop, and others as documented below.
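Because the compatibility layer keeps no state between requests, each call must carry the full conversation so far. The helpers below are a minimal sketch of client-side history management (the helper names are our own, not part of either SDK); the live API call is shown in comments since it requires a real key and project ID.

```python
def with_user_message(history, text):
    """Return a new message list with the next user turn appended."""
    return history + [{"role": "user", "content": text}]

def record_assistant_reply(history, reply_text):
    """Append the assistant's reply so the next request carries full context."""
    history.append({"role": "assistant", "content": reply_text})
    return history

history = [{"role": "system", "content": "You are a helpful assistant."}]
history = with_user_message(history, "Who are you?")

# Against the live API (requires a real key and project ID):
# response = client.chat.completions.create(model="gpt-4", messages=history)
# history = record_assistant_reply(history, response.choices[0].message.content)
```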

Output Quality Considerations

If you have tuned your prompts heavily for a specific OpenAI model, you may need to adjust them for optimal results with CustomGPT.ai. Consider reviewing the prompt engineering guidelines in the CustomGPT.ai documentation.

Parameter Support Details

Supported Parameters

| Parameter | Support Status |
| --- | --- |
| messages | Required |
| stream | Supported |
| stream_options | Supported |
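Since stream is supported, responses can be consumed incrementally: with stream=True, the OpenAI SDK yields chunks whose choices[0].delta.content carries a fragment of the reply (or None). Below is a small accumulator sketch; the function name is ours, and the live call is left in comments because it needs a real key and project ID.

```python
def collect_text(chunks):
    """Accumulate delta content fragments from a chat-completions stream."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # delta may be None on role-only or final chunks
            parts.append(delta)
    return "".join(parts)

# Against the live API:
# stream = client.chat.completions.create(
#     model="gpt-4",  # ignored; the project's model is used
#     messages=[{"role": "user", "content": "Who are you?"}],
#     stream=True,
# )
# print(collect_text(stream))
```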

Ignored Parameters

The following parameters are silently ignored:

  • model (your project's model will be used)
  • temperature
  • frequency_penalty
  • presence_penalty
  • logit_bias
  • max_completion_tokens
  • metadata
  • n
  • prediction
  • seed
  • service_tier
  • stop
  • store
  • top_logprobs
  • top_p
  • user
  • reasoning_effort

Unsupported Parameters (Return 501 Not Implemented)

  • audio
  • response_format
  • tool_choice
  • tools
  • parallel_tool_calls
  • web_search_options
  • modalities
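Given the status codes documented above (404 for unsupported endpoints, 501 for unsupported parameters), a caller can branch on the HTTP status rather than on error text. This is a sketch under that assumption; the classify_error helper is our own, and with the OpenAI Python SDK the status is available on the raised openai.APIStatusError as status_code.

```python
def classify_error(status_code):
    """Map a compatibility-layer HTTP status to a coarse category."""
    if status_code == 404:
        return "endpoint-not-supported"
    if status_code == 501:
        return "parameter-not-implemented"
    return "other"

# With the OpenAI SDK, a failed request raises openai.APIStatusError
# (or a subclass), carrying the HTTP status:
# try:
#     client.chat.completions.create(model="gpt-4", messages=msgs, tools=tools)
# except openai.APIStatusError as e:
#     print(classify_error(e.status_code))
```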

Rate Limits

Rate limiting follows CustomGPT.ai's standard API limits for your subscription tier.

Response Structure

| Field | Support Status |
| --- | --- |
| id | Supported |
| choices[] | Will always have a length of 1 |
| choices[].finish_reason | Supported |
| choices[].index | Supported |
| choices[].message.role | Supported |
| choices[].message.content | Supported |
| object | Supported |
| created | Supported |
| model | Returns your project's model |

Note that token usage information (completion_tokens, prompt_tokens, total_tokens) is not included in responses, as it is not relevant to CustomGPT.ai's model implementation.
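Because usage may be absent, code ported from OpenAI should read response fields defensively rather than assume every field is populated. The helper below is a sketch (its name is ours) that pulls out only the fields documented as supported above and treats usage as optional.

```python
def summarize_response(response):
    """Extract the response fields the compatibility layer supports."""
    choice = response.choices[0]  # choices[] always has length 1 here
    usage = getattr(response, "usage", None)  # not populated by this layer
    return {
        "id": response.id,
        "model": response.model,  # the project's model, not the one requested
        "finish_reason": choice.finish_reason,
        "content": choice.message.content,
        "has_usage": usage is not None,
    }
```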

Error Message Compatibility

The compatibility layer maintains consistent error formats with the OpenAI API. However, the detailed error messages will not be equivalent; we recommend using them only for logging and debugging.

Future Enhancements

In future iterations of the compatibility layer, we plan to add:

  • Smart conversation management through conversation IDs
  • Additional endpoint support
  • Enhanced parameter compatibility

For developers who need these features now, we recommend using the native CustomGPT.ai API.