OpenClaw

In this guide, you will use an inference endpoint from Rafay's Token Factory as a custom model provider within a self-hosted OpenClaw instance.

Architecture


Assumptions

This exercise assumes the following requirements are already in place.

  • An active Token Factory model deployment
  • A customer tenant organization with a user that has the end user role
  • An OpenClaw installation

1. Retrieve Model API Details

In this section, you will retrieve the Token Factory Model API details. These details will be used to configure an OpenClaw custom model provider in a later step.

  • Log into the Developer Hub console as a tenant end user
  • Navigate to GenAI -> Model APIs
  • Click on the model card for the model you will be using with OpenClaw
  • Click Get an API Key
  • Enter a name for the key
  • Click Create

API Key

  • Copy the provided key and store it in a safe location, as it cannot be retrieved again
  • Copy the model name and endpoint and save them for later use

API Key
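Before moving on, you can optionally verify that the key and endpoint work by sending a request directly to the Token Factory inference endpoint. The snippet below is a minimal sketch using only the Python standard library; the endpoint URL, model name, and API key are placeholders for your environment-specific values, and the request payload follows the OpenAI-compatible chat completions format that the endpoint exposes.

```python
import json
import urllib.request

# Placeholders -- substitute the values saved in the previous step
ENDPOINT = "https://<your-token-factory-endpoint>/v1/chat/completions"
API_KEY = "<YOUR_API_KEY>"
MODEL = "<your-model-name>"

# OpenAI-compatible chat completion payload
payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 32,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# Uncomment to send the request once real values are filled in:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(request.get_full_url())
```

A successful response confirms the key and endpoint before you wire them into OpenClaw.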


2. Configure OpenClaw

In this section, you will configure a "custom model provider" from the OpenClaw Control UI. This will allow the OpenClaw instance to use the models in the Rafay Token Factory.

  • Log into the OpenClaw Control UI
  • Navigate to AI & Agents -> Models
  • Click Add Entry under "Model Providers -> Custom entries"
  • Enter a name for the custom entry
  • Select openai-completions for the Model Provider API Adapter
  • Enter the previously stored API key from Token Factory for the Model Provider API Key
  • Select token for the Model Provider Auth Mode
  • Enter the Endpoint from Token Factory for the Model Provider Base URL. Be sure to remove "/chat/completions" from the end of the URL.
  • Enable Model Provider Inject num_ctx (OpenAI Compat)
  • Click +Add on Model Provider Model List
  • Select openai-completions for the Api
  • Under Compat, toggle Support Tools on and then off, so that it is explicitly set to disabled
  • Select openai for the Thinking Format
  • Enter the model name for the Id
  • Enter 256 for the Max Tokens
  • Enter the model name for the Name
  • Click Save
  • Click Update

OpenClaw Config
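As noted above, the Model Provider Base URL is the Token Factory endpoint with the "/chat/completions" suffix removed. As a quick sanity check, using a hypothetical endpoint value, the trimming looks like:

```python
# Hypothetical endpoint copied from the Token Factory Model API page
endpoint = "https://example.gorafay.net/v1/chat/completions"

# The OpenClaw base URL drops the "/chat/completions" suffix
suffix = "/chat/completions"
base_url = endpoint[: -len(suffix)] if endpoint.endswith(suffix) else endpoint
print(base_url)  # -> https://example.gorafay.net/v1
```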

You can configure/verify your custom model within the openclaw.json configuration file.

The following content should be added to the file. Be sure to use your environment-specific values.

  "models": {
    "providers": {
      "ACME": {
        "baseUrl": "https://openclaw1.paas.demo.gorafay.net/v1",
        "apiKey": "<YOUR_API_KEY>",
        "auth": "token",
        "api": "openai-completions",
        "injectNumCtxForOpenAICompat": true,
        "models": [
          {
            "id": "openclaw",
            "name": "openclaw",
            "api": "openai-completions",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 200000,
            "maxTokens": 256,
            "compat": {
              "supportsTools": false,
              "thinkingFormat": "openai"
            }
          }
        ]
      }
    }
  }

3. Use OpenClaw with Token Factory

In this section, you will initiate a chat session from the OpenClaw Control UI.

  • Log into the OpenClaw Control UI
  • Navigate to Chat
  • Ensure your custom model provider is selected in the top dropdown menu
  • Enter a message into the chat and press enter

You will receive a response from the Token Factory inference endpoint.

OpenClaw Chat


4. Verify Token Usage

In this section, you will verify token usage from OpenClaw within Token Factory.

  • Log into the Developer Hub console as a tenant end user
  • Navigate to GenAI -> Token Usage
  • Select the Token Usage tab

You will see the token usage from the previously sent chat message.

Token Usage