CLIProxyAPI Local Setup Guide: From Go Compilation to Interface Testing

You will learn:

  • Checking and installing the Go language environment
  • Cloning and compiling CLIProxyAPI from GitHub
  • Configuring API Keys and listening ports
  • Starting the service and testing AI model interfaces
  • Troubleshooting common issues

What is CLIProxyAPI?

CLIProxyAPI is a lightweight AI proxy gateway written in Go. It provides a unified, OpenAI-compatible API for various AI services such as Gemini, Claude, and OpenAI. With CLIProxyAPI, you can run your own AI interface-forwarding service locally, enabling features like multi-model switching, request logging, and access control. It is particularly well suited to development-environment testing and multi-account management.

Want to run your own AI proxy gateway locally? CLIProxyAPI is developed in Go, making deployment very lightweight and convenient. This article will guide you through the entire process from downloading the source code to starting it locally.

1. Environment Preparation

CLIProxyAPI is written in Go, so before starting, you need to ensure that the Go language development environment is installed locally.

Open a terminal and run the following command to check:

go version

  • If a version number is printed (e.g., go version go1.23.0 darwin/arm64), the environment is ready.
  • If the shell reports "command not found", install Go from the official site (https://go.dev/dl/); a quick install sketch follows below.
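
If Go is missing, one common way to install it is via a package manager (shown here as an example; the official installer from https://go.dev/dl/ works on every platform, and distro packages may lag behind the latest release):

# macOS (Homebrew)
brew install go

# Debian/Ubuntu
sudo apt install golang-go

# Verify
go version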

2. Download Source Code

Use Git to clone the project source code locally:

git clone https://github.com/router-for-me/CLIProxyAPI.git

Enter the project directory:

cd CLIProxyAPI

3. Basic Configuration

The project ships with a configuration template, which we copy to create the working configuration file.

cp config.yaml.example config.yaml

Tip: at this point you can open config.yaml in a text editor to change the listening port (default 8317) or add API Keys as needed; a minimal example follows.
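
For reference, a minimal config.yaml might look like the sketch below. The field names (port, api-keys, log) follow the examples used later in this guide; config.yaml.example remains the authoritative reference for the full layout.

port: 8317

api-keys:
  - "your-api-key-1"

log:
  level: "info"
  file: "./logs/cliproxyapi.log"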

4. Compile Project

Execute the Go compilation command to generate the executable file cli-proxy-api:

go build -o cli-proxy-api main.go

If compilation succeeds, a file named cli-proxy-api will appear in the current directory (on Windows, pass -o cli-proxy-api.exe to get an .exe).

5. Start Service

Run the compiled executable file directly:

./cli-proxy-api

You will see log output similar to the following, indicating that the service has successfully started and is listening on the specified port:

INFO[0000] Starting server on :8317...

6. Interface Testing

Once the service is running, it exposes an OpenAI-compatible API. Open a new terminal window and use curl for a quick test.

Test Gemini Model:

curl http://localhost:8317/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key-1" \
  -d '{
    "model": "gemini-2.5-flash",
    "messages": [{"role": "user", "content": "Hello, this is a test check."}]
  }'

If a JSON-formatted reply comes back, congratulations: your local AI proxy gateway is up and running. You can now point any client that supports the OpenAI interface (such as Chatbox or NextChat) at http://localhost:8317.
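
If the gateway also exposes the standard OpenAI model-listing endpoint (an assumption here, not verified against the project docs), a quick way to confirm a client-style connection is:

curl http://localhost:8317/v1/models \
  -H "Authorization: Bearer your-api-key-1"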

Advanced Configuration

Configuring Multiple API Keys

Edit the config.yaml file and add multiple keys to the api-keys array:

api-keys:
  - "key-for-team-a"
  - "key-for-team-b"
  - "key-for-personal-use"

Different clients can then use different keys, which makes per-client tracking and management easy, as in the example below.
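
For example, requests from two teams differ only in the bearer token, so their traffic is easy to tell apart in the logs (keys taken from the sample configuration above):

# Team A
curl http://localhost:8317/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer key-for-team-a" \
  -d '{"model":"gemini-2.5-flash","messages":[{"role":"user","content":"ping from team A"}]}'

# Team B
curl http://localhost:8317/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer key-for-team-b" \
  -d '{"model":"gemini-2.5-flash","messages":[{"role":"user","content":"ping from team B"}]}'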

Configuring Log Level

log:
  level: "debug"  # Options: debug, info, warn, error
  file: "./logs/cliproxyapi.log"

Running Service in Background

Using nohup (Linux/Mac):

nohup ./cli-proxy-api > output.log 2>&1 &

Using systemd (Linux): Create service file /etc/systemd/system/cliproxyapi.service:

[Unit]
Description=CLIProxyAPI Service
After=network.target

[Service]
Type=simple
User=your-username
WorkingDirectory=/path/to/CLIProxyAPI
ExecStart=/path/to/CLIProxyAPI/cli-proxy-api
Restart=on-failure

[Install]
WantedBy=multi-user.target

Reload systemd, then enable and start the service:

sudo systemctl daemon-reload
sudo systemctl enable cliproxyapi
sudo systemctl start cliproxyapi
sudo systemctl status cliproxyapi

FAQ

Q1: “go: command not found” during compilation

A: Go is not installed or is not on your PATH. Steps to resolve:

# 1. Download and install Go (visit https://go.dev/dl/)
# 2. Verify installation
go version

# 3. If still not found, check PATH
echo $PATH | grep go

# 4. Manually add to PATH (macOS/Linux example; use ~/.bashrc if your shell is bash)
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.zshrc
source ~/.zshrc

Q2: Dependency download failed during compilation

A: This is usually a network issue or a Go module proxy problem. Solution:

# Set a Go module proxy (goproxy.cn is a mirror commonly used in mainland China;
# the Go default is https://proxy.golang.org,direct)
export GOPROXY=https://goproxy.cn,direct

# Clear cache and recompile
go clean -modcache
go build -o cli-proxy-api main.go

Q3: Cannot access port 8317 after starting

A: Possible causes and solutions:

# 1. Check if port is occupied
lsof -i :8317
# Or
netstat -an | grep 8317

# 2. If occupied, modify port in config.yaml
port: 8318  # Change to another port

# 3. Check firewall rules
# macOS
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /path/to/cli-proxy-api
# Linux
sudo ufw allow 8317/tcp

Q4: Interface returns “Invalid API Key”

A: Check the following:

# 1. Confirm api-keys are configured in config.yaml
cat config.yaml | grep api-keys

# 2. Confirm Authorization format in request header is correct
# ✅ Correct: "Authorization: Bearer your-key-here"
# ❌ Incorrect: "Authorization: your-key-here"

# 3. Confirm the key exactly matches the one in the config file (case sensitive)

Q5: How to view detailed request logs?

A: There are several ways.

Method 1: Watch the real-time output directly in the terminal

./cli-proxy-api

Method 2: Configure a log file and follow it

# config.yaml
log:
  level: "debug"
  file: "./logs/app.log"

# View logs in real time
tail -f ./logs/app.log

Method 3: Test using curl verbose mode

curl -v http://localhost:8317/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-key" \
  -d '{"model":"gemini-2.5-flash","messages":[{"role":"user","content":"test"}]}'

Q6: Which AI models are supported?

A: CLIProxyAPI supports a range of mainstream AI models, depending on which API Keys you have configured:

  • Gemini Series: gemini-2.5-flash, gemini-2.5-pro, gemini-3-pro, etc.
  • Claude Series: claude-3-5-sonnet, claude-opus-4-5, etc.
  • OpenAI Series: gpt-4, gpt-3.5-turbo, etc.

Refer to CLIProxyAPI Documentation for the full list.

Q7: Can I configure both Gemini and Claude simultaneously?

A: Yes. Configure API Keys for the different services in config.yaml, and CLIProxyAPI will automatically route each request to the corresponding service based on its model parameter, as illustrated below.
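
For example, assuming credentials for both services are configured, the following two requests hit the same endpoint and differ only in the model field:

# Routed to Gemini
curl http://localhost:8317/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key-1" \
  -d '{"model":"gemini-2.5-flash","messages":[{"role":"user","content":"Hello"}]}'

# Routed to Claude
curl http://localhost:8317/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key-1" \
  -d '{"model":"claude-3-5-sonnet","messages":[{"role":"user","content":"Hello"}]}'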

Q8: How to update to the latest version?

A:

# 1. Enter project directory
cd CLIProxyAPI

# 2. Pull latest code
git pull origin main

# 3. Recompile
go build -o cli-proxy-api main.go

# 4. Restart service
# If running in foreground, Ctrl+C to stop and restart
# If running in background, find process, kill it, and restart
pkill -f cli-proxy-api
./cli-proxy-api
