Using GLM-5 with Claude Code for Low-Cost Agentic Coding

I have been testing GLM-5 inside Claude Code for better agentic coding performance, while using GLM-4.7 as a lower-cost fallback.

GLM also provides an Anthropic-compatible API, so Claude Code works with it seamlessly: you only change the endpoint and the auth token.

As of today (February 13, 2026), Z.AI states GLM-5 is currently available on Pro and Max plans, while GLM-4.7 is available across plans.
Source: Z.AI DevPack Overview

Why GLM-5 and GLM-4.7 are practical vs Opus 4.6

Using Opus 4.6 as the baseline, both GLM options stay meaningfully cheaper while keeping benchmark performance in a close range for many day-to-day coding workflows.

Metric                             GLM-5     GLM-4.7   Claude Opus 4.6
SWE-bench Verified                 77.8      73.8      81.42*
API input price (per 1M tokens)    $1.00     $0.60     $5.00
API output price (per 1M tokens)   $3.20     $2.20     $25.00

Cost effectiveness vs Opus 4.6:

  • GLM-5 is about 5x cheaper on input and 7.8x cheaper on output.
  • GLM-4.7 is about 8.3x cheaper on input and 11.4x cheaper on output.

Performance gap vs Opus 4.6 (SWE-bench Verified):

  • GLM-5: about 3.6 points behind Opus 4.6.
  • GLM-4.7: about 7.6 points behind Opus 4.6.
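To make the savings concrete, here is a small back-of-the-envelope calculator using the per-million-token prices from the table above. The token counts are made-up example values for illustration, not measurements from a real session:

```shell
# Hypothetical example workload: 2M input tokens, 0.5M output tokens.
# Prices per 1M tokens are taken from the table above.
awk 'BEGIN {
  in_m = 2.0; out_m = 0.5    # millions of tokens (example values)
  printf "GLM-5:    $%.2f\n", in_m * 1.00 + out_m * 3.20
  printf "GLM-4.7:  $%.2f\n", in_m * 0.60 + out_m * 2.20
  printf "Opus 4.6: $%.2f\n", in_m * 5.00 + out_m * 25.00
}'
# GLM-5:    $3.60
# GLM-4.7:  $2.30
# Opus 4.6: $22.50
```

Even for this modest workload, the absolute gap adds up quickly across many agentic sessions.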

There are two ways to configure GLM-5/GLM-4.7 with Claude Code, both of which are practical and can be used together.

Method 1: Setup GLM globally

This configures Claude Code to use GLM for all projects. It overrides your Anthropic defaults by updating the global Claude Code config at ~/.claude/settings.json.

Run the following command, then set ANTHROPIC_AUTH_TOKEN to your Z.AI API key:

curl -O "https://cdn.bigmodel.cn/install/claude_code_zai_env.sh" && bash ./claude_code_zai_env.sh

Security note: this executes remote code. Download and review the script before running it in your environment.
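A more cautious version of the same step might look like the sketch below: download, inspect, then run. The token value is a placeholder you must replace with your real key:

```shell
# Download the installer without executing it
curl -fsSLO "https://cdn.bigmodel.cn/install/claude_code_zai_env.sh"

# Review what it will change (it edits ~/.claude/settings.json)
less ./claude_code_zai_env.sh

# Run it only after review, then provide your Z.AI key
bash ./claude_code_zai_env.sh
export ANTHROPIC_AUTH_TOKEN="your-zai-api-key"   # placeholder, use your real key
```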

When global override is still useful

If you are running an OpenClaw instance and want low-cost defaults everywhere, this global method can still be practical.

Method 2: Configure GLM Per-project (Recommended)

Sometimes you still want to run Opus 4.6 for harder tasks. Instead of overriding the global Claude Code defaults, use direnv, a tool that loads environment variables from a .envrc file into your shell, to scope the model/provider overrides to a single project.

Note: On macOS you can install it with brew install direnv.

Create a .envrc file in your project root with the following content:

export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
export ANTHROPIC_AUTH_TOKEN="$ZAI_API_KEY"
# Switch this to "glm-5" if your Z.AI plan (Pro/Max) includes GLM-5 access
export ANTHROPIC_DEFAULT_SONNET_MODEL="glm-4.7"
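One caveat with this setup: .envrc references $ZAI_API_KEY, so the key itself should live outside the project tree. A common pattern (an assumption on my part, not something the installer does for you) is to export the key from your shell profile and keep .envrc out of version control:

```shell
# Keep the actual key out of the project tree ("your-zai-api-key" is a placeholder)
echo 'export ZAI_API_KEY="your-zai-api-key"' >> ~/.zshrc

# If .envrc ever holds anything sensitive, keep it out of git
echo '.envrc' >> .gitignore
```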

Enable direnv on macOS/zsh (for Linux, see direnv documentation):

echo 'eval "$(direnv hook zsh)"' >> ~/.zshrc
source ~/.zshrc
direnv allow .   # run this from the project root

Now, when you cd into the project directory, direnv loads the environment variables from .envrc and Claude Code uses GLM for that project.
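You can sanity-check that direnv actually exported the variables before launching Claude Code (the project path below is a hypothetical example):

```shell
cd /path/to/your/project            # hypothetical path
echo "$ANTHROPIC_BASE_URL"          # expect: https://api.z.ai/api/anthropic
echo "${ANTHROPIC_AUTH_TOKEN:+token is set}"   # prints nothing if the key is missing
```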

[Screenshot: direnv output showing the GLM/Z.AI configuration loaded for the project]

Verification: use /status

After configuring the project, verify routing by running:

/status

Do not rely on model self-identification alone; proxied models can still respond with Opus-like wording.

/status is the source of truth. You should see base URL https://api.z.ai/api/anthropic, which means requests are going through Z.AI (GLM-5 or GLM-4.7 based on your selected model and plan access).
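Beyond /status, you can also hit the endpoint directly. This sketch assumes Z.AI's Anthropic-compatible API accepts the standard Messages request shape (x-api-key plus anthropic-version headers); a non-error JSON response confirms the base URL and token are wired up correctly:

```shell
# Minimal request against the Anthropic-compatible Messages endpoint
curl -s "$ANTHROPIC_BASE_URL/v1/messages" \
  -H "x-api-key: $ANTHROPIC_AUTH_TOKEN" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "glm-4.7", "max_tokens": 32,
       "messages": [{"role": "user", "content": "Reply with OK"}]}'
```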

Anthropic base URL: https://api.z.ai/api/anthropic
[Screenshot: Claude Code /status output showing the Anthropic base URL set to https://api.z.ai/api/anthropic]

Quick comparison

Feature              Global Script (BigModel)        Local direnv
Scope                Global                          Per project
Anthropic defaults   Overwritten                     Preserved
Switching            Manual rollback                 Automatic on cd
Best for             One-provider low-cost default   Mixed provider/model workflows

TL;DR:

  • Use global if you intentionally want one low-cost default everywhere.
  • Use local direnv if you switch models/providers per project.

Happy coding!