Technical Insights: Azure, .NET, Dynamics 365 & EV Charging Architecture

How to Configure Claude Code with Kimi K2, DeepSeek, and GLM: Complete WSL Setup Guide

Claude Code is a powerful CLI tool that can be configured to work with multiple AI providers beyond Anthropic’s default endpoints. In this guide, you’ll learn how to configure Claude Code for three popular AI providers: Kimi K2, DeepSeek, and GLM, all from within Windows Subsystem for Linux (WSL).

Prerequisites

  • Windows 10/11 with WSL installed
  • Claude Code CLI installed
  • API tokens for Kimi K2, DeepSeek, and/or GLM
  • Basic familiarity with bash commands

Why Use Multiple AI Providers with Claude Code?

Different AI providers offer unique advantages:

  • Kimi K2: Excellent for Chinese language processing and local deployment options
  • DeepSeek: Strong performance in coding tasks and mathematical reasoning
  • GLM: Optimized for conversational AI and general-purpose tasks

Step 1: Create Environment Files for Each AI Provider

First, we’ll create separate environment files for each AI provider to store their API configurations securely.

Creating the Kimi K2 Environment File

Create the ~/.claude-kimi-env file:

export ANTHROPIC_BASE_URL="https://api.moonshot.cn/v1"
export ANTHROPIC_AUTH_TOKEN="your_kimi_token_here"

Creating the DeepSeek Environment File

Create the ~/.claude-deepseek-env file:

export ANTHROPIC_BASE_URL="https://api.deepseek.com/v1"
export ANTHROPIC_AUTH_TOKEN="your_deepseek_token_here"

Creating the GLM Environment File

Create the ~/.claude-glm-env file:

export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
export ANTHROPIC_AUTH_TOKEN="your_glm_token_here"

Step 2: Set Up Convenient Aliases

To make switching between AI providers seamless, we’ll create bash aliases that automatically load the correct environment and launch Claude Code.

Create or edit the ~/.bash_aliases file and add the following aliases:

# Custom aliases

# Claude GLM alias
alias claude-glm='source ~/.claude-glm-env && claude --dangerously-skip-permissions'

# Claude Kimi alias
alias claude-kimi='source ~/.claude-kimi-env && claude --dangerously-skip-permissions'

# Claude DeepSeek alias
alias claude-deepseek='source ~/.claude-deepseek-env && claude --dangerously-skip-permissions'

# Add more aliases below this line

Step 3: Ensure Aliases Load Automatically in WSL

For the aliases to work every time you start WSL, verify that your ~/.bashrc file includes the following lines (they should be there by default):

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

Step 4: Apply the Configuration

To use the new aliases immediately without restarting WSL, run:

source ~/.bash_aliases
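
To confirm the aliases registered in the current shell, ask bash to resolve them:

type claude-kimi claude-deepseek claude-glm

Each one should print its alias definition rather than an error.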

How to Use Your New Claude Code Setup

Now you can easily switch between AI providers using simple commands:

  • claude-kimi – Launch Claude Code with Kimi K2
  • claude-deepseek – Launch Claude Code with DeepSeek
  • claude-glm – Launch Claude Code with GLM
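
For example, to start a DeepSeek-backed session inside a project (the path here is just an illustration):

cd ~/projects/my-app
claude-deepseek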

Security Best Practices

Important Security Tips:

  • Never commit environment files to version control
  • Use strong, unique API tokens for each provider
  • Regularly rotate your API keys
  • Set appropriate file permissions, as shown in the snippet after this list: chmod 600 ~/.claude-*-env
  • Keep in mind that the --dangerously-skip-permissions flag used in the aliases bypasses Claude Code’s permission prompts; remove it from the aliases if you prefer to approve each action manually
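
The permission tightening can be applied and verified in one go (this assumes you kept the ~/.claude-*-env naming used in this guide):

chmod 600 ~/.claude-*-env
ls -l ~/.claude-*-env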

Troubleshooting Common Issues

Aliases Not Working

If your aliases aren’t working after starting WSL:

  1. Check if ~/.bash_aliases exists
  2. Verify ~/.bashrc sources the aliases file
  3. Run source ~/.bashrc to reload the configuration
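
Those three checks can be run as a quick sequence (paths assume the default locations used in this guide):

ls -l ~/.bash_aliases
grep -n 'bash_aliases' ~/.bashrc
source ~/.bashrc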

API Connection Issues

If you encounter API connection problems:

  • Verify your API tokens are correct
  • Check if the API endpoints are accessible from your network
  • Ensure the base URLs are properly formatted
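
A rough reachability test is to source one of the environment files and ask curl for the HTTP status of the base URL. Note this only shows the host answers from your network; it does not validate the token or the exact API path:

source ~/.claude-deepseek-env
curl -sS -o /dev/null -w '%{http_code}\n' "$ANTHROPIC_BASE_URL"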

Advanced Configuration Tips

Adding Model-Specific Parameters

You can extend your environment files to include model-specific parameters:

export ANTHROPIC_BASE_URL="https://api.deepseek.com/v1"
  export ANTHROPIC_AUTH_TOKEN="your_token"
  export ANTHROPIC_MODEL="deepseek-chat"

Creating Project-Specific Configurations

For different projects, you might want different AI providers. Consider creating project-specific environment files and aliases.
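
One way to sketch this (the .claude-env filename and the claude-here function below are my own naming, not a Claude Code convention) is a small shell function that sources whatever environment file sits in the current project root:

# Hypothetical helper: looks for a .claude-env file in the current directory
claude-here() {
  if [ -f ./.claude-env ]; then
    source ./.claude-env && claude --dangerously-skip-permissions
  else
    echo "No .claude-env file found in $(pwd)" >&2
  fi
}

Drop a .claude-env file with your preferred provider’s variables into each project, add the function to ~/.bash_aliases, and run claude-here from inside the project.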

Conclusion

You’ve successfully configured Claude Code to work with multiple AI providers on Windows WSL. This setup gives you the flexibility to choose the best AI provider for each task while maintaining a consistent development workflow.

The combination of environment files and bash aliases provides a clean, secure, and efficient way to manage multiple AI provider configurations. Whether you’re working with Kimi K2’s Chinese language capabilities, DeepSeek’s coding expertise, or GLM’s conversational strengths, you can now switch between them effortlessly.

How I Set Up My AI Development Environment in WSL on Windows with an RTX 5070 Ti

Introduction

Over the past few weeks, I’ve been diving into AI model training and inference using open-source GPT-style models. I wanted a setup that could take advantage of my NVIDIA RTX 5070 Ti for faster experimentation, but still run inside WSL (Windows Subsystem for Linux) for maximum compatibility with Linux-based tools.

After a bit of trial and error — and a couple of GPU compatibility hurdles — I now have a fully working environment that runs Hugging Face models directly on my GPU. Here’s exactly how I did it.

1. Installing WSL and Preparing Ubuntu

I started by making sure WSL2 was installed and running an Ubuntu distribution:

wsl --install -d Ubuntu
wsl --set-default-version 2
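
It is worth confirming the distro really ended up on WSL 2:

wsl -l -v

The VERSION column should show 2 next to Ubuntu.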

Then I launched Ubuntu from the Start Menu, created my user, and updated everything:

sudo apt update && sudo apt -y upgrade

2. Enabling GPU Support in WSL

Since I wanted GPU acceleration, I installed the latest NVIDIA Game Ready/Studio Driver for Windows. This is important because WSL uses the Windows driver to expose the GPU inside Linux.

Inside WSL, I checked GPU visibility:

nvidia-smi

If you see your GPU listed, you’re good to go.

3. Installing Micromamba for Environment Management

I like to keep my AI experiments isolated in separate environments, so I use micromamba (a lightweight conda alternative).

First, I installed bzip2 (needed for extracting micromamba):

sudo apt install -y bzip2

Then downloaded and initialized micromamba:

cd ~
curl -L https://micro.mamba.pm/api/micromamba/linux-64/latest | tar -xvj
./bin/micromamba shell init -s bash -r ~/micromamba
exec $SHELL
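
After the shell restarts, micromamba should be callable directly; a quick sanity check:

micromamba --version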

4. Creating a Python Environment

I created an environment named llm with Python 3.11:

micromamba create -y -n llm python=3.11
micromamba activate llm

5. Installing PyTorch with RTX 5070 Ti Support

Here’s where I hit my first big roadblock. The PyTorch stable builds didn’t yet support my GPU’s compute capability 12.0 (Blackwell architecture). The fix was to install the nightly cu128 build of PyTorch, which does include sm_120 support:

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
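
Once the nightly wheel is in place, a one-liner confirms PyTorch can see the card and reports the Blackwell compute capability (on the RTX 5070 Ti this should show (12, 0)):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available(), torch.cuda.get_device_capability(0))"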

6. Installing AI Libraries

With PyTorch sorted, I installed the Hugging Face ecosystem and related tools:

pip install transformers datasets accelerate peft bitsandbytes trl sentencepiece evaluate

Transformers – Hugging Face’s main library for working with pre-trained models (like GPT, BERT, etc.), including easy APIs for loading, running, and fine-tuning them.

Datasets – A fast, memory-efficient library for loading, processing, and sharing large datasets used in machine learning.

Accelerate – A tool from Hugging Face that makes it simple to run training across CPUs, GPUs, or multiple devices with minimal code changes.

PEFT (Parameter-Efficient Fine-Tuning) – A library for applying lightweight fine-tuning methods like LoRA so you can adapt large models without retraining all parameters.

Bitsandbytes – A library for quantizing models (e.g., 8-bit, 4-bit) to save memory and speed up inference/training, especially on GPUs.

TRL (Transformers Reinforcement Learning) – Hugging Face’s library for training transformer models with reinforcement learning techniques like RLHF (Reinforcement Learning from Human Feedback).

SentencePiece – A tokenizer library that helps split text into subword units, especially useful for multilingual and large-vocabulary models.

Evaluate – A library to easily compute machine learning metrics (like accuracy, BLEU, ROUGE, etc.) in a standardized way.

7. Testing GPU Inference

To confirm everything worked, I ran a small model on GPU:

from transformers import AutoTokenizer, pipeline

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tok = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model_id, tokenizer=tok, device_map="auto")

print(pipe("Explain LoRA in one sentence.", max_new_tokens=50)[0]["generated_text"])

The output came back quickly — and my GPU usage spiked in nvidia-smi — a great sign that everything was working.
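
If you want to watch that spike live, keep a second WSL terminal open running:

watch -n 1 nvidia-smi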

8. Conclusion

With this setup, I can run and fine-tune open-source GPT models entirely on my RTX 5070 Ti inside WSL. It’s a clean, isolated environment that avoids Windows-specific headaches and keeps me close to the Linux ecosystem most AI tooling is built for.

If you’re working with a newer NVIDIA GPU, don’t be surprised if you need to grab nightly builds until stable releases catch up. Once you do, you’ll be able to enjoy the full speed of your hardware without leaving the comfort of Windows.
