API Gateway Comparison: Kong vs APISIX vs Tencent Cloud API Gateway, and How to Choose in 2026



If you are picking an API gateway, you cannot avoid Kong; in cloud-native circles, APISIX has strong momentum; and if you do not want to run anything yourself, Tencent Cloud API Gateway is fully managed. What are the trade-offs between the three? This article uses measured numbers and architecture diagrams to help you decide.


What Is an API Gateway?

API gateway = single entry point + traffic control + security + protocol translation

Client request → [API Gateway] → individual microservices
              │
              ├── Authentication (JWT/OAuth2)
              ├── Traffic control (rate limiting / circuit breaking)
              ├── Request routing (by path / by version)
              ├── Logging & monitoring (observability)
              └── Security (WAF / anti-scraping)
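The boxes in the diagram boil down to: match a route, enforce policy, forward to an upstream. Here is a minimal Python sketch of that dispatch loop; the service names and the bare `Authorization` check are illustrative only, not any particular gateway's API, and a real gateway would proxy the request rather than return the upstream URL:

```python
# Minimal sketch of what a gateway does: authenticate, route, pick an upstream.
# Service names below are hypothetical.
ROUTES = {
    "/users": "http://user-service:8080",
    "/orders": "http://order-service:8080",
}

def dispatch(path: str, headers: dict) -> tuple[int, str]:
    """Return (status, upstream_or_error) for an incoming request."""
    if "Authorization" not in headers:       # authn: reject anonymous calls
        return 401, "missing credentials"
    for prefix, upstream in ROUTES.items():  # routing: simple prefix match
        if path.startswith(prefix):
            return 200, upstream
    return 404, "no route"
```

Everything else a gateway adds (rate limiting, observability, CORS) hangs off this same per-request hook point.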

When Do You Need an API Gateway?

Scenarios that call for one:
├── ✅ Microservice architecture (multiple backend services need a single entry point)
├── ✅ Rate limiting / circuit breaking to protect backends
├── ✅ Serving multiple API versions at once (canary releases)
├── ✅ Centralized authentication (instead of each service rolling its own)
└── ✅ Public-facing APIs (traffic monitoring and billing)

Scenarios where you may not:
├── ❌ A monolith with a single backend service
├── ❌ Internal test environments with little traffic
└── ❌ Nginx/HAProxy already covers your needs

Head-to-Head: The Three Options

| Dimension | Kong (Kong Gateway) | Apache APISIX | Tencent Cloud API Gateway |
| --- | --- | --- | --- |
| **Architecture** | OpenResty / plugin-based | Cloud-native / etcd | Managed service |
| **Deployment** | Self-hosted / K8s | Self-hosted / K8s / hybrid | Fully managed, zero ops |
| **Configuration** | Admin API / declarative | Admin API / declarative | Console / API |
| **Performance (single node)** | ~35,000 req/s | ~50,000 req/s | Vendor-managed (SLA-backed) |
| **Plugin ecosystem** | Very rich (100+) | Rich (200+) | Limited; leans on cloud services |
| **Learning curve** | Moderate | Steeper | Minimal |
| **Cost** | Open source free + paid Enterprise | Fully open source, free | Pay per call |
| **Recommended version** | Kong Gateway 3.6+ | APISIX 3.7+ | Tencent Cloud API Gateway 2.0 |
| **Best for** | Mid-to-large enterprises | Cloud-native / very large scale | No-ops teams / fast launches |

Kong Gateway: The Richest Plugin Ecosystem

Core Architecture

Kong Gateway architecture:

Request → Nginx (entry) → Kong Proxy (routing) → Upstream (backend services)
                    ↓
              Plugin execution (auth / rate limiting / logging)
                    ↓
              Admin API (configuration)
                    ↓
              Database (PostgreSQL/Cassandra)
                     or declarative YAML (db-less mode)

Quick Deployment with Docker

# Option 1: Docker Compose (with a database)
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  kong-database:
    image: postgres:15
    environment:
      POSTGRES_DB: kong
      POSTGRES_USER: kong
      POSTGRES_PASSWORD: kongpass
    volumes:
      - kong-data:/var/lib/postgresql/data
    networks:
      - kong-net

  # One-shot job: bootstrap the Kong schema; without it the gateway
  # fails on first start with "database needs bootstrap"
  kong-migrations:
    image: kong:3.6
    command: kong migrations bootstrap
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kongpass
    depends_on:
      - kong-database
    networks:
      - kong-net
    restart: on-failure

  kong:
    image: kong:3.6
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kongpass
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
    ports:
      - "8000:8000"    # HTTP proxy
      - "8443:8443"    # HTTPS proxy
      - "8001:8001"    # Admin API
      - "8444:8444"    # Admin API (HTTPS)
    depends_on:
      - kong-migrations
    networks:
      - kong-net
    healthcheck:
      test: ["CMD", "kong", "health"]
      interval: 10s
      timeout: 5s
      retries: 5

networks:
  kong-net:
    driver: bridge
volumes:
  kong-data:
EOF

docker-compose up -d

Configuring Routes and Plugins (Admin API)

# 1. Create a Service (points at the backend)
curl -i -X POST http://localhost:8001/services/ \
  --data "name=my-api" \
  --data "url=http://backend-service:8080/api"

# 2. Create a Route
curl -i -X POST http://localhost:8001/services/my-api/routes/ \
  --data "name=my-route" \
  --data "paths[]=/api" \
  --data "strip_path=false" \
  --data "preserve_host=true"

# 3. Add the JWT authentication plugin
curl -X POST http://localhost:8001/services/my-api/plugins/ \
  --data "name=jwt" \
  --data "config.key_claim_name=kid" \
  --data "config.claims_to_verify=exp"

# 4. Add rate limiting (600 requests per minute, ~10 req/s)
curl -X POST http://localhost:8001/services/my-api/plugins/ \
  --data "name=rate-limiting" \
  --data "config.minute=600" \
  --data "config.policy=local" \
  --data "config.hide_client_headers=false"

# 5. Add the CORS plugin
# Note: browsers refuse credentials together with a wildcard origin;
# list explicit origins in production
curl -X POST http://localhost:8001/services/my-api/plugins/ \
  --data "name=cors" \
  --data "config.origins=*" \
  --data "config.methods=GET,POST,PUT,DELETE" \
  --data "config.headers=Content-Type,Authorization" \
  --data "config.exposed_headers=X-RateLimit-Remaining" \
  --data "config.credentials=true" \
  --data "config.max_age=3600"
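With `config.minute=600` and `config.policy=local`, Kong keeps a fixed-window counter per node and answers 429 once the window is exhausted. A Python sketch of that counting model (the semantics only, not Kong's actual implementation):

```python
class FixedWindowLimiter:
    """Fixed-window counter, roughly how Kong's rate-limiting plugin
    counts requests with policy=local (per node, per time window)."""

    def __init__(self, limit_per_minute: int):
        self.limit = limit_per_minute
        self.window = None   # current minute bucket
        self.count = 0

    def allow(self, now: float) -> bool:
        window = int(now // 60)       # which minute this request falls in
        if window != self.window:     # new window: reset the counter
            self.window, self.count = window, 0
        if self.count >= self.limit:
            return False              # Kong would return HTTP 429 here
        self.count += 1
        return True
```

Because the counter is per node, a cluster of N gateways effectively allows up to N times the configured limit; use `policy=redis` when you need a shared counter.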

Declarative Configuration (db-less mode, no database)

# kong.yml - full declarative config (recommended for production)
_format_version: "3.0"
_transform: true

services:
  - name: user-service
    url: http://user-service:8080
    routes:
      - name: user-route
        paths:
          - /users
        strip_path: false
    plugins:
      - name: rate-limiting
        config:
          minute: 1000
          policy: local
      - name: jwt
        config:
          key_claim_name: kid
          claims_to_verify:
            - exp

  - name: order-service
    url: http://order-service:8080
    routes:
      - name: order-route
        paths:
          - /orders
        strip_path: false
    plugins:
      - name: rate-limiting
        config:
          minute: 500
          policy: local

consumers:
  - username: app-android
    plugins:
      - name: rate-limiting
        config:
          minute: 500
          policy: local

  - username: app-ios
    plugins:
      - name: rate-limiting
        config:
          minute: 500
          policy: local

plugins:
  - name: prometheus
    config:
      per_consumer: true
  # Kong has no generic "logging" plugin; file-log writes JSON access logs
  - name: file-log
    config:
      path: /dev/stdout

# Start with declarative config (no database)
docker run -d \
  --name kong \
  --network kong-net \
  -p 8000:8000 \
  -p 8443:8443 \
  -v $(pwd)/kong.yml:/usr/local/kong/declarative/kong.yml:ro \
  -e KONG_DATABASE=off \
  -e KONG_DECLARATIVE_CONFIG=/usr/local/kong/declarative/kong.yml \
  -e KONG_PROXY_ACCESS_LOG=/dev/stdout \
  -e KONG_ADMIN_ACCESS_LOG=/dev/stdout \
  kong:3.6
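In db-less mode a malformed kong.yml only surfaces as a failed container start, so it pays to lint the file in CI first. A minimal structural check, assuming the YAML has already been parsed into a dict (for example with PyYAML); the rules below are an illustrative subset, not Kong's full schema:

```python
def check_declarative(cfg: dict) -> list[str]:
    """Return a list of problems found in a parsed kong.yml dict."""
    errors = []
    if cfg.get("_format_version") != "3.0":
        errors.append("_format_version should be '3.0' for Kong 3.x")
    for svc in cfg.get("services", []):
        if "name" not in svc or "url" not in svc:
            errors.append(f"service missing name/url: {svc}")
        for route in svc.get("routes", []):
            # every route needs at least one match rule
            if not route.get("paths") and not route.get("hosts"):
                errors.append(f"route {route.get('name')} has no match rule")
    return errors
```

For the authoritative check, `kong config parse kong.yml` inside the container validates against the real schema.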

Apache APISIX: The Cloud-Native Favorite

Core Architecture

APISIX architecture:

Request → APISIX (Lua/Nginx) → plugin execution → Upstream
              ↑
        etcd (config store, watched by every APISIX node)
              ↑
        Admin API (configuration writes)

Why Is APISIX Faster?

| Aspect | Kong | APISIX |
| --- | --- | --- |
| Architecture | Lua (OpenResty) | Lua (OpenResty) + async processing |
| Config store | PostgreSQL/Cassandra (higher latency) | etcd (millisecond-level sync) |
| Plugin execution | Serial | Parallel |
| Dynamic config | Requires reload | Hot updates, zero reloads |
| Throughput (measured) | ~35K req/s | ~50K req/s |

Docker Compose Deployment

# docker-compose.yml (APISIX standalone mode: routes are read from apisix.yaml,
# so config.yaml must set deployment.role: data_plane and config_provider: yaml.
# The etcd service below is only needed if you switch config_provider to etcd.)
version: '3.8'
services:
  etcd:
    image: bitnami/etcd:3.5
    environment:
      - ALLOW_NONE_AUTHENTICATION=yes
      - ETCD_ADVERTISE_CLIENT_URLS=http://etcd:2379
    volumes:
      - etcd-data:/bitnami/etcd
    healthcheck:
      test: ["CMD", "etcdctl", "endpoint", "health"]
      interval: 10s
      timeout: 5s
      retries: 5

  apisix:
    image: apache/apisix:3.7.0-debian
    volumes:
      - ./apisix-config.yaml:/usr/local/apisix/conf/config.yaml:ro
      - ./apisix-route.yaml:/usr/local/apisix/conf/apisix.yaml:ro
    ports:
      - "9080:9080"
      - "9443:9443"
    depends_on:
      - etcd
    healthcheck:
      # the Control API (127.0.0.1:9090 by default) exposes a health endpoint
      test: ["CMD", "curl", "-f", "http://localhost:9090/v1/healthcheck"]
      interval: 10s
      timeout: 5s
      retries: 5

networks:
  default:
    name: apisix-net

volumes:
  etcd-data:
    driver: local

# apisix-route.yaml (standalone route config; the file must end with #END)
routes:
  - id: my-api-route
    uri: /api/*
    upstream:
      type: roundrobin
      nodes:
        - host: backend-service
          port: 8080
          weight: 100
    plugins:
      # Rate limiting: 1000 requests per 60s per client IP
      # (APISIX's plugin is limit-count, not Kong's "rate-limiting")
      limit-count:
        count: 1000
        time_window: 60
        key_type: var
        key: remote_addr
      # JWT auth (keys/secrets belong on consumers, not the route)
      jwt-auth: {}
      # CORS
      cors:
        allow_origins: "*"
        allow_methods: GET,POST,PUT,DELETE
        allow_headers: "*"
      # Inject a request ID header toward the upstream
      proxy-rewrite:
        headers:
          set:
            X-Request-ID: $request_id
#END
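The upstream above uses `type: roundrobin` with a per-node `weight`, which spreads traffic proportionally. A Python sketch of smooth weighted round-robin, the general algorithm Nginx popularized, shown here for intuition rather than as APISIX's exact code:

```python
class SmoothWRR:
    """Smooth weighted round-robin over upstream nodes."""

    def __init__(self, nodes: dict[str, int]):
        self.weights = nodes                  # node -> configured weight
        self.current = {n: 0 for n in nodes}  # running effective weight

    def pick(self) -> str:
        total = sum(self.weights.values())
        for n, w in self.weights.items():     # each node gains its weight
            self.current[n] += w
        best = max(self.current, key=self.current.get)
        self.current[best] -= total           # winner pays the total back
        return best
```

With weights 2:1 the sequence comes out interleaved (a, b, a, a, b, a, ...) instead of bursty (a, a, b), which keeps per-node load smooth.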

APISIX Python Client (Admin API)

import requests

APISIX_ADMIN = "http://localhost:9180/apisix/admin"
ADMIN_KEY = "apisix-admin-key"  # must match the admin key in config.yaml

def create_route():
    """Create a route through the Admin API."""
    route = {
        "uri": "/ml-inference/*",
        "upstream": {
            "type": "roundrobin",
            "nodes": [
                {"host": "vllm-service", "port": 8000, "weight": 100},
                {"host": "vllm-service-2", "port": 8000, "weight": 100},
            ]
        },
        "plugins": {
            # 100 requests per 60-second window per client IP
            "limit-count": {
                "count": 100,
                "time_window": 60,
                "key_type": "var",
                "key": "remote_addr"
            },
            "jwt-auth": {},
            "prometheus": {}
        }
    }

    resp = requests.put(
        f"{APISIX_ADMIN}/routes/ml-inference",
        json=route,
        headers={"X-API-KEY": ADMIN_KEY}
    )
    print(f"Route created: {resp.status_code}")

def create_consumer():
    """Create a consumer (with JWT credentials)."""
    consumer = {
        "username": "ml-app",
        "plugins": {
            "jwt-auth": {
                "key": "ml-app-key",
                "secret": "ml-app-secret"
            }
        }
    }

    resp = requests.put(
        f"{APISIX_ADMIN}/consumers/ml-app",
        json=consumer,
        headers={"X-API-KEY": ADMIN_KEY}
    )

    if resp.status_code in (200, 201):
        # clients sign their own JWTs with this key/secret pair
        print("Consumer created: ml-app-key")
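Once the consumer exists, a client authenticates by presenting an HS256 JWT whose `key` claim equals the consumer's jwt-auth key, signed with the consumer's secret. A stdlib-only sketch of minting such a token (the key/secret values reuse the hypothetical consumer above):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(key: str, secret: str, ttl: int = 3600) -> str:
    """Mint an HS256 JWT accepted by APISIX's jwt-auth plugin:
    the payload's `key` claim must equal the consumer's jwt-auth key."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"key": key,
                                 "exp": int(time.time()) + ttl}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input,
                          hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

# Send it as:  Authorization: Bearer <token>
token = make_jwt("ml-app-key", "ml-app-secret")
```

In real services you would use a JWT library (e.g. PyJWT) rather than hand-rolling this, but the wire format is exactly these three base64url segments.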

APISIX Plugin Example: Rate-Limiting AI Model Calls

# apisix-route-ai.yaml - per-user rate limiting for AI model calls
routes:
  - id: ai-chat-route
    uri: /v1/chat/completions
    methods: [POST]
    upstream:
      # consistent hashing on the X-User-ID header pins a user to one node
      type: chash
      hash_on: header
      key: X-User-ID
      nodes:
        - host: vllm-server
          port: 8000
          weight: 100
    plugins:
      # Per-user request rate limiting (30 req/s, burst 10)
      limit-req:
        rate: 30
        burst: 10
        key_type: var
        key: http_x_user_id

      # AI token accounting (sketch for a custom plugin)
      serverless-pre-function:
        phase: before_proxy
        functions:
          - |
            return function(conf, ctx)
                local core = require("apisix.core")
                local key = core.request.header(ctx, "X-User-ID") or "anonymous"
                -- in production, increment a Redis counter keyed by the user
                core.log.warn("AI API called by: ", key)
            end

      response-rewrite:
        headers:
          set:
            X-RateLimit-Remaining: "30"
#END

Tencent Cloud API Gateway: The Zero-Ops Managed Option

Key Strengths

Tencent Cloud API Gateway highlights:
├── ✅ Zero ops: no servers to manage
├── ✅ Auto scaling: absorbs traffic spikes automatically
├── ✅ Signature-free auth options: WeChat Mini Programs / apps can call directly
├── ✅ Monitoring & alerting: built-in request volume / error rate / latency metrics
├── ✅ Pricing: pay per call, no charge when idle
└── ❌ Limited customization: no complex custom plugins

Creating a Gateway with the Tencent Cloud CLI

# Install the Tencent Cloud CLI (official distribution is on PyPI)
pip install tccli

# Create a service
tccli apigateway CreateService \
  --ServiceName "ml-inference-api" \
  --ServiceDesc "LLM inference API" \
  --Protocol "HTTP" \
  --NetTypes '["Internet"]' \
  --AuthType "NONE" \
  --ResponseType "synchronous" \
  --IsDebugAfterChangeRule 0 \
  --TagSpecifications '[{"ResourceType":"api","Tags":[{"Value":"ml-api","Key":"env"}]}]'

# Create an API (routes to the vLLM service)
tccli apigateway CreateApi \
  --ServiceId "service-xxxxx" \
  --ApiName "chat-completion" \
  --ApiDesc "Chat completion API" \
  --AuthType "NONE" \
  --RequestConfig '{
    "Method": "POST",
    "Path": "/v1/chat/completions",
    "Header": ["Content-Type"],
    "Body": "{\"model\":\"Qwen2.5-7B\",\"messages\":${Parameters.Body.messages}}"
  }' \
  --ServiceConfig '{
    "Url": "http://vllm-internal.api.svc.cluster.local:8000",
    "Path": "/v1/chat/completions",
    "Method": "POST"
  }' \
  --Parameters '[
    {"Name":"messages","Position":"Body","Required":"True","Type":"String","DefaultValue":"[]"},
    {"Name":"model","Position":"Body","Required":"False","Type":"String","DefaultValue":"Qwen2.5-7B"},
    {"Name":"max_tokens","Position":"Body","Required":"False","Type":"Number","DefaultValue":"512"}
  ]'

# Bind a custom subdomain (optional)
tccli apigateway BindSubDomain \
  --ServiceId "service-xxxxx" \
  --SubDomain "ml-api.tencentcloudapi.com"

Calling Tencent Cloud API Gateway from Python

import requests
import hashlib
import time
import hmac
import base64

class TencentCloudAPIGateway:
    """Call a Tencent Cloud API Gateway endpoint with signed requests.

    Note: the signature below is a simplified HMAC-SHA256 sketch for
    illustration; production code should use the official SDK, which
    implements the full TC3-HMAC-SHA256 scheme.
    """

    def __init__(self, secret_id: str, secret_key: str, host: str, api_id: str):
        self.secret_id = secret_id
        self.secret_key = secret_key
        self.host = host
        self.api_id = api_id
        self.service = "api"
        self.version = "2018-08-08"
        self.region = "ap-guangzhou"

    def _generate_signature(self, params: dict) -> dict:
        """Build the (simplified) signature headers."""
        timestamp = str(int(time.time()))

        # Assemble the string to sign
        sign_str = f"POST{self.host}\n/\n"
        sign_str += f"Timestamp={timestamp}\n"
        sign_str += f"Nonce={params.get('Nonce', 1234567)}\n"

        # HMAC-SHA256 signature
        signature = base64.b64encode(
            hmac.new(
                self.secret_key.encode(),
                sign_str.encode(),
                hashlib.sha256
            ).digest()
        ).decode()

        return {
            "Timestamp": timestamp,
            # header values must be strings
            "Nonce": str(params.get('Nonce', 1234567)),
            "SecretId": self.secret_id,
            "Signature": signature
        }

    def call_api(self, messages: list, model: str = "Qwen2.5-7B", max_tokens: int = 512):
        """Call the chat completion API."""
        url = f"https://{self.host}/v1/chat/completions"

        headers = {
            "Content-Type": "application/json",
            "Host": self.host
        }

        payload = {
            "messages": messages,
            "model": model,
            "max_tokens": max_tokens
        }

        # Attach the signature headers
        auth = self._generate_signature({})
        headers.update(auth)

        resp = requests.post(url, json=payload, headers=headers, timeout=30)
        return resp.json()


# Usage
client = TencentCloudAPIGateway(
    secret_id="AKIDxxxxx",
    secret_key="xxxxx",
    host="ml-api-xxxxx.gz.apigw.tencentcs.com",
    api_id="api-xxxxx"
)

response = client.call_api([
    {"role": "user", "content": "Explain what microservice architecture is"}
])
print(response)

Decision Tree

Your situation → recommendation:

1. Small project / fast launch / no appetite for ops?
   └── Tencent Cloud API Gateway (zero ops, pay per call)

2. Cloud-native stack / Kubernetes / very high concurrency?
   └── Apache APISIX (best performance, hot-reloaded config)

3. Need a deep plugin catalog / enterprise features / existing Kong experience?
   └── Kong Gateway (richest plugins, largest community)

4. China-region compliance / Tencent Cloud ecosystem?
   └── Tencent Cloud API Gateway + APISIX hybrid

Performance: APISIX > Kong > Tencent Cloud
Ease of use: Tencent Cloud > Kong > APISIX
Flexibility: Kong = APISIX > Tencent Cloud
Long-term cost: APISIX (open source, free) > Kong (open source) > Tencent Cloud (pay per call)

Cost Comparison (monthly estimates)

| Option | 10M calls/month | 100M calls/month |
| --- | --- | --- |
| Kong (self-hosted) | servers ~¥2,000/month | servers ~¥8,000/month |
| APISIX (self-hosted) | servers ~¥1,500/month | servers ~¥6,000/month |
| Tencent Cloud API Gateway | ~¥50/month (¥5 per million calls) | ~¥400/month |
| Kong Enterprise | ~¥20,000/month | ~¥50,000/month |
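Plugging in the rough numbers above (managed at ¥5 per million calls, self-hosted APISIX at about ¥1,500/month in servers; both are this article's estimates, not vendor quotes), you can locate the break-even traffic level:

```python
def monthly_cost_managed(calls: int, yuan_per_million: float = 5.0) -> float:
    """Pay-per-call managed gateway cost in yuan per month."""
    return calls / 1_000_000 * yuan_per_million

def breakeven_calls(selfhost_yuan: float = 1500.0,
                    yuan_per_million: float = 5.0) -> int:
    """Monthly call volume at which self-hosting matches pay-per-call."""
    return int(selfhost_yuan / yuan_per_million * 1_000_000)
```

At 10M calls/month the managed gateway costs about ¥50, while the fixed self-hosting bill only breaks even around 300M calls/month, which is why low-traffic projects lean managed and high-traffic ones lean self-hosted.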

About the Author

I write about putting large language models into production and hands-on cloud server work, focusing on how these technologies land in enterprise settings.

Personal blog: yunduancloud.icu, with ongoing tutorials on cloud computing and LLMs in practice.
