Kong Gateway Study
Table of Contents
Overview
Deploying Kong
The Docker service
Kong initialization and startup
Verification
Deploying Konga
Gateway features
JWT authentication
RuoYi's authentication flow
Kong's JWT support
Rate limiting
Blacklists
Overview
Kong is built on OpenResty, which is itself built on Nginx. Nginx alone is a powerful reverse proxy and web server; OpenResty adds Lua support so business logic can be woven in at the proxy layer, meaning you can effectively write application code right there. This echoes PostgREST, mentioned earlier, which turns PostgreSQL tables directly into REST endpoints and puts business logic in SQL functions and stored procedures. Every layer of a conventional stack is capable enough for deep tricks, so there are plenty of ways to simplify an architecture. (In that light, even the original pattern of JavaScript fetching JSON from the backend, parsing it, and rendering makes sense: frontend, web server, backend, and database can all carry business logic and spend CPU on it.)
On top of OpenResty, Kong adds a plugin system, relational-database persistence, enterprise-grade security and monitoring, API-driven management, a graphical admin UI, and the core gateway features, making it a high-performance, full-featured "cloud-native" gateway. "Cloud-native" here, I think, mostly means the technology stack fits cloud-native practice: the gateway itself is stateless, its monitoring components (Prometheus, Grafana, and so on) are cloud-native staples, and its API-based management style matches cloud-native operations. Though one could ask the opposite question: does a cloud-native platform still need a gateway at all?
Deploying Kong
Kong requires PostgreSQL as its backing store. The previous article covered deploying a highly available PostgreSQL 14 cluster, which we simply reuse here. Kong and Konga are both stateless, so both run in Docker.
The Docker service
This machine's yum repository is a non-standard EL setup, so Docker is installed from the static binary tarball and managed as a service. It will host Kong and Konga, whose state is stored in PostgreSQL:
[root@pgcluster-1 ~]# wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.18.tgz
[root@pgcluster-1 ~]# tar -zxvf docker-20.10.18.tgz
[root@pgcluster-1 ~]# sudo mv docker/* /usr/bin/
# Create the docker data directory
mkdir -p /var/lib/docker
# Add a docker systemd service
[root@pgcluster-1 ~]# vim /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
After=network.target
Wants=network-online.target
[Service]
Type=notify
# Path to the Docker binary (make sure it is correct)
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s SIGINT $MAINPID
PIDFile=/var/run/docker.pid
User=root
Group=root
# Working directory (optional)
WorkingDirectory=/var/lib/docker
[Install]
WantedBy=multi-user.target

# Registry mirrors (apparently only the first one takes effect; test with
# "docker pull docker.m.daocloud.io/redis:5"); insecure-registries is my
# private Aliyun Docker registry
[root@pgcluster-1 ~]# vi /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://docker.mirrors.ustc.edu.cn",
    "https://hub-mirror.c.163.com",
    "https://docker.imgdb.de"
  ],
  "insecure-registries": ["101.200.90.13:5000"]
}
# Make sure public DNS servers are configured
[root@node1 docker]# vi /etc/resolv.conf
nameserver 8.8.8.8
nameserver 114.114.114.114
search localdomain
# Start and use docker
[root@pgcluster-1 ~]# systemctl daemon-reload
[root@pgcluster-1 ~]# systemctl start docker.service
[root@pgcluster-1 ~]# systemctl status docker.service
[root@node1 docker]# ps -ef | grep docker
root 34541 1 1 10:51 ? 00:00:00 /usr/bin/dockerd
root 34554 34541 1 10:51 ? 00:00:00 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
root 34697 33738 0 10:51 pts/0 00:00:00 grep --color=auto docker
[root@node1 docker]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
That completes the Docker service setup. Pull the kong:3.4.2 and konga:0.14.9 images:
[root@node1 ~]# docker pull kong:3.4.2
[root@node1 ~]# docker pull pantsel/konga:0.14.9
[root@node1 docker]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kong 3.4.2 edd909108d5f 3 months ago 389MB
pantsel/konga 0.14.9 a3d6a03845c7 4 years ago 409MB
Preparing Kong's metadata
Reusing the HA cluster from the previous article means a docker + host-network deployment. First create the databases and users for Kong and Konga in PostgreSQL:
-- Create the Kong user and database
CREATE USER kong WITH PASSWORD 'kong';
CREATE DATABASE kong OWNER kong;
GRANT ALL PRIVILEGES ON DATABASE kong TO kong;

-- Create the Konga user and database
CREATE USER konga WITH PASSWORD 'konga';
CREATE DATABASE konga OWNER konga;
GRANT ALL PRIVILEGES ON DATABASE konga TO konga;
Kong initialization and startup
Initialize the metadata tables over the host network, then start the Kong service. The big pitfall here: Kong's official quick start uses a script that pulls and starts a PostgreSQL container itself, so virtually every AI assistant (DeepSeek, Doubao, and Tongyi alike) tells you to start Kong first and initialize the database afterwards, and even doubles down when you question that step. Following that order leaves startup failing endlessly because the tables were never created. AI assistants have limited diagnostic ability; in reality you must always initialize first, then start.
Initialize the database
[root@node1 docker]# docker run -it --rm --network host \
> -e "KONG_DATABASE=postgres" \
> -e "KONG_PG_HOST=100.3.254.212" \
> -e "KONG_PG_PORT=5432" \
> -e "KONG_PG_USER=kong" \
> -e "KONG_PG_PASSWORD=kong" \
> -e "KONG_PG_DATABASE=kong" \
> kong:3.4.2 kong migrations bootstrap
2025/03/31 08:06:58 [warn] ulimit is currently set to "1024". For better performance set it to at least "4096" using "ulimit -n"
2025/03/31 08:06:58 [warn] ulimit is currently set to "1024". For better performance set it to at least "4096" using "ulimit -n"
Bootstrapping database...
migrating core on database 'kong'...
core migrated up to: 000_base (executed)
core migrated up to: 003_100_to_110 (executed)
core migrated up to: 004_110_to_120 (executed)
core migrated up to: 005_120_to_130 (executed)
core migrated up to: 006_130_to_140 (executed)
core migrated up to: 007_140_to_150 (executed)
core migrated up to: 008_150_to_200 (executed)
core migrated up to: 009_200_to_210 (executed)
core migrated up to: 010_210_to_211 (executed)
core migrated up to: 011_212_to_213 (executed)
core migrated up to: 012_213_to_220 (executed)
core migrated up to: 013_220_to_230 (executed)
core migrated up to: 014_230_to_270 (executed)
core migrated up to: 015_270_to_280 (executed)
core migrated up to: 016_280_to_300 (executed)
core migrated up to: 017_300_to_310 (executed)
core migrated up to: 018_310_to_320 (executed)
core migrated up to: 019_320_to_330 (executed)
core migrated up to: 020_330_to_340 (executed)
migrating acl on database 'kong'...
acl migrated up to: 000_base_acl (executed)
acl migrated up to: 002_130_to_140 (executed)
acl migrated up to: 003_200_to_210 (executed)
acl migrated up to: 004_212_to_213 (executed)
migrating acme on database 'kong'...
acme migrated up to: 000_base_acme (executed)
acme migrated up to: 001_280_to_300 (executed)
acme migrated up to: 002_320_to_330 (executed)
migrating basic-auth on database 'kong'...
basic-auth migrated up to: 000_base_basic_auth (executed)
basic-auth migrated up to: 002_130_to_140 (executed)
basic-auth migrated up to: 003_200_to_210 (executed)
migrating bot-detection on database 'kong'...
bot-detection migrated up to: 001_200_to_210 (executed)
migrating hmac-auth on database 'kong'...
hmac-auth migrated up to: 000_base_hmac_auth (executed)
hmac-auth migrated up to: 002_130_to_140 (executed)
hmac-auth migrated up to: 003_200_to_210 (executed)
migrating http-log on database 'kong'...
http-log migrated up to: 001_280_to_300 (executed)
migrating ip-restriction on database 'kong'...
ip-restriction migrated up to: 001_200_to_210 (executed)
migrating jwt on database 'kong'...
jwt migrated up to: 000_base_jwt (executed)
jwt migrated up to: 002_130_to_140 (executed)
jwt migrated up to: 003_200_to_210 (executed)
migrating key-auth on database 'kong'...
key-auth migrated up to: 000_base_key_auth (executed)
key-auth migrated up to: 002_130_to_140 (executed)
key-auth migrated up to: 003_200_to_210 (executed)
key-auth migrated up to: 004_320_to_330 (executed)
migrating oauth2 on database 'kong'...
oauth2 migrated up to: 000_base_oauth2 (executed)
oauth2 migrated up to: 003_130_to_140 (executed)
oauth2 migrated up to: 004_200_to_210 (executed)
oauth2 migrated up to: 005_210_to_211 (executed)
oauth2 migrated up to: 006_320_to_330 (executed)
oauth2 migrated up to: 007_320_to_330 (executed)
migrating post-function on database 'kong'...
post-function migrated up to: 001_280_to_300 (executed)
migrating pre-function on database 'kong'...
pre-function migrated up to: 001_280_to_300 (executed)
migrating rate-limiting on database 'kong'...
rate-limiting migrated up to: 000_base_rate_limiting (executed)
rate-limiting migrated up to: 003_10_to_112 (executed)
rate-limiting migrated up to: 004_200_to_210 (executed)
rate-limiting migrated up to: 005_320_to_330 (executed)
migrating response-ratelimiting on database 'kong'...
response-ratelimiting migrated up to: 000_base_response_rate_limiting (executed)
migrating session on database 'kong'...
session migrated up to: 000_base_session (executed)
session migrated up to: 001_add_ttl_index (executed)
session migrated up to: 002_320_to_330 (executed)
58 migrations processed
58 executed
Database is up-to-date
# Start Kong
[root@node1 docker]# docker run -d --name kong \
> --network host \
> -e "KONG_DATABASE=postgres" \
> -e "KONG_PG_HOST=100.3.254.212" \
> -e "KONG_PG_PORT=5432" \
> -e "KONG_PG_USER=kong" \
> -e "KONG_PG_PASSWORD=kong" \
> -e "KONG_PG_DATABASE=kong" \
> -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
> -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
> -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
> -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
> -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
> kong:3.4.2
7b9f6f62b3c0160aa298af4e3d97147631365f9c03f44a7d036a3292deab7023
[root@node1 docker]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b9f6f62b3c0 kong:3.4.2 "/docker-entrypoint.…" 4 seconds ago Up 3 seconds (health: starting) kong
The metadata tables created by the initialization can be inspected in the kong database (screenshot omitted).
Verification
Clean container logs plus a responding API indicate the gateway is healthy. Since the admin ports 8001 (HTTP) and 8444 (HTTPS) are open, version information can be fetched directly:
[root@node1 docker]# curl -i http://localhost:8001/
HTTP/1.1 200 OK
Date: Mon, 31 Mar 2025 08:20:11 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Content-Length: 14644
X-Kong-Admin-Latency: 6
Server: kong/3.4.2
Deploying Konga
The latest Konga Docker tag is in fact 0.14.9, and it fails against PostgreSQL 12+ (its ORM queries the pg_constraint.consrc column, which PostgreSQL 12 removed), so I won't configure it further; the prepare step errors out:
[root@node1 ~]# docker run --rm --network host pantsel/konga:latest -c prepare -a postgres -u postgresql://konga:konga@127.0.0.1:5432/konga
debug: Preparing database...
Using postgres DB Adapter.
Database exists. Continue...
error: A hook (`orm`) failed to load!
error: Failed to prepare database: error: column r.consrc does not exist
    at Connection.parseE (/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:539:11)
    at Connection.parseMessage (/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:366:17)
    at Socket.<anonymous> (/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:105:22)
    at Socket.emit (events.js:310:20)
    at Socket.EventEmitter.emit (domain.js:482:12)
    at addChunk (_stream_readable.js:286:12)
    at readableAddChunk (_stream_readable.js:268:9)
    at Socket.Readable.push (_stream_readable.js:209:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23)
Gateway features
With Kong deployed, we can now test its gateway features: basic JWT usage, rate limiting, and forwarding configuration. The focus is standard JWT usage and how an existing application's authentication could migrate onto Kong.
JWT authentication
JWT (JSON Web Token) is a standardized format for transmitting information securely over a network. A token consists of three parts separated by dots, in the form Header.Payload.Signature, for example:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
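To make the three parts concrete, here is a stdlib-only sketch that assembles an HS256 token by hand: the header and payload are base64url-encoded JSON, and the signature is an HMAC over the first two parts. The secret and claims below are made up for illustration; the demo later in this article uses the PyJWT library instead.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with the trailing '=' padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "." + b64url(json.dumps(payload, separators=(",", ":")).encode()))
    # the third part signs the first two
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = make_jwt({"sub": "1234567890", "iat": 1516239022}, "my-secret")
print(token)  # Header.Payload.Signature
```

Note that anyone can decode the header and payload (they are merely base64), so a JWT provides integrity, not confidentiality; never put secrets in the payload.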
RuoYi's authentication flow
First, recall the authentication flow of the RuoYi framework: it combines JWT with a Redis cache, built on Spring Security. Login and authentication proceed as follows:
User opens the login page
  ↓ submit login request
Server validates the captcha
  ↓ on success, validate the user
Query the user record (database)
  ↓ check password & account status
Generate JWT & UUID (TokenService)
  ↓ store user info in Redis (key = UUID)
Return the token to the frontend
  ↓ frontend stores the token
User request (carrying the token)
  ↓ intercepted by JwtAuthenticationTokenFilter
Parse the token, extract the UUID
  ↓ look up Redis (key = UUID)
User info found in Redis?
  ↓ yes: load permissions into SecurityContext, run the permission check, let the request through
  ↓ no: return "invalid token"
The /login endpoint does four things:
① validates the captcha, the credentials, and IP-related checks;
② generates a UUID and stores the user's state in Redis under it;
③ runs the standard Spring Security logic;
④ builds a token per the JWT spec and returns it.
// login endpoint
@PostMapping("/login")
public AjaxResult login(@RequestBody LoginBody loginBody)
{
    AjaxResult ajax = AjaxResult.success();
    // generate the token
    String token = loginService.login(loginBody.getUsername(), loginBody.getPassword(),
            loginBody.getCode(), loginBody.getUuid());
    ajax.put(Constants.TOKEN, token);
    return ajax;
}

// login method
public String login(String username, String password, String code, String uuid)
{
    // ............ various checks
    AsyncManager.me().execute(AsyncFactory.recordLogininfor(username, Constants.LOGIN_SUCCESS,
            MessageUtils.message("user.login.success")));
    LoginUser loginUser = (LoginUser) authentication.getPrincipal();
    recordLoginInfo(loginUser.getUserId());
    // generate the token
    return tokenService.createToken(loginUser);
}

// token creation
public String createToken(LoginUser loginUser)
{
    String token = IdUtils.fastUUID();
    loginUser.setToken(token);
    setUserAgent(loginUser);
    // put the user info into Redis
    refreshToken(loginUser);
    Map<String, Object> claims = new HashMap<>();
    claims.put(Constants.LOGIN_USER_KEY, token);
    return createToken(claims);
}

// JWT method
private String createToken(Map<String, Object> claims)
{
    String token = Jwts.builder()
            .setClaims(claims)
            .signWith(SignatureAlgorithm.HS512, secret)
            .compact();
    return token;
}
When a regular API request reaches the backend service, it passes through the JwtAuthenticationTokenFilter, which does three things:
① intercepts the request and extracts the token;
② parses the UUID out of the token and fetches the user info from Redis;
③ puts the user info into the Spring Security context for later use.
@Override
protected void configure(HttpSecurity httpSecurity) throws Exception
{
    // Spring Security configuration
    httpSecurity.addFilterBefore(corsFilter, JwtAuthenticationTokenFilter.class);
    httpSecurity.addFilterBefore(corsFilter, LogoutFilter.class);
}

// JwtAuthenticationTokenFilter logic
@Component
public class JwtAuthenticationTokenFilter extends OncePerRequestFilter
{
    @Autowired
    private TokenService tokenService;

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain chain)
            throws ServletException, IOException
    {
        LoginUser loginUser = tokenService.getLoginUser(request);
        if (StringUtils.isNotNull(loginUser) && StringUtils.isNull(SecurityUtils.getAuthentication()))
        {
            tokenService.verifyToken(loginUser);
            UsernamePasswordAuthenticationToken authenticationToken =
                    new UsernamePasswordAuthenticationToken(loginUser, null, loginUser.getAuthorities());
            authenticationToken.setDetails(new WebAuthenticationDetailsSource().buildDetails(request));
            SecurityContextHolder.getContext().setAuthentication(authenticationToken);
        }
        chain.doFilter(request, response);
    }
}

// Extract the UUID and fetch the user from Redis; a miss means the session expired
public LoginUser getLoginUser(HttpServletRequest request)
{
    // get the token carried by the request
    String token = getToken(request);
    if (StringUtils.isNotEmpty(token))
    {
        try
        {
            Claims claims = parseToken(token);
            // resolve the user's identity and permissions
            String uuid = (String) claims.get(Constants.LOGIN_USER_KEY);
            String userKey = getTokenKey(uuid);
            LoginUser user = redisCache.getCacheObject(userKey);
            return user;
        }
        catch (Exception e)
        {
            log.error("failed to fetch user info '{}'", e.getMessage());
        }
    }
    return null;
}

// Check the token's expiry; if less than 20 minutes remain, refresh the cache entry
public void verifyToken(LoginUser loginUser)
{
    long expireTime = loginUser.getExpireTime();
    long currentTime = System.currentTimeMillis();
    if (expireTime - currentTime <= MILLIS_MINUTE_TEN)
    {
        refreshToken(loginUser);
    }
}
The flow above shows that after moving to microservices the login endpoint is unified and shared by all services: once /login returns a token and the browser stores it, every other service just receives requests that carry the token, and authentication can happen at the gateway layer.
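Conceptually, the part that moves to the gateway is the signature and expiry check RuoYi's filter performs in-process (the Redis lookup stays with the application). A stdlib-only sketch of that check, with a hand-rolled HS512 signer standing in for Jwts.builder(); the secret and claim names are invented for illustration, and Kong's jwt plugin does the equivalent internally:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def sign_hs512(claims: dict, secret: str) -> str:
    # stand-in for RuoYi's Jwts.builder().signWith(HS512, secret)
    header = b64url(json.dumps({"alg": "HS512", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret.encode(), f"{header}.{payload}".encode(), hashlib.sha512).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def gateway_verify(token: str, secret: str):
    """Return the claims if the signature and expiry are valid, else None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None
    expected = hmac.new(secret.encode(), f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha512).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        return None  # tampered, or signed with a different key
    claims = json.loads(b64url_decode(payload_b64))
    if "exp" in claims and claims["exp"] < time.time():
        return None  # expired
    return claims

token = sign_hs512({"login_user_key": "u-42", "exp": time.time() + 1800}, "s3cret")
print(gateway_verify(token, "s3cret"))  # the claims dict
print(gateway_verify(token, "wrong"))   # None
```

Because the check needs only the shared secret, it can run anywhere in front of the services, which is exactly why it is a natural fit for the gateway.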
Kong's JWT support
Next, simulate a realistic business flow to exercise Kong's JWT support and get familiar with its workflow; the test demo below was written in Python with AI assistance.
The implementation:
[root@node1 ~]# cat kong_jwt_demo.py
# -*- coding: utf-8 -*-
import requests
import jwt
import time

# Kong Admin API address
KONG_ADMIN_URL = "http://localhost:8001"
# Kong proxy address
KONG_PROXY_URL = "http://localhost:8000"

# Create a service
def create_service():
    timestamp = int(time.time())
    service_name = f"example-service-{timestamp}"
    service_data = {"name": service_name, "url": "http://httpbin.org"}
    response = requests.post(f"{KONG_ADMIN_URL}/services", data=service_data)
    if response.status_code == 201:
        print("service created")
        return response.json()
    else:
        print(f"service creation failed: {response.text}")
        return None

# Create a route
def create_route(service_id):
    route_data = {"paths[]": "/example"}
    response = requests.post(f"{KONG_ADMIN_URL}/services/{service_id}/routes", data=route_data)
    if response.status_code == 201:
        print("route created")
        return response.json()
    else:
        print(f"route creation failed: {response.text}")
        return None

# Create a consumer
def create_consumer():
    timestamp = int(time.time())
    consumer_name = f"test-consumer-{timestamp}"
    consumer_data = {"username": consumer_name}
    response = requests.post(f"{KONG_ADMIN_URL}/consumers", data=consumer_data)
    if response.status_code == 201:
        print("consumer created")
        return response.json()
    else:
        print(f"consumer creation failed: {response.text}")
        return None

# Create a JWT credential for the consumer
def create_jwt_credential(consumer_id):
    response = requests.post(f"{KONG_ADMIN_URL}/consumers/{consumer_id}/jwt")
    if response.status_code == 201:
        print("JWT credential created")
        return response.json()
    else:
        print(f"JWT credential creation failed: {response.text}")
        return None

# Enable the JWT plugin on the service
def enable_jwt_plugin(service_id):
    plugin_data = {"name": "jwt"}
    response = requests.post(f"{KONG_ADMIN_URL}/services/{service_id}/plugins", data=plugin_data)
    if response.status_code == 201:
        print("JWT plugin enabled")
        return response.json()
    else:
        print(f"enabling JWT plugin failed: {response.text}")
        return None

# Generate a JWT
def generate_jwt(key, secret):
    payload = {
        "iss": key,
        "iat": int(time.time()),
        "exp": int(time.time()) + 3600,
    }
    token = jwt.encode(payload, secret, algorithm="HS256")
    return token

# Send a request carrying the JWT
def send_request_with_jwt(token):
    headers = {"Authorization": f"Bearer {token}"}
    response = requests.get(f"{KONG_PROXY_URL}/example", headers=headers)
    if response.status_code == 200:
        print("request succeeded, JWT verified")
        print(response.text)
    else:
        print(f"request failed, status {response.status_code}: {response.text}")

if __name__ == "__main__":
    # create the service
    service = create_service()
    if service:
        service_id = service["id"]
        # create the route
        route = create_route(service_id)
        if route:
            # create the consumer
            consumer = create_consumer()
            if consumer:
                consumer_id = consumer["id"]
                # create a JWT credential for the consumer
                jwt_credential = create_jwt_credential(consumer_id)
                if jwt_credential:
                    key = jwt_credential["key"]
                    secret = jwt_credential["secret"]
                    # enable the JWT plugin
                    enable_jwt_plugin(service_id)
                    # generate a JWT
                    token = generate_jwt(key, secret)
                    # send a request carrying the JWT
                    send_request_with_jwt(token)
Rate limiting
Kong implements rate limiting through its built-in Rate Limiting plugin, which supports multiple policies and algorithms and multiple dimensions: per IP, per consumer_id, per service, per route, and so on. A test:
# Enable the rate-limiting plugin
def enable_rate_limiting_plugin(service_id):
    plugin_data = {
        "name": "rate-limiting",
        "config.minute": 5,    # allow 5 requests per minute
        "config.hour": 100,    # allow 100 requests per hour
    }
    response = requests.post(f"{KONG_ADMIN_URL}/services/{service_id}/plugins", data=plugin_data)
    if response.status_code == 201:
        print("rate-limiting plugin enabled")
        return response.json()
    else:
        print(f"enabling rate-limiting plugin failed: {response.text}")
        return None

# Test the rate limit
def test_rate_limiting(token):
    request_count = 10
    for i in range(request_count):
        send_request_with_jwt(token)
        time.sleep(0.1)
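With the local policy, Kong's counting behaves essentially like a fixed-window counter per configured period; requests over the limit are answered with HTTP 429, and the remaining quota is reported in X-RateLimit-Remaining-* response headers. A small sketch of the fixed-window idea, purely for illustration and not Kong's actual implementation:

```python
import time

class FixedWindowLimiter:
    """Fixed-window counter: at most `limit` requests per `window` seconds."""

    def __init__(self, limit, window=60):
        self.limit = limit
        self.window = window
        self.counters = {}  # window index -> request count

    def allow(self, now=None):
        now = time.time() if now is None else now
        bucket = int(now // self.window)      # identify the current window
        count = self.counters.get(bucket, 0)
        if count >= self.limit:
            return False                      # Kong would answer 429 here
        self.counters[bucket] = count + 1
        return True

limiter = FixedWindowLimiter(limit=5, window=60)
results = [limiter.allow(now=100) for _ in range(7)]
print(results)  # first 5 allowed, the rest rejected
```

The known weakness of fixed windows is the boundary burst: up to 2x the limit can slip through around a window edge, which is one reason Kong also offers Redis-backed policies for consistent counting across nodes.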
Blacklists
Kong enforces client-IP allow/deny lists with the IP Restriction plugin, which can flexibly permit or block specific IP addresses or CIDR ranges:
# Enable the ip-restriction plugin (allow/deny lists)
def enable_ip_restriction_plugin(service_id, whitelist=None, blacklist=None):
    plugin_data = {"name": "ip-restriction"}
    if whitelist:
        plugin_data["config.whitelist"] = ",".join(whitelist)
    if blacklist:
        plugin_data["config.blacklist"] = ",".join(blacklist)
    response = requests.post(f"{KONG_ADMIN_URL}/services/{service_id}/plugins", data=plugin_data)
    if response.status_code == 201:
        print("ip-restriction plugin enabled")
        return response.json()
    else:
        print(f"enabling ip-restriction plugin failed: {response.text}")
        return None

# Test the allow/deny lists
def test_ip_restriction(token, test_ip):
    headers = {
        "Authorization": f"Bearer {token}",
        "X-Forwarded-For": test_ip,
    }
    response = requests.get(f"{KONG_PROXY_URL}/example", headers=headers)
    if response.status_code == 200:
        print(f"IP {test_ip} passed the IP restriction")
    elif response.status_code == 403:
        print(f"IP {test_ip} was blocked by the IP restriction")
    else:
        print(f"IP {test_ip} request failed, status {response.status_code}: {response.text}")
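The CIDR matching that ip-restriction performs can be reproduced with the stdlib ipaddress module; this sketch shows how a denylist entry such as 192.168.0.0/16 would be evaluated (the networks below are made up). Note also that the X-Forwarded-For trick in the test above only influences Kong's view of the client IP if Kong is configured to trust that header (trusted_ips / real_ip_header):

```python
import ipaddress

def ip_denied(client_ip, denylist):
    """Return True if client_ip falls inside any denylist CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in denylist)

denylist = ["192.168.0.0/16", "10.0.0.5/32"]
print(ip_denied("192.168.3.7", denylist))  # True: inside 192.168.0.0/16
print(ip_denied("8.8.8.8", denylist))      # False: not listed
```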
Kong's other advanced capabilities, such as custom plugins, caching, Prometheus-based monitoring, upstream health checks, and log integration, remain to be studied.