
Serverless Cluster Setup: Knative

Table of Contents

  • Knative Setup
    • 1. Prerequisites
      • Installing Kubernetes
      • Installing Istio
    • 2. Deploying Knative

Knative Setup

Setup flow diagram: (image omitted)

1. Prerequisites


● The bash steps in this guide apply to macOS or Linux environments; on Windows, some commands may need adjustment.

● This guide assumes you have an existing Kubernetes cluster on which you can easily install and run alpha-level software.

● Knative requires a Kubernetes cluster v1.14 or newer, together with a compatible version of kubectl.

Installing Kubernetes

Installing Kubernetes itself needs no elaboration here — install it as usual. (This article uses version 1.23.)

[root@master ~]# kubectl get pods -A 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-64cc74d646-7tjcg   1/1     Running   0          27s
kube-system   calico-node-24lwx                          1/1     Running   0          27s
kube-system   calico-node-cnhbg                          1/1     Running   0          27s
kube-system   calico-node-x6hxn                          1/1     Running   0          27s
kube-system   coredns-6d8c4cb4d-rxdlv                    1/1     Running   0          2m8s
kube-system   coredns-6d8c4cb4d-wwk7v                    1/1     Running   0          2m8s
kube-system   etcd-master                                1/1     Running   0          2m24s
kube-system   kube-apiserver-master                      1/1     Running   0          2m24s
kube-system   kube-controller-manager-master             1/1     Running   0          2m24s
kube-system   kube-proxy-dtntn                           1/1     Running   0          2m2s
kube-system   kube-proxy-fdzk4                           1/1     Running   0          2m7s
kube-system   kube-proxy-mzj6c                           1/1     Running   0          2m9s
kube-system   kube-scheduler-master                      1/1     Running   0          2m24s

Installing Istio

Knative relies on Istio for traffic routing and ingress. You may optionally inject the Istio sidecar and enable the Istio service mesh, but not all Knative components require it. If your cloud platform offers a managed Istio installation, that is the recommended route unless you need to customize the installation. If your provider offers no managed Istio, if you want a customized installation, or if you are running Knative locally on Minikube or similar, install Istio manually.

The cluster used in this article:

Role     IP               Installed components
Master   192.168.100.10   docker, kubectl, kubeadm, kubelet
Node1    192.168.100.20   docker, kubectl, kubeadm, kubelet
Node2    192.168.100.30   docker, kubectl, kubeadm, kubelet

The cluster setup itself will not be covered in more detail here.

Download Istio. The download contains the installation files, samples, and the istioctl command-line tool. My Kubernetes cluster is v1.23.0, so I chose Istio 1.17.0 for this setup.

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.17.0 TARGET_ARCH=x86_64 sh -

Add the istioctl client to the PATH environment variable:

[root@master ~]# vi /etc/profile
export PATH=/root/istio-1.17.0/bin:$PATH
[root@master ~]# source /etc/profile

The demo profile quickly brings up an Istio environment with basic features enabled. It deploys Istio's core control-plane components (historically Pilot, Citadel, and Galley, now consolidated into istiod) and enables automatic injection of the Envoy proxy sidecar into application Pods.

[root@master ]# cd istio-1.17.0/
[root@master istio-1.17.0]# istioctl manifest apply --set profile=demo


Verify the installation with:

[root@master istio-1.17.0]# istioctl verify-install


Check the Pods

Make sure the associated Kubernetes Pods have been deployed and their STATUS is Running:

[root@master istio-1.17.0]# kubectl get pod -n istio-system


Expose the istio-ingressgateway Service externally

[root@master ~]# kubectl edit svc istio-ingressgateway -n istio-system
##Change the Service type to NodePort
service/istio-ingressgateway edited
[root@master ~]# kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
istio-ingressgateway   NodePort   10.107.99.250   <none>        15021:31668/TCP,80:31296/TCP,443:31680/TCP,31400:30111/TCP,15443:31671/TCP   7d19h
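As a non-interactive alternative to `kubectl edit`, the same change can be made with `kubectl patch`. The sketch below only builds the JSON patch file; the `kubectl` invocation (shown as a comment) assumes the default `istio-ingressgateway` Service name:

```shell
# Build a JSON patch that switches the Service type to NodePort.
cat > nodeport-patch.json <<'EOF'
{"spec": {"type": "NodePort"}}
EOF

# Apply it non-interactively (assumes the default Service name and namespace):
#   kubectl patch svc istio-ingressgateway -n istio-system --patch-file nodeport-patch.json
```

This is handy in scripts, where an interactive editor session is not an option.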


2. Deploying Knative

Before applying the net-istio configuration, make sure the knative-serving namespace exists.

Install the networking layer that integrates Knative with Istio (net-istio). It lets Knative manage traffic through Istio (gateways, routing, security policies) and creates the related resources (Gateway, Service, Webhook, etc.) so that Knative services can be exposed through the Istio ingress.

[root@master ~]# kubectl create namespace knative-serving
namespace/knative-serving created
[root@master ~]# kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.7.1/net-istio.yaml 
clusterrole.rbac.authorization.k8s.io/knative-serving-istio created
gateway.networking.istio.io/knative-ingress-gateway created
gateway.networking.istio.io/knative-local-gateway created
service/knative-local-gateway created
configmap/config-istio created
peerauthentication.security.istio.io/webhook created
peerauthentication.security.istio.io/domainmapping-webhook created
peerauthentication.security.istio.io/net-istio-webhook created
deployment.apps/net-istio-controller created
deployment.apps/net-istio-webhook created
secret/net-istio-webhook-certs created
service/net-istio-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.istio.networking.internal.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.istio.networking.internal.knative.dev created

Install the Custom Resource Definitions (CRDs) for Knative Serving v1.7.1. These CRDs (such as Configurations, Revisions, and Routes) extend the Kubernetes API so the cluster can recognize and manage Knative's core resources. They underpin the core serverless features (traffic routing, revision management, autoscaling, and so on) and are a prerequisite for deploying the Knative Serving components.

[root@master ~]# kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-crds.yaml    
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created

Deploy the Knative Serving v1.7.1 core components, including the RBAC configuration, the autoscaler, the activator (traffic management), domain mapping, and other core functionality.

[root@master istio-1.17.0]# kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-core.yaml
Warning: resource namespaces/knative-serving is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/knative-serving configured
clusterrole.rbac.authorization.k8s.io/knative-serving-aggregated-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-serving-core created
clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created
serviceaccount/controller created
clusterrole.rbac.authorization.k8s.io/knative-serving-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-addressable-resolver created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
secret/serving-certs-ctrl-ca created
secret/knative-serving-certs created
image.caching.internal.knative.dev/queue-proxy created
configmap/config-autoscaler created
configmap/config-defaults created
configmap/config-deployment created
configmap/config-domain created
configmap/config-features created
configmap/config-gc created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-network created
configmap/config-observability created
configmap/config-tracing created
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+
horizontalpodautoscaler.autoscaling/activator created
poddisruptionbudget.policy/activator-pdb created
deployment.apps/activator created
service/activator-service created
deployment.apps/autoscaler created
service/autoscaler created
deployment.apps/controller created
service/controller created
deployment.apps/domain-mapping created
deployment.apps/domainmapping-webhook created
service/domainmapping-webhook created
horizontalpodautoscaler.autoscaling/webhook created
poddisruptionbudget.policy/webhook-pdb created
deployment.apps/webhook created
service/webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.domainmapping.serving.knative.dev created
secret/domainmapping-webhook-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.domainmapping.serving.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created
secret/webhook-certs created

What if images cannot be pulled?

If you have access to a proxy, you are in luck: configure a network proxy for Docker alone:


[root@master ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@master ~]# vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.180.83:7890"
Environment="HTTPS_PROXY=http://192.168.180.83:7890"
Environment="NO_PROXY=localhost,127.0.0.1,.cluster.local,.svc,.internal.gcr.io"
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker

// A proxy endpoint located outside the country works best.

Check the Knative Pod status

[root@master istio-1.17.0]# kubectl get pods -n knative-serving
NAME                                     READY   STATUS    RESTARTS   AGE
activator-54cdf744fb-bgkcr               1/1     Running   0          61s
autoscaler-684495f859-f6nt8              1/1     Running   0          61s
controller-865d96c97f-gj5gp              1/1     Running   0          61s
domain-mapping-5d488c9654-pthss          1/1     Running   0          61s
domainmapping-webhook-54d46d9b6c-4bb2q   1/1     Running   0          61s
net-istio-controller-549f854f4-rh6k8     1/1     Running   0          51s
net-istio-webhook-f9bdbc6f9-wdjk5        1/1     Running   0          51s
webhook-65984d8585-4jp5l                 1/1     Running   0          61s
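With the Serving Pods running, you can optionally set the default domain Knative uses for service URLs by patching the config-domain ConfigMap. A minimal sketch — the `example.com` domain is a placeholder, not part of the original setup:

```shell
# Write a patch for the config-domain ConfigMap; replace example.com
# with a domain (or a sslip.io / nip.io address) that you control.
cat > config-domain-patch.yaml <<'EOF'
data:
  example.com: ""
EOF

# Apply with:
#   kubectl patch configmap config-domain -n knative-serving --patch-file config-domain-patch.yaml
```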


Deploying the serving-hpa.yaml component essentially enables Kubernetes-native HPA (Horizontal Pod Autoscaler) scaling support for Knative services. Deploy serving-hpa:

[root@master ]# kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-hpa.yaml  
deployment.apps/autoscaler-hpa created
service/autoscaler-hpa created
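At this point Serving is installed, and you can smoke-test it with a minimal Knative Service. This manifest is a sketch — the service name and the sample image (from the Knative samples) are assumptions, not part of the original setup:

```shell
# Minimal Knative Service manifest using the Knative helloworld-go sample image.
cat > hello.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
EOF

# Apply it, then watch the Revision come up and scale from zero:
#   kubectl apply -f hello.yaml
#   kubectl get ksvc hello
```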

Installing Knative Eventing

  • Creates Knative Eventing's core Custom Resource Definitions (CRDs), including abstract resource types such as Broker (the event-routing hub), Trigger (event triggers), and Channel (message channels).
  • Lays the groundwork for an event-driven architecture, allowing you to define event sources (such as PingSource) and event-processing flows (such as Sequence).
[root@master ]# kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.7.1/eventing-crds.yaml
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev created

Install the Eventing core components

  • Creates the knative-eventing namespace along with controllers (such as eventing-controller), RBAC configuration, ConfigMaps, and other core resources.
  • Event-hub control plane: eventing-webhook provides dynamic admission control to ensure event resources are valid; eventing-controller coordinates the event-flow lifecycle (such as creating and updating Brokers/Triggers).
  • Key capabilities: event routing, filtering, persistence, and other basics, providing a standardized interface for serverless event-driven scenarios.
[root@master ~]# kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.7.1/eventing-core.yaml
namespace/knative-eventing created
serviceaccount/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-sources-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator created
serviceaccount/pingsource-mt-adapter created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created
serviceaccount/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook created
rolebinding.rbac.authorization.k8s.io/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding created
configmap/config-br-default-channel created
configmap/config-br-defaults created
configmap/default-ch-webhook created
configmap/config-ping-defaults created
configmap/config-features created
configmap/config-kreference-mapping created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/config-sugar created
configmap/config-tracing created
deployment.apps/eventing-controller created
deployment.apps/pingsource-mt-adapter created
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+
horizontalpodautoscaler.autoscaling/eventing-webhook created
poddisruptionbudget.policy/eventing-webhook created
deployment.apps/eventing-webhook created
service/eventing-webhook created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/addressable-resolver created
clusterrole.rbac.authorization.k8s.io/service-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/eventing-broker-filter created
clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress created
clusterrole.rbac.authorization.k8s.io/eventing-config-reader created
clusterrole.rbac.authorization.k8s.io/channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/meta-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-flows-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-sources-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-bindings-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-eventing-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created
clusterrole.rbac.authorization.k8s.io/podspecable-binding created
clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding created
clusterrole.rbac.authorization.k8s.io/source-observer created
clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer created
clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook created
role.rbac.authorization.k8s.io/knative-eventing-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev created
secret/eventing-webhook-certs created
mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev created

After the deployment completes, check the Pod status:

[root@master ~]# kubectl get pods -n knative-eventing
NAME                                   READY   STATUS    RESTARTS   AGE
eventing-controller-578d46cb89-b5c79   1/1     Running   0          19s
eventing-webhook-54bc4585b5-hmdmc      1/1     Running   0          19s
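With the Eventing control plane running, a quick way to exercise it later is a PingSource, which emits a CloudEvent on a cron schedule. A sketch — the source name, payload, and the `event-display` sink Service are assumptions (the sink does not exist yet at this point in the setup):

```shell
# PingSource that sends a JSON payload every minute to a sink Service.
cat > ping-source.yaml <<'EOF'
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-demo
spec:
  schedule: "*/1 * * * *"
  contentType: "application/json"
  data: '{"message": "Hello from PingSource"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF

# Apply (once a sink exists) with:
#   kubectl apply -f ping-source.yaml
```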


Install the Kafka controller

  • Extends Knative Eventing to support Kafka as an event backend, creating the Kafka-related CRDs (such as KafkaSource and KafkaChannel).
  • Deploys kafka-controller and kafka-webhook, bridging events between a Kafka cluster and Knative.
  • Core value: brings Kafka's high-throughput, persistent message-queue characteristics into Knative, allowing you to consume and produce events on Kafka topics and build hybrid-cloud event-driven architectures.
[root@master ~]# kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v1.7.1/eventing-kafka-controller.yaml
configmap/kafka-broker-config created
configmap/kafka-channel-config created
customresourcedefinition.apiextensions.k8s.io/kafkachannels.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/consumers.internal.kafka.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/consumergroups.internal.kafka.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/kafkasinks.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/kafkasources.sources.knative.dev created
clusterrole.rbac.authorization.k8s.io/eventing-kafka-source-observer created
configmap/config-kafka-source-defaults created
configmap/config-kafka-descheduler created
configmap/config-kafka-features created
configmap/config-kafka-leader-election created
configmap/config-kafka-scheduler created
configmap/kafka-config-logging created
configmap/config-tracing configured
clusterrole.rbac.authorization.k8s.io/knative-kafka-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-kafka-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/kafka-controller created
serviceaccount/kafka-controller created
clusterrolebinding.rbac.authorization.k8s.io/kafka-controller created
clusterrolebinding.rbac.authorization.k8s.io/kafka-controller-addressable-resolver created
deployment.apps/kafka-controller created
clusterrole.rbac.authorization.k8s.io/kafka-webhook-eventing created
serviceaccount/kafka-webhook-eventing created
clusterrolebinding.rbac.authorization.k8s.io/kafka-webhook-eventing created
mutatingwebhookconfiguration.admissionregistration.k8s.io/defaulting.webhook.kafka.eventing.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/pods.defaulting.webhook.kafka.eventing.knative.dev created
secret/kafka-webhook-eventing-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.kafka.eventing.knative.dev created
deployment.apps/kafka-webhook-eventing created
service/kafka-webhook-eventing created

Install the KafkaChannel data plane

  • Creates the implementation components of the Kafka data channel (such as kafka-channel-dispatcher and kafka-channel-receiver).
  • Role of the channel: provides a reliable event-transport layer between services, persisting events in Kafka topics to avoid data loss.
  • Suitable scenarios: workloads that need high-throughput, ordered event delivery (such as an order-processing pipeline).
[root@master ~]# kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v1.7.1/eventing-kafka-channel.yaml 
configmap/config-kafka-channel-data-plane created
clusterrole.rbac.authorization.k8s.io/knative-kafka-channel-data-plane created
serviceaccount/knative-kafka-channel-data-plane created
clusterrolebinding.rbac.authorization.k8s.io/knative-kafka-channel-data-plane created
deployment.apps/kafka-channel-dispatcher created
deployment.apps/kafka-channel-receiver created
service/kafka-channel-ingress created

Install the Broker layer

  • Uses Kafka as the event bus (Broker) backend, replacing the default in-memory Broker to improve the reliability and scalability of the event system.
  • Deploys kafka-broker-dispatcher and kafka-broker-receiver, routing events from Kafka to services (via Triggers).
  • Production advantages: supports large-scale event distribution, multi-tenant isolation, dead-letter queues (DLQ), and other enterprise features, suitable for business-critical event processing.
[root@master ~]# kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v1.7.1/eventing-kafka-broker.yaml 
configmap/config-kafka-broker-data-plane created
clusterrole.rbac.authorization.k8s.io/knative-kafka-broker-data-plane created
serviceaccount/knative-kafka-broker-data-plane created
clusterrolebinding.rbac.authorization.k8s.io/knative-kafka-broker-data-plane created
deployment.apps/kafka-broker-dispatcher created
deployment.apps/kafka-broker-receiver created
service/kafka-broker-ingress created

Check the cluster status:

[root@master ~]# kubectl get pods -n knative-eventing
NAME                                       READY   STATUS    RESTARTS   AGE
eventing-controller-578d46cb89-b5c79       1/1     Running   0          7m49s
eventing-webhook-54bc4585b5-hmdmc          1/1     Running   0          7m49s
kafka-broker-dispatcher-57df55bb4-d2rjq    1/1     Running   0          2m4s
kafka-broker-receiver-69f4dcfd97-4gfxn     1/1     Running   0          2m4s
kafka-channel-dispatcher-fffc6796f-kvlmj   1/1     Running   0          2m12s
kafka-channel-receiver-7655dcc69d-826qg    1/1     Running   0          2m12s
kafka-controller-5875747ddc-tvzq5          1/1     Running   0          2m18s
kafka-webhook-eventing-57cfbd8b44-rqw2h    1/1     Running   0          2m18s
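With the Kafka data plane running, a Kafka-backed Broker is declared by setting the broker-class annotation and pointing its config at the kafka-broker-config ConfigMap created earlier. A sketch — the Broker name is arbitrary, and it assumes that ConfigMap has been edited to point at a reachable Kafka cluster:

```shell
# Broker backed by Kafka instead of the default in-memory implementation.
cat > kafka-broker.yaml <<'EOF'
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: demo-broker
  annotations:
    eventing.knative.dev/broker.class: Kafka
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing
EOF

# Apply with:
#   kubectl apply -f kafka-broker.yaml
```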


If any of the above YAML files cannot be fetched from inside the cluster, you can download them all through a proxy, upload them to the master node, and apply them one by one. Be aware that every node must have the Docker proxy configured, otherwise image pulls will fail!

At this point, the Knative cluster setup is complete.

A Knative cluster brings an enterprise-grade serverless and event-driven platform into reach. Built on the deep integration of Kubernetes and Istio, it delivers core capabilities such as automatic scaling, precise traffic routing, and canary releases, while the integration of Knative Eventing with Kafka provides a highly reliable event bus for real-time stream processing and asynchronous task scheduling. The platform sharply reduces development and operations complexity, letting developers focus on business logic, and dynamic resource optimization markedly improves infrastructure utilization — an out-of-the-box cloud-native foundation for agile microservice iteration, burst-traffic handling, and cross-system event-driven scenarios.

We will take the follow-up topics step by step in later posts.

