Collecting Container Instance Logs via Kafka

Last updated: 2023-12-01 16:52:09

For container instances created through virtual-kubelet, container instance logs can be collected and sent to a Kafka service.

Prerequisites

  1. A virtual node has been deployed in the Kubernetes cluster. For deployment instructions, see "Connecting a KCE cluster to KCI" for KCE clusters, or "Connecting a self-built Kubernetes cluster to KCI" for self-built clusters.

  2. The VPC that the container instance belongs to is connected to the network of the Kafka cluster.

    Note: If the Kafka cluster is protected by a security group, its inbound rules must allow the broker listener port.

With the above conditions met, this solution can push the logs of Kingsoft Cloud container instances, created in either self-built or Kingsoft Cloud container clusters, to a self-built or Kingsoft Cloud managed Kafka service. The configuration steps are as follows.

Step 1: Create the filebeat configuration files

In the cluster's kube-system namespace, create the ConfigMaps filebeat-config and filebeat-inputs, which configure the Kafka output and the in-cluster log collection rules respectively.

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
data:
  filebeat.yml: |
    ---
    filebeat.config:
      inputs:
        path: "${path.config}/inputs.d/*.yml"
        reload.enabled: true
        reload.period: "10s"
      modules:
        path: "${path.config}/modules.d/*.yml"
        reload.enabled: true
    output.kafka: 
      # Kafka broker addresses
      hosts: ["10.0.0.***:9092", "10.0.0.***:9092", "10.0.0.***:9092"]
      
      # route events to topics dynamically, plus partition settings
      topic: '%{[fields.log_topic]}'
      partition.round_robin:
        reachable_only: false
 
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
 
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system

Note: For more Kafka output options, see the filebeat documentation "Configure the Kafka output".
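If the Kafka cluster requires authentication, filebeat's Kafka output also accepts credential and TLS settings. The sketch below is a minimal illustration, not part of this document's setup; the hostname, username, password, and CA path are placeholders:

```yaml
output.kafka:
  hosts: ["broker-1.example.com:9093"]   # placeholder broker address
  topic: '%{[fields.log_topic]}'
  # SASL credentials (placeholder values)
  username: "log-writer"
  password: "changeme"
  # enable TLS to the brokers and trust the cluster's CA
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/ca.pem"]
```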

Step 2: Enable log collection for the target container instances

The following uses an nginx pod as an example. By defining template annotations, kube-proxy and log collection are enabled for the pod.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rbkci
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      annotations:
        k8s.ksyun.com/kci-klog-enabled: "true"         # enable log collection
        k8s.ksyun.com/kci-kube-proxy-enabled: "true"   # enable kube-proxy
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: In
                values:
                - virtual-kubelet
      containers:
      - name: nginx
        image: nginx:latest
      tolerations:
      - key: rbkci-virtual-kubelet.io/provider
        value: kingsoftcloud
        effect: NoSchedule

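The nodeAffinity block above pins the pod to the virtual node. When no other node-selection logic is needed, the same constraint can be written more compactly with a nodeSelector; this is a sketch assuming the virtual node carries the same type=virtual-kubelet label:

```yaml
    spec:
      # equivalent to the required nodeAffinity shown above
      nodeSelector:
        type: virtual-kubelet
      containers:
      - name: nginx
        image: nginx:latest
      tolerations:
      - key: rbkci-virtual-kubelet.io/provider
        value: kingsoftcloud
        effect: NoSchedule
```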
To enable log collection at the virtual-node level instead, modify the virtual-kubelet startup arguments so that kube-proxy and log collection are enabled for every instance managed by the node, for example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbkci-virtual-kubelet
  namespace: kube-system
  labels:
    k8s-app: rbkci-virtual-kubelet
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: rbkci-virtual-kubelet
  template:
    metadata:
      name: rbkci-virtual-kubelet
      labels:
        k8s-app: rbkci-virtual-kubelet
    spec:
      serviceAccountName: virtual-kubelet-sa
      containers:
        - name: virtual-kubelet
          image: hub.kce.ksyun.com/ksyun/rbkci-virtual-kubelet:v1.1.0-beta
          args:
            - --nodename=rbkci-virtual-kubelet
            - --cluster-dns=10.254.0.10
            - --cluster-domain=cluster.local
            - --kcilet-kubeconfig-path=/root/.kube/config
            - --enable-node-lease
            # enable kube-proxy for all instances managed by this virtual node
            - --kube-proxy-enable
            # enable log collection for all instances managed by this virtual node
            - --klog-enable
          imagePullPolicy: Always
          env:
            - name: VKUBELET_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: TEMP_AKSK_CM
              value: user-temp-aksk
            - name: KCI_CLUSTER_ID
              value: ${cluster_id}
            - name: KCI_SUBNET_ID
              value: ${subnet_id}
            - name: KCI_SECURITY_GROUP_IDS
              value: ${security_group_ids}
          volumeMounts:
            - mountPath: /root/.kube
              name: kubeconfig
            - mountPath: /var/log/kci-virtual-kubelet
              name: kci-provider-log
      volumes:
        - name: kubeconfig
          secret:
            secretName: rbkci-kubeconfig-secret
        - name: kci-provider-log
          hostPath:
            path: /var/log/kci-virtual-kubelet

Note: Container instances resolve the Kafka endpoint address through the CoreDNS service, so when enabling log collection you must also enable kube-proxy so that pods can reach ClusterIP services. The Kafka broker domain names must be configured through the cluster's CoreDNS hosts plugin, for example:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        # the hosts plugin adds entries to DNS, see https://coredns.io/plugins/hosts/
        hosts {
            198.18.96.191 hub.kce.ksyun.com
            10.0.0.*** kmr-c0b4eaab-gn-e2a4babf-broker-1-1.ksc.com  # kafka broker domain
            10.0.0.*** kmr-c0b4eaab-gn-e2a4babf-broker-1-2.ksc.com  # kafka broker domain
            10.0.0.*** kmr-c0b4eaab-gn-e2a4babf-broker-1-3.ksc.com  # kafka broker domain
            fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system

Step 3: Configure log collection rules

Update the ConfigMap filebeat-inputs:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
data:
  kci.yml: |
    ---
    - type: "log"         #采集容器文件类型日志
      symlinks: true
      enabled: true
      fields:
        log_topic: filelog
      paths:
      - "/usr/share/local/var/log/klog/default/deployment/nginx-rbkci/pods/*/containers/nginx/root/test.log" #指定日志文件路径
    - type: "container"    #采集容器标准输出日志
      symlinks: true
      paths:
      - "/var/log/pods/*/*/*.log"  #指定采集所有pod的标准输出日志
      fields:
        log_topic: stdoutlog
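When application logs span multiple lines (for example, stack traces), the log input above can be extended with filebeat's multiline settings. The sketch below is illustrative only; the path, topic name, and timestamp pattern are placeholders you would adapt to your own log format:

```yaml
    - type: "log"
      symlinks: true
      enabled: true
      fields:
        log_topic: applog            # placeholder topic
      paths:
      - "/path/to/app.log"           # hypothetical log file path
      # treat lines that do not start with a date as continuations
      # of the previous event
      multiline.pattern: '^\d{4}-\d{2}-\d{2}'
      multiline.negate: true
      multiline.match: after
```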

Step 4: Verify log delivery

Consume messages on the Kafka side to check whether the target container instance's logs were delivered successfully.

