Containerizing SkyWalking: building a Docker image and deploying on k8s, from testing to usable
Preface
SkyWalking is a very good APM product, but it has one truly painful problem in practice: with Elasticsearch as the storage backend, any trouble with the ES data makes the whole SkyWalking web UI unavailable. You then have to stop the agents service by service, redeploy, and walk through the whole procedure again. The same pain shows up when upgrading SkyWalking versions. Since APM data is transient and can safely be discarded (by default SkyWalking keeps trace records for only 90 minutes), I decided to containerize the SkyWalking deployment so that deploying and upgrading become a one-step operation. The rest of this post walks through the whole containerized deployment.
Goal: run a SkyWalking Docker image in a k8s cluster and serve from there.
Building the Docker image
```dockerfile
FROM registry.cn-xx.xx.com/keking/jdk:1.8
ADD apache-skywalking-apm-incubating/ /opt/apache-skywalking-apm-incubating/
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
    && echo 'Asia/Shanghai' >/etc/timezone \
    && chmod +x /opt/apache-skywalking-apm-incubating/config/setApplicationEnv.sh \
    && chmod +x /opt/apache-skywalking-apm-incubating/webapp/setWebAppEnv.sh \
    && chmod +x /opt/apache-skywalking-apm-incubating/bin/startup.sh \
    && echo "tail -fn 100 /opt/apache-skywalking-apm-incubating/logs/webapp.log" >> /opt/apache-skywalking-apm-incubating/bin/startup.sh

EXPOSE 8080 10800 11800 12800

CMD /opt/apache-skywalking-apm-incubating/config/setApplicationEnv.sh \
    && sh /opt/apache-skywalking-apm-incubating/webapp/setWebAppEnv.sh \
    && /opt/apache-skywalking-apm-incubating/bin/startup.sh
```
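With the Dockerfile in place, building and publishing the image is routine. A minimal sketch, assuming the unpacked apache-skywalking-apm-incubating/ distribution sits next to the Dockerfile and that you can push to the private registry (the tag matches the one referenced in the k8s manifest below):

```sh
# Build the image from the directory containing the Dockerfile and the
# unpacked SkyWalking distribution, then push it to the private registry.
docker build -t registry.cn-xx.xx.com/keking/kk-skywalking:5.2 .
docker push registry.cn-xx.xx.com/keking/kk-skywalking:5.2
```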
Two questions need to be settled while writing the Dockerfile: which SkyWalking settings have to be configurable dynamically (i.e. set at runtime)? And how do we keep the container's main process alive (SkyWalking's startup.sh, much like Tomcat's, forks the JVMs into the background and exits)?
application.yml
```yaml
#cluster:
#  zookeeper:
#    hostPort: localhost:2181
#    sessionTimeout: 100000
naming:
  jetty:
    # OS real network IP (binding required), for agent to find collector cluster
    host: 0.0.0.0
    port: 10800
    contextPath: /
cache:
  # guava:
  caffeine:
remote:
  gRPC:
    # OS real network IP (binding required), for collector nodes communicate with each other in cluster. collectorN --(gRPC)--> collectorM
    host: #real_host
    port: 11800
agent_gRPC:
  gRPC:
    # OS real network IP (binding required), for agent to uplink data (trace/metrics) to collector. agent --(gRPC)--> collector
    host: #real_host
    port: 11800
    # Set these two settings to open SSL
    #sslCertChainFile: $path
    #sslPrivateKeyFile: $path
    # Set your own token to active auth
    #authentication: xxxxxx
agent_jetty:
  jetty:
    # OS real network IP (binding required), for agent to uplink data (trace/metrics) to collector through HTTP. agent --(HTTP)--> collector
    # SkyWalking native Java/.Net/node.js agents don't use this.
    # Open this for other implementors.
    host: 0.0.0.0
    port: 12800
    contextPath: /
analysis_register:
  default:
analysis_jvm:
  default:
analysis_segment_parser:
  default:
    bufferFilePath: ../buffer/
    bufferOffsetMaxFileSize: 10M
    bufferSegmentMaxFileSize: 500M
    bufferFileCleanWhenRestart: true
ui:
  jetty:
    # Stay in `localhost` if UI starts up in default mode.
    # Change it to OS real network IP (binding required), if deploy collector in different machine.
    host: 0.0.0.0
    port: 12800
    contextPath: /
storage:
  elasticsearch:
    clusterName: #elasticsearch_clusterName
    clusterTransportSniffer: true
    clusterNodes: #elasticsearch_clusterNodes
    indexShardsNumber: 2
    indexReplicasNumber: 0
    highPerformanceMode: true
    # Batch process setting, refer to https://www.elastic.co/guide/en/elasticsearch/client/java-api/5.5/java-docs-bulk-processor.html
    bulkActions: 2000 # Execute the bulk every 2000 requests
    bulkSize: 20 # Flush the bulk every 20mb
    flushInterval: 10 # Flush the bulk every 10 seconds whatever the number of requests
    concurrentRequests: 2 # The number of concurrent requests
    # Set a timeout on metric data. After the timeout has expired, the metric data will automatically be deleted.
    traceDataTTL: 2880 # Unit is minute
    minuteMetricDataTTL: 90 # Unit is minute
    hourMetricDataTTL: 36 # Unit is hour
    dayMetricDataTTL: 45 # Unit is day
    monthMetricDataTTL: 18 # Unit is month
#storage:
#  h2:
#    url: jdbc:h2:~/memorydb
#    userName: sa
configuration:
  default:
    #namespace: xxxxx
    # alarm threshold
    applicationApdexThreshold: 2000
    serviceErrorRateThreshold: 10.00
    serviceAverageResponseTimeThreshold: 2000
    instanceErrorRateThreshold: 10.00
    instanceAverageResponseTimeThreshold: 2000
    applicationErrorRateThreshold: 10.00
    applicationAverageResponseTimeThreshold: 2000
    # thermodynamic
    thermodynamicResponseTimeStep: 50
    thermodynamicCountOfResponseTimeSteps: 40
    # max collection's size of worker cache collection, setting it smaller when collector OutOfMemory crashed.
    workerCacheMaxSize: 10000
#receiver_zipkin:
#  default:
#    host: localhost
#    port: 9411
#    contextPath: /
```
webapp.yml
```yaml
server:
  port: 8080
collector:
  path: /graphql
  ribbon:
    ReadTimeout: 10000
    listOfServers: #real_host:10800
security:
  user:
    admin:
      password: #skywalking_password
```
Dynamic configuration: the web UI password and the gRPC addresses that must bind to the host's real IP all have to be set at runtime. So, just before SkyWalking's startup.sh is launched, we run two small scripts that substitute the placeholders in the config files with environment variables that k8s injects at runtime.
setApplicationEnv.sh
#!/usr/bin/env sh sed -i "s/#elasticsearch_clusterNodes/${elasticsearch_clusterNodes}/g" /opt/apache-skywalking-apm-incubating/config/application.yml sed -i "s/#elasticsearch_clusterName/${elasticsearch_clusterName}/g" /opt/apache-skywalking-apm-incubating/config/application.yml sed -i "s/#real_host/${real_host}/g" /opt/apache-skywalking-apm-incubating/config/application.yml
setWebAppEnv.sh
#!/usr/bin/env sh sed -i "s/#skywalking_password/${skywalking_password}/g" /opt/apache-skywalking-apm-incubating/webapp/webapp.yml sed -i "s/#real_host/${real_host}/g" /opt/apache-skywalking-apm-incubating/webapp/webapp.yml
Keeping the process alive: appending "tail -fn 100 /opt/apache-skywalking-apm-incubating/logs/webapp.log" to the end of SkyWalking's startup.sh keeps a foreground process running after the JVMs fork into the background, and continuously streams webapp.log as the container's output.
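Conceptually, the line appended by the Dockerfile turns the fork-and-exit startup script into something like this sketch (the body of the original startup.sh is elided):

```sh
#!/usr/bin/env sh
# ... SkyWalking's own startup.sh: starts the collector and webapp JVMs in
# the background and would normally return, which would end the container ...

# Line appended in the Dockerfile: blocks in the foreground so the container
# keeps running, and makes `kubectl logs` show the webapp log.
tail -fn 100 /opt/apache-skywalking-apm-incubating/logs/webapp.log
```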
Deploying on Kubernetes
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: skywalking
  namespace: uat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking
  template:
    metadata:
      labels:
        app: skywalking
    spec:
      imagePullSecrets:
        - name: registry-pull-secret
      nodeSelector:
        apm: skywalking
      containers:
        - name: skywalking
          image: registry.cn-xx.xx.com/keking/kk-skywalking:5.2
          imagePullPolicy: Always
          env:
            - name: elasticsearch_clusterName
              value: elasticsearch
            - name: elasticsearch_clusterNodes
              value: 172.16.16.129:31300
            - name: skywalking_password
              value: xxx
            - name: real_host
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          resources:
            limits:
              cpu: 1000m
              memory: 4Gi
            requests:
              cpu: 700m
              memory: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: skywalking
  namespace: uat
  labels:
    app: skywalking
spec:
  selector:
    app: skywalking
  ports:
    - name: web-a
      port: 8080
      targetPort: 8080
      nodePort: 31180
    - name: web-b
      port: 10800
      targetPort: 10800
      nodePort: 31181
    - name: web-c
      port: 11800
      targetPort: 11800
      nodePort: 31182
    - name: web-d
      port: 12800
      targetPort: 12800
      nodePort: 31183
  type: NodePort
```
The only subtlety in the Kubernetes manifest is how the pod IP reaches the `env` section: several SkyWalking addresses must be bound to the container's real IP, and the Downward API (`fieldRef: status.podIP`) injects that IP into the container as the `real_host` environment variable.
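After applying the manifest, the wiring can be verified by comparing the pod's IP with the injected variable. A sketch, assuming the manifest above is saved as skywalking.yml:

```sh
# Deploy, then check that real_host inside the container equals the pod IP.
kubectl -n uat apply -f skywalking.yml
kubectl -n uat get pod -l app=skywalking -o jsonpath='{.items[0].status.podIP}'
POD=$(kubectl -n uat get pod -l app=skywalking -o jsonpath='{.items[0].metadata.name}')
kubectl -n uat exec "$POD" -- printenv real_host
```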
Closing words
The whole containerized SkyWalking deployment took about a day from testing to working. I spent an hour or so with Brother Tan's skywalking-docker image (https://hub.docker.com/r/wutang/skywalking-docker/), hit a permission problem in one of its scripts (Tan says it has been fixed, but I haven't had time to retest) plus a few things I couldn't control well, so I built my own image. The biggest problem was in-cluster networking: at first I bound all SkyWalking service IPs to 0.0.0.0 and exposed them via NodePorts. The agent could reach the naming service at cluster-ip:31181, but the collector gRPC address handed back by the naming service was then 0.0.0.0:11800, which an agent obviously cannot connect to. Binding the pod IP instead solved the problem.
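For completeness, the agent side would then point collector.servers at the naming service exposed through the NodePort, for example (a sketch for the SkyWalking 5.x Java agent, mirroring the sed pattern used above; the node IP and the agent install path are assumptions):

```sh
# Point the 5.x Java agent at the naming service NodePort (31181 -> 10800);
# the naming service then returns the pod-IP-bound gRPC address for the
# trace uplink (reachable from agents running inside the cluster).
sed -i "s/^collector.servers=.*/collector.servers=172.16.16.129:31181/" /opt/skywalking-agent/config/agent.config
```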
That's all for containerizing SkyWalking, building the Docker image, and taking the k8s deployment from testing to usable. For more on SkyWalking containerization and k8s, see the other related articles on WalkonNet!
Recommended reading:
- SkyWalking distributed service call tracing and APM application monitoring
- An introduction to SkyWalking tracing for PHP
- Deploying SkyWalking on docker for end-to-end tracing
- An Elasticsearch write bottleneck blanking the SkyWalking dashboard
- Installing and getting started with SkyWalking tracing under .NET Core