How to Build a Spark Cluster with Docker-Compose
1. Introduction
In the previous article, we used Docker-Compose to build an HDFS cluster. This article continues in the same vein, using Docker-Compose to set up a Spark cluster.
2. docker-compose.yml
The Spark cluster is built from one master node and two worker nodes, with each worker allocated one core and 1 GB of memory.
For the Docker image we use the open-source bitnami/spark image, with Spark version 2.4.3. The docker-compose configuration is as follows:
master:
  image: bitnami/spark:2.4.3
  container_name: master
  user: root
  environment:
    - SPARK_MODE=master
    - SPARK_RPC_AUTHENTICATION_ENABLED=no
    - SPARK_RPC_ENCRYPTION_ENABLED=no
    - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
    - SPARK_SSL_ENABLED=no
  ports:
    - '8080:8080'
    - '7077:7077'
  volumes:
    - ./python:/python
worker1:
  image: bitnami/spark:2.4.3
  container_name: worker1
  user: root
  environment:
    - SPARK_MODE=worker
    - SPARK_MASTER_URL=spark://master:7077
    - SPARK_WORKER_MEMORY=1G
    - SPARK_WORKER_CORES=1
    - SPARK_RPC_AUTHENTICATION_ENABLED=no
    - SPARK_RPC_ENCRYPTION_ENABLED=no
    - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
    - SPARK_SSL_ENABLED=no
worker2:
  image: bitnami/spark:2.4.3
  container_name: worker2
  user: root
  environment:
    - SPARK_MODE=worker
    - SPARK_MASTER_URL=spark://master:7077
    - SPARK_WORKER_MEMORY=1G
    - SPARK_WORKER_CORES=1
    - SPARK_RPC_AUTHENTICATION_ENABLED=no
    - SPARK_RPC_ENCRYPTION_ENABLED=no
    - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
    - SPARK_SSL_ENABLED=no
The master node also maps a /python directory, used to hold PySpark code so that it can be run conveniently.
The master node exposes port 7077 for connecting to Spark and port 8080 for viewing the Spark UI in a browser. After startup, the Spark UI shows the cluster state as in the figure below:
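Given the '8080:8080' port mapping above, the UI should be reachable from the host at http://localhost:8080 once the cluster is up.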
If needed, you can add more worker nodes yourself (for example, by copying the worker2 block as a worker3 service), and adjust SPARK_WORKER_MEMORY and SPARK_WORKER_CORES to change the resources allocated to each node.
By default, exec-ing into this image drops you into an unprivileged, nameless user session, so some installation commands fail for lack of permissions. For example, to run PySpark jobs you may need libraries such as numpy and pandas, which then cannot be installed with pip. Setting user: root makes root the default user and avoids this problem.
3. Starting the Cluster
As in the previous article, run docker-compose up -d in the directory containing docker-compose.yml to bring up the whole cluster with a single command (although if you need libraries such as numpy, you still have to install them inside each node yourself).
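For example, with the container names used here, something like docker exec -it master pip install numpy pandas should install the libraries on the master node, and the same command can be repeated for worker1 and worker2.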
Enter the master node and run spark-shell; it starts successfully:
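(The image should also ship pyspark, so docker exec -it master pyspark gives an interactive Python shell instead.)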
4. Using It Together with HDFS
Combining the Hadoop docker-compose.yml from the previous article with the one above yields a new docker-compose.yml:
version: "1.0" services: namenode: image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8 container_name: namenode ports: - 9870:9870 - 9000:9000 volumes: - ./hadoop/dfs/name:/hadoop/dfs/name - ./input:/input environment: - CLUSTER_NAME=test env_file: - ./hadoop.env datanode: image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8 container_name: datanode depends_on: - namenode volumes: - ./hadoop/dfs/data:/hadoop/dfs/data environment: SERVICE_PRECONDITION: "namenode:9870" env_file: - ./hadoop.env resourcemanager: image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8 container_name: resourcemanager environment: SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864" env_file: - ./hadoop.env nodemanager1: image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8 container_name: nodemanager environment: SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864 resourcemanager:8088" env_file: - ./hadoop.env historyserver: image: bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8 container_name: historyserver environment: SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864 resourcemanager:8088" volumes: - ./hadoop/yarn/timeline:/hadoop/yarn/timeline env_file: - ./hadoop.env master: image: bitnami/spark:2.4.3-debian-9-r81 container_name: master user: root environment: - SPARK_MODE=master - SPARK_RPC_AUTHENTICATION_ENABLED=no - SPARK_RPC_ENCRYPTION_ENABLED=no - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no - SPARK_SSL_ENABLED=no ports: - '8080:8080' - '7077:7077' volumes: - ./python:/python worker1: image: bitnami/spark:2.4.3-debian-9-r81 container_name: worker1 user: root environment: - SPARK_MODE=worker - SPARK_MASTER_URL=spark://master:7077 - SPARK_WORKER_MEMORY=1G - SPARK_WORKER_CORES=1 - SPARK_RPC_AUTHENTICATION_ENABLED=no - SPARK_RPC_ENCRYPTION_ENABLED=no - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no - SPARK_SSL_ENABLED=no worker2: image: bitnami/spark:2.4.3-debian-9-r81 container_name: worker2 user: root environment: - SPARK_MODE=worker - SPARK_MASTER_URL=spark://master:7077 - SPARK_WORKER_MEMORY=1G - SPARK_WORKER_CORES=1 - SPARK_RPC_AUTHENTICATION_ENABLED=no - SPARK_RPC_ENCRYPTION_ENABLED=no - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no - SPARK_SSL_ENABLED=no
Start the combined cluster the same way; note that the hadoop.env file described in the previous article is also required.
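Since all the services now share one Compose network, the Spark containers can reach the namenode by its service name. As a quick connectivity check, here is a minimal sketch; the script name and HDFS path are hypothetical, and it assumes a text file was first uploaded to HDFS, e.g. with docker exec namenode hdfs dfs -put /input/words.txt /words.txt (using the ./input volume mapped above):

# hdfs_demo.py -- hypothetical sketch: reading from the HDFS cluster in the same compose file
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hdfs-demo").getOrCreate()

# 9000 is the namenode RPC port declared in docker-compose.yml
lines = spark.read.text("hdfs://namenode:9000/words.txt")
print(lines.count())

spark.stop()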
Thanks to Docker's volume mapping, local files are synced with the /python directory on the Spark master node, so PySpark code written locally is visible inside the container and can be executed with docker exec:
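For instance, a hypothetical script saved locally as ./python/demo.py shows up inside the master container at /python/demo.py:

# ./python/demo.py -- hypothetical minimal PySpark script
from pyspark.sql import SparkSession

# Connect to the standalone master defined in docker-compose.yml
spark = (SparkSession.builder
         .master("spark://master:7077")
         .appName("demo")
         .getOrCreate())

# A tiny DataFrame just to prove the cluster executes work
df = spark.createDataFrame([(1, "foo"), (2, "bar")], ["id", "value"])
df.show()

spark.stop()

It can then be launched from the host with docker exec master spark-submit /python/demo.py (the master URL could equally be passed via --master on the command line instead of in code).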
Running a regression program confirmed that the cluster works as expected.
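The regression code itself is not reproduced here, but a minimal sketch of that kind of job, using Spark MLlib's LinearRegression on generated toy data, might look like the following (note that pyspark.ml needs numpy, hence the earlier remark about installing it):

# /python/regression.py -- hypothetical sketch of a simple regression job
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("regression").getOrCreate()

# Toy data following y = 2x + 1
data = spark.createDataFrame(
    [(float(x), 2.0 * x + 1.0) for x in range(100)], ["x", "y"]
)

# MLlib expects the features packed into a single vector column
assembled = VectorAssembler(inputCols=["x"], outputCol="features").transform(data)

model = LinearRegression(featuresCol="features", labelCol="y").fit(assembled)
print("coefficients:", model.coefficients, "intercept:", model.intercept)

spark.stop()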
This concludes the article on building a Spark cluster with Docker-Compose; I hope it is helpful.