

How to Create a 3-Node Hadoop Virtual Environment in VMware

Published: 2020-06-09 20:11:49  Source: 億速云  Views: 636  Author: 元一  Category: Big Data

I. Setting Up the VM Environment

Note: we use VMware Workstation on Windows 10 to create the 3-node Hadoop virtual environment.

Create the virtual machines (wizard screenshots omitted):
1. Start the New Virtual Machine wizard and click Next through the opening screens.
2. Set the virtual machine name and the path where its files will be stored.
3. Set the disk size to 20 GB.
4. Choose "Customize Hardware".
5. Set the network adapter to NAT mode.
6. Point the virtual machine's boot device at the installation ISO image.

From here, use VMware's clone feature to create the other two slave nodes; clone slave2 the same way as slave1 (steps omitted).

At this point, two virtual network adapters appear under Windows Network Connections.

Next, assign IP addresses to the virtual machines. The VM NIC IPs must be in the same subnet as the NAT network; only then can the VMs reach the internet through the VMnet8 adapter on the Windows host.

Finally, boot each virtual machine and configure its IP settings inside the operating system (details omitted).
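For reference, on CentOS 7 a static IP is usually set in the interface file under /etc/sysconfig/network-scripts/. A minimal sketch for the master node, assuming the interface is named ens33 and the VMware NAT gateway is 192.168.11.2 (both depend on your Virtual Network Editor settings):

# /etc/sysconfig/network-scripts/ifcfg-ens33   (interface name is an assumption)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.11.10        # master; use .11 and .12 on slave1 and slave2
NETMASK=255.255.255.0
GATEWAY=192.168.11.2        # VMware NAT gateways typically end in .2; verify in the Virtual Network Editor
DNS1=192.168.11.2

Apply the change with systemctl restart network and verify it with ip addr.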

Hadoop is a software framework for distributed processing of large data sets. It processes data in a reliable, efficient, and scalable way.
Hadoop is reliable because it assumes that compute and storage elements will fail, so it maintains multiple copies of working data and can redistribute processing away from failed nodes.
Hadoop is efficient because it works in parallel, which speeds up processing.
Hadoop is also scalable and can handle petabytes of data.
In addition, Hadoop relies on community support, so it is inexpensive and anyone can use it.

II. Hadoop 2.6.5 + CentOS 7.5 Three-Node Cluster Setup

1. Environment planning

Hadoop 2.6.5 + CentOS 7.5

(Planning table screenshot omitted: master 192.168.11.10, slave1 192.168.11.11, slave2 192.168.11.12, all running CentOS 7.5.)

2. Configure hostname resolution across the cluster

[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.11.10 master
192.168.11.11 slave1
192.168.11.12 slave2

Note: slave1 and slave2 get the same /etc/hosts configuration.
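Instead of editing the file three times, the same /etc/hosts can be pushed from master; a minimal sketch, assuming root SSH access to the slaves (you will be prompted for passwords until step 5 sets up key-based login):

for h in slave1 slave2; do
  scp /etc/hosts root@$h:/etc/hosts    # push the same name resolution to every node
done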
3. Configure the JDK on each node

[root@master src]# pwd
/usr/local/src
[root@master src]# wget http://download.oracle.com/otn-pub/java/jdk/8u172-b11/a58eab1ec242421181065cdc37240b08/jdk-8u172-linux-x64.tar.gz

[root@master src]# tar -zxvf jdk-8u172-linux-x64.tar.gz
[root@master src]# ls
192.168.10.11  192.168.10.12  hadoop-2.6.5  hadoop-2.6.5.tar.gz  jdk1.8.0_172  jdk-8u172-linux-x64.tar.gz
[root@master src]# cd ./jdk1.8.0_172/
[root@master jdk1.8.0_172]# pwd
/usr/local/src/jdk1.8.0_172

Append the following to the end of /etc/profile:

export JAVA_HOME=/usr/local/src/jdk1.8.0_172
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

Make /etc/profile take effect immediately:

[root@master jdk1.8.0_172]# source /etc/profile

Verify that the JDK was installed successfully:

[root@master jdk1.8.0_172]# java -version
java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)

Note: slave1 and slave2 get the same configuration.
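Rather than downloading the JDK again on each slave, the unpacked directory and the profile changes can be copied from master; a minimal sketch, assuming identical paths on all nodes:

for h in slave1 slave2; do
  scp -r /usr/local/src/jdk1.8.0_172 root@$h:/usr/local/src/
  scp /etc/profile root@$h:/etc/profile    # then run 'source /etc/profile' on each slave
done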

[root@slave1 src]# java -version
java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)
[root@slave2 ~]# java -version
java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)

4. Disable the firewall and SELinux (the following is required on master, slave1, and slave2)

1> Flush the iptables rules

 iptables -F
 iptables -X
 iptables -Z

2> Temporarily disable SELinux

 setenforce 0
 getenforce  ## check the SELinux status

3> Permanently disable SELinux

Edit /etc/sysconfig/selinux and set SELINUX to disabled:
SELINUX=disabled
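Note that CentOS 7.5 ships with firewalld as its default firewall front end, so flushing iptables rules alone may not persist across reboots. A minimal addition for a lab environment, assuming firewalld is installed:

systemctl stop firewalld       # stop the running firewall service
systemctl disable firewalld    # keep it from starting again on boot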

5. Configure passwordless SSH between cluster nodes
Note: run the following so that every pair of nodes in the cluster can log in to each other without a password.

[root@master jdk1.8.0_172]# ssh-keygen -t rsa  ## if no key exists yet, just press Enter at every prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y        ## this prompt appears only because a key was configured earlier
Enter passphrase (empty for no passphrase):
Enter same passphrase again:   ## press Enter
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TogQfLv56boAaUDkQbtff0TEyYLS/qERnYU5wuVRRAw root@master
The key's randomart image is:
+---[RSA 2048]----+
|o*. o +E&*.      |
|o +o.*.Bo=       |
|.o..o.o.o.       |
|.....+ o.        |
|oo  .+= S.       |
|... +..+.        |
|  .. . o..       |
|   .  o .        |
|    o+.          |
+----[SHA256]-----+
[root@master jdk1.8.0_172]# ssh-copy-id master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'master (192.168.10.10)' can't be established.
ECDSA key fingerprint is SHA256:Ibqy6UOiZGGsuF285qc/Q7nwyW88CpdVk2HcfbDTmzg.
ECDSA key fingerprint is MD5:a6:cd:4a:ad:a1:1c:83:b6:20:c5:5b:13:32:78:34:98.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@master's password:      ## enter root's password

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'master'"
and check to make sure that only the key(s) you wanted were added.

[root@master jdk1.8.0_172]# ssh-copy-id slave1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@slave1's password:  ## enter root's login password

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'slave1'"
and check to make sure that only the key(s) you wanted were added.

[root@master jdk1.8.0_172]# ssh-copy-id slave2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@slave2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'slave2'"
and check to make sure that only the key(s) you wanted were added.

Note: slave1 and slave2 must run the same steps as above.
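The whole key exchange can also be written as one short loop to run on each of the three nodes; a minimal sketch, entering the root password when prompted:

ssh-keygen -t rsa                  # once per node; accept the defaults
for h in master slave1 slave2; do
  ssh-copy-id root@$h              # authorize this node's key on every node, itself included
done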

Install the Hadoop cluster (configure master, slave1, and slave2)

6. Download and unpack the Hadoop binary package (run on master, slave1, and slave2)

[root@master ~]# cd /usr/local/src/
[root@master src]# wget http://archive.apache.org/dist/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
[root@master src]# tar -zxvf hadoop-2.6.5.tar.gz
[root@master src]# pwd
/usr/local/src
[root@master src]# ls
hadoop-2.6.5         jdk1.8.0_172
hadoop-2.6.5.tar.gz  jdk-8u172-linux-x64.tar.gz

7. Set the JAVA_HOME environment variable for Hadoop on the master node

[root@master src]# ls
hadoop-2.6.5  hadoop-2.6.5.tar.gz  jdk1.8.0_172  jdk-8u172-linux-x64.tar.gz
[root@master src]# cd hadoop-2.6.5/etc/hadoop/
[root@master hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        ssl-client.xml.example
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-server.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            yarn-env.cmd
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.sh
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-site.xml
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml.template
hadoop-metrics.properties   kms-env.sh               slaves
[root@master hadoop]# echo $JAVA_HOME
/usr/local/src/jdk1.8.0_172
[root@master hadoop]# vi hadoop-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/usr/local/src/jdk1.8.0_172

[root@master hadoop]# vi yarn-env.sh

# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# User for YARN daemons
export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}

# resolve links - $0 may be a softlink
export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"

# some Java parameters
export JAVA_HOME=/usr/local/src/jdk1.8.0_172
if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi

if [ "$JAVA_HOME" = "" ]; then
  echo "Error: JAVA_HOME is not set."
  exit 1
fi

JAVA=$JAVA_HOME/bin/java

Configure JAVA_HOME on slave1 and slave2 the same way.

Configure the worker hostnames in the slaves file on the master node:

[root@master hadoop]# pwd
/usr/local/src/hadoop-2.6.5/etc/hadoop
[root@master hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        slaves
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-client.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            ssl-server.xml.example
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.cmd
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-env.sh
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template  yarn-site.xml
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml
hadoop-metrics.properties   kms-env.sh               mapred-site.xml.template
[root@master hadoop]# vim slaves
slave1
slave2

## master node
[root@master hadoop]# pwd
/usr/local/src/hadoop-2.6.5/etc/hadoop
[root@master hadoop]# vi core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/src/hadoop-2.6.5/tmp/</value>
    </property>
</configuration>

[root@master hadoop]# vi hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/src/hadoop-2.6.5/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/src/hadoop-2.6.5/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

[root@master hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        ssl-client.xml.example
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-server.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            yarn-env.cmd
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.sh
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-site.xml
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml.template
hadoop-metrics.properties   kms-env.sh               slaves
[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml

[root@master hadoop]# vi mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

[root@master hadoop]# vi yarn-site.xml
[root@master hadoop]# cat !$
cat yarn-site.xml
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
        <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8035</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <!-- Disable the virtual-memory check -->
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

</configuration>

8. Create the temporary and data directories

[root@master hadoop]# mkdir /usr/local/src/hadoop-2.6.5/tmp
[root@master hadoop]# mkdir -p /usr/local/src/hadoop-2.6.5/dfs/name
[root@master hadoop]# mkdir -p /usr/local/src/hadoop-2.6.5/dfs/data

9. Configure environment variables on master, slave1, and slave2: edit ~/.bashrc and add the following

vim ~/.bashrc
export HADOOP_HOME=/usr/local/src/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin
# reload the environment variables
source ~/.bashrc
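A quick sanity check that the variables took effect; a minimal sketch:

echo $HADOOP_HOME                 # should print /usr/local/src/hadoop-2.6.5
hadoop version                    # should report Hadoop 2.6.5 once $HADOOP_HOME/bin is on PATH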

10. Copy the Hadoop package from the master node to slave1 and slave2

scp -r /usr/local/src/hadoop-2.6.5 root@slave1:/usr/local/src/hadoop-2.6.5
scp -r /usr/local/src/hadoop-2.6.5 root@slave2:/usr/local/src/hadoop-2.6.5

11. Format the NameNode on the master node

[root@master hadoop-2.6.5]# pwd
/usr/local/src/hadoop-2.6.5
[root@master hadoop-2.6.5]# ./bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

19/08/27 07:05:34 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.11.10
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.5
STARTUP_MSG:   classpath = /usr/local/src/hadoop-2.6.5/etc/hadoop:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/activation-1.1.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/xz-1.0.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/common/lib/jersey-2.6.5/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5-tests.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.5.jar:/usr/local/src/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.5.jar:/usr/local/src/hadoop-2.6.5/contrib/capacity-scheduler/*.jar:/usr/local/src/hadoop-2.6.5/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG:   java = 1.8.0_172
************************************************************/
19/08/27 07:05:34 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/08/27 07:05:34 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-7c5bbf4c-ea11-4088-8d8c-0f69117e3272
19/08/27 07:05:35 INFO namenode.FSNamesystem: No KeyProvider found.
19/08/27 07:05:35 INFO namenode.FSNamesystem: fsLock is fair:true
19/08/27 07:05:36 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
19/08/27 07:05:36 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
19/08/27 07:05:36 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19/08/27 07:05:36 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Aug 27 07:05:36
19/08/27 07:05:36 INFO util.GSet: Computing capacity for map BlocksMap
19/08/27 07:05:36 INFO util.GSet: VM type       = 64-bit
19/08/27 07:05:36 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
19/08/27 07:05:36 INFO util.GSet: capacity      = 2^21 = 2097152 entries
19/08/27 07:05:36 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
19/08/27 07:05:36 INFO blockmanagement.BlockManager: defaultReplication         = 2
19/08/27 07:05:36 INFO blockmanagement.BlockManager: maxReplication             = 512
19/08/27 07:05:36 INFO blockmanagement.BlockManager: minReplication             = 1
19/08/27 07:05:36 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
19/08/27 07:05:36 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/08/27 07:05:36 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
19/08/27 07:05:36 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
19/08/27 07:05:36 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
19/08/27 07:05:36 INFO namenode.FSNamesystem: supergroup          = supergroup
19/08/27 07:05:36 INFO namenode.FSNamesystem: isPermissionEnabled = true
19/08/27 07:05:36 INFO namenode.FSNamesystem: HA Enabled: false
19/08/27 07:05:36 INFO namenode.FSNamesystem: Append Enabled: true
19/08/27 07:05:36 INFO util.GSet: Computing capacity for map INodeMap
19/08/27 07:05:36 INFO util.GSet: VM type       = 64-bit
19/08/27 07:05:36 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
19/08/27 07:05:36 INFO util.GSet: capacity      = 2^20 = 1048576 entries
19/08/27 07:05:36 INFO namenode.NameNode: Caching file names occuring more than 10 times
19/08/27 07:05:37 INFO util.GSet: Computing capacity for map cachedBlocks
19/08/27 07:05:37 INFO util.GSet: VM type       = 64-bit
19/08/27 07:05:37 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
19/08/27 07:05:37 INFO util.GSet: capacity      = 2^18 = 262144 entries
19/08/27 07:05:37 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
19/08/27 07:05:37 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
19/08/27 07:05:37 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
19/08/27 07:05:37 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/08/27 07:05:37 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/08/27 07:05:37 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/08/27 07:05:37 INFO util.GSet: VM type       = 64-bit
19/08/27 07:05:37 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
19/08/27 07:05:37 INFO util.GSet: capacity      = 2^15 = 32768 entries
19/08/27 07:05:37 INFO namenode.NNConf: ACLs enabled? false
19/08/27 07:05:37 INFO namenode.NNConf: XAttrs enabled? true
19/08/27 07:05:37 INFO namenode.NNConf: Maximum size of an xattr: 16384
19/08/27 07:05:37 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1415755876-192.168.11.10-1566903937075
19/08/27 07:05:37 INFO common.Storage: Storage directory /usr/local/src/hadoop-2.6.5/dfs/name has been successfully formatted.
19/08/27 07:05:37 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/src/hadoop-2.6.5/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
19/08/27 07:05:37 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/src/hadoop-2.6.5/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
19/08/27 07:05:37 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/08/27 07:05:37 INFO util.ExitUtil: Exiting with status 0
19/08/27 07:05:37 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.11.10
************************************************************/

12. Start the cluster

[root@master hadoop-2.6.5]# ./sbin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop-2.6.5/logs/hadoop-root-namenode-master.out
slave1: starting datanode, logging to /usr/local/src/hadoop-2.6.5/logs/hadoop-root-datanode-slave1.out
slave2: starting datanode, logging to /usr/local/src/hadoop-2.6.5/logs/hadoop-root-datanode-slave2.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /usr/local/src/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
resourcemanager running as process 7671. Stop it first.
slave1: starting nodemanager, logging to /usr/local/src/hadoop-2.6.5/logs/yarn-root-nodemanager-slave1.out
slave2: starting nodemanager, logging to /usr/local/src/hadoop-2.6.5/logs/yarn-root-nodemanager-slave2.out

13. Check the Hadoop cluster status

##master

[root@master hadoop-2.6.5]# jps
7671 ResourceManager
8584 Jps
8409 SecondaryNameNode
8234 NameNode
7758 NodeManager

##slave1

[root@slave1 hadoop]# jps
1379 DataNode
1460 NodeManager
1578 Jps
##slave2

[root@slave2 hadoop-2.6.5]# jps
1298 DataNode
1379 NodeManager
1476 Jps
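Beyond jps, HDFS itself can confirm that both DataNodes registered with the NameNode; a minimal check from the master:

[root@master hadoop-2.6.5]# hdfs dfsadmin -report | grep -E 'Live datanodes|Name:'
# expect a line reporting 2 live datanodes, plus one Name: entry per slave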

14. Start the JobHistory server

[root@master hadoop-2.6.5]# pwd
/usr/local/src/hadoop-2.6.5
[root@master hadoop-2.6.5]# ./sbin/mr-jobhistory-daemon.sh start historyserver 
starting historyserver, logging to /usr/local/src/hadoop-2.6.5/logs/mapred-root-historyserver-master.out
Now make sure SELinux is disabled and flush the iptables rules as shown below; otherwise the Hadoop web UI will be unreachable from a browser.

[root@master hadoop-2.6.5]# iptables -F
[root@master hadoop-2.6.5]# iptables -X
[root@master hadoop-2.6.5]# iptables -Z

15. Hadoop cluster web UI

[root@master hadoop-2.6.5]# curl -I http://master:8088
HTTP/1.1 302 Found
Cache-Control: no-cache
Expires: Tue, 27 Aug 2019 11:17:43 GMT
Date: Tue, 27 Aug 2019 11:17:43 GMT
Pragma: no-cache
Expires: Tue, 27 Aug 2019 11:17:43 GMT
Date: Tue, 27 Aug 2019 11:17:43 GMT
Pragma: no-cache
Content-Type: text/plain; charset=UTF-8

[root@master hadoop-2.6.5]# curl -I http://192.168.11.10:8088/
HTTP/1.1 302 Found
Cache-Control: no-cache
Expires: Tue, 27 Aug 2019 11:31:19 GMT
Date: Tue, 27 Aug 2019 11:31:19 GMT
Pragma: no-cache
Expires: Tue, 27 Aug 2019 11:31:19 GMT
Date: Tue, 27 Aug 2019 11:31:19 GMT
Pragma: no-cache
Content-Type: text/plain; charset=UTF-8
Location: http://192.168.11.10:8088/cluster
Content-Length: 0
Server: Jetty(6.1.26)
Notes:
1. The curl tests show the web UI returning a 302 redirect.
2. In practice, the UI could not be opened in Internet Explorer; it only worked in Chrome:
http://192.168.11.10:8088/cluster
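The 302 simply redirects the root URL to /cluster; following the redirect from the command line confirms the final page loads; a minimal check:

curl -sIL http://master:8088/ | grep -E '^(HTTP|Location)'
# expect: HTTP/1.1 302 Found, a Location ending in /cluster, then HTTP/1.1 200 OK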

III. Spark Installation

1. Download the Spark package

URL: http://archive.apache.org/dist/spark/spark-2.0.2/spark-2.0.2-bin-hadoop2.6.tgz
Upload the Spark package to /usr/local/src.

2. Download the Scala package from https://www.scala-lang.org/download/2.11.12.html

[root@master src]# pwd
/usr/local/src
[root@master src]# ll
total 590548
drwxrwxr-x  12 1000 1000       197 Sep 19 11:32 hadoop-2.6.5
-rw-r--r--.  1 root root 199635269 Jul  3 19:09 hadoop-2.6.5.tar.gz
drwxr-xr-x.  8   10  143       255 Mar 29  2018 jdk1.8.0_172
-rw-r--r--.  1 root root 190921804 Jul  3 19:09 jdk-8u172-linux-x64.tar.gz
drwxrwxr-x   6 1001 1001        50 Nov  9  2017 scala-2.11.12
-rw-r--r--   1 root root  29114457 Oct 23 19:17 scala-2.11.12.tgz
drwxr-xr-x  12  500  500       193 Nov  7  2016 spark-2.0.2-bin-hadoop2.6
-rw-r--r--   1 root root 185040619 Oct 23 18:50 spark-2.0.2-b

3. Unpack Spark and Scala

[root@master src]# tar -xvf spark-2.0.2-bin-hadoop2.6.tgz
[root@master src]# tar -xvf scala-2.11.12.tgz

4. Configure the Spark environment variables

[root@master conf]# pwd
/usr/local/src/spark-2.0.2-bin-hadoop2.6/conf
[root@master conf]# cp spark-env.sh.template spark-env.sh
[root@master conf]# vim spark-env.sh
# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR      Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR       Where log files are stored.  (Default: ${SPARK_HOME}/logs)
# - SPARK_PID_DIR       Where the pid file is stored. (Default: /tmp)
# - SPARK_IDENT_STRING  A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)
export SCALA_HOME=/usr/local/src/scala-2.11.12
export JAVA_HOME=/usr/local/src/jdk1.8.0_172
export HADOOP_HOME=/usr/local/src/hadoop-2.6.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_MASTER_IP=master
SPARK_LOCAL_DIRS=/usr/local/src/spark-2.0.2-bin-hadoop2.6
SPARK_DRIVER_MEMORY=1G

Edit the slaves file to list the worker hostnames:

[root@master conf]# ll
total 40
-rw-r--r-- 1  500  500  987 Nov  7  2016 docker.properties.template
-rw-r--r-- 1  500  500 1105 Nov  7  2016 fairscheduler.xml.template
-rw-r--r-- 1  500  500 2025 Nov  7  2016 log4j.properties.template
-rw-r--r-- 1  500  500 7239 Nov  7  2016 metrics.properties.template
-rw-r--r-- 1  500  500  865 Nov  7  2016 slaves.template
-rw-r--r-- 1  500  500 1292 Nov  7  2016 spark-defaults.conf.template
-rwxr-xr-x 1 root root 4160 Nov 18 02:50 spark-env.sh
-rwxr-xr-x 1  500  500 3861 Nov  7  2016 spark-env.sh.template
[root@master conf]# cp slaves.template slaves
[root@master conf]# vi slaves
# A Spark Worker will be started on each of the machines listed below.
slave1
slave2

Configure the system environment variables:

[root@master conf]# vim  ~/.bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

export HADOOP_HOME=/usr/local/src/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin
#### add the Spark and Scala environment variables at the bottom of the file ####
# scala path
export SCALA_HOME=/usr/local/src/scala-2.11.12
export PATH=$PATH:$SCALA_HOME/bin
# spark path
export SPARK_HOME=/usr/local/src/spark-2.0.2-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

Copy the environment file to the other nodes:

scp -r ~/.bashrc root@slave1:~/
scp -r ~/.bashrc root@slave2:~/

Copy the Scala package to the worker nodes:

[root@master conf]# scp -r /usr/local/src/scala-2.11.12 root@slave1:/usr/local/src/
[root@master conf]# scp -r /usr/local/src/scala-2.11.12 root@slave2:/usr/local/src/

Copy the Spark package to the worker nodes:

[root@master conf]# scp -r /usr/local/src/spark-2.0.2-bin-hadoop2.6 root@slave1:/usr/local/src/
[root@master conf]# scp -r /usr/local/src/spark-2.0.2-bin-hadoop2.6 root@slave2:/usr/local/src/
# reload the environment variables (run on every node)

source ~/.bashrc

Start the Spark cluster:

[root@master conf]# /usr/local/src/spark-2.0.2-bin-hadoop2.6/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/src/spark-2.0.2-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/src/spark-2.0.2-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave1.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/src/spark-2.0.2-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave2.out

Check the running processes (jps screenshots omitted):

# master: Master
# slave1: Worker
# slave2: Worker

Log in to the web console at http://master:8080 (screenshot omitted).
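To confirm the cluster actually accepts work, the bundled SparkPi example can be submitted against the standalone master; a minimal sketch, assuming the default standalone port 7077 and the examples jar that ships with this Spark build:

/usr/local/src/spark-2.0.2-bin-hadoop2.6/bin/spark-submit \
  --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  /usr/local/src/spark-2.0.2-bin-hadoop2.6/examples/jars/spark-examples_2.11-2.0.2.jar 100
# look for a line like "Pi is roughly 3.14..." in the driver output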
