This article walks through a single-node installation of Hadoop 0.23.9 on Debian 6 under VMware Workstation 9. The steps are described in detail and should serve as a useful reference; read on if you are interested!
1. Environment preparation
1.1 Debian 6, with SSH installed when prompted during setup. (If you are running this under Windows, install VMware first; I used VMware Workstation 9.)
1.2 JDK 1.7 and Hadoop 0.23.9. Hadoop download: http://mirror.esocc.com/apache/hadoop/common/hadoop-0.23.9/hadoop-0.23.9.tar.gz
2. Installation
2.1 Install sudo on Debian
root@debian:~# apt-get install sudo
2.2 Install JDK 1.7
First copy jdk-7u45-linux-i586.tar.gz to /root/ with an SSH client, then run:
root@debian:~# mkdir -p /usr/java && tar -zxvf jdk-7u45-linux-i586.tar.gz -C /usr/java/
2.3 Download and install Hadoop
root@debian:~# wget http://mirror.esocc.com/apache/hadoop/common/hadoop-0.23.9/hadoop-0.23.9.tar.gz
root@debian:~# tar zxvf hadoop-0.23.9.tar.gz -C /opt/
root@debian:~# cd /opt/
root@debian:/opt# ln -s hadoop-0.23.9/ hadoop
---------- This creates a symlink to hadoop-0.23.9, much like a shortcut on Windows.
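The point of the symlink can be sketched with throwaway paths (a temp dir here, not the real /opt layout): every config references the stable name, and a later upgrade only means repointing one link.

```shell
# Sketch of the symlink layout, using a temp dir instead of /opt.
tmp=$(mktemp -d)
mkdir -p "$tmp/hadoop-0.23.9"
ln -s "$tmp/hadoop-0.23.9" "$tmp/hadoop"
# All configs would reference $tmp/hadoop; an upgrade just repoints the link:
mkdir -p "$tmp/hadoop-0.23.10"
ln -sfn "$tmp/hadoop-0.23.10" "$tmp/hadoop"
readlink "$tmp/hadoop"
```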
2.4 Add the hadoop user and grant it sudo rights
root@debian:~# groupadd hadoop
root@debian:~# useradd -m -g hadoop hadoop
root@debian:~# passwd hadoop
root@debian:~# vi /etc/sudoers
Grant the hadoop user sudo rights in sudoers:
Below the line "root ALL=(ALL) ALL", add:
hadoop ALL=(ALL:ALL) ALL
2.5 Configure SSH login
root@debian:~# su - hadoop
hadoop@debian:~$ ssh-keygen -t rsa -P "your passphrase"    (the passphrase may be empty for passwordless login)
hadoop@debian:~$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
hadoop@debian:~$ chmod 600 ~/.ssh/authorized_keys
Test the login:
hadoop@debian:~$ ssh localhost
If you chose an empty passphrase but are still prompted for a password, check the sshd configuration (requires root):
root@debian:~# vi /etc/ssh/sshd_config
Find the following lines and remove the leading "#":
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
Then restart sshd (not needed if you are not using passwordless login). On Debian the service is named "ssh":
root@debian:~# /etc/init.d/ssh restart
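Another common reason the password prompt persists is file permissions rather than sshd settings: with StrictModes (the default), sshd ignores authorized_keys if ~/.ssh or the file itself is group/world writable. A minimal sketch of the expected modes, checked on a throwaway directory:

```shell
# sshd (with StrictModes) requires 700 on ~/.ssh and 600 on authorized_keys;
# verify those modes on a temp stand-in for the home directory.
d=$(mktemp -d)
mkdir -p "$d/.ssh" && chmod 700 "$d/.ssh"
touch "$d/.ssh/authorized_keys" && chmod 600 "$d/.ssh/authorized_keys"
stat -c '%a' "$d/.ssh" "$d/.ssh/authorized_keys"
```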
2.6 Configure the hadoop user
root@debian:~# chown -R hadoop:hadoop /opt/hadoop
root@debian:~# chown -R hadoop:hadoop /opt/hadoop-0.23.9
root@debian:~# su - hadoop
hadoop@debian6-01:~$ vi .bashrc
Append the following:
export JAVA_HOME=/usr/java/jdk1.7.0_45
export JRE_HOME=${JAVA_HOME}/jre
export HADOOP_HOME=/opt/hadoop
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$PATH
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
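After re-sourcing .bashrc you can sanity-check that the new PATH entries lead the search order. A small sketch that rebuilds the PATH exactly as the lines above do (the JDK path matches the install used in this article; adjust it to your version):

```shell
# Re-create the PATH prefix as .bashrc builds it and inspect the result.
JAVA_HOME=/usr/java/jdk1.7.0_45
HADOOP_HOME=/opt/hadoop
NEWPATH=${JAVA_HOME}/bin:${HADOOP_HOME}/bin:$PATH
# The JDK and Hadoop bin dirs should now be searched first:
echo "$NEWPATH" | cut -d: -f1-2   # prints /usr/java/jdk1.7.0_45/bin:/opt/hadoop/bin
```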
root@debian6-01:~# cd /opt/hadoop/etc/hadoop/
root@debian6-01:/opt/hadoop/etc/hadoop# vi yarn-env.sh
Append the following:
export HADOOP_PREFIX=/opt/hadoop
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
root@debian6-01:/opt/hadoop/etc/hadoop# vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:12200</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/hadoop-root</value>
</property>
<property>
<name>fs.arionfs.impl</name>
<value>org.apache.hadoop.fs.pvfs2.Pvfs2FileSystem</value>
<description>The FileSystem for arionfs.</description>
</property>
</configuration>
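The fs.defaultFS value is the URI every HDFS client uses to reach the NameNode, so host and port must match what the daemons bind to. A quick sketch of how that URI decomposes (pure string handling, no Hadoop needed):

```shell
# Split the configured NameNode URI into host and port.
uri="hdfs://localhost:12200"
hostport=${uri#hdfs://}   # strip the scheme -> localhost:12200
host=${hostport%:*}
port=${hostport##*:}
echo "$host $port"        # prints: localhost 12200
```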
root@debian6-01:/opt/hadoop/etc/hadoop# vi hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/hadoop/data/dfs/name</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/hadoop/data/dfs/data</value>
<final>true</final>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
</configuration>
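The name/data directories referenced above must exist and be writable by the hadoop user before formatting. A sketch of pre-creating that layout (a temp dir stands in for /opt/hadoop/data here):

```shell
# Pre-create the DFS directories named in hdfs-site.xml.
base=$(mktemp -d)          # stands in for /opt/hadoop/data
mkdir -p "$base/dfs/name" "$base/dfs/data"
ls "$base/dfs"
```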
root@debian6-01:/opt/hadoop/etc/hadoop# cp mapred-site.xml.template mapred-site.xml
root@debian6-01:/opt/hadoop/etc/hadoop# vi mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.job.tracker</name>
<value>hdfs://localhost:9001</value>
<final>true</final>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>1536</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx1024M</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>3072</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx2560M</value>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>512</value>
</property>
<property>
<name>mapreduce.task.io.sort.factor</name>
<value>100</value>
</property>
<property>
<name>mapreduce.reduce.shuffle.parallelcopies</name>
<value>50</value>
</property>
<property>
<name>mapreduce.system.dir</name>
<value>file:/opt/hadoop/data/mapred/system</value>
</property>
<property>
<name>mapreduce.local.dir</name>
<value>file:/opt/hadoop/data/mapred/local</value>
<final>true</final>
</property>
</configuration>
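A sanity rule behind the memory values above: each task's -Xmx must stay comfortably below its container size (mapreduce.map/reduce.memory.mb) so the JVM's off-heap overhead still fits. A small arithmetic check mirroring the config:

```shell
# Heap must be smaller than the container that hosts the task JVM.
map_mb=1536;    map_xmx=1024
reduce_mb=3072; reduce_xmx=2560
for pair in "map $map_xmx $map_mb" "reduce $reduce_xmx $reduce_mb"; do
  set -- $pair   # $1=task type, $2=heap MB, $3=container MB
  [ "$2" -lt "$3" ] && echo "$1: heap ${2}M fits in ${3}M container"
done
```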
root@debian6-01:/opt/hadoop/etc/hadoop# vi yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>user.name</name>
<value>hadoop</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:54311</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>localhost:54312</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>localhost:54313</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>localhost:54314</value>
</property>
<property>
<name>yarn.web-proxy.address</name>
<value>localhost:54315</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>localhost</value>
</property>
</configuration>
2.7 Start the daemons and run the wordcount example
Set JAVA_HOME:
root@debian6-01:vi /opt/hadoop/libexec/hadoop-config.sh
Add the export just above the "Attempt to set JAVA_HOME if it is not set" check:
# Attempt to set JAVA_HOME if it is not set
export JAVA_HOME=/usr/java/jdk1.7.0_45
if [[ -z $JAVA_HOME ]]; then
Then save and quit with :wq!
Format the NameNode:
root@debian6-01:/opt/hadoop/lib# hadoop namenode -format
Start the daemons:
root@debian6-01:~# /opt/hadoop/sbin/start-dfs.sh
root@debian6-01:~# /opt/hadoop/sbin/start-yarn.sh
Verify with jps:
root@debian6-01:~# jps
6365 SecondaryNameNode
7196 ResourceManager
6066 NameNode
7613 Jps
6188 DataNode
7311 NodeManager
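With all six daemons running, you can run the bundled wordcount example this section's title promises. This is a hedged sketch: the examples-jar path below follows the usual 0.23 tarball layout (share/hadoop/mapreduce/) and is an assumption, so verify it with ls on your install first.

```shell
# Assumed jar location for the 0.23.9 tarball; check the path before running.
EXAMPLES_JAR=/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.9.jar
hadoop fs -mkdir /input
hadoop fs -put /opt/hadoop/etc/hadoop/core-site.xml /input
hadoop jar "$EXAMPLES_JAR" wordcount /input /output
hadoop fs -cat /output/part-r-00000
```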
That is everything in "Single-node installation with VM9 + Debian 6 + Hadoop 0.23.9". Thanks for reading! I hope it helps; for more on related topics, follow the Yisu Cloud industry news channel.