

Kerberos security authentication deployment for Hadoop and HBase

Published: 2020-10-08 08:52:06  Source: web  Reads: 12099  Author: jinyiji121  Column: Relational Databases

(Continued from the previous article)

V. Kerberos

1. jsvc

All nodes:

# cd ~/soft

# wget http://mirror.bit.edu.cn/apache/commons/daemon/source/commons-daemon-1.0.15-native-src.tar.gz

# tar zxf commons-daemon-1.0.15-native-src.tar.gz

# cd commons-daemon-1.0.15-native-src/unix; ./configure; make

# cp jsvc /usr/local/hadoop-2.4.0/libexec/

# cd ~/soft

# wget http://mirror.bit.edu.cn/apache//commons/daemon/binaries/commons-daemon-1.0.15-bin.tar.gz

# tar zxf commons-daemon-1.0.15-bin.tar.gz

# cp commons-daemon-1.0.15/commons-daemon-1.0.15.jar /usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/

# cp commons-daemon-1.0.15/commons-daemon-1.0.15.jar /usr/local/hadoop-2.4.0/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/

# rm -f /usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar

# rm -f /usr/local/hadoop-2.4.0/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.13.jar

# vim /usr/local/hadoop-2.4.0/etc/hadoop/hadoop-env.sh

export JSVC_HOME=/usr/local/hadoop-2.4.0/libexec/

2. 256-bit encryption

All nodes:

# wget -c http://download.oracle.com/otn-pub/java/jce/7/UnlimitedJCEPolicyJDK7.zip?AuthParam=1400207941_ee158c414c707a057960c521a7b29866

# unzip UnlimitedJCEPolicyJDK7.zip

# cp UnlimitedJCEPolicy/*.jar /usr/java/jdk1.7.0_65/jre/lib/security/

cp: overwrite "/usr/java/jdk1.7.0_51/jre/lib/security/local_policy.jar"? y

cp: overwrite "/usr/java/jdk1.7.0_51/jre/lib/security/US_export_policy.jar"? y
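With the unlimited-strength policy jars in place, the JVM is allowed to use the aes256-cts encryption type that the KDC issues. A quick JDK-only check (illustrative, not part of the original article) confirms the policy actually took effect:

```java
import javax.crypto.Cipher;

// Prints the maximum AES key length permitted by the installed JCE policy.
// With the unlimited-strength policy files in place this is Integer.MAX_VALUE;
// with the stock (limited) JDK 7 policy it would only be 128, and aes256-cts
// Kerberos tickets would fail to decrypt.
public class JcePolicyCheck {
    public static int maxAesKeyLength() throws Exception {
        return Cipher.getMaxAllowedKeyLength("AES");
    }

    public static void main(String[] args) throws Exception {
        int max = maxAesKeyLength();
        System.out.println("Max AES key length: " + max);
        if (max < 256) {
            System.out.println("WARNING: unlimited JCE policy not installed; aes256-cts will fail");
        }
    }
}
```

Note that on JDK 8u161 and later the unlimited policy is the default, so this step is only needed on older JDKs such as the 1.7 used here.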

3. Deploy the KDC

Host test3

Install the KDC server

# yum -y install krb5\*

Configuration file krb5.conf

# vim /etc/krb5.conf

[logging]

 default = FILE:/var/log/krb5libs.log

 kdc = FILE:/var/log/krb5kdc.log

 admin_server = FILE:/var/log/kadmind.log

[libdefaults]

 default_realm = cc.cn

 dns_lookup_realm = false

 dns_lookup_kdc = false

 ticket_lifetime = 365d

 renew_lifetime = 365d

 forwardable = true

[realms]

 cc.cn = {

  kdc = test3

  admin_server = test3

 }

[kdc]

 profile = /var/kerberos/krb5kdc/kdc.conf
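The krb5.conf layout above is a simple INI-style format: `[section]` headers with `key = value` lines beneath them. As an illustration only (this helper is not from the article, and the real libkrb5 parser also handles the nested `realm = { ... }` blocks), a minimal reader for the flat keys might look like:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of parsing a krb5.conf-style file: sections in [brackets],
// "key = value" lines inside them. Nested blocks such as "cc.cn = { ... }"
// are deliberately skipped to keep the sketch short.
public class Krb5ConfReader {
    public static Map<String, Map<String, String>> parse(String text) {
        Map<String, Map<String, String>> sections = new LinkedHashMap<>();
        String current = null;
        for (String raw : text.split("\n")) {
            String line = raw.trim();
            if (line.isEmpty() || line.startsWith("#")) continue; // blank/comment
            if (line.startsWith("[") && line.endsWith("]")) {
                current = line.substring(1, line.length() - 1);   // new section
                sections.put(current, new LinkedHashMap<>());
            } else if (current != null && line.contains("=") && !line.contains("{")) {
                String[] kv = line.split("=", 2);                 // flat key = value
                sections.get(current).put(kv[0].trim(), kv[1].trim());
            }
        }
        return sections;
    }

    public static void main(String[] args) {
        String conf = "[libdefaults]\n default_realm = cc.cn\n forwardable = true\n";
        System.out.println(parse(conf).get("libdefaults").get("default_realm")); // cc.cn
    }
}
```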

配置文件kdc.conf

# vim /var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]

 kdc_ports = 88

 kdc_tcp_ports = 88

[realms]

 cc.cn = {

 #master_key_type = aes256-cts

 acl_file = /var/kerberos/krb5kdc/kadm5.acl

 dict_file = /usr/share/dict/words

 admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab

 supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal

 }

Configuration file kadm5.acl

# vim /var/kerberos/krb5kdc/kadm5.acl

*/admin@cc.cn *
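The ACL line `*/admin@cc.cn *` grants full privileges to any principal whose instance is `admin` in realm cc.cn, such as the root/admin principal created below. A self-contained sketch of this wildcard matching (illustrative only; kadmind's real matcher supports more features, such as per-privilege target restrictions):

```java
// Sketch of kadm5.acl-style matching: each of primary, instance, and realm
// in the ACL pattern is either "*" (match anything) or an exact string.
// So "*/admin@cc.cn" matches root/admin@cc.cn but not root/test1@cc.cn.
public class AclMatch {
    static boolean part(String pattern, String value) {
        return pattern.equals("*") || pattern.equals(value);
    }

    public static boolean matches(String aclPattern, String principal) {
        String[] pa = aclPattern.split("@", 2);
        String[] pr = principal.split("@", 2);
        String paRealm = pa.length > 1 ? pa[1] : "";
        String prRealm = pr.length > 1 ? pr[1] : "";
        String[] paName = pa[0].split("/", 2);
        String[] prName = pr[0].split("/", 2);
        String paInst = paName.length > 1 ? paName[1] : "";
        String prInst = prName.length > 1 ? prName[1] : "";
        return part(paName[0], prName[0]) && part(paInst, prInst) && part(paRealm, prRealm);
    }

    public static void main(String[] args) {
        System.out.println(matches("*/admin@cc.cn", "root/admin@cc.cn")); // true
        System.out.println(matches("*/admin@cc.cn", "root/test1@cc.cn")); // false
    }
}
```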

Create the database

# kdb5_util create -r cc.cn -s

Enter KDC database master key:

Start the services and enable them at boot

# service krb5kdc start

# service kadmin start

# chkconfig krb5kdc on

# chkconfig kadmin on

Create the admin user

# kadmin.local

kadmin.local:  addprinc root/admin

Enter password for principal "root/admin@cc.cn":

VI. Integrating Hadoop with Kerberos

1. Configure node authentication

Host test1

# yum -y install krb5\*

# scp test3:/etc/krb5.conf /etc/

# kadmin -p root/admin

kadmin: addprinc -randkey root/test1

kadmin: addprinc -randkey HTTP/test1

kadmin: ktadd -k /hadoop/krb5.keytab root/test1 HTTP/test1

Host test2

# yum -y install krb5\*

# scp test3:/etc/krb5.conf /etc/

# kadmin -p root/admin

kadmin: addprinc -randkey root/test2

kadmin: addprinc -randkey HTTP/test2

kadmin: ktadd -k /hadoop/krb5.keytab root/test2 HTTP/test2

Host test3

# kadmin.local

kadmin.local:   addprinc -randkey root/test3

kadmin.local:   addprinc -randkey HTTP/test3

kadmin.local:   ktadd -k /hadoop/krb5.keytab root/test3 HTTP/test3
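Each host ends up with two principals in the shared keytab: root/&lt;host&gt; for the Hadoop daemons and HTTP/&lt;host&gt; for SPNEGO web authentication. A Kerberos principal always has the shape primary/instance@realm; as a sketch (this helper class is hypothetical, purely to illustrate the naming convention used above):

```java
// Illustrative sketch: splits a Kerberos principal of the form
// primary/instance@realm into its three components. In this article's
// convention, primary is "root" (daemons) or "HTTP" (SPNEGO), and the
// instance is the hostname (test1, test2, test3).
public class Principal {
    public final String primary, instance, realm;

    public Principal(String name) {
        String[] atSplit = name.split("@", 2);
        String[] slashSplit = atSplit[0].split("/", 2);
        primary = slashSplit[0];
        instance = slashSplit.length > 1 ? slashSplit[1] : "";
        realm = atSplit.length > 1 ? atSplit[1] : "";
    }

    public static void main(String[] args) {
        Principal p = new Principal("HTTP/test3@cc.cn");
        System.out.println(p.primary + " | " + p.instance + " | " + p.realm);
        // HTTP | test3 | cc.cn
    }
}
```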

2. Add the configuration

Configuration file core-site.xml

Host test1

# vim /usr/local/hadoop-2.4.0/etc/hadoop/core-site.xml

<property>

       <name>hadoop.security.authentication</name>

       <value>kerberos</value>

</property>

<property>

       <name>hadoop.security.authorization</name>

       <value>true</value>

</property>

Configuration file hdfs-site.xml

Host test1

# vim /usr/local/hadoop-2.4.0/etc/hadoop/hdfs-site.xml

<property>

       <name>dfs.journalnode.keytab.file</name>

       <value>/hadoop/krb5.keytab</value>

</property>

<property>

       <name>dfs.journalnode.kerberos.principal</name>

       <value>root/_HOST@cc.cn</value>

</property>

<property>

       <name>dfs.journalnode.kerberos.internal.spnego.principal</name>

       <value>HTTP/_HOST@cc.cn</value>

</property>

<property>

       <name>dfs.block.access.token.enable</name>

       <value>true</value>

</property>

<property>

       <name>dfs.namenode.keytab.file</name>

       <value>/hadoop/krb5.keytab</value>

</property>

<property>

       <name>dfs.namenode.kerberos.principal</name>

       <value>root/_HOST@cc.cn</value>

</property>

<property>

       <name>dfs.web.authentication.kerberos.keytab</name>

       <value>/hadoop/krb5.keytab</value>

</property>

<property>

       <name>dfs.web.authentication.kerberos.principal</name>

       <value>HTTP/_HOST@cc.cn</value>

</property>

<property>

       <name>ignore.secure.ports.for.testing</name>

       <value>true</value>

</property>

<property>

       <name>dfs.datanode.keytab.file</name>

       <value>/hadoop/krb5.keytab</value>

</property>

<property>

       <name>dfs.datanode.kerberos.principal</name>

       <value>root/_HOST@cc.cn</value>

</property>

<property>

       <name>hadoop.http.staticuser.user</name>

       <value>root</value>

</property>
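The `_HOST` placeholder in the principal values above is expanded by Hadoop at startup to the local hostname, which is why one hdfs-site.xml works unchanged on every node (Hadoop's SecurityUtil performs this substitution internally). The substitution itself amounts to the following (an illustrative sketch, not Hadoop's actual code):

```java
// Sketch of _HOST expansion: root/_HOST@cc.cn on host test2 becomes
// root/test2@cc.cn. Hadoop does the real substitution internally when a
// daemon logs in from its keytab.
public class HostSubst {
    public static String replaceHostPattern(String principal, String hostname) {
        String[] parts = principal.split("[/@]");
        if (parts.length == 3 && parts[1].equals("_HOST")) {
            return parts[0] + "/" + hostname + "@" + parts[2];
        }
        return principal; // no _HOST instance: leave untouched
    }

    public static void main(String[] args) {
        System.out.println(replaceHostPattern("root/_HOST@cc.cn", "test2"));
        // root/test2@cc.cn
    }
}
```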

Configuration file yarn-site.xml

Host test1

# vim /usr/local/hadoop-2.4.0/etc/hadoop/yarn-site.xml

<property>

       <name>yarn.resourcemanager.keytab</name>

       <value>/hadoop/krb5.keytab</value>

</property>

<property>

       <name>yarn.resourcemanager.principal</name>

       <value>root/_HOST@cc.cn</value>

</property>

<property>

       <name>yarn.nodemanager.keytab</name>

       <value>/hadoop/krb5.keytab</value>

</property>

<property>

       <name>yarn.nodemanager.principal</name>

       <value>root/_HOST@cc.cn</value>

</property>

Configuration file mapred-site.xml

Host test1

# vim /usr/local/hadoop-2.4.0/etc/hadoop/mapred-site.xml

<property>

       <name>mapreduce.jobhistory.keytab</name>

       <value>/hadoop/krb5.keytab</value>

</property>

<property>

       <name>mapreduce.jobhistory.principal</name>

       <value>root/_HOST@cc.cn</value>

</property>

3. Sync the configuration files

Host test1

# scp -r /usr/local/hadoop-2.4.0/ test2:/usr/local/

# scp -r /usr/local/hadoop-2.4.0/ test3:/usr/local/

4. Start

Host test1

# start-all.sh

5. Verify

Host test3

# kinit -k -t /hadoop/krb5.keytab root/test3

# hdfs dfs -ls /

VII. Integrating HBase with Kerberos

1. Add the configuration

Configuration file hbase-site.xml

Host test1

# vim /usr/local/hbase-0.98.1/conf/hbase-site.xml

<property>

       <name>hbase.security.authentication</name>

       <value>kerberos</value>

</property>

<property>

       <name>hbase.security.authorization</name>

        <value>true</value>

</property>

<property>

       <name>hbase.rpc.engine</name>

       <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>

</property>

<property>

       <name>hbase.coprocessor.region.classes</name>

       <value>org.apache.hadoop.hbase.security.token.TokenProvider</value>

</property>

<property>

       <name>hbase.master.keytab.file</name>

       <value>/hadoop/krb5.keytab</value>

</property>

<property>

       <name>hbase.master.kerberos.principal</name>

       <value>root/_HOST@cc.cn</value>

</property>

<property>

       <name>hbase.regionserver.keytab.file</name>

       <value>/hadoop/krb5.keytab</value>

</property>

<property>

       <name>hbase.regionserver.kerberos.principal</name>

       <value>root/_HOST@cc.cn</value>

</property>

2. Sync the configuration file

Host test1

# scp /usr/local/hbase-0.98.1/conf/hbase-site.xml test2:/usr/local/hbase-0.98.1/conf/

# scp /usr/local/hbase-0.98.1/conf/hbase-site.xml test3:/usr/local/hbase-0.98.1/conf/

3. Start

Host test1

# start-hbase.sh

4. Verify

Host test3

# kinit -k -t /hadoop/krb5.keytab root/test3

# hbase shell

VIII. Connecting to the cluster

1. keytab file location

/etc/xiaofeiyun.keytab

Creation process

Host test1

# kadmin -p root/admin

Password for root/admin@cc.cn:

kadmin: addprinc -randkey data/xiaofeiyun

kadmin: addprinc -randkey platform/xiaofeiyun

kadmin: ktadd -k /etc/xiaofeiyun.keytab data/xiaofeiyun platform/xiaofeiyun

# scp /etc/xiaofeiyun.keytab test2:/etc/

# scp /etc/xiaofeiyun.keytab test3:/etc/

2. krb5.conf file location

/etc/krb5.conf

3. Hadoop connection

conf.set("fs.defaultFS","hdfs://cluster1");

conf.set("dfs.nameservices","cluster1");

conf.set("dfs.ha.namenodes.cluster1","test1,test2");

conf.set("dfs.namenode.rpc-address.cluster1.test1","test1:9000");

conf.set("dfs.namenode.rpc-address.cluster1.test2","test2:9000");

conf.set("dfs.client.failover.proxy.provider.cluster1","org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
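These settings let a client resolve the logical authority `hdfs://cluster1` into the two NameNode RPC addresses and fail over between them. A self-contained sketch of that lookup (the real work is done by the ConfiguredFailoverProxyProvider named above; this sketch only mirrors the property-name scheme shown in the conf.set calls):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: resolve a logical HA nameservice ("cluster1") to its NameNode RPC
// addresses, using the dfs.ha.namenodes.<ns> and
// dfs.namenode.rpc-address.<ns>.<nn> keys from the client configuration.
public class HaResolver {
    public static List<String> rpcAddresses(Map<String, String> conf, String nameservice) {
        List<String> out = new ArrayList<>();
        String nns = conf.get("dfs.ha.namenodes." + nameservice);
        if (nns == null) return out;
        for (String nn : nns.split(",")) {
            String addr = conf.get("dfs.namenode.rpc-address." + nameservice + "." + nn.trim());
            if (addr != null) out.add(addr);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("dfs.ha.namenodes.cluster1", "test1,test2");
        conf.put("dfs.namenode.rpc-address.cluster1.test1", "test1:9000");
        conf.put("dfs.namenode.rpc-address.cluster1.test2", "test2:9000");
        System.out.println(rpcAddresses(conf, "cluster1")); // [test1:9000, test2:9000]
    }
}
```

In a real secured client you would also authenticate before opening the FileSystem, typically by calling UserGroupInformation.setConfiguration(conf) and then UserGroupInformation.loginUserFromKeytab with one of the principals from the keytab created in step 1.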

4. HBase connection

<property>

       <name>ha.zookeeper.quorum</name>

       <value>test1:2181,test2:2181,test3:2181</value>

</property>

