I previously published 《Oracle_lhr_RAC 12cR1安裝》, but the storage in that article did not use multipath; it used disks provided by VMware itself. So the last task before the new year was to study multipathing. This article covers the configuration of OpenFiler, iSCSI, and multipath.
Contents of this article:
OpenFiler is developed on top of rPath Linux and is distributed as a standalone Linux operating system. It is an excellent open-source, free storage-management OS: storage disks are managed through a web interface, and it supports the popular network storage technologies IP-SAN and NAS along with protocols such as iSCSI, NFS, SMB/CIFS, and FTP.
The software required for this OpenFiler installation is listed below:
No. | Type      | Content
1   | openfiler | openfileresa-2.99.1-x86_64-disc1.iso
Note: Xiaomaimiao (the author) has uploaded this software to Tencent Weiyun (http://blog.itpub.net/26736162/viewspace-1624453/) for download. The author has also uploaded the installed virtual machine, with the rlwrap utility already integrated, to the cloud drive.
The detailed installation will not be shown screenshot by screenshot here; step-by-step walkthroughs are already available online. 1 GB of memory (or even a bit less) is fine for OpenFiler, and the disks should use the IDE format. Since multipath will be configured later, two network cards must be installed. After installation, reboot; the console looks like this:
Note that the URL shown in the box can be opened directly in a browser. The root user can log in for user maintenance, but storage maintenance must be done as the openfiler user. OpenFiler is managed remotely through its web interface; here the management address is https://192.168.59.200:446. The initial management username is openfiler (lowercase) and the password is password, which can be changed after logging in.
Configure static IP addresses for the two NICs:
[root@OFLHR ~]# more /etc/sysconfig/network-scripts/ifcfg-eth0
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.59.255
HWADDR=00:0C:29:98:1A:CD
IPADDR=192.168.59.200
NETMASK=255.255.255.0
NETWORK=192.168.59.0
ONBOOT=yes
[root@OFLHR ~]# more /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
MTU=1500
USERCTL=no
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.2.200
NETMASK=255.255.255.0
HWADDR=00:0C:29:98:1A:D7
[root@OFLHR ~]# ip a
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff
    inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0
    inet6 fe80::20c:29ff:fe98:1acd/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.200/24 brd 192.168.2.255 scope global eth2
    inet6 fe80::20c:29ff:fe98:1ad7/64 scope link
       valid_lft forever preferred_lft forever
[root@OFLHR ~]#
Add a 100 GB IDE-format disk to serve as storage.
[root@OFLHR ~]# fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000adc2c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      610469      305203+  83  Linux
/dev/sda2          610470    17382329     8385930   83  Linux
/dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
[root@OFLHR ~]#
The openfiler server now has two disks: the 10 GB disk holds the openfiler operating system, while the 100 GB disk will be used for data storage.
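A side note on the sizes: fdisk reports the 100 GB (GiB) disk as 107.4 GB because it counts in decimal units. The conversion is easy to check:

```shell
bytes=107374182400                           # /dev/sdb size as reported by fdisk
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # prints "100 GiB" (binary units)
echo "$((bytes / 1000 / 1000 / 1000)) GB"    # prints "107 GB" (decimal units, as fdisk rounds)
```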
Login URL: https://192.168.59.200:446
Initial username and password: openfiler/password
In standalone storage devices, the LUN (Logical Unit Number) is the most important basic unit. A LUN can be accessed by any host in the SAN, whether through an HBA or through iSCSI. Even with software iSCSI, a LUN can be accessed from different operating systems after boot using a software iSCSI initiator. In OpenFiler a LUN is called a Logical Volume (LV), so creating a LUN in OpenFiler means creating an LV.
Once OpenFiler is installed, the next step is to share its disks with virtual machines or other hosts on the network. In a standard SAN this can be done at the RAID level, but a VG offers advantages and flexibility that RAID cannot match. The following shows, step by step, how a VG is created in OpenFiler.
Steps to create a VG:
(1) Enter the OpenFiler interface and select the physical disks to use.
(2) Format the physical disks to be added as Physical Volumes (PV).
(3) Create a VG and add the PV-formatted physical disks to it.
(4) Once added, they form one large VG, which the system treats as a single large physical disk.
(5) Add logical partitions (LUNs) to this VG; in OpenFiler these are called Logical Volumes.
(6) Specify the LUN's type, such as iSCSI, ext3, or NFS, and format it.
(7) An iSCSI LUN requires further configuration; other file-system types can simply be shared out NAS-style.
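The steps above map onto plain LVM operations. A minimal command-line sketch (illustrative only: the device name and sizes are assumptions, the commands need root on a machine with a spare disk, and OpenFiler's web UI performs all of this for you):

```shell
pvcreate /dev/sdb1              # step 2: label the partition as a Physical Volume
vgcreate vmlhr /dev/sdb1        # steps 3-4: create the Volume Group from the PV
lvcreate -L 10G -n lv01 vmlhr   # step 5: carve a LUN-sized Logical Volume
# step 6: a block-type LV is exported over iSCSI as-is; a filesystem LV
# would instead be formatted, e.g. mkfs.ext3 /dev/vmlhr/lv01, and then
# shared NAS-style via NFS/SMB (step 7)
```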
After logging in, click the Volumes tab.
Click "create new physical volumes" and then click /dev/sdb.
Click Reset at the bottom right of the page, then click Create. The partition type is "Physical volume".
Click Volume Groups.
Enter a name, tick the checkbox, and click "Add volume group".
[root@OFLHR ~]# fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000adc2c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      610469      305203+  83  Linux
/dev/sda2          610470    17382329     8385930   83  Linux
/dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1   209715199   104857599+  ee  GPT
[root@OFLHR ~]# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/sdb1  vmlhr lvm2 a-   95.34g 95.34g
[root@OFLHR ~]#
Click Add Volume.
Fill in the fields, set the size to 10 GB, and choose "block (iSCSI, FC, etc)" as the volume type.
Create four logical volumes in total this way:
[root@OFLHR ~]# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  vmlhr   1   4   0 wz--n- 95.34g 55.34g
[root@OFLHR ~]# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/sdb1  vmlhr lvm2 a-   95.34g 55.34g
[root@OFLHR ~]# lvs
  LV   VG    Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lv01 vmlhr -wi-a- 10.00g
  lv02 vmlhr -wi-a- 10.00g
  lv03 vmlhr -wi-a- 10.00g
  lv04 vmlhr -wi-a- 10.00g
[root@OFLHR ~]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000adc2c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      610469      305203+  83  Linux
/dev/sda2          610470    17382329     8385930   83  Linux
/dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1   209715199   104857599+  ee  GPT

Disk /dev/dm-0: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-3 doesn't contain a valid partition table
[root@OFLHR ~]#
On the Services tab, set iSCSI Target to Enabled and start the service (Start).
Return to the Volumes tab and click iSCSI Targets.
Click Add.
Select the LUN Mapping tab and click Map.
Since iSCSI runs over an IP network, the computers on the network must be allowed to access it over IP. Below is how to connect OpenFiler's IP network with other hosts in the same subnet.
1. Go to System in OpenFiler and scroll straight to the bottom of the page.
2. Under Network Access Configuration, enter a name for this network access entry, e.g. VM_LHR.
3. Enter the host IP range. Note that a single host IP cannot be entered here, or nothing will be able to connect. We enter 192.168.59.0, meaning that 192.168.59.1 through 192.168.59.254 can all access the storage.
4. Choose 255.255.255.0 for Netmask, select Share in the Type drop-down list, and then click the Update button.
After making the selections, click Update.
The authorized subnet is now visible in OpenFiler.
Under iSCSI Targets, click the Network ACL tab.
Set Access to Allow, then click Update.
The storage configuration is now complete.
Comment out the line "iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 ALL" in /etc/initiators.deny:
[root@OFLHR ~]# more /etc/initiators.deny
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
# This configuration file was autogenerated
# by Openfiler. Any manual changes will be overwritten
# Generated at: Sat Jan 21 1:49:55 CST 2017
#iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 ALL
# End of Openfiler configuration
[root@OFLHR ~]#
iSCSI (Internet Small Computer System Interface) was developed by IBM. It is a SCSI command set for hardware devices that runs on top of the IP protocol, allowing the SCSI protocol to run over IP networks and be routed over, for example, high-speed Gigabit Ethernet. iSCSI is a storage technology that combines the existing SCSI interface with Ethernet so that servers can exchange data with storage devices over an IP network. It is a TCP/IP-based protocol used to establish and manage connections between IP storage devices, hosts, and clients, and to build storage area networks (SANs).
iSCSI target: the storage-device end, i.e. the device holding the disks or RAID. Nowadays a Linux host can also be set up as an iSCSI target. Its purpose is to provide "disks" for other hosts to use.
iSCSI initiator: the client that uses the target, usually a server. In other words, a server that wants to connect to an iSCSI target must first install the iSCSI initiator functionality before it can use the disks the target provides.
[root@OFLHR ~]# service iscsi-target start
Starting iSCSI target service:                             [  OK  ]
[root@OFLHR ~]# more /etc/ietd.conf
##### WARNING!!! - This configuration file generated by Openfiler. DO NOT MANUALLY EDIT. #####

Target iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
        HeaderDigest None
        DataDigest None
        MaxConnections 1
        InitialR2T Yes
        ImmediateData No
        MaxRecvDataSegmentLength 131072
        MaxXmitDataSegmentLength 131072
        MaxBurstLength 262144
        FirstBurstLength 262144
        DefaultTime2Wait 2
        DefaultTime2Retain 20
        MaxOutstandingR2T 8
        DataPDUInOrder Yes
        DataSequenceInOrder Yes
        ErrorRecoveryLevel 0
        Lun 0 Path=/dev/vmlhr/lv01,Type=blockio,ScsiSN=22llvD-CacO-MOMA,ScsiId=22llvD-CacO-MOMA,IOMode=wt
        Lun 1 Path=/dev/vmlhr/lv02,Type=blockio,ScsiSN=BgLpy9-u7PH-csDC,ScsiId=BgLpy9-u7PH-csDC,IOMode=wt
        Lun 2 Path=/dev/vmlhr/lv03,Type=blockio,ScsiSN=38KsSC-REKL-yPgW,ScsiId=38KsSC-REKL-yPgW,IOMode=wt
        Lun 3 Path=/dev/vmlhr/lv04,Type=blockio,ScsiSN=aN5blo-NyMp-L4Jl,ScsiId=aN5blo-NyMp-L4Jl,IOMode=wt

[root@OFLHR ~]# ps -ef|grep iscsi
root       937     2  0 01:01 ?        00:00:00 [iscsi_eh]
root       946     1  0 01:01 ?        00:00:00 iscsid
root       947     1  0 01:01 ?        00:00:00 iscsid
root     13827  1217  0 02:43 pts/1    00:00:00 grep iscsi
[root@OFLHR ~]# cat /proc/net/iet/volume
tid:1 name:iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
        lun:0 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv01
        lun:1 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv02
        lun:2 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv03
        lun:3 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv04
[root@OFLHR ~]# cat /proc/net/iet/session
tid:1 name:iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
[root@OFLHR ~]#
Install the iSCSI initiator on both RAC nodes.
[root@raclhr-12cR1-N1 ~]# rpm -qa|grep iscsi
iscsi-initiator-utils-6.2.0.873-10.el6.x86_64
[root@raclhr-12cR1-N1 ~]#
If it is not installed, install it with yum install iscsi-initiator-utils*.
The iSCSI initiator is managed mainly with the iscsiadm command. First, check which targets the iSCSI target machine provides:
[root@raclhr-12cR1-N1 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.59.200
                                                           [  OK  ]
iscsid:                                                    [  OK  ]
192.168.59.200:3260,1 iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
192.168.2.200:3260,1 iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
[root@raclhr-12cR1-N1 ~]# ps -ef|grep iscsi
root      2619     2  0 11:32 ?        00:00:00 [iscsi_eh]
root      2651     1  0 11:32 ?        00:00:00 iscsiuio
root      2658     1  0 11:32 ?        00:00:00 iscsid
root      2659     1  0 11:32 ?        00:00:00 iscsid
root      2978 56098  0 11:33 pts/1    00:00:00 grep iscsi
[root@raclhr-12cR1-N1 ~]#
This output shows the number and name of the iSCSI target created on the server side. For this command, just remember that -p is followed by the address of the iSCSI service; a hostname works as well. 3260 is the default service port.
You can then log in to a target; once the login succeeds, all the disks under that target are shared to this host:
[root@raclhr-12cR1-N1 ~]# fdisk -l | grep dev
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *           1          26      204800   83  Linux
/dev/sda2              26        1332    10485760   8e  Linux LVM
/dev/sda3            1332        2611    10279936   8e  Linux LVM
Disk /dev/sdb: 107.4 GB, 107374182400 bytes
/dev/sdb1               1        1306    10485760   8e  Linux LVM
/dev/sdb2            1306        2611    10485760   8e  Linux LVM
/dev/sdb3            2611        3917    10485760   8e  Linux LVM
/dev/sdb4            3917       13055    73399296    5  Extended
/dev/sdb5            3917        5222    10485760   8e  Linux LVM
/dev/sdb6            5223        6528    10485760   8e  Linux LVM
/dev/sdb7            6528        7834    10485760   8e  Linux LVM
/dev/sdb8            7834        9139    10485760   8e  Linux LVM
/dev/sdb9            9139       10445    10485760   8e  Linux LVM
/dev/sdb10          10445       11750    10485760   8e  Linux LVM
/dev/sdb11          11750       13055    10477568   8e  Linux LVM
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/sdc: 6442 MB, 6442450944 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes
Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes
Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes
[root@raclhr-12cR1-N1 ~]# iscsiadm --mode node --targetname iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 --portal 192.168.59.200:3260 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.59.200,3260] (multiple)
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.2.200,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.59.200,3260] successful.
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.2.200,3260] successful.
[root@raclhr-12cR1-N1 ~]# fdisk -l | grep dev
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *           1          26      204800   83  Linux
/dev/sda2              26        1332    10485760   8e  Linux LVM
/dev/sda3            1332        2611    10279936   8e  Linux LVM
Disk /dev/sdb: 107.4 GB, 107374182400 bytes
/dev/sdb1               1        1306    10485760   8e  Linux LVM
/dev/sdb2            1306        2611    10485760   8e  Linux LVM
/dev/sdb3            2611        3917    10485760   8e  Linux LVM
/dev/sdb4            3917       13055    73399296    5  Extended
/dev/sdb5            3917        5222    10485760   8e  Linux LVM
/dev/sdb6            5223        6528    10485760   8e  Linux LVM
/dev/sdb7            6528        7834    10485760   8e  Linux LVM
/dev/sdb8            7834        9139    10485760   8e  Linux LVM
/dev/sdb9            9139       10445    10485760   8e  Linux LVM
/dev/sdb10          10445       11750    10485760   8e  Linux LVM
/dev/sdb11          11750       13055    10477568   8e  Linux LVM
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/sdc: 6442 MB, 6442450944 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes
Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes
Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes
Disk /dev/sdf: 10.7 GB, 10737418240 bytes
Disk /dev/sdi: 10.7 GB, 10737418240 bytes
Disk /dev/sdh: 10.7 GB, 10737418240 bytes
Disk /dev/sdl: 10.7 GB, 10737418240 bytes
Disk /dev/sdj: 10.7 GB, 10737418240 bytes
Disk /dev/sdg: 10.7 GB, 10737418240 bytes
Disk /dev/sdk: 10.7 GB, 10737418240 bytes
Disk /dev/sdm: 10.7 GB, 10737418240 bytes
[root@raclhr-12cR1-N1 ~]#
Eight new disks appear here, yet only four LUNs were mapped in openfiler. Why eight instead of four? Because openfiler has two NICs: the initiator logged in to the iSCSI target twice, once per IP, so every disk shows up through two paths and four of the devices are duplicates.
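The arithmetic can be sketched in shell (WWIDs taken from /etc/multipath/bindings later in this article): each portal contributes one path per LUN, and multipath later collapses the paths by WWID, leaving one device per unique WWID.

```shell
# two portals (two OpenFiler NICs) x four mapped LUNs = eight SCSI disks
echo $((2 * 4))      # prints 8

# each WWID is seen once per portal, so deduplicating by WWID leaves four devices:
printf '%s\n' \
  14f504e46494c455232326c6c76442d4361634f2d4d4f4d41 \
  14f504e46494c455242674c7079392d753750482d63734443 \
  14f504e46494c455233384b7353432d52454b4c2d79506757 \
  14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c \
  14f504e46494c455232326c6c76442d4361634f2d4d4f4d41 \
  14f504e46494c455242674c7079392d753750482d63734443 \
  14f504e46494c455233384b7353432d52454b4c2d79506757 \
  14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c |
  sort -u | wc -l    # prints 4
```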
To view the details of each iSCSI session:
# iscsiadm -m session -P 3
[root@raclhr-12cR1-N1 ~]# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0-873.10.el6
Target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
        Current Portal: 192.168.59.200:3260,1
        Persistent Portal: 192.168.59.200:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1994-05.com.redhat:61d32512355
                Iface IPaddress: 192.168.59.160
                Iface HWaddress:
                Iface Netdev:
                SID: 1
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username:
                password: ********
                username_in:
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 131072
                FirstBurstLength: 262144
                MaxBurstLength: 262144
                ImmediateData: No
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 4  State: running
                scsi4 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sdg          State: running
                scsi4 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdj          State: running
                scsi4 Channel 00 Id 0 Lun: 2
                        Attached scsi disk sdk          State: running
                scsi4 Channel 00 Id 0 Lun: 3
                        Attached scsi disk sdm          State: running
        Current Portal: 192.168.2.200:3260,1
        Persistent Portal: 192.168.2.200:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1994-05.com.redhat:61d32512355
                Iface IPaddress: 192.168.2.100
                Iface HWaddress:
                Iface Netdev:
                SID: 2
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username:
                password: ********
                username_in:
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 131072
                FirstBurstLength: 262144
                MaxBurstLength: 262144
                ImmediateData: No
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 5  State: running
                scsi5 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sdf          State: running
                scsi5 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdh          State: running
                scsi5 Channel 00 Id 0 Lun: 2
                        Attached scsi disk sdi          State: running
                scsi5 Channel 00 Id 0 Lun: 3
                        Attached scsi disk sdl          State: running
[root@raclhr-12cR1-N1 ~]#
After logging in, the new disks can be partitioned, formatted, and then mounted.
After these commands complete, the iSCSI initiator records the information under /var/lib/iscsi:
/var/lib/iscsi/send_targets records the state of each target, and /var/lib/iscsi/nodes records the nodes under each target. The next time the iSCSI initiator starts (service iscsi start), it automatically logs in to each target. To log in to the targets manually again, delete everything under /var/lib/iscsi/send_targets and /var/lib/iscsi/nodes.
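A sketch of that manual reset (paths as given above; stop the service first so no sessions are in use):

```shell
service iscsi stop                                            # log out of all targets
rm -rf /var/lib/iscsi/send_targets/* /var/lib/iscsi/nodes/*   # forget the recorded targets
service iscsi start                    # comes back up with nothing to log in to automatically
```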
1. Install the multipath packages:
[root@raclhr-12cR1-N1 ~]# mount /dev/sr0 /media/lhr/cdrom/
mount: block device /dev/sr0 is write-protected, mounting read-only
[root@raclhr-12cR1-N1 ~]# cd /media/lhr/cdrom/Packages/
[root@raclhr-12cR1-N1 Packages]# ll device-mapper-*.x86_64.rpm
-r--r--r-- 104 root root  168424 Oct 30  2013 device-mapper-1.02.79-8.el6.x86_64.rpm
-r--r--r-- 104 root root  118316 Oct 30  2013 device-mapper-event-1.02.79-8.el6.x86_64.rpm
-r--r--r-- 104 root root  112892 Oct 30  2013 device-mapper-event-libs-1.02.79-8.el6.x86_64.rpm
-r--r--r-- 104 root root  199924 Oct 30  2013 device-mapper-libs-1.02.79-8.el6.x86_64.rpm
-r--r--r--  95 root root  118892 Oct 25  2013 device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
-r--r--r--  95 root root  184760 Oct 25  2013 device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
-r--r--r--  96 root root 2444388 Oct 30  2013 device-mapper-persistent-data-0.2.8-2.el6.x86_64.rpm
[root@raclhr-12cR1-N1 Packages]# ll iscsi*
-r--r--r-- 101 root root  702300 Oct 29  2013 iscsi-initiator-utils-6.2.0.873-10.el6.x86_64.rpm
[root@raclhr-12cR1-N1 Packages]# rpm -qa|grep device-mapper
device-mapper-persistent-data-0.2.8-2.el6.x86_64
device-mapper-1.02.79-8.el6.x86_64
device-mapper-event-libs-1.02.79-8.el6.x86_64
device-mapper-event-1.02.79-8.el6.x86_64
device-mapper-libs-1.02.79-8.el6.x86_64
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-1.02.79-8.el6.x86_64.rpm
warning: device-mapper-1.02.79-8.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
        package device-mapper-1.02.79-8.el6.x86_64 is already installed
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-event-1.02.79-8.el6.x86_64.rpm
warning: device-mapper-event-1.02.79-8.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
        package device-mapper-event-1.02.79-8.el6.x86_64 is already installed
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
warning: device-mapper-multipath-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
error: Failed dependencies:
        device-mapper-multipath-libs = 0.4.9-72.el6 is needed by device-mapper-multipath-0.4.9-72.el6.x86_64
        libmpathpersist.so.0()(64bit) is needed by device-mapper-multipath-0.4.9-72.el6.x86_64
        libmultipath.so()(64bit) is needed by device-mapper-multipath-0.4.9-72.el6.x86_64
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
warning: device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:device-mapper-multipath########################################### [100%]
[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
warning: device-mapper-multipath-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:device-mapper-multipath########################################### [100%]
[root@raclhr-12cR1-N1 Packages]# rpm -qa|grep device-mapper
device-mapper-multipath-0.4.9-72.el6.x86_64
device-mapper-persistent-data-0.2.8-2.el6.x86_64
device-mapper-1.02.79-8.el6.x86_64
device-mapper-event-libs-1.02.79-8.el6.x86_64
device-mapper-event-1.02.79-8.el6.x86_64
device-mapper-multipath-libs-0.4.9-72.el6.x86_64
device-mapper-libs-1.02.79-8.el6.x86_64
[root@raclhr-12cR1-N2 Packages]#
rpm -ivh device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
Add the multipath modules to the kernel:
modprobe dm-multipath
modprobe dm-round-robin
Check that the modules are loaded:
[root@raclhr-12cR1-N1 Packages]# lsmod |grep multipath
dm_multipath           17724  1 dm_round_robin
dm_mod                 84209  16 dm_multipath,dm_mirror,dm_log
[root@raclhr-12cR1-N1 Packages]#
Set the multipathd service to start on boot:
[root@raclhr-12cR1-N1 Packages]# chkconfig --level 2345 multipathd on
[root@raclhr-12cR1-N1 Packages]# chkconfig --list|grep multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@raclhr-12cR1-N1 Packages]#
Start the multipathd service:
[root@raclhr-12cR1-N1 Packages]# service multipathd restart
ux_socket_connect: No such file or directory
Stopping multipathd daemon:                                [FAILED]
Starting multipathd daemon:                                [  OK  ]
[root@raclhr-12cR1-N1 Packages]#
1. Configure the multipath software by editing /etc/multipath.conf.
Note: /etc/multipath.conf does not exist by default; generate it with the following command:
/sbin/mpathconf --enable --find_multipaths y --with_module y --with_chkconfig y
[root@raclhr-12cR1-N1 ~]# multipath -ll
Jan 23 12:52:54 | /etc/multipath.conf does not exist, blacklisting all devices.
Jan 23 12:52:54 | A sample multipath.conf file is located at
Jan 23 12:52:54 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Jan 23 12:52:54 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf
[root@raclhr-12cR1-N1 ~]# /sbin/mpathconf --enable --find_multipaths y --with_module y --with_chkconfig y
[root@raclhr-12cR1-N1 ~]# ll /etc/multipath.conf
-rw------- 1 root root 2775 Jan 23 12:55 /etc/multipath.conf
[root@raclhr-12cR1-N1 ~]#
2. Look up the WWIDs of the LUNs the storage has presented to the server:
[root@raclhr-12cR1-N1 multipath]# multipath -v0
[root@raclhr-12cR1-N1 multipath]# more /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/14f504e46494c455232326c6c76442d4361634f2d4d4f4d41/
/14f504e46494c455242674c7079392d753750482d63734443/
/14f504e46494c455233384b7353432d52454b4c2d79506757/
/14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c/
[root@raclhr-12cR1-N1 multipath]#
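As an aside, these OpenFiler WWIDs are readable: after a leading designator digit, the rest is the hex encoding of the vendor string "OPNFILER" plus the LV's ScsiSN from /etc/ietd.conf. A quick decode (the printf trick assumes a bash-style printf that understands \xHH escapes):

```shell
wwid=14f504e46494c455232326c6c76442d4361634f2d4d4f4d41   # first LUN's WWID
hex=${wwid#1}                                            # strip the leading designator digit
decoded=$(printf "$(printf '%s' "$hex" | sed 's/../\\x&/g')")
echo "$decoded"    # prints OPNFILER22llvD-CacO-MOMA, i.e. vendor + ScsiSN of lv01
```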
Copy the contents of /etc/multipath/wwids and /etc/multipath/bindings from node 1 over to node 2:
[root@raclhr-12cR1-N2 ~]# multipath -v0
[root@raclhr-12cR1-N2 ~]# more /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/14f504e46494c455232326c6c76442d4361634f2d4d4f4d41/
/14f504e46494c455242674c7079392d753750482d63734443/
/14f504e46494c455233384b7353432d52454b4c2d79506757/
/14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c/
[root@raclhr-12cR1-N1 ~]# more /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpatha 14f504e46494c455232326c6c76442d4361634f2d4d4f4d41
mpathb 14f504e46494c455242674c7079392d753750482d63734443
mpathc 14f504e46494c455233384b7353432d52454b4c2d79506757
mpathd 14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c
[root@raclhr-12cR1-N1 ~]#
[root@raclhr-12cR1-N1 multipath]# multipath -ll
mpathd (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:3 sdk 8:160 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 4:0:0:3 sdm 8:192 active ready running
mpathc (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:2 sdj 8:144 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 4:0:0:2 sdl 8:176 active ready running
mpathb (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdi 8:128 active ready running
mpatha (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdg 8:96 active ready running
[root@raclhr-12cR1-N1 multipath]# fdisk -l | grep dev
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *           1          26      204800   83  Linux
/dev/sda2              26        1332    10485760   8e  Linux LVM
/dev/sda3            1332        2611    10279936   8e  Linux LVM
Disk /dev/sdb: 107.4 GB, 107374182400 bytes
/dev/sdb1               1        1306    10485760   8e  Linux LVM
/dev/sdb2            1306        2611    10485760   8e  Linux LVM
/dev/sdb3            2611        3917    10485760   8e  Linux LVM
/dev/sdb4            3917       13055    73399296    5  Extended
/dev/sdb5            3917        5222    10485760   8e  Linux LVM
/dev/sdb6            5223        6528    10485760   8e  Linux LVM
/dev/sdb7            6528        7834    10485760   8e  Linux LVM
/dev/sdb8            7834        9139    10485760   8e  Linux LVM
/dev/sdb9            9139       10445    10485760   8e  Linux LVM
/dev/sdb10          10445       11750    10485760   8e  Linux LVM
/dev/sdb11          11750       13055    10477568   8e  Linux LVM
Disk /dev/sdc: 6442 MB, 6442450944 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes
Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes
Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes
Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes
Disk /dev/sdf: 10.7 GB, 10737418240 bytes
Disk /dev/sdg: 10.7 GB, 10737418240 bytes
Disk /dev/sdh: 10.7 GB, 10737418240 bytes
Disk /dev/sdi: 10.7 GB, 10737418240 bytes
Disk /dev/sdj: 10.7 GB, 10737418240 bytes
Disk /dev/sdk: 10.7 GB, 10737418240 bytes
Disk /dev/sdl: 10.7 GB, 10737418240 bytes
Disk /dev/sdm: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/mpatha: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/mpathb: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/mpathc: 10.7 GB, 10737418240 bytes
Disk /dev/mapper/mpathd: 10.7 GB, 10737418240 bytes
[root@raclhr-12cR1-N1 multipath]#
for i in f g h i j k l m ; do
  echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --device=/dev/\$name\",RESULT==\"`scsi_id --whitelisted --device=/dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\""
done
[root@raclhr-12cR1-N1 multipath]# for i in f g h i j k l m ;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --device=/dev/\$name\",RESULT==\"`scsi_id --whitelisted --device=/dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name",RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41",NAME="asm-diskf",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name",RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41",NAME="asm-diskg",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name",RESULT=="14f504e46494c455242674c7079392d753750482d63734443",NAME="asm-diskh",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name",RESULT=="14f504e46494c455242674c7079392d753750482d63734443",NAME="asm-diski",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name",RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757",NAME="asm-diskj",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name",RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c",NAME="asm-diskk",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name",RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757",NAME="asm-diskl",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name",RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c",NAME="asm-diskm",OWNER="grid",GROUP="asmadmin",MODE="0660"
[root@raclhr-12cR1-N1 multipath]#
[root@raclhr-12cR1-N1 multipath]# more /etc/multipath.conf
defaults {
    find_multipaths yes
    user_friendly_names yes
}

blacklist {
    wwid 3600508b1001c5ae72efe1fea025cd2e5
    devnode "^hd[a-z]"
    devnode "^sd[a-e]"
    devnode "^sda"
}

multipaths {
    multipath {
        wwid 14f504e46494c455232326c6c76442d4361634f2d4d4f4d41
        alias VMLHRStorage000
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback manual
        rr_weight priorities
        no_path_retry 5
    }
    multipath {
        wwid 14f504e46494c455242674c7079392d753750482d63734443
        alias VMLHRStorage001
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback manual
        rr_weight priorities
        no_path_retry 5
    }
    multipath {
        wwid 14f504e46494c455233384b7353432d52454b4c2d79506757
        alias VMLHRStorage002
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback manual
        rr_weight priorities
        no_path_retry 5
    }
    multipath {
        wwid 14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c
        alias VMLHRStorage003
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback manual
        rr_weight priorities
        no_path_retry 5
    }
}

devices {
    device {
        vendor "VMWARE"
        product "VIRTUAL-DISK"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_checker readsector0
        path_selector "round-robin 0"
        hardware_handler "0"
        failback 15
        rr_weight priorities
        no_path_retry queue
    }
}
[root@raclhr-12cR1-N1 multipath]#
|
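With four `multipaths` stanzas it is easy to mismatch a wwid and its alias. As a quick sanity check — a sketch, assuming the stanza layout shown above (one `wwid` line followed by one `alias` line) — the pairs can be extracted with awk; on a real system the input would be `/etc/multipath.conf` itself:

```shell
# Sample fragment in the same layout as the multipaths{} section above
conf='multipaths {
    multipath {
        wwid 14f504e46494c455232326c6c76442d4361634f2d4d4f4d41
        alias VMLHRStorage000
    }
    multipath {
        wwid 14f504e46494c455242674c7079392d753750482d63734443
        alias VMLHRStorage001
    }
}'
# Remember the last wwid seen; print it when the alias line arrives
pairs=$(echo "$conf" | awk '$1=="wwid"{w=$2} $1=="alias"{print $2"="w}')
echo "$pairs"
```

Each output line is `alias=wwid`, which can be diffed against the aliases reported by `multipath -ll`.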
Activate the multipath configuration:
[root@raclhr-12cR1-N1 ~]# service multipathd restart
ok
Stopping multipathd daemon:                                [  OK  ]
Starting multipathd daemon:                                [  OK  ]
[root@raclhr-12cR1-N1 ~]# multipath -ll
VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:3 sdk 8:160 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 4:0:0:3 sdm 8:192 active ready running
VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:2 sdj 8:144 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 4:0:0:2 sdl 8:176 active ready running
VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdi 8:128 active ready running
VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdg 8:96 active ready running
[root@raclhr-12cR1-N1 ~]#
[root@raclhr-12cR1-N1 ~]# multipath -ll|grep LHR
VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK
VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK
VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK
VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK
[root@raclhr-12cR1-N1 ~]#
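A quick way to confirm that every LUN really has two live paths is to count the path lines in the `multipath -ll` output. A sketch, run here against a pasted two-line sample of the output above; on a real system the grep would read from `multipath -ll` directly:

```shell
# Two path-status lines in the format printed by multipath -ll above
ll='| `- 5:0:0:3 sdk 8:160 active ready running
  `- 4:0:0:3 sdm 8:192 active ready running'
# Each healthy path reports "active ready running"
ready=$(echo "$ll" | grep -c 'active ready running')
echo "live paths: $ready"
```

With the configuration above, each multipath device should report two such lines (one per iSCSI portal).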
Once the multipath configuration is active, the multipath logical disks appear under /dev/mapper:
[root@raclhr-12cR1-N1 ~]# cd /dev/mapper
[root@raclhr-12cR1-N1 mapper]# ll
total 0
crw-rw---- 1 root root 10, 58 Jan 23 12:49 control
lrwxrwxrwx 1 root root      7 Jan 23 12:49 vg_orasoft-lv_orasoft_soft -> ../dm-3
lrwxrwxrwx 1 root root      7 Jan 23 12:49 vg_orasoft-lv_orasoft_u01 -> ../dm-2
lrwxrwxrwx 1 root root      7 Jan 23 12:50 vg_rootlhr-Vol00 -> ../dm-1
lrwxrwxrwx 1 root root      7 Jan 23 12:50 vg_rootlhr-Vol01 -> ../dm-4
lrwxrwxrwx 1 root root      7 Jan 23 12:49 vg_rootlhr-Vol02 -> ../dm-0
lrwxrwxrwx 1 root root      7 Jan 23 12:50 vg_rootlhr-Vol03 -> ../dm-5
lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage000 -> ../dm-6
lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage001 -> ../dm-7
lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage002 -> ../dm-8
lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage003 -> ../dm-9
[root@raclhr-12cR1-N1 mapper]#
At this point the multipath configuration is complete.
Before RHEL 6.2, setting permissions on a multipath device only required adding uid, gid and mode entries (e.g. uid 1100, gid 1020) to its stanza in the multipath configuration, for example:
multipath {
    wwid  360050763008101d4e00000000000000a
    alias DATA03
    uid   501    #uid
    gid   501    #gid
}
From 6.2 onward, the uid, gid and mode parameters were removed from the multipath configuration file, and permissions must be set through udev instead. A template rule file named 12-dm-permissions.rules ships under /usr/share/doc/device-mapper-&lt;version&gt;/; copy it into the /etc/udev/rules.d directory to put it into effect.
[root@raclhr-12cR1-N1 rules.d]# ll /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules
-rw-r--r--. 1 root root 3186 Aug 13  2013 /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules
[root@raclhr-12cR1-N1 rules.d]# ll
total 24
-rw-r--r-- 1 root root  77 Jan 23 18:06 12-dm-permissions.rules
-rw-r--r-- 1 root root 190 Jan 23 15:40 55-usm.rules
-rw-r--r-- 1 root root 549 Jan 23 15:17 70-persistent-cd.rules
-rw-r--r-- 1 root root 585 Jan 23 15:09 70-persistent-net.rules
-rw-r--r-- 1 root root 633 Jan 23 15:46 99-oracle-asmdevices.rules
-rw-r--r-- 1 root root 916 Jan 23 15:16 99-oracleasm.rules
[root@raclhr-12cR1-N1 rules.d]# more /etc/udev/rules.d/12-dm-permissions.rules
ENV{DM_NAME}=="VMLHRStorage*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
[root@raclhr-12cR1-N1 rules.d]#
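udev's `==` comparison on `ENV{DM_NAME}` uses shell-style glob patterns, so the `VMLHRStorage*` match in the rule above can be sanity-checked with an ordinary case statement. A small sketch; the second test name is one of the LVM volumes listed under /dev/mapper earlier, which the rule should leave alone:

```shell
# Same glob semantics as udev's == operator on ENV{DM_NAME}
matches() { case "$1" in VMLHRStorage*) echo yes ;; *) echo no ;; esac ; }
a=$(matches VMLHRStorage003)    # a multipath alias from multipath.conf
b=$(matches vg_rootlhr-Vol00)   # an LVM volume under /dev/mapper
echo "$a $b"
```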
Copy /etc/udev/rules.d/12-dm-permissions.rules to node 2 as well.
The script is as follows:
for i in f g h i j k l m ;
do
echo "KERNEL==\"dm-*\", BUS==\"block\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\",RESULT==\"`scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\"" >> /etc/udev/rules.d/99-oracleasm.rules
done
Because each LUN is reachable over two paths, the generated WWIDs repeat, so the duplicate lines in /etc/udev/rules.d/99-oracleasm.rules must be removed.
Run the following on node 1:
[root@raclhr-12cR1-N1 rules.d]# for i in f g h i j k l m ;
> do
> echo "KERNEL==\"dm-*\", BUS==\"block\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\",RESULT==\"`scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\"" >> /etc/udev/rules.d/99-oracleasm.rules
> done
Open /etc/udev/rules.d/99-oracleasm.rules and keep only one line for each duplicated WWID.
[root@raclhr-12cR1-N1 ~]# cat /etc/udev/rules.d/99-oracleasm.rules
KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41",NAME="asm-diskf",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c455242674c7079392d753750482d63734443",NAME="asm-diskh",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757",NAME="asm-diskj",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c",NAME="asm-diskk",OWNER="grid",GROUP="asmadmin",MODE="0660"
[root@raclhr-12cR1-N1 ~]#
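Instead of pruning the duplicates by hand, the rules file can be deduplicated on its RESULT== (WWID) field with awk. A sketch using shortened hypothetical rule lines (`wwid-AAA` etc. stand in for the real 49-character WWIDs); on a real system the awk command would read /etc/udev/rules.d/99-oracleasm.rules:

```shell
# Hypothetical, shortened rule lines: two share the same WWID
rules='KERNEL=="dm-*", RESULT=="wwid-AAA", NAME="asm-diskf"
KERNEL=="dm-*", RESULT=="wwid-AAA", NAME="asm-diskg"
KERNEL=="dm-*", RESULT=="wwid-BBB", NAME="asm-diskh"'
# Extract the RESULT=="..." token from each line and keep only the first
# rule seen for each distinct WWID
deduped=$(echo "$rules" | awk 'match($0,/RESULT=="[^"]*"/){key=substr($0,RSTART,RLENGTH)} !seen[key]++')
echo "$deduped"
```

The asm-diskg line is dropped because its WWID already appeared on the asm-diskf line, matching the hand-edited result shown above.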
Copy the contents of /etc/udev/rules.d/99-oracleasm.rules to node 2, then restart udev.
[root@raclhr-12cR1-N1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@raclhr-12cR1-N1 ~]#
[root@raclhr-12cR1-N1 ~]# ll /dev/asm-*
brw-rw---- 1 grid asmadmin   8, 32 Jan 23 15:50 /dev/asm-diskc
brw-rw---- 1 grid asmadmin   8, 48 Jan 23 15:48 /dev/asm-diskd
brw-rw---- 1 grid asmadmin   8, 64 Jan 23 15:48 /dev/asm-diske
brw-rw---- 1 grid asmadmin 253,  7 Jan 23 15:46 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 253,  9 Jan 23 15:46 /dev/asm-diskh
brw-rw---- 1 grid asmadmin 253,  6 Jan 23 15:46 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 253,  8 Jan 23 15:46 /dev/asm-diskk
[root@raclhr-12cR1-N1 ~]#
[grid@raclhr-12cR1-N1 ~]$ $ORACLE_HOME/bin/kfod disks=all s=true ds=true
--------------------------------------------------------------------------------
 Disk          Size Header    Path                                     Disk Group   User     Group
================================================================================
   1:       6144 Mb MEMBER    /dev/asm-diskc                           OCR          grid     asmadmin
   2:      10240 Mb MEMBER    /dev/asm-diskd                           DATA         grid     asmadmin
   3:      10240 Mb MEMBER    /dev/asm-diske                           FRA          grid     asmadmin
   4:      10240 Mb CANDIDATE /dev/asm-diskf                           #            grid     asmadmin
   5:      10240 Mb CANDIDATE /dev/asm-diskh                           #            grid     asmadmin
   6:      10240 Mb CANDIDATE /dev/asm-diskj                           #            grid     asmadmin
   7:      10240 Mb CANDIDATE /dev/asm-diskk                           #            grid     asmadmin
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
     +ASM2 /u01/app/12.1.0/grid
     +ASM1 /u01/app/12.1.0/grid
[grid@raclhr-12cR1-N1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     10240     6487                0            6487              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576     10240    10144                0           10144              0             N  FRA/
MOUNTED  EXTERN  N         512   4096  1048576      6144     1672                0            1672              0             Y  OCR/
ASMCMD> lsdsk
Path
/dev/asm-diskc
/dev/asm-diskd
/dev/asm-diske
ASMCMD> lsdsk --candidate -p
Group_Num  Disk_Num      Incarn  Mount_Stat  Header_Stat  Mode_Stat  State   Path
        0         1           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskf
        0         3           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskh
        0         2           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskj
        0         0           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskk
ASMCMD>
Next, verify the candidate disks in ASM and create a new disk group on the multipath devices:
SQL> select path from v$asm_disk;
PATH
--------------------------------------------------------------------------------
/dev/asm-diskk
/dev/asm-diskf
/dev/asm-diskj
/dev/asm-diskh
/dev/asm-diske
/dev/asm-diskd
/dev/asm-diskc
7 rows selected.
SQL> CREATE DISKGROUP TESTMUL external redundancy DISK '/dev/asm-diskf','/dev/asm-diskh' ATTRIBUTE 'compatible.rdbms' = '12.1', 'compatible.asm' = '12.1';
Diskgroup created.
SQL>

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     10240     6487                0            6487              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576     10240    10144                0           10144              0             N  FRA/
MOUNTED  EXTERN  N         512   4096  1048576      6144     1672                0            1672              0             Y  OCR/
MOUNTED  EXTERN  N         512   4096  1048576     20480    20381                0           20381              0             N  TESTMUL/
ASMCMD>
[root@raclhr-12cR1-N1 ~]# crsctl stat res -t | grep -2 TESTMUL
               ONLINE  ONLINE       raclhr-12cr1-n1          STABLE
               ONLINE  ONLINE       raclhr-12cr1-n2          STABLE
ora.TESTMUL.dg
               ONLINE  ONLINE       raclhr-12cr1-n1          STABLE
               ONLINE  ONLINE       raclhr-12cr1-n2          STABLE
[root@raclhr-12cR1-N1 ~]#
[oracle@raclhr-12cR1-N1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 23 16:17:28 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> create tablespace TESTMUL datafile '+TESTMUL' size 10M;
Tablespace created.
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATA/LHRRAC/DATAFILE/system.258.933550527
+DATA/LHRRAC/DATAFILE/undotbs2.269.933551323
+DATA/LHRRAC/DATAFILE/sysaux.257.933550483
+DATA/LHRRAC/DATAFILE/undotbs1.260.933550575
+DATA/LHRRAC/DATAFILE/example.268.933550723
+DATA/LHRRAC/DATAFILE/users.259.933550573
+TESTMUL/LHRRAC/DATAFILE/testmul.256.934042679
7 rows selected.
SQL>
Now take one NIC (eth2) on the storage server down:
[root@OFLHR ~]# ip a
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff
    inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0
    inet6 fe80::20c:29ff:fe98:1acd/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.200/24 brd 192.168.2.255 scope global eth2
    inet6 fe80::20c:29ff:fe98:1ad7/64 scope link
       valid_lft forever preferred_lft forever
[root@OFLHR ~]# ifconfig eth2 down
[root@OFLHR ~]# ip a
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff
    inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0
    inet6 fe80::20c:29ff:fe98:1acd/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.200/24 brd 192.168.2.255 scope global eth2
[root@OFLHR ~]#
Check the logs on the RAC node:
[root@raclhr-12cR1-N1 ~]# tail -f /var/log/messages
Jan 23 16:20:51 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)
Jan 23 16:20:57 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)
Jan 23 16:21:03 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)
[root@raclhr-12cR1-N1 ~]# multipath -ll
VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-8 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 5:0:0:3 sdm 8:192 failed faulty running
  `- 4:0:0:3 sdl 8:176 active ready running
VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-9 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 5:0:0:2 sdj 8:144 failed faulty running
  `- 4:0:0:2 sdk 8:160 active ready running
VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 4:0:0:1 sdi 8:128 active ready running
  `- 5:0:0:1 sdh 8:112 failed faulty running
VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 4:0:0:0 sdf 8:80 active ready running
  `- 5:0:0:0 sdg 8:96 failed faulty running
[root@raclhr-12cR1-N1 ~]#
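After pulling a NIC, the surviving-path count can be checked the same way — a sketch run against a pasted two-line sample of the failed-over output above; on a live node the greps would read from `multipath -ll`:

```shell
# One failed path and one surviving path, as reported above
mll='|- 5:0:0:3 sdm 8:192 failed faulty running
`- 4:0:0:3 sdl 8:176 active ready running'
failed=$(echo "$mll" | grep -c 'failed faulty')
alive=$(echo "$mll" | grep -c 'active ready')
# I/O keeps flowing as long as at least one path per device stays alive
echo "failed=$failed alive=$alive"
```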
The tablespace is still accessible:
SQL> create table tt tablespace TESTMUL as select * from dual;
Table created.
SQL> select * from tt;
D
-
X
SQL>
Likewise, with eth2 brought back up and eth0 taken down, the tablespace remains accessible. After restarting the cluster and the storage, everything comes back normal.
I then rebuilt a fresh multipath environment to test the multipathing further.
The simplest test is to write to the disk with dd and watch each channel's traffic and state with iostat, to judge whether failover and load balancing behave correctly:
# dd if=/dev/zero of=/dev/mapper/mpath0
# iostat -k 2
[root@orcltest ~]# multipath -ll
VMLHRStorage003 (14f504e46494c4552674a61727a472d523449782d5336784e) dm-3 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 35:0:0:2 sdf 8:80  active ready running
  `- 36:0:0:2 sdg 8:96  active ready running
VMLHRStorage002 (14f504e46494c4552506a5a5954422d6f6f4e652d34423171) dm-2 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 35:0:0:3 sdh 8:112 active ready running
  `- 36:0:0:3 sdi 8:128 active ready running
VMLHRStorage001 (14f504e46494c4552324b583573332d774e5a622d696d7334) dm-1 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 35:0:0:1 sdd 8:48  active ready running
  `- 36:0:0:1 sde 8:64  active ready running
VMLHRStorage000 (14f504e46494c45523431576859532d643246412d5154564f) dm-0 OPNFILER,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 35:0:0:0 sdb 8:16  active ready running
  `- 36:0:0:0 sdc 8:32  active ready running
[root@orcltest ~]# dd if=/dev/zero of=/dev/mapper/VMLHRStorage001
Open another window and run iostat -k 2; you can see:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    5.23   20.78    0.00   73.99

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               9.00         0.00        92.00          0        184
scd0              0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0
sdc               0.00         0.00         0.00          0          0
sdd            1197.50      4704.00     10886.00       9408      21772
sde            1197.50      4708.00     10496.00       9416      20992
sdh               0.00         0.00         0.00          0          0
sdi               0.00         0.00         0.00          0          0
sdf               0.00         0.00         0.00          0          0
sdg               0.00         0.00         0.00          0          0
dm-0              0.00         0.00         0.00          0          0
dm-4              0.00         0.00         0.00          0          0
dm-10             0.00         0.00         0.00          0          0
dm-1           2395.00      9412.00     21382.00      18824      42764
dm-2              0.00         0.00         0.00          0          0
dm-3              0.00         0.00         0.00          0          0
dm-5              0.00         0.00         0.00          0          0
dm-6              0.00         0.00         0.00          0          0
dm-7              0.00         0.00         0.00          0          0
dm-8              0.00         0.00         0.00          0          0
dm-9              0.00         0.00         0.00          0          0
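The round-robin balancing is visible in the numbers: sdd and sde, the two paths behind dm-1 (VMLHRStorage001), carry nearly identical traffic, and dm-1's throughput is their sum. A sketch that filters the busy devices from a pasted four-row sample of the columns above (device, tps, kB_read/s, kB_wrtn/s); against a live system the awk would read from `iostat -k 2`:

```shell
# Device / tps / kB_read/s / kB_wrtn/s rows taken from the iostat output above
io='sdd 1197.50 4704.00 10886.00
sde 1197.50 4708.00 10496.00
sdf 0.00 0.00 0.00
dm-1 2395.00 9412.00 21382.00'
# Keep only devices with nonzero write throughput (column 4)
busy=$(echo "$io" | awk '$4+0 > 0 {print $1}')
echo "$busy"
```

If only one of the two sd devices showed traffic, the policy would be failover rather than round-robin load balancing.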
After multipath generates its mappings, several device nodes under /dev point to the same path:
/dev/mapper/mpathn
/dev/mpath/mpathn
/dev/dm-n
but their origins are quite different:
/dev/mapper/mpathn are the multipath devices virtualized by device-mapper and created during boot. These are the ones to use — for example when creating logical volumes.
/dev/mpath/mpathn are convenience links created by the udev device manager so that all multipath devices appear in one directory; they simply point to the dm-n devices, may not be available early in boot, and must not be used for mounting or for creating logical volumes or filesystems.
/dev/dm-n are device-mapper's internal nodes, intended only for the software's own use; never use them directly and never mount them.
In short, always use the device nodes under /dev/mapper. Such a device can then be partitioned with fdisk or turned into an LVM PV.
That concludes this test of using OpenFiler to emulate storage for RAC ASM shared disks over multipath. The lunar year 2016 is over — today is January 23, tomorrow January 24 — and 小麥苗 is heading home for Chinese New Year, O(∩_∩)O~. Best wishes to all friends and readers for good health, family happiness, prosperity and success in everything in the year ahead!
About Me
...............................................................................................................................
● Author: 小麥苗, focused solely on database technology — and even more on putting it to use
● This article is published simultaneously on itpub (http://blog.itpub.net/26736162), cnblogs (http://www.cnblogs.com/lhrbest) and the author's WeChat public account (xiaomaimiaolhr)
● itpub link: http://blog.itpub.net/26736162/viewspace-2132858/
● cnblogs link: http://www.cnblogs.com/lhrbest/p/6345157.html
● PDF version and cloud-drive link: http://blog.itpub.net/26736162/viewspace-1624453/
● QQ group: 230161599; WeChat group: message me privately
● To contact me, add QQ friend 642808185 and state your reason
● Written between 2017-01-22 08:00 and 2017-01-23 24:00
● The content comes from the author's study notes, partly compiled from the web; please forgive any infringement or inaccuracies
● All rights reserved. Feel free to share this article; please keep this attribution when reposting
...............................................................................................................................