
How to install Snappy for Hadoop and HBase

Published: 2021-08-25 16:13:02  Source: Yisu Cloud  Reads: 192  Author: chen  Category: Cloud Computing

This article explains how to install Snappy for Hadoop and HBase. Many people run into trouble at this step in day-to-day operations, so the walkthrough below collects the steps into one simple, working procedure. Follow along step by step.

1. Check whether the Snappy native library is installed.

Run: bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy
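The smoke test needs an input file at the path named in the command; a minimal setup (file contents are arbitrary) might look like:

```shell
# Create a small sample file for HBase's compression smoke test.
echo "snappy compression smoke test" > /tmp/test.txt

# Then, from the HBase install directory, run the test against it:
#   bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy
```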

If the output looks like this:

12/12/03 10:30:02 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:30:02 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" java.lang.RuntimeException: native snappy library not available
     at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:123)
     at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
     at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
     at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:264)
     at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:739)
     at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:127)
     at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:118)
     at org.apache.hadoop.hbase.io.hfile.HFileWriterV2$WriterFactoryV2.createWriter(HFileWriterV2.java:101)
     at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:394)
     at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:108)

then the Snappy native library is not installed.

2. Download a snappy-*.tar.gz tarball (any version compatible with your HBase will do; this article uses snappy-1.1.1.tar.gz) and extract it.

3. Enter the snappy directory and build it with two commands:

      ./configure

       make

4. When make finishes it produces a libsnappy.so file (this is the library we need). It normally lands at ./.libs/libsnappy.so under the build directory, but it sometimes ends up elsewhere; as long as make succeeded, a search from the source root will find it.
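If the library is not at the expected path, a filesystem search from the source root will locate it (SNAPPY_SRC is a placeholder for wherever you unpacked the tarball):

```shell
# Search the snappy source tree for the built shared library.
# SNAPPY_SRC is an assumption; point it at your extracted snappy directory.
SNAPPY_SRC=${SNAPPY_SRC:-.}
find "$SNAPPY_SRC" -name 'libsnappy.so*' 2>/dev/null
```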

5. Copy the generated libsnappy.so into HBase's lib/native/Linux-ARCH directory, where ARCH is amd64 or i386-32. Note that on amd64 the HBase install may not have this directory, in which case it must be created by hand:

     mkdir /opt/hbase-0.98.6.1/lib/native/Linux-amd64-64
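The copy step can be sketched as a small script; HBASE_HOME is the article's example path (an assumption: adjust to your install), and the directory names follow the Linux-ARCH convention described above:

```shell
# Pick HBase's native-library directory name from the machine type.
# HBASE_HOME is the article's example path; adjust to your installation.
HBASE_HOME=${HBASE_HOME:-/opt/hbase-0.98.6.1}
case "$(uname -m)" in
  x86_64) ARCH_DIR=Linux-amd64-64 ;;
  *)      ARCH_DIR=Linux-i386-32 ;;
esac
NATIVE_DIR="$HBASE_HOME/lib/native/$ARCH_DIR"
echo "copying into: $NATIVE_DIR"

# Create the directory if missing, then copy the freshly built library
# (guarded so the script is safe to run outside the snappy build tree).
if [ -f .libs/libsnappy.so ]; then
  mkdir -p "$NATIVE_DIR"          # may require sudo
  cp .libs/libsnappy.so* "$NATIVE_DIR/"
fi
```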

6. If you are still unsure where HBase looks for its native libraries, raise the log level in the log4j configuration and debug from the extra output.
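For example, raising the log level for Hadoop's native-code loader in conf/log4j.properties makes the library search visible in the logs (the logger name is the standard NativeCodeLoader class; treat the exact line as a sketch for your log4j setup):

```properties
# conf/log4j.properties -- debug native library loading
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=DEBUG
```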

7. Re-run the command from step 1. The output should now read:

12/12/03 10:34:35 INFO util.ChecksumType: Checksum can use java.util.zip.CRC32
12/12/03 10:34:35 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32C not available. 
12/12/03 10:34:35 DEBUG util.FSUtils: Creating file:file:/tmp/test.txt with permission:rwxrwxrwx
12/12/03 10:34:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/12/03 10:34:35 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:34:35 WARN snappy.LoadSnappy: Snappy native library is available
12/12/03 10:34:35 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" java.lang.RuntimeException: native snappy library not available
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:123)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
    at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:264)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:739)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:127)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:118)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2$WriterFactoryV2.createWriter(HFileWriterV2.java:101)
    at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:394)
    at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:108)
    at org.apache.hadoop.hbase.util.CompressionTest.main(CompressionTest.java:138)

8. As the output shows, Snappy can now be found ("available") but is still not loaded ("not loaded"). To get it loaded, Hadoop's own native library must also be copied into the same directory as libsnappy.so. Hadoop's native library lives at:

      hadoop-1.2.1/lib/native/Linux-ARCH/libhadoop.so;

If it is not at that path, download the tar.gz for your Hadoop version from https://archive.apache.org/dist/hadoop/core/ and extract it; the file is inside.
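A guarded copy of the Hadoop native library might look like this (HADOOP_HOME, HBASE_NATIVE, and the amd64 directory name are assumptions based on the article's example paths):

```shell
# Copy libhadoop.so next to libsnappy.so so both native libs load together.
# Both paths below are the article's example layouts; adjust to your installs.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-1.2.1}
HBASE_NATIVE=${HBASE_NATIVE:-/opt/hbase-0.98.6.1/lib/native/Linux-amd64-64}
SRC="$HADOOP_HOME/lib/native/Linux-amd64-64/libhadoop.so"

if [ -f "$SRC" ]; then
  cp "$SRC" "$HBASE_NATIVE/"
else
  echo "libhadoop.so not found at $SRC -- fetch your Hadoop version's tarball"
fi
```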

9. Run the test command from step 1 once more. This time you get:

12/12/03 10:37:48 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32 not available.
12/12/03 10:37:48 INFO util.ChecksumType: Checksum can use java.util.zip.CRC32
12/12/03 10:37:48 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32C not available. 
12/12/03 10:37:48 DEBUG util.FSUtils: Creating file:file:/tmp/test.txt with permission:rwxrwxrwx
12/12/03 10:37:48 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/12/03 10:37:48 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:37:48 WARN snappy.LoadSnappy: Snappy native library is available
12/12/03 10:37:48 INFO snappy.LoadSnappy: Snappy native library loaded
12/12/03 10:37:48 INFO compress.CodecPool: Got brand-new compressor
12/12/03 10:37:48 DEBUG hfile.HFileWriterV2: Initialized with CacheConfig:disabled
12/12/03 10:37:49 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:37:49 INFO compress.CodecPool: Got brand-new decompressor
SUCCESS

SUCCESS means the installation worked: the Snappy library is usable, and you are done.

That concludes this walkthrough of installing Snappy for Hadoop and HBase; hopefully it clears up the common points of confusion. Pairing the theory with hands-on practice is the best way to make it stick, so give it a try.
