What bugs might you hit running a Hadoop 2.7.0 cluster, and how do you fix them? Many newcomers aren't clear on this, so this post walks through one such problem in detail. If you're facing something similar, read on; hopefully it helps.
The environment is a Hadoop 2.7.0 cluster, with Sqoop 1.4.6 used to import data from MySQL into Hive.
The import reported errors during execution (log below). Querying Hive afterwards showed the data had in fact arrived, but with so many MySQL tables a table-by-table comparison was impractical, so to be sure the sync succeeded the job needed to be re-run cleanly, without errors.
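For context, a sketch of roughly what the Sqoop invocation looked like. The connection string, credentials, and table name here are assumptions reconstructed from the log (the generated `QueryResult.java`, the `/tmp/wfpuser_t0301` target directory, and `number of splits:1`), not the exact command that was run:

```shell
# Hypothetical sqoop import command; host, database, user, and table
# names are placeholders, not taken from the original environment.
cat <<'EOF' > /tmp/sqoop_import.sh
sqoop import \
  --connect jdbc:mysql://mysql-host:3306/appdb \
  --username etl -P \
  --query 'SELECT * FROM wfpuser_t0301 WHERE $CONDITIONS' \
  --target-dir /tmp/wfpuser_t0301 \
  --hive-import --hive-table wfpuser_t0301 \
  -m 1
EOF
cat /tmp/sqoop_import.sh
```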
```
15/09/28 10:22:01 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /data/hadoop/share/hadoop/mapreduce
Note: /tmp/sqoop-hadoop/compile/60bb7ee51d4794512d28b8efc4029fbc/QueryResult.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/09/28 10:22:06 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/60bb7ee51d4794512d28b8efc4029fbc/QueryResult.jar
15/09/28 10:22:09 INFO tool.ImportTool: Destination directory /tmp/wfpuser_t0301 is not present, hence not deleting.
15/09/28 10:22:09 INFO mapreduce.ImportJobBase: Beginning query import.
15/09/28 10:22:09 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
15/09/28 10:22:09 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/09/28 10:22:09 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/09/28 10:22:09 INFO client.RMProxy: Connecting to ResourceManager at zhebuduan-bd-3/192.168.1.113:8032
15/09/28 10:22:15 INFO db.DBInputFormat: Using read commited transaction isolation
15/09/28 10:22:15 INFO mapreduce.JobSubmitter: number of splits:1
15/09/28 10:22:16 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1443364253801_0310
15/09/28 10:22:17 INFO impl.YarnClientImpl: Submitted application application_1443364253801_0310
15/09/28 10:22:18 INFO mapreduce.Job: The url to track the job: http://zhebuduan-bd-3:8088/proxy/application_1443364253801_0310/
15/09/28 10:22:18 INFO mapreduce.Job: Running job: job_1443364253801_0310
15/09/28 10:22:31 INFO mapreduce.Job: Job job_1443364253801_0310 running in uber mode : false
15/09/28 10:22:31 INFO mapreduce.Job:  map 0% reduce 0%
15/09/28 10:22:34 INFO mapreduce.Job: Task Id : attempt_1443364253801_0310_m_000000_0, Status : FAILED
java.io.IOException: Rename cannot overwrite non empty destination directory /data/hadoop/data/tmp/nm-local-dir/usercache/hadoop/filecache/60
	at org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:735)
	at org.apache.hadoop.fs.FilterFs.renameInternal(FilterFs.java:236)
	at org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:678)
	at org.apache.hadoop.fs.FileContext.rename(FileContext.java:958)
	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:366)
	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
15/09/28 10:22:38 INFO mapreduce.Job: Task Id : attempt_1443364253801_0310_m_000000_1, Status : FAILED
java.io.IOException: Rename cannot overwrite non empty destination directory /data/hadoop/data/tmp/nm-local-dir/usercache/hadoop/filecache/62
	(same stack trace as above)
15/09/28 10:22:42 INFO mapreduce.Job:  map 100% reduce 0%
15/09/28 10:22:42 INFO mapreduce.Job: Task Id : attempt_1443364253801_0310_m_000000_2, Status : FAILED
java.io.IOException: Rename cannot overwrite non empty destination directory /data/hadoop/data/tmp/nm-local-dir/usercache/hadoop/filecache/64
	(same stack trace as above)
15/09/28 10:22:43 INFO mapreduce.Job:  map 0% reduce 0%
15/09/28 10:23:00 INFO mapreduce.Job:  map 100% reduce 0%
15/09/28 10:23:00 INFO mapreduce.Job: Job job_1443364253801_0310 completed successfully
15/09/28 10:23:00 INFO mapreduce.Job: Counters: 31
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=140349
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=87
		HDFS: Number of bytes written=3712573
		HDFS: Number of read operations=4
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Failed map tasks=3
		Launched map tasks=4
		Other local map tasks=4
		Total time spent by all maps in occupied slots (ms)=20017
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=20017
		Total vcore-seconds taken by all map tasks=20017
		Total megabyte-seconds taken by all map tasks=20497408
	Map-Reduce Framework
		Map input records=12661
		Map output records=12661
		Input split bytes=87
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=177
		CPU time spent (ms)=8810
		Physical memory (bytes) snapshot=175165440
		Virtual memory (bytes) snapshot=880988160
		Total committed heap usage (bytes)=197132288
	File Input Format Counters
		Bytes Read=0
	File Output Format Counters
		Bytes Written=3712573
```
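The repeated failure above boils down to a basic filesystem rule: `rename(2)` refuses to replace a non-empty directory, which is exactly what YARN's resource localizer (`FSDownload`) trips over when a stale cache directory is left behind. A minimal reproduction of that rule on scratch paths (not the real NodeManager cache):

```shell
# Demonstrate that renaming one directory onto a non-empty one fails,
# mirroring the "Rename cannot overwrite non empty destination directory"
# error from the log. Uses throwaway temp paths only.
demo=$(mktemp -d)
mkdir -p "$demo/src" "$demo/dst"
touch "$demo/dst/stale-entry"          # destination already has content
if mv -T "$demo/src" "$demo/dst" 2>/dev/null; then
  result="overwrote"
else
  result="rename refused: destination not empty"
fi
echo "$result"
rm -rf "$demo"
```

(`mv -T` forces `dst` to be treated as the rename target rather than a parent directory, which is what makes the underlying `rename(2)` call fail with `ENOTEMPTY`.)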
After searching online for the cause, I eventually found a fix: delete the cached files under /data/hadoop/data/tmp/nm-local-dir/usercache/hadoop/filecache. I went straight into that directory and ran rm -rf *. I had wanted to back it up first, but tar-ing it up never finished, so I just deleted it outright. Note that this must be done while the cluster is stopped. After starting the cluster again, the operation no longer reported errors.
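The cleanup step above can be sketched as a small function, parameterized so it can be rehearsed on a scratch directory first. On the real cluster the root would be `/data/hadoop/data/tmp/nm-local-dir/usercache/hadoop`, and the cluster (at minimum the NodeManagers) must be stopped before running it; the function name here is made up for illustration:

```shell
# Remove stale localized resources from a NodeManager usercache root.
# Run ONLY while the cluster is stopped, per the procedure above.
clean_filecache() {
  local usercache_root="$1"
  [ -d "$usercache_root/filecache" ] || return 0
  rm -rf "$usercache_root/filecache"/*   # drop cached entries, keep the dir
}

# Rehearsal on a throwaway layout that mimics the cache structure:
root=$(mktemp -d)
mkdir -p "$root/filecache/60" "$root/filecache/62"
touch "$root/filecache/60/job.jar"
clean_filecache "$root"
ls -A "$root/filecache" | wc -l   # 0: entries removed, directory kept
```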
One issue remained, though: running hadoop dfsadmin -report showed the datanode status as
```
Decommission Status : Normal
Configured Capacity: 1055816155136 (983.31 GB)
DFS Used: 267768670295 (249.38 GB)
Non DFS Used: 59758983081 (55.65 GB)
DFS Remaining: 728288501760 (678.27 GB)
DFS Used%: 25.36%
DFS Remaining%: 68.98%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 10
Last contact: Mon Sep 28 15:43:20 CST 2015
```
Cache Used and Cache Remaining are both 0, and I don't know how to sort that out. It doesn't seem to cause any actual problems so far, but it's unsettling to look at. If anyone knows the answer, please let me know.
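One plausible (unverified) reading of those numbers: these fields describe HDFS centralized cache, and a Configured Cache Capacity of 0 B simply means no cache memory is configured on the datanode, with the "100.00% used" figure being how a percent-used metric is conventionally rendered when the denominator is zero. A sketch of that convention, as an assumption about the reporting logic rather than a quote of the HDFS source:

```shell
# Hypothetical percent-used helper illustrating how a zero capacity can
# render as "100.00%" used (assumption; not the actual HDFS code).
percent_used() {
  local used=$1 capacity=$2
  if [ "$capacity" -le 0 ]; then
    echo "100.00"   # nothing configured: conventionally reported as fully used
  else
    awk -v u="$used" -v c="$capacity" 'BEGIN { printf "%.2f\n", 100 * u / c }'
  fi
}

percent_used 0 0                          # matches Cache Used%: 100.00%
percent_used 267768670295 1055816155136   # matches DFS Used%: 25.36%
```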
I later found this bug tracked on the official site; it is fixed in release 2.7.1, so the cluster was upgraded.