This article explains in detail how to use the Java API to operate HDFS; it should be a useful reference, and interested readers are encouraged to read it through to the end!
On Windows, you need to set up a local Hadoop environment first.
macOS is Unix-based under the hood, so no extra configuration is needed.
==Reference document: 《Windows&Mac本地開發環境配置》 (Windows & Mac local development environment setup)==
Link: https://pan.baidu.com/s/1tFJSlRxn18YELUUAUkXXQA
Extraction code: g9ka
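One thing worth knowing on Windows: the Hadoop client locates winutils.exe through HADOOP_HOME or the hadoop.home.dir system property, and fails with an "unset" error when neither is configured. As a minimal sketch of a workaround (the path C:\hadoop is an assumption; point it at wherever the reference document above has you unpack the Windows binaries), set the property before any Hadoop classes are used:

    // Hypothetical path: the directory whose bin\ folder contains winutils.exe
    System.setProperty("hadoop.home.dir", "C:\\hadoop");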
Add the following dependencies and build plugins to the project's pom.xml:

<properties>
    <hadoop.version>3.1.4</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/junit/junit -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.0</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <encoding>UTF-8</encoding>
                <!-- <verbose>true</verbose> -->
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <minimizeJar>true</minimizeJar>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
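With this build section in place, running mvn package compiles the sources at the Java 8 level and, through the shade plugin bound to the package phase, bundles a minimized set of dependencies into a single runnable jar.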
The test methods below share the following imports:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

// Simplified version: connect through fs.defaultFS
@Test
public void mkDirOnHDFS() throws IOException {
    // Configuration settings
    Configuration configuration = new Configuration();
    // Point the client at the HDFS cluster
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    // Obtain the file system
    FileSystem fileSystem = FileSystem.get(configuration);
    // Create the directory; mkdirs has "mkdir -p" semantics, so an
    // already-existing directory is not an error (it still returns true)
    boolean mkdirs = fileSystem.mkdirs(new Path("/kaikeba/dir1"));
    // Release resources
    fileSystem.close();
}
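Because of those "mkdir -p" semantics, the return value alone does not tell you whether the directory was newly created. Here is a minimal variant of the test above that checks exists() first, and uses try-with-resources (FileSystem implements Closeable) so the connection is released even if an exception is thrown:

@Test
public void mkDirOnHDFSWithCheck() throws IOException {
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    // try-with-resources closes the FileSystem automatically
    try (FileSystem fileSystem = FileSystem.get(configuration)) {
        Path dir = new Path("/kaikeba/dir1");
        if (fileSystem.exists(dir)) {
            System.out.println("Directory already exists: " + dir);
        } else {
            System.out.println("Created: " + fileSystem.mkdirs(dir));
        }
    }
}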
// Specify the user the directory is created as (that user becomes its owner)
@Test
public void mkDirOnHDFS2() throws IOException, URISyntaxException, InterruptedException {
    // Configuration settings
    Configuration configuration = new Configuration();
    // Obtain the file system, acting as the remote user "test"
    FileSystem fileSystem = FileSystem.get(new URI("hdfs://node01:8020"), configuration, "test");
    // Create the directory
    boolean mkdirs = fileSystem.mkdirs(new Path("/kaikeba/dir2"));
    // Release resources
    fileSystem.close();
}
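Passing the user name matters because, on a cluster without Kerberos, HDFS permission checks otherwise fall back to your local OS account, which usually does not exist on the cluster. A sketch of an alternative (assuming a non-secure cluster, and that this runs before anything else in the JVM touches Hadoop's UserGroupInformation, which caches the login user) is to set HADOOP_USER_NAME instead:

@Test
public void mkDirAsNamedUser() throws IOException {
    // HADOOP_USER_NAME is honored by UserGroupInformation on insecure clusters;
    // it must be set before the first FileSystem/UGI call in this JVM
    System.setProperty("HADOOP_USER_NAME", "test");
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    try (FileSystem fileSystem = FileSystem.get(configuration)) {
        System.out.println(fileSystem.mkdirs(new Path("/kaikeba/dir2")));
    }
}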
// Specify the directory's permissions at creation time
@Test
public void mkDirOnHDFS3() throws IOException {
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    FileSystem fileSystem = FileSystem.get(configuration);
    // rwxr--r-- (744): owner gets ALL, group and others get READ
    FsPermission fsPermission = new FsPermission(FsAction.ALL, FsAction.READ, FsAction.READ);
    boolean mkdirs = fileSystem.mkdirs(new Path("hdfs://node01:8020/kaikeba/dir3"), fsPermission);
    if (mkdirs) {
        System.out.println("Directory created successfully");
    }
    fileSystem.close();
}
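To confirm which permissions actually landed on the directory, you can read them back with getFileStatus(); note that the effective mode can still be narrowed by the cluster's fs.permissions.umask-mode. A small fragment to drop in before fileSystem.close() in the test above (it also needs import org.apache.hadoop.fs.FileStatus):

FileStatus status = fileSystem.getFileStatus(new Path("/kaikeba/dir3"));
// FsPermission.toString() prints the symbolic form, e.g. rwxr--r--
System.out.println(status.getPermission());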
Note: the programs above will only run correctly if the Hadoop cluster has been set up according to the earlier environment requirements, the relevant hosts entries have been configured, and Hadoop has started successfully.
That is all of the content of "How to use the Java API to operate HDFS". Thank you for reading! I hope what was shared here helps; for more related knowledge, follow the 億速云 industry news channel!