This article shows how to build a Spark project with sbt. The walkthrough is short and meant to be easy to follow; by the end you should have a working build and a runnable example.
Build the sbt project with IntelliJ IDEA, targeting Scala 2.10.4. sbt expects the standard source layout sketched below; after that comes the full build.sbt.
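A minimal project tree for this build (the name gstorm comes from the build file; everything under src/ follows sbt's fixed conventions):

gstorm/
├── build.sbt
└── src/
    └── main/
        └── scala/
            └── WordCount.scala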
build.sbt

name := "gstorm"

version := "1.0"

// Older Scala version used for compilation
scalaVersion := "2.10.4"

val overrideScalaVersion  = "2.11.8"
val sparkVersion          = "2.0.0"
val sparkXMLVersion       = "0.3.3"
val sparkCsvVersion       = "1.4.0"
val sparkElasticVersion   = "2.3.4"
val sscKafkaVersion       = "2.0.1"
val sparkMongoVersion     = "1.0.0"
val sparkCassandraVersion = "1.6.0"

// Override the Scala version used for dependency resolution to 2.11.8
ivyScala := ivyScala.value map { _.copy(overrideScalaVersion = true) }

resolvers ++= Seq(
  "All Spark Repository -> bintray-spark-packages" at "https://dl.bintray.com/spark-packages/maven/"
)

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"      % sparkVersion exclude("jline", "2.12"),
  "org.apache.spark" %% "spark-sql"       % sparkVersion excludeAll(ExclusionRule(organization = "jline"), ExclusionRule("name", "2.12")),
  "org.apache.spark" %% "spark-hive"      % sparkVersion,
  "org.apache.spark" %% "spark-yarn"      % sparkVersion,
  "com.databricks"   %% "spark-xml"       % sparkXMLVersion,
  "com.databricks"   %% "spark-csv"       % sparkCsvVersion,
  "org.apache.spark" %% "spark-graphx"    % sparkVersion,
  "org.apache.spark" %% "spark-catalyst"  % sparkVersion,
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  // "com.101tec" % "zkclient" % "0.9",
  "org.elasticsearch" %% "elasticsearch-spark" % sparkElasticVersion,
  // "org.apache.spark" %% "spark-streaming-kafka-0-10_2.11" % sscKafkaVersion,
  "org.mongodb.spark" % "mongo-spark-connector_2.11" % sparkMongoVersion,
  "com.stratio.datasource" % "spark-mongodb_2.10" % "0.11.1",
  "dibbhatt" % "kafka-spark-consumer" % "1.0.8",
  "net.liftweb" %% "lift-webkit" % "2.6.2"
)
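One caveat worth flagging: the ivyScala override is an sbt 0.13-era mechanism, and mixing a 2.10.4 scalaVersion with 2.11-only artifacts (such as mongo-spark-connector_2.11) is fragile. On newer sbt releases the ivyScala key no longer exists; the simpler approach is to pin a single Scala version and let %% select matching artifacts. A minimal sketch, not part of the original build:

// Pin one Scala version; %% then consistently resolves _2.11 artifacts
scalaVersion := "2.11.8"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"  // resolves spark-core_2.11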
WordCount.scala
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("Spark SQL Example")
      .master("local[2]")
      .config("spark.sql.codegen.WordCount", "true")
      .getOrCreate()

    val sc = spark.sparkContext

    // Read the input file from HDFS as an RDD of lines
    val textFile = sc.textFile("hdfs://hadoop:9000/words.txt")

    // Classic word count: split each line into words, pair each word
    // with a count of 1, then sum the counts per word
    val wordCounts = textFile
      .flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey((a, b) => a + b)

    // Bring the results back to the driver and print them
    wordCounts.collect.foreach(println)
  }
}
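Since the build already pulls in spark-sql, the same job can also be expressed with the Dataset API rather than raw RDDs. A minimal sketch, assuming the same (example) HDFS path as above:

import org.apache.spark.sql.SparkSession

object WordCountDS {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("Word Count (Dataset API)")
      .master("local[2]")
      .getOrCreate()
    import spark.implicits._ // encoders needed by flatMap/groupByKey

    // Read each line of the file as a Dataset[String]
    val lines = spark.read.textFile("hdfs://hadoop:9000/words.txt")

    // Split into words, group by the word itself, count per group
    val counts = lines
      .flatMap(_.split(" "))
      .groupByKey(identity)
      .count()

    counts.show()
    spark.stop()
  }
}

Either version runs directly from IntelliJ or via sbt run, because the master is hard-coded to local[2]; for a real cluster, drop the .master(...) call and launch the packaged jar with spark-submit.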
That covers building a Spark project with sbt. If you found this useful and want more like it, follow the 億速云 industry news channel.