In this installment:
1. Demystifying Kafka
Background:
The receiver-less (direct) approach is being used more and more in production: it offers stronger control over consumption and better semantic consistency. It is also the natural way to work with the data source, because the source is accessed through a wrapper, and that wrapper is an RDD.
This is why Spark Streaming introduces a custom RDD: KafkaRDD.
Source code analysis:
1. KafkaRDD source
private[kafka]
class KafkaRDD[
  K: ClassTag,
  V: ClassTag,
  U <: Decoder[_]: ClassTag,
  T <: Decoder[_]: ClassTag,
  R: ClassTag] private[spark] (
    sc: SparkContext,
    kafkaParams: Map[String, String],
    val offsetRanges: Array[OffsetRange],   // specifies the range of data to read
    leaders: Map[TopicAndPartition, (String, Int)],
    messageHandler: MessageAndMetadata[K, V] => R
  ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {

  override def getPartitions: Array[Partition] = {
    offsetRanges.zipWithIndex.map { case (o, i) =>
      val (host, port) = leaders(TopicAndPartition(o.topic, o.partition))
      new KafkaRDDPartition(i, o.topic, o.partition, o.fromOffset, o.untilOffset, host, port)
    }.toArray
  }
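Because a KafkaRDD is defined entirely by its offset ranges (one RDD partition per range), the same mechanism is also exposed for batch jobs through KafkaUtils.createRDD. The following is a minimal sketch, not taken from the article: the broker address localhost:9092, the topic name "test" and the offset range 0..100 are placeholder assumptions.

import kafka.serializer.StringDecoder
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka.{KafkaUtils, OffsetRange}

object KafkaRDDExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("KafkaRDDExample").setMaster("local[2]"))

    // Broker address and topic are placeholders for this sketch.
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")

    // Read offsets 0 until 100 of partition 0 of topic "test": one RDD partition per range.
    val offsetRanges = Array(OffsetRange.create("test", 0, 0L, 100L))

    val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
      sc, kafkaParams, offsetRanges)

    // Each element is a (key, message) pair; print the first few payloads.
    rdd.map(_._2).take(10).foreach(println)
    sc.stop()
  }
}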
2. HasOffsetRanges
/**
 * Represents any object that has a collection of [[OffsetRange]]s. This can be used to access the
 * offset ranges in RDDs generated by the direct Kafka DStream (see
 * [[KafkaUtils.createDirectStream()]]).
 * {{{
 *   KafkaUtils.createDirectStream(...).foreachRDD { rdd =>
 *      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
 *      ...
 *   }
 * }}}
 */
trait HasOffsetRanges {
  def offsetRanges: Array[OffsetRange]
}
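The Scaladoc above already hints at the usage pattern: cast the RDD handed to foreachRDD back to HasOffsetRanges to see exactly which offsets the batch covered. A minimal sketch of that pattern, assuming a (String, String) direct stream; printing is a stand-in for persisting the offsets somewhere durable.

import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka.{HasOffsetRanges, OffsetRange}

object OffsetTracking {
  // Hypothetical helper: print each batch's offset ranges; a real job would persist them.
  def trackOffsets(stream: InputDStream[(String, String)]): Unit = {
    stream.foreachRDD { rdd =>
      // The cast only succeeds on the RDD handed over by the direct stream itself,
      // before any transformation replaces the underlying KafkaRDD.
      val ranges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      ranges.foreach { o =>
        println(s"${o.topic} ${o.partition}: ${o.fromOffset} -> ${o.untilOffset}")
      }
    }
  }
}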
3. compute in KafkaRDD
override def compute(thePart: Partition, context: TaskContext): Iterator[R] = {
  val part = thePart.asInstanceOf[KafkaRDDPartition]
  assert(part.fromOffset <= part.untilOffset, errBeginAfterEnd(part))
  if (part.fromOffset == part.untilOffset) {
    log.info(s"Beginning offset ${part.fromOffset} is the same as ending offset " +
      s"skipping ${part.topic} ${part.partition}")
    Iterator.empty
  } else {
    new KafkaRDDIterator(part, context)
  }
}
Spark Streaming normally reads Kafka data through KafkaUtils.createDirectStream:
def createDirectStream[
  K: ClassTag,
  V: ClassTag,
  KD <: Decoder[K]: ClassTag,
  VD <: Decoder[V]: ClassTag] (
    ssc: StreamingContext,
    kafkaParams: Map[String, String],
    topics: Set[String]
  ): InputDStream[(K, V)] = {
  val messageHandler = (mmd: MessageAndMetadata[K, V]) => (mmd.key, mmd.message)
  val kc = new KafkaCluster(kafkaParams)
  val fromOffsets = getFromOffsets(kc, kafkaParams, topics)
  new DirectKafkaInputDStream[K, V, KD, VD, (K, V)](
    ssc, kafkaParams, fromOffsets, messageHandler)
}
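For reference, here is a minimal end-to-end sketch of calling this API from an application, assuming the spark-streaming-kafka (0.8 direct API) artifact is on the classpath; the broker address and topic name are placeholders.

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DirectKafkaWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DirectKafkaWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Broker address and topic are placeholders for this sketch.
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val topics = Set("test")

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // Each element is (key, message); count the words in the message payloads.
    stream.map(_._2)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}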
4. getFromOffsets determines the starting offset (fromOffset) for each topic partition
private[kafka] def getFromOffsets(
    kc: KafkaCluster,
    kafkaParams: Map[String, String],
    topics: Set[String]
  ): Map[TopicAndPartition, Long] = {
  val reset = kafkaParams.get("auto.offset.reset").map(_.toLowerCase)
  val result = for {
    topicPartitions <- kc.getPartitions(topics).right
    leaderOffsets <- (if (reset == Some("smallest")) {
      kc.getEarliestLeaderOffsets(topicPartitions)
    } else {
      kc.getLatestLeaderOffsets(topicPartitions)
    }).right
  } yield {
    leaderOffsets.map { case (tp, lo) =>
      (tp, lo.offset)
    }
  }
  KafkaCluster.checkErrors(result)
}
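Note that when no explicit offsets are supplied, this method only consults the auto.offset.reset entry of kafkaParams. A small illustrative snippet (the broker address is a placeholder):

// With no offsets supplied, the starting point is controlled by "auto.offset.reset":
//   "smallest"                        -> start from the earliest offset on each leader
//   anything else (default "largest") -> start from the latest offset
val kafkaParams = Map(
  "metadata.broker.list" -> "localhost:9092",   // placeholder broker address
  "auto.offset.reset"    -> "smallest"
)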
createDirectStream actually returns a DirectKafkaInputDStream; its compute method produces a KafkaRDD for every batch interval:
override def compute(validTime: Time): Option[KafkaRDD[K, V, U, T, R]] = {
  val untilOffsets = clamp(latestLeaderOffsets(maxRetries))
  val rdd = KafkaRDD[K, V, U, T, R](
    context.sparkContext, kafkaParams, currentOffsets, untilOffsets, messageHandler)

  // Report the record number and metadata of this batch interval to InputInfoTracker.
  val offsetRanges = currentOffsets.map { case (tp, fo) =>
    val uo = untilOffsets(tp)
    OffsetRange(tp.topic, tp.partition, fo, uo.offset)
  }
  val description = offsetRanges.filter { offsetRange =>
    // Don't display empty ranges.
    offsetRange.fromOffset != offsetRange.untilOffset
  }.map { offsetRange =>
    s"topic: ${offsetRange.topic}\tpartition: ${offsetRange.partition}\t" +
      s"offsets: ${offsetRange.fromOffset} to ${offsetRange.untilOffset}"
  }.mkString("\n")
  // Copy offsetRanges to immutable.List to prevent from being modified by the user
  val metadata = Map(
    "offsets" -> offsetRanges.toList,
    StreamInputInfo.METADATA_KEY_DESCRIPTION -> description)
  val inputInfo = StreamInputInfo(id, rdd.count, metadata)
  ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)

  currentOffsets = untilOffsets.map(kv => kv._1 -> kv._2.offset)
  Some(rdd)
}
Why prefer the Direct approach?
1. The Direct approach does not buffer data inside the application, so it avoids the memory pressure and possible out-of-memory errors that the Receiver approach risks, since a receiver must cache data before it is processed.
2. The Receiver approach does not distribute well, because each receiver is pinned to a single executor; with the Direct approach the reads are distributed across the cluster by default, one RDD partition per Kafka partition.
3. In practice, with receivers data that arrives faster than it can be processed piles up in the receiver; with the Direct approach this does not happen, because each batch reads its data straight from Kafka only when it is computed, so any backlog stays in Kafka.
4. Semantic consistency: with the Direct approach the data is guaranteed to be processed, and because the offsets travel with the RDD, exactly-once output can be built on top of it (see the sketch after this list).
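One way to turn point 4 into exactly-once processing is to store the offsets yourself and resume from them with the createDirectStream overload that takes fromOffsets and a messageHandler. The following is a sketch under those assumptions; loadOffsets and saveOffsets are hypothetical helpers that a real job would back with ZooKeeper, a database, or the output store itself, and the broker address and topic are placeholders.

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils, OffsetRange}

object ExactlyOnceSketch {
  // Hypothetical helpers: back these with ZooKeeper, a database, etc. in a real job.
  def loadOffsets(): Map[TopicAndPartition, Long] =
    Map(TopicAndPartition("test", 0) -> 0L)
  def saveOffsets(ranges: Array[OffsetRange]): Unit =
    ranges.foreach(o => println(s"commit ${o.topic}-${o.partition} @ ${o.untilOffset}"))

  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("ExactlyOnceSketch").setMaster("local[2]"), Seconds(10))
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")

    // Resume from offsets we stored ourselves instead of relying on auto.offset.reset.
    val messageHandler = (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message)
    val stream = KafkaUtils.createDirectStream[
      String, String, StringDecoder, StringDecoder, (String, String)](
      ssc, kafkaParams, loadOffsets(), messageHandler)

    stream.foreachRDD { rdd =>
      val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // 1. Process the batch (ideally idempotently, or in the same transaction as the offsets).
      rdd.foreach { case (_, value) => println(value) }
      // 2. Only then record how far we got.
      saveOffsets(ranges)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}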