Writing SparkStreaming_reduceByKeyAndWindow in IDEA

Window functions: a window operation computes a result over a sliding window of data spanning multiple batches; the window is defined by a window length (windowDuration) and a slide interval (slideDuration).
Code demo:

package cn.itcast.streaming

import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

object WordCount3 {
  def main(args: Array[String]): Unit = {
    //1. Create the StreamingContext
    //spark.master should be set to local[n], n > 1
    val conf = new SparkConf().setAppName("wc").setMaster("local[*]")
    val sc = new SparkContext(conf)
    sc.setLogLevel("WARN")
    //Seconds(5) means the incoming data is sliced into one RDD every 5 seconds
    val ssc = new StreamingContext(sc, Seconds(5))

    //2. Listen on a socket to receive data
    //ReceiverInputDStream is all the received data, organized as RDDs and wrapped in a DStream;
    //operating on the DStream means operating on those RDDs
    val dataDStream: ReceiverInputDStream[String] = ssc.socketTextStream("hadoop01", 9999)

    //3. Transform the data
    val wordDStream: DStream[String] = dataDStream.flatMap(_.split(" "))
    val wordAndOneDStream: DStream[(String, Int)] = wordDStream.map((_, 1))

    //4. Count the words with the window function
    //reduceFunc: (V, V) => V  -- the aggregation function
    //windowDuration: Duration -- window length/width
    //slideDuration: Duration  -- window slide interval
    //Note: windowDuration and slideDuration must both be multiples of batchDuration
    //windowDuration = slideDuration: no data is lost and nothing is recomputed -- used in development
    //windowDuration > slideDuration: data is recomputed -- used in development
    //windowDuration < slideDuration: data is lost
    //In this code: windowDuration = 10 and slideDuration = 5,
    //so every 5s the most recent 10s of data is computed.
    //Exercise: if in development you must count the last 1 hour of data, recomputed
    //every 1 minute, how should the parameters be set? (see the sketch after this code)
    val wordAndCount: DStream[(String, Int)] =
      wordAndOneDStream.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(10), Seconds(5))
    wordAndCount.print()

    ssc.start() //start the computation
    ssc.awaitTermination() //wait for a graceful stop
  }
}
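Answering the exercise in the comments above: counting the last hour of data, recomputed every minute, means a window length of Minutes(60) and a slide interval of Minutes(1); both are multiples of the 5s batch interval, as required. A minimal sketch, reusing wordAndOneDStream from the code above:

//Sketch only: counts words over the last 60 minutes, recomputed every minute.
//Both durations must be multiples of the batch interval (Seconds(5) above).
import org.apache.spark.streaming.Minutes
val hourlyCounts: DStream[(String, Int)] =
  wordAndOneDStream.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Minutes(60), Minutes(1))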

To run:
1. First run nc -lk 9999
2. Then run the code above
3. Keep typing different words into the terminal from step 1, e.g.:
hadoop spark sqoop hadoop spark hive hadoop
4. Observe the console output in IDEA
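If the line above is typed once within the current window, the console output should look roughly like this (the timestamp is a placeholder and the ordering of the pairs may differ):

-------------------------------------------
Time: 1587808005000 ms
-------------------------------------------
(hive,1)
(hadoop,3)
(sqoop,1)
(spark,2)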
Observed behavior: every 5s, Spark Streaming computes over the data currently inside the 10s window and then prints the resulting counts.
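One further note: for windows that largely overlap between slides, Spark Streaming also provides an overload of reduceByKeyAndWindow that takes an inverse reduce function, so each slide only folds in the new batch and subtracts the batch that slid out instead of re-reducing the whole window; it requires checkpointing to be enabled. A minimal sketch, with "./ckpt" as a placeholder checkpoint path:

//Sketch: incremental window aggregation; requires checkpointing.
ssc.checkpoint("./ckpt") //"./ckpt" is a placeholder path
val wordAndCountInc: DStream[(String, Int)] = wordAndOneDStream.reduceByKeyAndWindow(
  (a: Int, b: Int) => a + b, //reduceFunc: fold new values into the window
  (a: Int, b: Int) => a - b, //invReduceFunc: remove values that slid out of the window
  Seconds(10),               //windowDuration
  Seconds(5)                 //slideDuration
)
wordAndCountInc.print()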