Two examples of row_number() over()

  1. Prepare the sample data as follows (note that the fields are padded with spaces for alignment, which is why trim() shows up later):
1|zhangzhan|shuxue|80|2015
2|lisi     |yuwen |92|2016
3|wangwu   |huaxue|70|2017
4|zhangsan |yuwen |80|2015
5|zhangsan |huaxue|90|2015
6|lisi     |shuxue|85|2015
7|wangwu   |yuwen |99|2016
8|zhangsan |huaxue|80|2017
9|wangwu   |yuwen |90|2016
  2. Create the table:

create table IF NOT EXISTS course_score
(
  id     string,
  userid string,
  course string,
  score  int,
  term   string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
stored as textfile;
  3. Load the data and view the raw rows:
hive> load data local inpath './s.txt' overwrite into table course_score;
Loading data to table default.course_score
Table default.course_score stats: [numFiles=1, totalSize=243]
OK
Time taken: 0.463 seconds
hive> select * from course_score;
OK
1       zhangzhan       shuxue  80      2015
2       lisi            yuwen   92      2016
3       wangwu          huaxue  70      2017
4       zhangsan        yuwen   80      2015
5       zhangsan        huaxue  90      2015
6       lisi            shuxue  85      2015
7       wangwu          yuwen   99      2016
8       zhangsan        huaxue  80      2017
9       wangwu          yuwen   90      2016
Time taken: 0.193 seconds, Fetched: 9 row(s)
  4. Rank the scores within each course for each term:
hive> select *, row_number() over(partition by course,term order by score desc) rn from course_score;
Query ID = hdfs_20181203130505_05b33809-9aff-4cc6-abd6-bf4ee34dd54b
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Kill Command = /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1541413124518_0075
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-12-03 13:05:38,842 Stage-1 map = 0%,  reduce = 0%
2018-12-03 13:05:44,037 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.98 sec
2018-12-03 13:05:49,221 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.94 sec
MapReduce Total cumulative CPU time: 5 seconds 940 msec
Ended Job = job_1541413124518_0075
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 5.94 sec   HDFS Read: 10016 HDFS Write: 261 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 940 msec
OK
5       zhangsan        huaxue  90      2015    1
8       zhangsan        huaxue  80      2017    1
3       wangwu          huaxue  70      2017    2
6       lisi            shuxue  85      2015    1
1       zhangzhan       shuxue  80      2015    2
4       zhangsan        yuwen   80      2015    1
7       wangwu          yuwen   99      2016    1
2       lisi            yuwen   92      2016    2
9       wangwu          yuwen   90      2016    3
  5. Then filter out the top score in each partition (a tie-handling variant is sketched after the output below):
hive> select id,userid,course,score,term from (select *, row_number() over(partition by course,term order by score desc) rn from course_score) t where rn=1;
Query ID = hdfs_20181203130909_5c6b4f33-02e2-418b-9e97-1054a9a9fbe2
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Kill Command = /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1541413124518_0076
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-12-03 13:09:15,218 Stage-1 map = 0%,  reduce = 0%
2018-12-03 13:09:20,357 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.91 sec
2018-12-03 13:09:25,500 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.02 sec
MapReduce Total cumulative CPU time: 6 seconds 20 msec
Ended Job = job_1541413124518_0076
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 6.02 sec   HDFS Read: 10556 HDFS Write: 135 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 20 msec
OK
5       zhangsan        huaxue  90      2015
8       zhangsan        huaxue  80      2017
6       lisi            shuxue  85      2015
4       zhangsan        yuwen   80      2015
7       wangwu          yuwen   99      2016
Time taken: 16.427 seconds, Fetched: 5 row(s)
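Note that row_number() numbers tied scores in an arbitrary order, so the rn=1 filter keeps only one of any rows sharing the top score. If every tied top row should be kept, rank() can be swapped in for row_number(); a minimal sketch of that variant, not run in the original session:

select id,userid,course,score,term from (select *, rank() over(partition by course,term order by score desc) rn from course_score) t where rn=1;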
  6. Second exercise: first rank the rows within each term (a deterministic tie-break for the 2015 ties is sketched after the output):
hive> select *, row_number() over(partition by term order by score desc) rn from course_score;
Query ID = hdfs_20181203131515_b25c97f2-7d91-4233-b788-38fff2b1935c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Kill Command = /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1541413124518_0081
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-12-03 13:15:37,536 Stage-1 map = 0%,  reduce = 0%
2018-12-03 13:15:41,664 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.05 sec
2018-12-03 13:15:47,807 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.09 sec
MapReduce Total cumulative CPU time: 6 seconds 90 msec
Ended Job = job_1541413124518_0081
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 6.09 sec   HDFS Read: 9908 HDFS Write: 261 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 90 msec
OK
course_score.id course_score.userid     course_score.course     course_score.score      course_score.term       rn
5       zhangsan        huaxue  90      2015    1
6       lisi            shuxue  85      2015    2
4       zhangsan        yuwen   80      2015    3
1       zhangzhan       shuxue  80      2015    4
7       wangwu          yuwen   99      2016    1
2       lisi            yuwen   92      2016    2
9       wangwu          yuwen   90      2016    3
8       zhangsan        huaxue  80      2017    1
3       wangwu          huaxue  70      2017    2
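Within the 2015 partition, ids 4 and 1 both score 80, so row_number() orders them arbitrarily from run to run. One way to make the numbering deterministic is to add a secondary sort key; the choice of id here is only an illustration, and this query was not part of the original session:

select *, row_number() over(partition by term order by score desc, id) rn from course_score;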
  7. Then select the yuwen scores above 90 (the yuwen values in my hand-made data may carry trailing spaces, so trim() is added; a quick whitespace check is sketched after the output):
hive> select *, row_number() over(partition by term order by score desc) rn from course_score where trim(course)="yuwen" and score>90;
Query ID = hdfs_20181203131818_5c445634-1928-4f6c-bf3c-a2394bb27425
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Kill Command = /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1541413124518_0084
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-12-03 13:18:50,969 Stage-1 map = 0%,  reduce = 0%
2018-12-03 13:18:56,088 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.72 sec
2018-12-03 13:19:02,223 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.66 sec
MapReduce Total cumulative CPU time: 6 seconds 660 msec
Ended Job = job_1541413124518_0084
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 6.66 sec   HDFS Read: 11293 HDFS Write: 58 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 660 msec
OK
course_score.id course_score.userid     course_score.course     course_score.score      course_score.term       rn
7       wangwu          yuwen   99      2016    1
2       lisi            yuwen   92      2016    2
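To confirm whether a column really carries padding, comparing the raw and trimmed lengths is a quick check; this query is my own addition and was not part of the original session:

select distinct course, length(course) as raw_len, length(trim(course)) as trimmed_len from course_score;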
  8. Then look those userid values up in the original table and filter for the shuxue course (an equivalent left semi join is sketched after the output):
hive> select * from course_score s where s.userid in (select t1.userid from (select *, row_number() over(partition by term order by score desc) rn from course_score where trim(course)="yuwen" and score>90) t1) and course="shuxue";
Query ID = hdfs_20181203132626_e007c3f2-0913-4679-b4de-c92bfff85bdb
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Kill Command = /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1541413124518_0089
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2018-12-03 13:27:02,777 Stage-2 map = 0%,  reduce = 0%
2018-12-03 13:27:07,903 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 2.89 sec
2018-12-03 13:27:14,027 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 6.85 sec
MapReduce Total cumulative CPU time: 6 seconds 850 msec
Ended Job = job_1541413124518_0089
Stage-5 is selected by condition resolver.
Stage-1 is filtered out by condition resolver.
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hbase/lib/phoenix-4.14.0-cdh5.14.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hbase/lib/phoenix-4.14.0-cdh5.14.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Execution log at: /tmp/hdfs/hdfs_20181203132626_e007c3f2-0913-4679-b4de-c92bfff85bdb.log
2018-12-03 01:27:17     Starting to launch local task to process map join;      maximum memory = 2025848832
2018-12-03 01:27:18     Dump the side-table for tag: 1 with group count: 2 into file: file:/tmp/hdfs/90232833-d2d2-44e1-b876-cf8a583d38ef/hive_2018-12-03_13-26-57_775_2247371996617660666-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile21--.hashtable
2018-12-03 01:27:18     Uploaded 1 File to: file:/tmp/hdfs/90232833-d2d2-44e1-b876-cf8a583d38ef/hive_2018-12-03_13-26-57_775_2247371996617660666-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile21--.hashtable (314 bytes)
2018-12-03 01:27:18     End of local task; Time Taken: 0.748 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 3 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Kill Command = /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1541413124518_0090
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2018-12-03 13:27:24,151 Stage-3 map = 0%,  reduce = 0%
2018-12-03 13:27:29,259 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 3.45 sec
MapReduce Total cumulative CPU time: 3 seconds 450 msec
Ended Job = job_1541413124518_0090
MapReduce Jobs Launched: 
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 6.85 sec   HDFS Read: 10158 HDFS Write: 150 SUCCESS
Stage-Stage-3: Map: 1   Cumulative CPU: 3.45 sec   HDFS Read: 7252 HDFS Write: 27 SUCCESS
Total MapReduce CPU Time Spent: 10 seconds 300 msec
OK
s.id    s.userid        s.course        s.score s.term
6       lisi            shuxue  85      2015
Time taken: 33.549 seconds, Fetched: 1 row(s)
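Older Hive releases are restrictive about IN subqueries, so the same lookup is often written as a left semi join. A minimal sketch of that equivalent (my own addition, not run in the original session; the row_number() column is dropped because it does not affect which userid values are returned):

select s.* from course_score s left semi join (select userid from course_score where trim(course)="yuwen" and score>90) t1 on s.userid=t1.userid where s.course="shuxue";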