Chapter 5 YARN: Resource Scheduling Platform
5.4 Running the YARN Cluster
With HDFS already running, verify the daemons on each node with jps:
[root@node1 ~]# jps
2247 NameNode
2584 Jps
2348 DataNode

[root@node2 ~]# jps
2279 Jps
2137 DataNode
2201 SecondaryNameNode

[root@node3 ~]# jps
5179 DataNode
7295 Jps
5.4.1 Distributing the Configuration Files
After editing yarn-site.xml and mapred-site.xml on node1, copy them to the other nodes:
[root@node1 hadoop]# scp yarn-site.xml node2:/opt/hadoop-2.7.3/etc/hadoop/
yarn-site.xml                                 100%  938     0.9KB/s   00:00
[root@node1 hadoop]# scp mapred-site.xml node2:/opt/hadoop-2.7.3/etc/hadoop/
mapred-site.xml                               100%  856     0.8KB/s   00:00
[root@node1 hadoop]# scp yarn-site.xml node3:/opt/hadoop-2.7.3/etc/hadoop/
yarn-site.xml                                 100%  938     0.9KB/s   00:00
[root@node1 hadoop]# scp mapred-site.xml node3:/opt/hadoop-2.7.3/etc/hadoop/
mapred-site.xml                               100%  856     0.8KB/s   00:00
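Repeating scp once per file and per host scales poorly as workers are added. The same distribution can be written as a loop; a minimal sketch (the HADOOP_CONF variable and the echo-first style are my own additions; the loop prints each command so it can be reviewed, and dropping echo performs the copies):

```shell
# Sketch: distribute both modified config files to every worker node.
# `echo` prints the commands instead of running them; remove it to execute.
HADOOP_CONF=/opt/hadoop-2.7.3/etc/hadoop
for host in node2 node3; do
  for f in yarn-site.xml mapred-site.xml; do
    echo scp "$HADOOP_CONF/$f" "$host:$HADOOP_CONF/"
  done
done
```

Adding a new worker then only requires extending the host list.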
5.4.2 Starting YARN
Start YARN from node1, the node that runs the ResourceManager:

[root@node1 ~]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-resourcemanager-node1.out
node3: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-node3.out
node2: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-node2.out
node1: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-node1.out
Running jps again shows the new YARN daemons alongside the HDFS ones:

[root@node1 ~]# jps
2753 NodeManager
3041 Jps
2247 NameNode
2649 ResourceManager
2348 DataNode

[root@node2 ~]# jps
2341 NodeManager
2137 DataNode
2201 SecondaryNameNode
2443 Jps

[root@node3 ~]# jps
7350 NodeManager
5179 DataNode
7451 Jps
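Rather than eyeballing three jps listings, the check can be scripted. A minimal sketch, assuming the jps output is captured into a string first; the function name and argument convention are hypothetical, but the daemon names are the ones shown above:

```shell
# Sketch: fail if any required daemon name is missing from a jps listing.
check_daemons() {
  out="$1"; shift                    # $1: captured jps output
  for d in "$@"; do                  # remaining args: required daemon names
    echo "$out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons running"
}
# e.g. on node1: check_daemons "$(jps)" NameNode DataNode ResourceManager NodeManager
```

Run per node (over ssh if desired) to turn the manual verification into a one-line health check.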
5.4.3 The Web UI
The ResourceManager serves a web UI at http://192.168.80.131:8088 (port 8088 is the default), where cluster resources, nodes, and running applications can be inspected in a browser.
5.4.4 Hadoop's Bundled Example Programs
Hadoop ships a set of example MapReduce jobs in hadoop-mapreduce-examples-2.7.3.jar, stored alongside the other MapReduce JARs (under /opt/hadoop-2.7.3/share/hadoop/mapreduce in this installation):

total 4972
-rw-r--r-- 1 root root  537521 Aug 17  2016 hadoop-mapreduce-client-app-2.7.3.jar
-rw-r--r-- 1 root root  773501 Aug 17  2016 hadoop-mapreduce-client-common-2.7.3.jar
-rw-r--r-- 1 root root 1554595 Aug 17  2016 hadoop-mapreduce-client-core-2.7.3.jar
-rw-r--r-- 1 root root  189714 Aug 17  2016 hadoop-mapreduce-client-hs-2.7.3.jar
-rw-r--r-- 1 root root   27598 Aug 17  2016 hadoop-mapreduce-client-hs-plugins-2.7.3.jar
-rw-r--r-- 1 root root   61745 Aug 17  2016 hadoop-mapreduce-client-jobclient-2.7.3.jar
-rw-r--r-- 1 root root 1551594 Aug 17  2016 hadoop-mapreduce-client-jobclient-2.7.3-tests.jar
-rw-r--r-- 1 root root   71310 Aug 17  2016 hadoop-mapreduce-client-shuffle-2.7.3.jar
-rw-r--r-- 1 root root  295812 Aug 17  2016 hadoop-mapreduce-examples-2.7.3.jar
drwxr-xr-x 2 root root    4096 Aug 17  2016 lib
drwxr-xr-x 2 root root      30 Aug 17  2016 lib-examples
drwxr-xr-x 2 root root    4096 Aug 17  2016 sources
Computing Pi
The output below comes from the examples JAR's pi program, run with 3 maps and 3 samples per map (judging from the output, a matching invocation is hadoop jar hadoop-mapreduce-examples-2.7.3.jar pi 3 3):

Number of Maps = 3
Samples per Map = 3
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Starting Job
17/05/23 10:57:55 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.80.131:8032
17/05/23 10:57:56 INFO input.FileInputFormat: Total input paths to process : 3
17/05/23 10:57:56 INFO mapreduce.JobSubmitter: number of splits:3
17/05/23 10:57:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1495550966527_0001
17/05/23 10:57:58 INFO impl.YarnClientImpl: Submitted application application_1495550966527_0001
17/05/23 10:57:58 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1495550966527_0001/
17/05/23 10:57:58 INFO mapreduce.Job: Running job: job_1495550966527_0001
17/05/23 10:58:17 INFO mapreduce.Job: Job job_1495550966527_0001 running in uber mode : false
17/05/23 10:58:17 INFO mapreduce.Job: map 0% reduce 0%
17/05/23 10:59:02 INFO mapreduce.Job: map 100% reduce 0%
17/05/23 10:59:15 INFO mapreduce.Job: map 100% reduce 100%
17/05/23 10:59:16 INFO mapreduce.Job: Job job_1495550966527_0001 completed successfully
17/05/23 10:59:16 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=72
        FILE: Number of bytes written=475761
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=777
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=15
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters
        Launched map tasks=3
        Launched reduce tasks=1
        Data-local map tasks=3
        Total time spent by all maps in occupied slots (ms)=127167
        Total time spent by all reduces in occupied slots (ms)=9302
        Total time spent by all map tasks (ms)=127167
        Total time spent by all reduce tasks (ms)=9302
        Total vcore-milliseconds taken by all map tasks=127167
        Total vcore-milliseconds taken by all reduce tasks=9302
        Total megabyte-milliseconds taken by all map tasks=130219008
        Total megabyte-milliseconds taken by all reduce tasks=9525248
    Map-Reduce Framework
        Map input records=3
        Map output records=6
        Map output bytes=54
        Map output materialized bytes=84
        Input split bytes=423
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=84
        Reduce input records=6
        Reduce output records=0
        Spilled Records=12
        Shuffled Maps =3
        Failed Shuffles=0
        Merged Map outputs=3
        GC time elapsed (ms)=1847
        CPU time spent (ms)=12410
        Physical memory (bytes) snapshot=711430144
        Virtual memory (bytes) snapshot=8312004608
        Total committed heap usage (bytes)=436482048
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=354
    File Output Format Counters
        Bytes Written=97
Job Finished in 81.368 seconds
Estimated value of Pi is 3.55555555555555555556
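The estimate is easy to reconstruct by hand: the pi example scatters maps × samples = 3 × 3 = 9 points over the unit square and returns 4 × (points inside the quarter circle) / (total points). The reported value 3.5555…56 equals 32/9, which implies 8 of the 9 points landed inside:

```shell
# 4 * inside / total, with 8 of 9 points inside, reproduces the job's estimate.
awk 'BEGIN { printf "%.4f\n", 4 * 8 / 9 }'   # prints 3.5556
```

With so few samples the estimate is far from pi; increasing the maps and samples-per-map arguments tightens it at the cost of more work.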
Next, run the wordcount example against input already uploaded to HDFS:

[root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /user/root/input /user/root/output
17/05/23 11:01:34 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.80.131:8032
17/05/23 11:01:36 INFO input.FileInputFormat: Total input paths to process : 2
17/05/23 11:01:36 INFO mapreduce.JobSubmitter: number of splits:2
17/05/23 11:01:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1495550966527_0002
17/05/23 11:01:37 INFO impl.YarnClientImpl: Submitted application application_1495550966527_0002
17/05/23 11:01:37 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1495550966527_0002/
17/05/23 11:01:37 INFO mapreduce.Job: Running job: job_1495550966527_0002
17/05/23 11:01:58 INFO mapreduce.Job: Job job_1495550966527_0002 running in uber mode : false
17/05/23 11:01:58 INFO mapreduce.Job: map 0% reduce 0%
17/05/23 11:02:15 INFO mapreduce.Job: map 100% reduce 0%
17/05/23 11:02:25 INFO mapreduce.Job: map 100% reduce 100%
17/05/23 11:02:26 INFO mapreduce.Job: Job job_1495550966527_0002 completed successfully
17/05/23 11:02:26 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=89
        FILE: Number of bytes written=355953
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=301
        HDFS: Number of bytes written=46
        HDFS: Number of read operations=9
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=29625
        Total time spent by all reduces in occupied slots (ms)=7154
        Total time spent by all map tasks (ms)=29625
        Total time spent by all reduce tasks (ms)=7154
        Total vcore-milliseconds taken by all map tasks=29625
        Total vcore-milliseconds taken by all reduce tasks=7154
        Total megabyte-milliseconds taken by all map tasks=30336000
        Total megabyte-milliseconds taken by all reduce tasks=7325696
    Map-Reduce Framework
        Map input records=6
        Map output records=14
        Map output bytes=140
        Map output materialized bytes=95
        Input split bytes=216
        Combine input records=14
        Combine output records=7
        Reduce input groups=6
        Reduce shuffle bytes=95
        Reduce input records=7
        Reduce output records=6
        Spilled Records=14
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=574
        CPU time spent (ms)=4590
        Physical memory (bytes) snapshot=514162688
        Virtual memory (bytes) snapshot=6236823552
        Total committed heap usage (bytes)=301146112
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=85
    File Output Format Counters
        Bytes Written=46
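The counters tell the story of the job: 14 map output records were combined down to 7 before the shuffle, and 6 distinct words came out of the reducer. What wordcount computes can be sketched locally with standard tools (the two sample input lines here are hypothetical, not the actual contents of /user/root/input):

```shell
# Local sketch of wordcount's logic: split text into words, count each word.
printf 'hello world\nhello yarn\n' |
  tr -s ' ' '\n' | sort | uniq -c | awk '{ print $2 "\t" $1 }'
# prints: hello 2, world 1, yarn 1 (one word per line)
```

The MapReduce version does the same thing, with the map phase emitting (word, 1) pairs, the combiner pre-summing them per map, and the reducer producing the final counts.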
While the wordcount job is running, its progress can be followed on the ResourceManager web page at http://192.168.80.131:8088.