
[Repost] Too much data cached on the reduce side triggers full GC, preventing the reduce output from being written to HDFS

Reposted from http://blog.csdn.net/bigdatahappy/article/details/41726389

I'm reposting this because the troubleshooting approach in the original post is sound and worth learning from.

 

A search-recommendation job with 1000+ map tasks and 200 reduce tasks failed when it was down to its last running reduce task (on 10.39.6.130), with the following exception:

 

 
    2014-12-04 15:49:04,297 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 12 segments left of total size: 11503294914 bytes
    2014-12-04 15:49:04,314 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
    2014-12-04 15:49:04,394 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.lzo_deflate]
    2014-12-04 16:02:26,889 WARN [ResponseProcessor for block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086] org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086
    java.io.IOException: Bad response ERROR for block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086 from datanode 10.39.5.193:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:819)
    2014-12-04 16:02:26,889 WARN [ResponseProcessor for block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223] org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223
    java.io.IOException: Bad response ERROR for block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223 from datanode 10.39.1.90:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:819)
    2014-12-04 16:02:26,891 WARN [DataStreamer for file /dw_ext/recmd/mds6/mds_filter_relation_10/20141203/_temporary/1/_temporary/attempt_1415948652989_195149_r_000158_3/user-r-00158 block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086 in pipeline 10.39.6.130:50010, 10.39.5.185:50010, 10.39.5.193:50010: bad datanode 10.39.5.193:50010
    2014-12-04 16:02:26,891 WARN [DataStreamer for file /dw_ext/recmd/mds6/mds_filter_relation_10/20141203/_temporary/1/_temporary/attempt_1415948652989_195149_r_000158_3/exposure-r-00158 block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223 in pipeline 10.39.6.130:50010, 10.39.1.89:50010, 10.39.1.90:50010: bad datanode 10.39.1.90:50010
    java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1987)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:796)

    2014-12-04 16:05:23,743 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2734)
        at java.util.Vector.ensureCapacityHelper(Vector.java:226)
        at java.util.Vector.add(Vector.java:728)
        at rec.CommonUtil.pack_Treeset(CommonUtil.java:395)
        at rec.ConvertExposure10$MyReducer.collect_exposure(ConvertExposure10.java:259)
        at rec.ConvertExposure10$MyReducer.reduce(ConvertExposure10.java:329)
        at rec.ConvertExposure10$MyReducer.reduce(ConvertExposure10.java:234)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1550)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Reading the exception: first, while the reduce task was writing data to HDFS, the client found that it got no healthy acknowledgment from the last node of the write pipeline:
 
    2014-12-04 16:02:26,889 WARN [ResponseProcessor for block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086] org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086
    java.io.IOException: Bad response ERROR for block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086 from datanode 10.39.5.193:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:819)
    2014-12-04 16:02:26,889 WARN [ResponseProcessor for block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223] org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223
    java.io.IOException: Bad response ERROR for block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223 from datanode 10.39.1.90:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:819)
    2014-12-04 16:02:26,891 WARN [DataStreamer for file /dw_ext/recmd/mds6/mds_filter_relation_10/20141203/_temporary/1/_temporary/attempt_1415948652989_195149_r_000158_3/user-r-00158 block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086 in pipeline 10.39.6.130:50010, 10.39.5.185:50010, 10.39.5.193:50010: bad datanode 10.39.5.193:50010
    2014-12-04 16:02:26,891 WARN [DataStreamer for file /dw_ext/recmd/mds6/mds_filter_relation_10/20141203/_temporary/1/_temporary/attempt_1415948652989_195149_r_000158_3/exposure-r-00158 block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-1386326728-10.39.2.131-1382089338395:blk_1394153869_320473223 in pipeline 10.39.6.130:50010, 10.39.1.89:50010, 10.39.1.90:50010: bad datanode 10.39.1.90:50010
    java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1987)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:796)
Take the write of block blk_1394149732_320469086 as an example. The last DN in its pipeline [10.39.6.130:50010, 10.39.5.185:50010, 10.39.5.193:50010] is 10.39.5.193, so check that block's log on 10.39.5.193:
2014-12-04 16:00:57,424 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086
 
    java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.39.5.193:50010 remote=/10.39.5.185:58225]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:739)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
        at java.lang.Thread.run(Thread.java:662)
    2014-12-04 16:00:57,424 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
The log on 10.39.5.193 shows that it kept waiting for a packet from the upstream pipeline node 10.39.5.185 that never came, until the 60-second (60000 ms) read timeout fired:
 
    java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.39.5.193:50010 remote=/10.39.5.185:58225]
So let's look at the second node in the pipeline, 10.39.5.185. Its DN log is as follows:
2014-12-04 16:00:57,988 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086, type=HAS_DOWNSTREAM_IN_PIPELINE
 
    java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1987)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1083)
        at java.lang.Thread.run(Thread.java:662)
    2014-12-04 16:00:58,008 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086
    java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.39.5.185:50010 remote=/10.39.6.130:59083]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:739)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
        at java.lang.Thread.run(Thread.java:662)
    2014-12-04 16:00:58,009 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
Much like 10.39.5.193, it also timed out while waiting to read a packet from the first node in the pipeline, 10.39.6.130:
 
    java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.39.5.185:50010 remote=/10.39.6.130:59083]
That points the problem at 10.39.6.130, i.e. the very node where the reduce task was running. Its DN log:
 
    2014-12-04 16:00:59,987 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1386326728-10.39.2.131-1382089338395:blk_1394149732_320469086
    java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.39.6.130:50010 remote=/10.39.6.130:45259]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:739)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
        at java.lang.Thread.run(Thread.java:662)
But by its log, the DN on 10.39.6.130 was likewise waiting for a packet and never received one before the timeout:

java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.39.6.130:50010 remote=/10.39.6.130:45259]

So the DN on 10.39.6.130 is not at fault either. If all three DNs in the pipeline are fine, the problem must lie on the dfsclient side: the reduce task never actually got its data out to HDFS at all; it was blocked inside the dfsclient.
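For context, here is a minimal, hedged sketch of how a reduce task's output reaches HDFS (standard Hadoop FileSystem API; the path is made up). Everything written to the stream is packetized inside DFSOutputStream and shipped by a background DataStreamer thread living in the same JVM as the reduce code; the DataStreamer and ResponseProcessor threads in the stack traces above are exactly these client-side threads.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // hypothetical output path, standing in for the job's attempt dir
            Path out = new Path("/tmp/demo/user-r-00158");
            FSDataOutputStream os = fs.create(out);
            try {
                // data is buffered client-side, cut into packets, and sent
                // down the DN pipeline by the background DataStreamer thread
                os.writeBytes("key\tvalue\n");
            } finally {
                os.close(); // blocks until all packets are acked by the pipeline
            }
        }
    }

Because the streamer is just another thread in the reduce JVM, anything that freezes that JVM freezes the stream of packets, and every downstream DataNode sees only a read timeout.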

Next, check how the dfsclient, i.e. the reduce task process, was doing. On 10.39.6.130, use the task attempt id attempt_1415948652989_195149_r_000158_3 to locate its process id, 31050, and check memory usage:

 

 
    jstat -gcutil 31050 1000:
      S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT
     68.95   0.00   0.00  92.98  66.32    111   16.825    10   25.419   42.244
     68.95   0.00   0.00  92.98  66.32    111   16.825    10   25.419   42.244
     68.95   0.00   0.00  92.98  66.32    111   16.825    10   25.419   42.244
     68.95   0.00   0.00  92.98  66.32    111   16.825    10   25.419   42.244
     68.95   0.00   0.00  92.98  66.32    111   16.825    10   25.419   42.244
     68.95   0.00   0.00  92.98  66.32    111   16.825    10   25.419   42.244
     68.95   0.00  26.75 100.00  66.32    111   16.825    10   25.419   42.244
      0.00   0.00  31.85 100.00  68.16    111   16.825    10   44.767   61.591
      0.00   0.00  35.37 100.00  68.16    111   16.825    10   44.767   61.591
      0.00   0.00  40.64 100.00  68.16    111   16.825    10   44.767   61.591
      0.00   0.00  45.35 100.00  68.16    111   16.825    10   44.767   61.591
      0.00   0.00  48.87 100.00  68.16    111   16.825    10   44.767   61.591
      0.00   0.00  54.14 100.00  68.16    111   16.825    10   44.767   61.591
      0.00   0.00  58.85 100.00  68.16    111   16.825    10   44.767   61.591

Sure enough: the old generation (O) is pinned at 100% and the JVM is doing full GC after full GC; between two one-second samples, accumulated full GC time (FGCT) jumps from 25.419 s to 44.767 s, i.e. a stop-the-world pause of roughly 19 seconds. During such pauses the JVM can serve nothing, so the dfsclient hung and could not push any packets to the pipeline nodes until the sockets timed out.
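A toy demo (not the job's code; run it with a small heap such as -Xmx256m) makes the mechanism visible: while the main thread fills the heap and forces full GCs, a background thread, standing in for the DataStreamer, observes long gaps during which it was not scheduled at all.

    import java.util.ArrayList;
    import java.util.List;

    public class GcPauseDemo {
        public static void main(String[] args) {
            Thread streamer = new Thread(new Runnable() {
                public void run() {
                    long last = System.currentTimeMillis();
                    while (true) {
                        long now = System.currentTimeMillis();
                        if (now - last > 500) { // a gap means the whole JVM was paused
                            System.out.println("streamer stalled " + (now - last) + " ms");
                        }
                        last = now;
                    }
                }
            });
            streamer.setDaemon(true);
            streamer.start();

            List<long[]> hog = new ArrayList<long[]>(); // fill the old generation
            try {
                while (true) {
                    hog.add(new long[1024 * 1024]); // ~8 MB per array
                }
            } catch (OutOfMemoryError e) {
                System.out.println("OOM after " + hog.size() + " arrays");
            }
        }
    }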

The last log line of the reduce task corroborates this view:

Error running child : java.lang.OutOfMemoryError: Java heap space

Since an OOM killed the job, which objects were filling the heap?

Run:

 

 
    jmap -histo:live 31050 > jmap.log
    cat jmap.log:
     num     #instances         #bytes  class name
    ----------------------------------------------
       1:      71824177     2872967080  java.util.TreeMap$Entry
       2:      71822939     1723750536  java.lang.Long
       3:         10684       24777776  [B
       4:         47174        6425152  <methodKlass>
       5:         47174        6408120  <constMethodKlass>
       6:          3712        4429776  <constantPoolKlass>
       7:         66100        3979224  <symbolKlass>
       8:          3712        2938192  <instanceKlassKlass>
       9:          3125        2562728  <constantPoolCacheKlass>
      10:          3477        1267752  [I
      11:         12923        1180224  [C
      12:          1794         772488  <methodDataKlass>
      13:         13379         428128  java.lang.String
      14:          4034         419536  java.lang.Class
      15:          6234         410312  [S
      16:          6409         352576  [[I
      17:          7567         242144  java.util.HashMap$Entry
      18:           293         171112  <objArrayKlassKlass>
      19:          4655         148960  java.util.Hashtable$Entry
      20:          1535         135080  java.lang.reflect.Method
      21:           842         121696  [Ljava.util.HashMap$Entry;

Sure enough: the reduce code uses a TreeMap and piles a huge number of objects into it. TreeMap$Entry instances alone occupy about 2.8 GB, while the reduce task was configured with only 1.5 GB of memory, hence the OOM.
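The actual rec.CommonUtil.pack_Treeset is not shown in the post, so the following is a hypothetical reconstruction of the failure pattern alongside a safer alternative. The arithmetic matches the histogram above: ~71.8M entries at roughly 40 bytes per TreeMap$Entry is about 2.8 GB, before even counting the boxed Longs.

    import java.util.Iterator;
    import java.util.TreeMap;

    public class ReduceBufferSketch {
        // Problematic pattern: buffer every value of a key group before
        // emitting anything; one huge key group then fills the old gen.
        static TreeMap<Long, Long> packAll(Iterator<Long> values) {
            TreeMap<Long, Long> buffer = new TreeMap<Long, Long>();
            while (values.hasNext()) {
                Long v = values.next();
                Long c = buffer.get(v);
                buffer.put(v, c == null ? 1L : c + 1L);
            }
            return buffer; // ~71M entries -> ~2.8 GB of TreeMap$Entry alone
        }

        // Safer pattern: aggregate incrementally and emit as you go, so
        // memory per key group stays constant regardless of group size.
        static long countStreaming(Iterator<Long> values) {
            long count = 0;
            while (values.hasNext()) {
                values.next();
                count++;
            }
            return count;
        }
    }

If per-key sorted output is genuinely required, bound the in-memory structure and spill to disk, or lean on the framework's sort/secondary-sort rather than sorting inside the reducer's heap.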

 

Summary: exceptions like the ones in this job generally occur in the following situations:

1. A DN that is writing the block runs into trouble and cannot accept the write. For example, we once saw a DN whose local-read problems drove its xceiver count (the maximum number of threads a DN uses for concurrent data transfers, capped by dfs.datanode.max.transfer.threads) up to 4096, exhausting all its threads so it could not respond to newly initiated writes.

2. Network trouble: a node's inbound or outbound bandwidth is exhausted, so data cannot be written out or read in. You can check per-node bandwidth in Ganglia; this case is fairly rare. When this job failed we suspected bandwidth exhaustion too, but Ganglia showed the relevant nodes peaking at only about 85M/s in/out, which ruled bandwidth out.

3. The dfsclient itself goes wrong and stops responding for a long time, so sockets already opened time out. The dfsclient is the most complex piece and fails in many ways; in this post's case, the reduce task hit an OOM, the JVM fell into back-to-back full GCs, the dfsclient hung, and the sockets eventually timed out. A hedged sketch of the relevant tuning knobs follows.
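As a closing note, here is a sketch of the configuration knobs involved (property names from Hadoop 2.x; the values are illustrative, so verify them against your distribution). The real fix is giving the reducer enough heap, or better, not buffering whole key groups; raising timeouts merely papers over long GC pauses.

    import org.apache.hadoop.conf.Configuration;

    public class TuningSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // container size and reducer heap, so buffered data actually fits
            conf.set("mapreduce.reduce.memory.mb", "3072");
            conf.set("mapreduce.reduce.java.opts", "-Xmx2560m");
            // client/DN socket read timeout in ms (default 60000, the value
            // visible in the logs above); only a stopgap for long GC pauses
            conf.setInt("dfs.client.socket-timeout", 120000);
        }
    }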

 
