SecondaryNameNode fails to checkpoint: ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
With Hadoop's default settings (Hadoop 1.2.1), the SecondaryNameNode (SNN) checkpointed normally, periodically copying the fsimage from the NameNode (NN). But the defaults don't let you tune the checkpoint interval or the edit-log size threshold, so I changed the SNN settings: fs.checkpoint.period to 3600 seconds and fs.checkpoint.size to 64 MB. After adding these two parameters to core-site.xml, the SNN stopped checkpointing altogether. Some googling showed the configuration was simply incomplete; the problem went away after fixing both core-site.xml and hdfs-site.xml.
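For reference, the trigger semantics of these two parameters can be sketched in Python. This is an illustration of the documented behavior, not Hadoop's actual code: a checkpoint fires when either the period has elapsed or the edit log has grown past the size threshold, whichever comes first.

```python
# Illustration (not Hadoop source): when does the SNN start a checkpoint?
CHECKPOINT_PERIOD = 3600            # fs.checkpoint.period, in seconds
CHECKPOINT_SIZE = 64 * 1024 * 1024  # fs.checkpoint.size, 64 MB = 67108864 bytes

def should_checkpoint(seconds_since_last: int, edit_log_bytes: int) -> bool:
    """Checkpoint when the period has elapsed OR the edit log has
    exceeded the size threshold, whichever comes first."""
    return (seconds_since_last >= CHECKPOINT_PERIOD
            or edit_log_bytes >= CHECKPOINT_SIZE)

print(CHECKPOINT_SIZE)                      # 67108864, the value used below
print(should_checkpoint(1800, 70_000_000))  # True: edit log already over 64 MB
print(should_checkpoint(1800, 1_000_000))   # False: neither threshold reached
```

This also shows where the 67108864 in the config below comes from: 64 * 1024 * 1024 bytes.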
Here are the contents of the two files:
core-site.xml:
<!-- ****************************************************************************************-->
<!-- This file only used in secondnamenode!!-->
<!-- ****************************************************************************************-->

<configuration>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/bigdata/hadoop/tmp/</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode:54310</value>
  </property>

  <property>
    <name>fs.checkpoint.period</name>
    <value>3600</value>
    <description>The number of seconds between two periodic checkpoints.</description>
  </property>

  <property>
    <name>fs.checkpoint.size</name>
    <value>67108864</value>
    <description>The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired.</description>
  </property>

  <property>
    <name>fs.checkpoint.dir</name>
    <value>/bigdata/hadoop/namesecondary/</value>
  </property>

</configuration>
hdfs-site.xml:
<!-- ****************************************************************************************-->
<!-- This file only used in secondnamenode!!-->
<!-- ****************************************************************************************-->

<configuration>

  <property>
    <name>fs.checkpoint.period</name>
    <value>3600</value>
    <description>The number of seconds between two periodic checkpoints.</description>
  </property>

  <property>
    <name>dfs.secondary.http.address</name>
    <value>secondnamenode:50090</value>
  </property>

  <property>
    <name>dfs.http.address</name>
    <value>namenode:50070</value>
    <final>true</final>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/bigdata/hadoop/secondnamenodelogs/</value>
  </property>
......
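This kind of incomplete configuration can be caught early by checking the *-site.xml files with a short script. A sketch, with the caveat that the required-key list is my assumption for this Hadoop 1.x SNN setup, not something Hadoop itself defines:

```python
# Sketch: verify that an SNN config file defines the checkpoint-related keys.
# The `required` set below is an assumption for this Hadoop 1.x setup.
import xml.etree.ElementTree as ET

def config_names(xml_text: str) -> set:
    """Return the set of <name> values defined in a Hadoop *-site.xml."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name") for p in root.iter("property")}

def missing_keys(xml_text: str, required: set) -> set:
    return required - config_names(xml_text)

# Example: a stripped-down hdfs-site.xml missing dfs.secondary.http.address,
# which is exactly the mistake described in this post.
sample = """<configuration>
  <property><name>fs.checkpoint.period</name><value>3600</value></property>
  <property><name>dfs.http.address</name><value>namenode:50070</value></property>
</configuration>"""

required = {"fs.checkpoint.period", "dfs.http.address", "dfs.secondary.http.address"}
print(missing_keys(sample, required))  # {'dfs.secondary.http.address'}
```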
The key parameters are the ones highlighted in red in the original post. At first I assumed hdfs-site.xml didn't need any changes, but it turned out that file was exactly where the problem was. What a trap!
hdfs-site.xml must also contain the fs.checkpoint.period (or fs.checkpoint.size) parameter from core-site.xml. dfs.http.address specifies the NameNode's HTTP address, which the SNN uses to fetch the fsimage saved by the NN. dfs.secondary.http.address is the SNN's own web interface, and it must be configured; it was precisely because I hadn't set it that I kept getting the following error:
2014-06-25 14:17:40,408 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
2014-06-25 14:17:40,408 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.FileNotFoundException: http://namenode:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-41:620270652:0:1403579817000:1403578915285&newChecksum=7fcdd4793ce44f017d290e7db78870e7
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1434)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:177)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.putFSImage(SecondaryNameNode.java:462)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:525)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:396)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:360)
        at java.lang.Thread.run(Thread.java:662)
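The machine=0.0.0.0 in the URL above is the telltale sign. A rough sketch of the kind of URL the SNN builds (an illustration only; in Hadoop 1.x this happens inside TransferFsImage, and the parameter handling here is simplified): when dfs.secondary.http.address is left unset, the SNN advertises the wildcard address 0.0.0.0 as the machine it wants the NN to fetch the merged image back from, and that address is not reachable from the NN.

```python
# Sketch (not Hadoop source): the shape of the putimage request the SNN
# sends to the NN.  With no dfs.secondary.http.address configured, the
# `machine` parameter falls back to the wildcard 0.0.0.0, which the NN
# cannot connect back to -- matching the error log above.
from urllib.parse import urlencode

def putimage_url(nn_http_address: str, snn_host: str, snn_port: int) -> str:
    params = urlencode({"putimage": 1, "port": snn_port, "machine": snn_host})
    return "http://%s/getimage?%s" % (nn_http_address, params)

# Misconfigured: the SNN advertises the wildcard address.
print(putimage_url("namenode:50070", "0.0.0.0", 50090))
# Fixed: dfs.secondary.http.address = secondnamenode:50090
print(putimage_url("namenode:50070", "secondnamenode", 50090))
```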