Setting HDFS and HBase replication factors (Hadoop 2.5.2, HBase 0.98.6)
2024-09-17 12:44:20
HDFS replication and basic read/write.
Copy core-site.xml and hdfs-site.xml from /etc/hdfs1/conf into the project workspace.
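Besides passing a replication factor per file in code (as below), the default for all new files can be set in hdfs-site.xml. This is the standard Hadoop property; the value 2 matches the example that follows:

```xml
<!-- hdfs-site.xml: default replication factor for new files -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```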
```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// hadoop 2.5.2
public class CopyOfHadoopDFSFileReadWrite {

    static void printAndExit(String str) {
        System.err.println(str);
        System.exit(1);
    }

    public static void main(String[] argv) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hard-coded target path for this demo, overriding any command-line args.
        argv = new String[] { "/tmp/hello.txt" };
        Path outFile = new Path(argv[0]);
        if (fs.exists(outFile))
            printAndExit("Output already exists");

        // Create the file with a replication factor of 2.
        FSDataOutputStream out = fs.create(outFile, (short) 2);
        try {
            out.write("hello 扒拉扒拉了吧啦啦啦不".getBytes());
        } catch (IOException e) {
            System.out.println("Error while writing file");
        } finally {
            out.close();
        }
    }
}
```
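The section title promises basic read as well as write, but only the write is shown. A minimal read-back sketch, assuming the /tmp/hello.txt path from the write example and a reachable HDFS, might look like:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: read back the file written above and print its contents.
public class HadoopDFSFileRead {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path inFile = new Path("/tmp/hello.txt");
        FSDataInputStream in = fs.open(inFile);
        try {
            // Accumulate the stream into a buffer, then decode as UTF-8.
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                bos.write(buf, 0, n);
            }
            System.out.println(bos.toString("UTF-8"));
        } finally {
            in.close();
        }
    }
}
```

This requires a running cluster and the Hadoop client jars on the classpath; with no config files present, FileSystem.get(conf) would fall back to the local filesystem.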
Copy hbase-site.xml from /etc/hyperbase1/conf. On the dashboard at http://192.168.146.128:8180/#/dashboard, make sure the hyperbase1 service is running.
```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.regionserver.BloomType;

// Replication factor per column family, hbase 0.98.6
public class HbaseCreateTable {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin ha = new HBaseAdmin(conf);

        HTableDescriptor htd = new HTableDescriptor("testReplication".getBytes());
        HColumnDescriptor hcd1 = new HColumnDescriptor("s")
                .setMaxVersions(30)
                .setBloomFilterType(BloomType.ROW);
        // Set the column family's HDFS replication factor.
        hcd1.setConfiguration("DFS_REPLICATION", "2");
        htd.addFamily(hcd1);

        ha.createTable(htd);
        ha.close();
    }
}
```
Check the replication factor of the column family's files with ls; for files, the second column of the output is the replication count:

```shell
hdfs dfs -ls /hyperbase1/data/default/testReplication/c38f234712a99d45797ef1bdd6c3b09a/s
```
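The same check can be made from the Java API instead of the shell; a sketch, assuming the /tmp/hello.txt path from the write example (FileStatus.getReplication() returns the file's replication factor):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: query a file's replication factor through the FileSystem API.
public class CheckReplication {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        FileStatus status = fs.getFileStatus(new Path("/tmp/hello.txt"));
        System.out.println("replication = " + status.getReplication());
    }
}
```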
|