
Kafka's three deployment modes

/*************
* Installation and deployment of Kafka 0.8.1.1
* blog: www.r66r.net
* qq: 26571864
**************/

Related deployment video: http://edu.51cto.com/course/course_id-2374.html




Kafka can be deployed in three modes:
1) Single-broker mode

2) Multiple brokers on a single machine (pseudo-cluster)

3) Multiple brokers on multiple machines (a true cluster)


Deploying mode 1 (single broker)

1. Upload the Kafka archive to the hadoopdn2 machine and extract it under /opt/hadoop/kafka.
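A minimal sketch of this step, assuming the downloaded archive is named kafka_2.9.2-0.8.1.1.tgz (use whatever file name you actually downloaded):

> mkdir -p /opt/hadoop/kafka
> tar -zxvf kafka_2.9.2-0.8.1.1.tgz -C /opt/hadoop/kafka
> cd /opt/hadoop/kafka/kafka_2.9.2-0.8.1.1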

2. Edit the server.properties file under /opt/hadoop/kafka/kafka_2.9.2-0.8.1.1/config:
broker.id=0    (default, no change needed)
Change:
log.dirs=/opt/hadoop/kafka/kafka-logs
log.flush.interval.messages=10000    (default, no change needed)
log.flush.interval.ms=1000           (default, no change needed)
zookeeper.connect=hadoopdn2:2181
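Kafka 0.8 needs a running ZooKeeper at the address given in zookeeper.connect. If there is not already one on hadoopdn2, the convenience script bundled with Kafka can start a standalone instance (this assumes you use the bundled ZooKeeper rather than an existing ensemble):

> bin/zookeeper-server-start.sh config/zookeeper.properties &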

3. Start the Kafka broker

> bin/kafka-server-start.sh config/server.properties

A normal startup looks like this:
[2014-11-18 10:36:32,196] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:os.version=2.6.32-220.el6.x86_64 (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:user.name=hadoop (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:user.home=/home/hadoop (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:user.dir=/opt/hadoop/kafka/kafka_2.9.2-0.8.1.1 (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,197] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@c2f8b5a (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,231] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2014-11-18 10:36:32,238] INFO Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2014-11-18 10:36:32,262] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x349c07dcd7a0002, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2014-11-18 10:36:32,266] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2014-11-18 10:36:32,415] INFO Starting log cleanup with a period of 60000 ms. (kafka.log.LogManager)
[2014-11-18 10:36:32,422] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[2014-11-18 10:36:32,502] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2014-11-18 10:36:32,503] INFO [Socket Server on Broker 0], Started (kafka.network.SocketServer)
[2014-11-18 10:36:32,634] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2014-11-18 10:36:32,716] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2014-11-18 10:36:32,887] INFO Registered broker 0 at path /brokers/ids/0 with address JobTracker:9092. (kafka.utils.ZkUtils$)
[2014-11-18 10:36:32,941] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2014-11-18 10:36:33,034] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)


4. Create a topic
> bin/kafka-topics.sh --create --zookeeper hadoopdn2:2181 --replication-factor 1 --partitions 1 --topic test

List the topics:
> bin/kafka-topics.sh --list --zookeeper hadoopdn2:2181

Describe a topic:
> bin/kafka-topics.sh --describe  --zookeeper hadoopdn2:2181 --topic test

Topic:test    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
The first line is a summary of all partitions; each following line describes a single partition.
    
View the help:
> bin/kafka-topics.sh --help    (lists the topic-related options)
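As an optional sanity check, a few messages can be produced and consumed with the console clients that ship with Kafka (the broker address hadoopdn2:9092 follows from the configuration above):

> bin/kafka-console-producer.sh --broker-list hadoopdn2:9092 --topic test
(type a few lines, then press Ctrl+C)

> bin/kafka-console-consumer.sh --zookeeper hadoopdn2:2181 --topic test --from-beginning
(the lines typed above should be printed back; press Ctrl+C to exit)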


Deploying mode 2 (multiple brokers on one machine):

1. Create a server config file for the second broker
> cp server.properties server1.properties

2. Edit server1.properties

broker.id=1
port=9093   
log.dirs=/opt/hadoop/kafka/kafka-logs-server1
zookeeper.connect=hadoopdn2:2181
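If you prefer to script these edits rather than change the file by hand, a rough sketch with sed (run from the Kafka install directory; values as above):

> sed -i 's/^broker.id=.*/broker.id=1/' config/server1.properties
> sed -i 's/^port=.*/port=9093/' config/server1.properties
> sed -i 's|^log.dirs=.*|log.dirs=/opt/hadoop/kafka/kafka-logs-server1|' config/server1.properties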


3. Start the second Kafka broker

> nohup bin/kafka-server-start.sh config/server1.properties &

4. The currently registered brokers can be checked with the ZooKeeper client

[zk: hadoopdn2:2181(CONNECTED) 7] ls /                              
[zookeeper, admin, consumers, config, controller, brokers, controller_epoch]
[zk: hadoopdn2:2181(CONNECTED) 8] ls /brokers
[topics, ids]
[zk: hadoopdn2:2181(CONNECTED) 9] ls /brokers/ids
[1, 0]

5. Check the topic

$ bin/kafka-topics.sh --describe --topic test --zookeeper hadoopdn2:2181
Topic:test    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
    
6. Alter the test topic (increase the number of partitions)
$ bin/kafka-topics.sh   --zookeeper hadoopdn2:2181  --partitions 3 --topic test --alter

$ bin/kafka-topics.sh --describe --topic test --zookeeper hadoopdn2:2181
Topic:test    PartitionCount:3    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 0    Replicas: 0 [on broker 0]    Isr: 0
    Topic: test    Partition: 1    Leader: 1    Replicas: 1 [on broker 1]    Isr: 1
    Topic: test    Partition: 2    Leader: 0    Replicas: 0 [on broker 0]    Isr: 0
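With two brokers registered, a topic can also be created whose partitions each have a real replica. For example (test-replicated is a hypothetical topic name used only for illustration):

> bin/kafka-topics.sh --create --zookeeper hadoopdn2:2181 --replication-factor 2 --partitions 3 --topic test-replicated
> bin/kafka-topics.sh --describe --topic test-replicated --zookeeper hadoopdn2:2181

Each partition should now list both broker ids under Replicas and Isr.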
    
    
Deploying mode 3 (multiple brokers on multiple machines):

1. Upload the Kafka archive to the hadoopdn3 machine and extract it under /opt/hadoop/kafka.

2. Edit the server.properties file under /opt/hadoop/kafka/kafka_2.9.2-0.8.1.1/config:
broker.id=2    (must be changed; every broker id in the cluster has to be unique)
Change:
log.dirs=/opt/hadoop/kafka/kafka-logs
log.flush.interval.messages=10000    (default, no change needed)
log.flush.interval.ms=1000           (default, no change needed)
zookeeper.connect=hadoopdn2:2181
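3. Start the broker on hadoopdn3, using the same command as in mode 1 (run it from the Kafka install directory on hadoopdn3):

> nohup bin/kafka-server-start.sh config/server.properties &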

4. Check the registered brokers with the ZooKeeper client

[zk: hadoopdn2:2181(CONNECTED) 10] ls /brokers/ids
[2, 1, 0]

The broker with id 2 is now registered in ZooKeeper.
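With three brokers in the cluster, a topic can be replicated across all of the machines, which is a quick way to confirm the cluster works end to end (cluster-test is a hypothetical topic name used only for illustration):

> bin/kafka-topics.sh --create --zookeeper hadoopdn2:2181 --replication-factor 3 --partitions 3 --topic cluster-test
> bin/kafka-topics.sh --describe --topic cluster-test --zookeeper hadoopdn2:2181

Every partition should list all three broker ids (0, 1 and 2) under Replicas.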

This completes the three Kafka deployment modes.
