
ZooKeeper and Kafka Installation, Deployment and Java Environment Setup

1. ZooKeeper Installation and Deployment

This article simulates the installation of a 3-node ZooKeeper (zk server) cluster on a single machine.

1.1. Create directories and extract the archive

cd /usr/
# Create the project directory
mkdir zookeeper

cd zookeeper
mkdir tmp
mkdir zookeeper-1
mkdir zookeeper-2
mkdir zookeeper-3

cd tmp
mkdir zk1
mkdir zk2
mkdir zk3

cd zk1
mkdir data
mkdir log

cd ../zk2
mkdir data
mkdir log

cd ../zk3
mkdir data
mkdir log

# Extract one copy of the archive into each of zookeeper-1, zookeeper-2 and zookeeper-3
cd /usr/zookeeper
tar -zxvf zookeeper-3.4.10.tgz -C zookeeper-1
tar -zxvf zookeeper-3.4.10.tgz -C zookeeper-2
tar -zxvf zookeeper-3.4.10.tgz -C zookeeper-3

 

1.2. Create the conf/zoo.cfg configuration file under each directory

/usr/zookeeper/zookeeper-1/zookeeper-3.4.10/conf/zoo.cfg contents:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/zookeeper/tmp/zk1/data
dataLogDir=/usr/zookeeper/tmp/zk1/log
clientPort=2181
server.1=192.168.68.128:2287:3387
server.2=192.168.68.128:2288:3388
server.3=192.168.68.128:2289:3389

 

/usr/zookeeper/zookeeper-2/zookeeper-3.4.10/conf/zoo.cfg contents:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/zookeeper/tmp/zk2/data
dataLogDir=/usr/zookeeper/tmp/zk2/log
clientPort=2182
server.1=192.168.68.128:2287:3387
server.2=192.168.68.128:2288:3388
server.3=192.168.68.128:2289:3389

 

/usr/zookeeper/zookeeper-3/zookeeper-3.4.10/conf/zoo.cfg contents:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/zookeeper/tmp/zk3/data
dataLogDir=/usr/zookeeper/tmp/zk3/log
clientPort=2183
server.1=192.168.68.128:2287:3387
server.2=192.168.68.128:2288:3388
server.3=192.168.68.128:2289:3389

 

Note: 192.168.68.128 is the server's IP address.

Because the cluster is simulated on a single machine, the ports must not clash, so the client ports 2181~2183, the quorum ports 2287~2289 and the leader-election ports 3387~3389 are staggered.

In addition, each zk instance needs its own data directory and log directory, so the directories that dataDir and dataLogDir point to must be created manually in advance, namely the directories from section 1.1:

 

/usr/zookeeper/tmp/zk1/data

/usr/zookeeper/tmp/zk1/log

 

/usr/zookeeper/tmp/zk2/data

/usr/zookeeper/tmp/zk2/log

 

/usr/zookeeper/tmp/zk3/data

/usr/zookeeper/tmp/zk3/log

 

1.3. Create the data/myid file under each data directory

There is another critical setting: under the directory that each zk server's dataDir points to, a file named myid must be created, and its content must match the x in the corresponding server.x entry in zoo.cfg, i.e.:

 

The content of /usr/zookeeper/tmp/zk1/data/myid is 1, matching the 1 in server.1

The content of /usr/zookeeper/tmp/zk2/data/myid is 2, matching the 2 in server.2

The content of /usr/zookeeper/tmp/zk3/data/myid is 3, matching the 3 in server.3
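
For example, the three myid files can be created like this (a minimal sketch; adjust the paths if your layout differs):

echo 1 > /usr/zookeeper/tmp/zk1/data/myid
echo 2 > /usr/zookeeper/tmp/zk2/data/myid
echo 3 > /usr/zookeeper/tmp/zk3/data/myid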

 

In a production environment, the steps for a distributed cluster deployment are basically the same; the only difference is that, since the zk servers sit on different machines, the IP in the configuration above is replaced with each server's real IP. With the servers on different machines there are no port conflicts, so every server's zk can use the same ports, which makes administration easier.

 

1.4. Start and verify

/usr/zookeeper/zookeeper-1/zookeeper-3.4.10/bin/zkServer.sh start &
/usr/zookeeper/zookeeper-2/zookeeper-3.4.10/bin/zkServer.sh start &
/usr/zookeeper/zookeeper-3/zookeeper-3.4.10/bin/zkServer.sh start &

 

Note: the & runs the process in the background, so you can close the terminal window after starting.

 

After the servers start successfully, run jps to check the processes:

 

20351 ZooKeeperMain

20791 QuorumPeerMain

20822 QuorumPeerMain

20865 QuorumPeerMain

 

You should see at least the three QuorumPeerMain processes above, one per zk instance (ZooKeeperMain only appears when a zkCli.sh client session is also running).

 

You can start a client to test the cluster:

bin/zkCli.sh -server 192.168.68.128:2181

 

Note: for a remote connection, simply replace the IP with that of the target server.

 

On success, you should land at a prompt similar to the following:

 

[zk: localhost:2181(CONNECTED) 0]  

 

You can then test with basic commands such as ls, create, delete and get (see the documentation for details). One particularly useful command is rmr, which recursively deletes a node and all of its children.
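
A quick illustrative session inside the zkCli prompt (the znode /test and its data are just examples): create a node, list the root, read the node back, then remove it recursively:

create /test hello
ls /
get /test
rmr /test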

 

Check the zk status:

bin/zkServer.sh status

Checking the status of each instance, you should see output like:

ZooKeeper JMX enabled by default

Using config: /usr/zookeeper/zookeeper-1/zookeeper-3.4.10/bin/../conf/zoo.cfg

Mode: follower

and, for exactly one of the three instances:

Mode: leader

 

At this point, the ZooKeeper cluster deployment is complete.

 

2. Kafka Installation and Deployment

2.1. Create directories and extract the archive

cd /usr/
# Create the project directory
mkdir kafka
cd kafka
mkdir tmp
cd tmp

# Create the Kafka message directories, used to store Kafka messages
mkdir kafka-logs-1
mkdir kafka-logs-2
mkdir kafka-logs-3

# Put the archive into /usr/kafka and extract it
cd /usr/kafka
tar -zxvf kafka_2.10-0.10.1.0.tgz

 

2.2. Modify the configuration files

Go to the config directory:

cd /usr/kafka/kafka_2.10-0.10.1.0/config

 

The only file we need to focus on is server.properties. Make three copies of it in the same directory, as shown in the commands after this list:

 

config/server-1.properties
config/server-2.properties
config/server-3.properties
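
One way to make the copies (a minimal sketch, assuming you are in the kafka_2.10-0.10.1.0 directory):

cd /usr/kafka/kafka_2.10-0.10.1.0
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
cp config/server.properties config/server-3.properties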

 

The default configuration is as follows:

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

 

# see kafka.server.KafkaConfig for additional details and defaults

 

############################# Server Basics #############################

 

# The id of the broker. This must be set to a unique integer for each broker.

broker.id=0

 

# Switch to enable topic deletion or not, default value is false

#delete.topic.enable=true

 

############################# Socket Server Settings #############################

 

# The address the socket server listens on. It will get the value returned from

# java.net.InetAddress.getCanonicalHostName() if not configured.

#   FORMAT:

#     listeners = security_protocol://host_name:port

#   EXAMPLE:

#     listeners = PLAINTEXT://your.host.name:9092

#listeners=PLAINTEXT://:9092

 

# Hostname and port the broker will advertise to producers and consumers. If not set,

# it uses the value for "listeners" if configured.  Otherwise, it will use the value

# returned from java.net.InetAddress.getCanonicalHostName().

#advertised.listeners=PLAINTEXT://your.host.name:9092

 

# The number of threads handling network requests

num.network.threads=3

 

# The number of threads doing disk I/O

num.io.threads=8

 

# The send buffer (SO_SNDBUF) used by the socket server

socket.send.buffer.bytes=102400

 

# The receive buffer (SO_RCVBUF) used by the socket server

socket.receive.buffer.bytes=102400

 

# The maximum size of a request that the socket server will accept (protection against OOM)

socket.request.max.bytes=104857600

 

 

############################# Log Basics #############################

 

# A comma seperated list of directories under which to store log files

log.dirs=/tmp/kafka-logs

 

# The default number of log partitions per topic. More partitions allow greater

# parallelism for consumption, but this will also result in more files across

# the brokers.

num.partitions=1

 

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.

# This value is recommended to be increased for installations with data dirs located in RAID array.

num.recovery.threads.per.data.dir=1

 

############################# Log Flush Policy #############################

 

# Messages are immediately written to the filesystem but by default we only fsync() to sync

# the OS cache lazily. The following configurations control the flush of data to disk.

# There are a few important trade-offs here:

#    1. Durability: Unflushed data may be lost if you are not using replication.

#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.

#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.

# The settings below allow one to configure the flush policy to flush data after a period of time or

# every N messages (or both). This can be done globally and overridden on a per-topic basis.

 

# The number of messages to accept before forcing a flush of data to disk

#log.flush.interval.messages=10000

 

# The maximum amount of time a message can sit in a log before we force a flush

#log.flush.interval.ms=1000

 

############################# Log Retention Policy #############################

 

# The following configurations control the disposal of log segments. The policy can

# be set to delete segments after a period of time, or after a given size has accumulated.

# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens

# from the end of the log.

 

# The minimum age of a log file to be eligible for deletion

log.retention.hours=168

 

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining

# segments don't drop below log.retention.bytes.

#log.retention.bytes=1073741824

 

# The maximum size of a log segment file. When this size is reached a new log segment will be created.

log.segment.bytes=1073741824

 

# The interval at which log segments are checked to see if they can be deleted according

# to the retention policies

log.retention.check.interval.ms=300000

 

############################# Zookeeper #############################

 

# Zookeeper connection string (see zookeeper docs for details).

# This is a comma separated host:port pairs, each corresponding to a zk

# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".

# You can also append an optional chroot string to the urls to specify the

# root directory for all kafka znodes.

zookeeper.connect=localhost:2181

 

# Timeout in ms for connecting to zookeeper

zookeeper.connection.timeout.ms=6000

 

Only four settings need to be changed:

 

broker.id=0

#listeners=PLAINTEXT://:9092

log.dirs=/tmp/kafka-logs

zookeeper.connect=localhost:2181

 

Modify the three configuration files, changing the four settings above as follows:

config/server-1.properties

broker.id=1

listeners=PLAINTEXT://192.168.68.128:9092

log.dirs=/usr/kafka/tmp/kafka-logs-1

zookeeper.connect=192.168.68.128:2181,192.168.68.128:2182,192.168.68.128:2183

 

config/server-2.properties

broker.id=2

listeners=PLAINTEXT://192.168.68.128:9093

log.dirs=/usr/kafka/tmp/kafka-logs-2

zookeeper.connect=192.168.68.128:2181,192.168.68.128:2182,192.168.68.128:2183

 

config/server-3.properties

broker.id=3

listeners=PLAINTEXT://192.168.68.128:9094

log.dirs=/usr/kafka/tmp/kafka-logs-3

zookeeper.connect=192.168.68.128:2181,192.168.68.128:2182,192.168.68.128:2183

 

Note: 192.168.68.128 is the server's IP address. Because all three brokers run on the same machine, each one must listen on a different port (9092, 9093 and 9094 here).

 

2.3. Start and verify

Go to the kafka directory and start the kafka cluster in the background:

bin/kafka-server-start.sh ./config/server-1.properties &

 

bin/kafka-server-start.sh ./config/server-2.properties &

 

bin/kafka-server-start.sh ./config/server-3.properties &
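
If the brokers should keep running after you log out, the start script can also be run in daemon mode instead of using & (a minimal sketch; the -daemon flag is supported by kafka-server-start.sh in this Kafka version):

bin/kafka-server-start.sh -daemon ./config/server-1.properties
bin/kafka-server-start.sh -daemon ./config/server-2.properties
bin/kafka-server-start.sh -daemon ./config/server-3.properties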

 

Run jps to verify that the brokers have started:

 

2820 QuorumPeerMain

9366 Kafka

9655 Kafka

9924 Kafka

2877 QuorumPeerMain

2923 QuorumPeerMain

10189 Jps

 

At this point, the Kafka cluster deployment is complete.
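
As an optional sanity check, you can create the topic that the Java code in section 3 uses and then describe it (the topic name test111 matches the code below; the partition and replication counts are just one reasonable choice for a 3-broker cluster):

cd /usr/kafka/kafka_2.10-0.10.1.0
bin/kafka-topics.sh --zookeeper 192.168.68.128:2181 --create --topic test111 --partitions 3 --replication-factor 3
bin/kafka-topics.sh --zookeeper 192.168.68.128:2181 --describe --topic test111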

 

3. Kafka Java Development Environment Setup

3.1. Import the jar packages

Extract the Kafka archive, go to kafka_2.10-0.10.1.0\libs, and copy the required jar packages into the lib directory of your Java project:

(The original post showed a screenshot of the required jar packages here.)

 

3.2. Producer code

package com.pers.producer;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import kafka.serializer.StringEncoder;

/**
 * @author liangyadong
 * @date 2017-05-26 15:04:07
 * @version 1.0
 */
public class KafkaProducer {

    private String topic;

    public KafkaProducer(String topic) {
        super();
        this.topic = topic;
    }

    public void run() {
        Producer<Integer, String> producer = createProducer();

        int i = 0;
        while (true) {
            // Send one message to the topic every second
            producer.send(new KeyedMessage<Integer, String>(topic, "message:" + i++));
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Producer<Integer, String> createProducer() {
        Properties properties = new Properties();
        // ZooKeeper ensemble
        properties.put("zookeeper.connect", "192.168.68.128:2181,192.168.68.128:2182,192.168.68.128:2183");
        properties.put("serializer.class", StringEncoder.class.getName());
        // Kafka broker list
        properties.put("metadata.broker.list", "192.168.68.128:9092,192.168.68.128:9093,192.168.68.128:9094");

        return new Producer<Integer, String>(new ProducerConfig(properties));
    }

    public static void main(String[] args) {
        new KafkaProducer("test111").run(); // Create the topic and send messages
    }
}

 

3.3. Consumer code

package com.pers.consumer;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

/**
 * @author liangyadong
 * @date 2017-05-26 16:01:37
 * @version 1.0
 */
public class KafkaConsumer extends Thread {

    private String topic;

    public KafkaConsumer(String topic) {
        super();
        this.topic = topic;
    }

    public void run() {
        ConsumerConnector consumer = createConsumer();
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, 1); // Use a single stream (thread) for this topic
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams = consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = messageStreams.get(topic).get(0); // The single stream for this topic
        ConsumerIterator<byte[], byte[]> iterator = stream.iterator();
        while (iterator.hasNext()) {
            String message = new String(iterator.next().message());
            System.out.println("Received: " + message);
        }
    }

    private ConsumerConnector createConsumer() {
        Properties properties = new Properties();
        // ZooKeeper ensemble
        properties.put("zookeeper.connect", "192.168.68.128:2181,192.168.68.128:2182,192.168.68.128:2183");
        // Consumer group id; consumers that share a group.id split the topic's partitions between them
        properties.put("group.id", "group5");
        return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
    }

    public static void main(String[] args) {
        new KafkaConsumer("test111").run(); // Consume the test111 topic created on the kafka cluster
    }
}

 

3.4. Start and verify

1. Start the producer

Run the main method in KafkaProducer.java.

2. Start the consumer

Run the main method in KafkaConsumer.java.

 

The console output looks like this:

Received: message:1
Received: message:2
Received: message:3
Received: message:4
Received: message:5
Received: message:6

...

 

At this point, the setup is complete.

 

4. Common Commands

4.1. Zookeeper

4.1.1. Start the service

bin/zkServer.sh start &

 

4.1.2. Stop the service

zkServer.sh stop

 

4.2. Kafka

4.2.1. Start the service (start ZooKeeper first)

bin/kafka-server-start.sh ./config/server-1.properties &

 

4.2.2. Stop the service (stop Kafka first, then ZooKeeper)

kafka-server-stop.sh

 

4.2.3. List the current topics

./kafka-topics.sh --zookeeper 192.168.68.128:2181 --list

 

4.2.4. Create a topic (note the number of partitions)

kafka-topics.sh --zookeeper 192.168.68.128:2181 --create --topic XXX --partitions 2 --replication-factor 1

 

4.2.5. Delete a topic

kafka-topics.sh --zookeeper 192.168.68.128:2181 --delete --topic XXX

 

4.2.6. Start a console producer

kafka-console-producer.sh --broker-list 192.168.68.128:9092 --topic XXX

 

4.2.7. Start a console consumer

kafka-console-consumer.sh --zookeeper 192.168.68.128:2181 --topic XXX  [add --from-beginning to reset the offset and consume from the beginning of the topic; without it, consumption starts from the moment the consumer is launched]
