Kafka Application Example
Kafka is a distributed, publish/subscribe-based messaging system. Its main design goals are:
- Message persistence through an O(1) disk data structure that sustains stable performance over long periods, even with terabytes of stored messages.
- High throughput: even on very modest hardware, Kafka can handle hundreds of thousands of messages per second.
- Consumers pull data (random reads); the broker uses the sendfile system call for zero-copy transfer, and data is fetched in batches.
- Consumption state (offsets) is kept on the client side.
- Message partitioning across Kafka servers and distributed consumption, while guaranteeing in-order delivery within each partition (see the sketch after this list).
- Data migration and cluster expansion are transparent to users.
- Support for parallel data loading into Hadoop.
- Support for both online and offline scenarios.
- Durability: data is persisted to disk and replicated to prevent loss.
- Scale-out: the cluster can be expanded without downtime.
- Periodic deletion: the retention time of a partition's segment files is configurable.
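A minimal sketch of the per-partition ordering guarantee, assuming a broker on localhost:9092 and an illustrative topic named "orders" (neither the class name nor the topic comes from the project linked below): the default partitioner hashes the record key, so all records sharing a key land in the same partition and are read back in send order.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class OrderedSendSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Every record carries the same key, so the default partitioner
            // assigns them all to one partition; within that partition Kafka
            // preserves append order, so a consumer sees event-0..event-4 in order.
            for (int i = 0; i < 5; i++) {
                producer.send(new ProducerRecord<>("orders", "user-42", "event-" + i));
            }
        }
    }
}

Ordering holds only within a partition; records with different keys may be interleaved across partitions.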
Project example: https://github.com/windwant/kafka-test
kafka.properties
bootstrap.servers=localhost:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
request.required.acks=1
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
group.id=test-consumer-group
Producer:
package org.windwant.kafka;

import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

/**
 * Producer
 */
public class MyKafkaProducer {
    private Properties props;

    public static void main(String[] args) throws ConfigurationException {
        new MyKafkaProducer().start();
    }

    public MyKafkaProducer() throws ConfigurationException {
        props = new Properties();
        PropertiesConfiguration config = new PropertiesConfiguration("kafka.properties");
        config.setReloadingStrategy(new FileChangedReloadingStrategy());
        // auto-save the configuration file when it changes
        config.setAutoSave(true);
        props.put("value.serializer", config.getString("value.serializer"));
        props.put("key.serializer", config.getString("key.serializer"));
        // note: "request.required.acks" is the legacy producer name; the new Java client reads "acks"
        props.put("request.required.acks", config.getString("request.required.acks"));
        props.put("bootstrap.servers", config.getString("bootstrap.servers"));
    }

    public void start() {
        try {
            Producer<String, String> producer = new KafkaProducer<>(props);
            for (int i = 0; i < 100; i++) {
                // send() returns a Future<RecordMetadata>; get() blocks until the broker acks
                RecordMetadata result = producer.send(new ProducerRecord<>("mykafka",
                        "kafka key: " + i, "kafka value: " + i)).get();
                System.out.println("producer send: " + result);
                Thread.sleep(1000);
            }
            producer.close();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }
}
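The producer above calls get() on the future returned by send(), blocking until the broker acknowledges each record, so at most one message is in flight at a time. A non-blocking sketch using the client's Callback interface, reusing the broker address and topic from the example (the class name is illustrative, not part of the project):

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;

public class AsyncProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            // send() buffers the record and returns immediately; the callback
            // runs once the broker acknowledges the write or the send fails.
            producer.send(new ProducerRecord<>("mykafka", "kafka key: " + i, "kafka value: " + i),
                    new Callback() {
                        @Override
                        public void onCompletion(RecordMetadata metadata, Exception exception) {
                            if (exception != null) {
                                exception.printStackTrace();
                            } else {
                                System.out.println("acked: partition=" + metadata.partition()
                                        + ", offset=" + metadata.offset());
                            }
                        }
                    });
        }
        producer.close(); // flushes records still sitting in the client buffer
    }
}

Letting many sends stay in flight allows the client to batch records per partition, which is where most of the high throughput mentioned above comes from.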
Consumer:
package org.windwant.kafka;

import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

/**
 * Consumer.
 */
public class MyKafkaConsumer {
    private Properties props;

    public static void main(String[] args) throws ConfigurationException {
        new MyKafkaConsumer().start();
    }

    public MyKafkaConsumer() throws ConfigurationException {
        props = new Properties();
        PropertiesConfiguration config = new PropertiesConfiguration("kafka.properties");
        config.setReloadingStrategy(new FileChangedReloadingStrategy());
        // auto-save the configuration file when it changes
        config.setAutoSave(true);
        props.put("value.deserializer", config.getString("value.deserializer"));
        props.put("key.deserializer", config.getString("key.deserializer"));
        props.put("bootstrap.servers", config.getString("bootstrap.servers"));
        props.put("group.id", config.getString("group.id"));
    }

    public void start() {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("mykafka"));
        while (true) {
            // poll blocks for up to 100 ms waiting for new records
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}
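The consumer above leaves enable.auto.commit at its default of true, so offsets are committed in the background; a crash between an automatic commit and the actual processing of the polled records can skip messages. A minimal sketch of manual offset management, assuming the same broker, topic, and group.id as the example (the class name is illustrative):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class ManualCommitConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-consumer-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false"); // commit offsets ourselves

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("mykafka"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
            // Commit only after the whole batch is processed: a crash mid-batch
            // re-delivers the records on restart instead of silently losing them.
            consumer.commitSync();
        }
    }
}

This trades at-most-once delivery for at-least-once, matching the "consumption state is kept on the client" design goal listed above.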