
Storm Notes: A Summary of Technical Points

Table of Contents

· Overview

· Building a Cluster by Hand

    · Introduction

    · Installing Python

    · Configuration Files

    · Startup and Testing

· Application Deployment

    · Parameter Configuration

    · Storm Commands

· Internals

    · Storm Architecture

    · Storm Components

    · Stream Grouping

    · Daemon Fault Tolerance

    · Guaranteeing Message Processing

    · Message Transport

· API

    · WordCount Example

    · Deployment Modes

    · Component Interfaces

    · Component Base Classes

    · Data Connection Patterns

    · Common Topology Patterns

    · Logging (Cluster Mode)

    · Setting Parallelism

    · Tick Timing Mechanism

    · Serialization

    · Integration with Other Systems

· Performance Tuning


 

 

Overview

1. Apache Storm is a distributed real-time computation framework open-sourced by Twitter.

2. Storm is developed mainly in Java and Clojure: Java defines the skeleton, while Clojure implements the core logic.

3. A Storm application (Topology): a Spout is the tap, continuously reading messages and emitting them; a Bolt is a joint in the pipe, forwarding the message stream according to a Stream grouping strategy.

(figure: a Topology composed of Spouts and Bolts)

4. Storm features

    a) Ease of use: by following the Topology, Spout, and Bolt programming conventions you can build highly scalable applications, without having to understand the underlying RPC, redundancy between Workers, or data partitioning.

    b) Fault tolerance: the daemons (Nimbus, Supervisor, etc.) are stateless (state is kept in ZooKeeper) and can be restarted at will; when a Worker fails or a machine goes down, Storm automatically assigns a new Worker to replace it.

    c) Scalability: scales linearly.

    d) Completeness: the Acker mechanism guarantees that no data is lost; the transaction mechanism guarantees data accuracy.

5. Storm application areas

    a) Stream Processing: the most basic use; Storm processes a continuous stream of incoming messages and writes the results to a store.

    b) Continuous Computation: Storm keeps a computation running until the user terminates the computation process.

    c) Distributed RPC: Storm can be used as a distributed RPC framework.

6. Comparison with Spark Streaming

    a) Combining with historical data: Spark Streaming slices the stream into small time windows (seconds to minutes) and processes each slice in a batch-like fashion; the same logic and algorithms can therefore serve both batch and real-time processing, which makes joint analysis of historical and real-time data easier.

    b) Latency: Storm processes each incoming event individually, whereas Spark Streaming processes the events of a whole time window, so Storm's latency is very low and Spark Streaming's is comparatively higher.

Building a Cluster by Hand

Introduction

1. Environment:

    Role          Host name
    Nimbus        centos1
    Supervisor    centos2, centos3

2. Assume the JDK and a ZooKeeper cluster are already installed.

Installing Python

1. [Nimbus, Supervisor] As root, install Python under /usr/local/python.

tar zxvf Python-2.7.13.tgz    # on Ubuntu, first install the build dependencies: sudo apt-get install build-essential zlib1g-dev
cd Python-2.7.13/
./configure --prefix=/usr/local/python
make
sudo make install

2. [Nimbus, Supervisor] Set up the python command.

ln -s /usr/local/python/bin/python /usr/bin/python    # symbolic link
python -V                                             # verify

Configuration Files

3. [Nimbus]

tar zxvf apache-storm-1.1.0.tar.gz -C /opt/app
cd /opt/app/apache-storm-1.1.0
vi conf/storm.yaml

storm.zookeeper.servers:
    - "centos1"
    - "centos2"
storm.zookeeper.port: 2181
nimbus.seeds: ["centos1"]
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
storm.local.dir: "/opt/data/storm.local.dir"
ui.port: 8080

4. [Nimbus] Copy the Storm directory from Nimbus to each Supervisor.

scp -r /opt/app/apache-storm-1.1.0 hadoop@centos2:/opt/app
scp -r /opt/app/apache-storm-1.1.0 hadoop@centos3:/opt/app

Startup and Testing

5. [Nimbus, Supervisor] Configure the Storm environment variables.

export STORM_HOME=/opt/app/apache-storm-1.1.0
export PATH=$PATH:$STORM_HOME/bin

6. [Nimbus] Start the daemons.

nohup bin/storm nimbus 1>/dev/null 2>&1 &
nohup bin/storm ui 1>/dev/null 2>&1 &
nohup bin/storm logviewer 1>/dev/null 2>&1 &
jps

nimbus       # Nimbus daemon
core         # Storm UI daemon
logviewer    # LogViewer daemon

7. [Supervisor] Start the daemons.

nohup bin/storm supervisor 1>/dev/null 2>&1 &
nohup bin/storm logviewer 1>/dev/null 2>&1 &
jps

supervisor   # Supervisor daemon
logviewer    # LogViewer daemon

8. [Nimbus] Test.

storm jar teststorm.jar teststorm.WordCountTopology wordcount

9. Monitoring page.

http://centos1:8080

(screenshot: Storm UI)

10. [Nimbus] Stop the daemons.

kill -s TERM ${PID}    # PID is the process ID of each daemon

Application Deployment

Parameter Configuration

1. Configuration methods (a short sketch follows the precedence list below)

    a) External Component Specific Configuration: via the methods of the SpoutDeclarer and BoltDeclarer objects returned by TopologyBuilder's setSpout and setBolt.

    b) Internal Component Specific Configuration: override the Spout's or Bolt's getComponentConfiguration method and return a Map.

    c) Topology Specific Configuration: pass arguments on the command line, e.g. "bin/storm -c conf1=v1 -c conf2=v2".

    d) storm.yaml: "$STORM_HOME/conf/storm.yaml".

    e) defaults.yaml: "$STORM_HOME/lib/storm-core-x.y.z.jar/defaults.yaml".

2. Parameter precedence:

    defaults.yaml

    < storm.yaml

    < Topology Specific Configuration

    < Internal Component Specific Configuration

    < External Component Specific Configuration
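For illustration, a minimal Java sketch of methods a), b), and c); the configuration key "mybolt.some.setting", the component names, and the values are hypothetical, not taken from this post:

import java.util.HashMap;
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseRichBolt;

public class ConfigExamples {

    // b) Internal Component Specific Configuration: override getComponentConfiguration.
    public abstract static class MyBolt extends BaseRichBolt {
        @Override
        public Map<String, Object> getComponentConfiguration() {
            Map<String, Object> conf = new HashMap<>();
            conf.put("mybolt.some.setting", "default");   // hypothetical key
            return conf;
        }
    }

    // a) External Component Specific Configuration: via the BoltDeclarer returned by setBolt.
    public static void wire(TopologyBuilder builder, MyBolt bolt) {
        builder.setBolt("my-bolt", bolt, 2)
               .addConfiguration("mybolt.some.setting", "override")   // hypothetical key
               .shuffleGrouping("my-spout");                          // assumes a spout named "my-spout"
    }

    // c) Topology Specific Configuration: set on the Config passed to StormSubmitter,
    //    or on the command line, e.g. "storm jar app.jar Main -c topology.workers=2".
    public static Config topologyConf() {
        Config conf = new Config();
        conf.setNumWorkers(2);
        return conf;
    }
}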

Storm Commands

Commonly used commands are listed below; see the official documentation for details.

    a) storm jar topology-jar-path class ...

    b) storm kill topology-name [-w wait-time-secs]

    c) storm activate topology-name

    d) storm deactivate topology-name

    e) storm rebalance topology-name [-w wait-time-secs] [-n new-num-workers] [-e component=parallelism]*

    f) storm classpath

    g) storm localconfvalue conf-name

    h) storm remoteconfvalue conf-name

    i) storm nimbus

    j) storm supervisor

    k) storm ui

    l) storm get-errors topology-name

    m) storm kill_workers

    n) storm list

    o) storm logviewer

    p) storm set_log_level -l [logger name]=[log level][:optional timeout] -r [logger name] topology-name

    q) storm version

    r) storm help [command]

Internals

Storm Architecture

(figure: Storm architecture)

Storm Components

Nimbus: Responsible for resource allocation and task scheduling, similar to Hadoop's JobTracker.

Supervisor: Accepts tasks assigned by Nimbus and starts/stops the Worker processes it manages, similar to Hadoop's TaskTracker.

Worker: A process that runs the concrete processing logic of the components.

Executor: A physical thread inside a Worker process. Tasks of the same Spout/Bolt may share one physical thread, and an Executor only runs Tasks belonging to the same Spout/Bolt.

Task: The concrete work done by a Spout/Bolt, and also the unit over which streams are grouped between nodes.

Topology: The logic of a real-time application is packaged into a Topology object, similar to a Hadoop job. Unlike a job, a Topology runs until it is explicitly killed.

Spout:

a) Produces the source data stream of a Topology.

b) Typically a Spout reads from a data source (such as a message queue) and emits the data in its nextTuple method for Bolts to consume.

c) It can declare one or more streams via OutputFieldsDeclarer's declareStream method and emit to a specific stream via SpoutOutputCollector's emit method.

Bolt:

a) Receives and processes data from Spouts (or other Bolts) in a Topology.

b) Complex logic can be split across multiple Bolts.

c) When a Bolt receives a message, its execute method is called; there it can filter, aggregate, write to a database, and so on.

d) It can declare one or more streams via OutputFieldsDeclarer's declareStream method and emit to a specific stream via OutputCollector's emit method.

Tuple: The basic unit of message passing.

Stream: An unbounded sequence of Tuples forms a Stream.

Stream Grouping: The partitioning of messages; 7 grouping strategies are built in.

Stream Grouping

1. Stream Grouping defines how a data stream is partitioned among Bolts.

2. Seven built-in Stream Grouping strategies (see the sketch after this list):

    a) Shuffle grouping: random grouping that distributes Tuples evenly across the Bolt's tasks.

    b) Fields grouping: group by one or more fields of the Tuple; Tuples with the same values of the grouping fields go to the same Bolt task.

    c) All grouping: the stream is replicated to every Bolt task; use with caution.

    d) Global grouping: the entire stream goes to the single Bolt task with the lowest ID.

    e) None grouping: you do not care how the stream is grouped; currently equivalent to Shuffle grouping.

    f) Direct grouping: the Spout/Bolt producing a Tuple decides which consumer Bolt task receives it, via OutputCollector's emitDirect method.

    g) Local or shuffle grouping: if the target Bolt has one or more tasks in the same Worker process as the producing task, the Tuple is sent to those in-process tasks; otherwise it behaves like Shuffle grouping.

3. Custom Stream Grouping strategy: implement the CustomStreamGrouping interface.
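A sketch of how several of these groupings are declared, reusing the WordCount components from the API section below (stream and component names are illustrative):

import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class GroupingExamples {

    public static TopologyBuilder build() {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentence-spout", new FileReadSpout("/tmp/input.txt"));

        // Shuffle grouping: tuples are distributed randomly and evenly across tasks.
        builder.setBolt("split-bolt", new LineSplitBolt(), 4)
               .shuffleGrouping("sentence-spout");

        // Local or shuffle grouping: prefer tasks in the same worker to avoid network transfer.
        builder.setBolt("split-bolt-local", new LineSplitBolt(), 4)
               .localOrShuffleGrouping("sentence-spout");

        // Fields grouping: tuples with the same "word" value always reach the same task.
        builder.setBolt("count-bolt", new WordCountBolt("/tmp/out"), 4)
               .fieldsGrouping("split-bolt", new Fields("word"));

        // Global grouping: the whole stream goes to the single task with the lowest task id.
        builder.setBolt("report-bolt", new WordCountBolt("/tmp/report"))
               .globalGrouping("split-bolt");

        return builder;
    }
}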

Daemon Fault Tolerance

1. Worker: if a Worker fails, its Supervisor restarts it; if it keeps failing, Nimbus reassigns the Worker to another node.

2. Node: if a machine fails, the Tasks on that node time out and Nimbus reassigns them to other nodes.

3. Nimbus and Supervisor: both are fail-fast (the process self-destructs on failure) and stateless (state is kept in ZooKeeper or on disk), so a failure is handled simply by restarting the process; running Worker processes are not affected by a Nimbus or Supervisor failure, but cross-node migration of Workers is.

4. Nimbus: Nimbus HA was introduced in Storm 1.0.0.

Guaranteeing Message Processing

1. MessageId: Storm lets the user assign a MessageId (of type Object) to a new Tuple when the Spout emits it; multiple Tuples can share the same MessageId, indicating that they belong to the same message unit.

2. Reliability: a message is fully processed when, within the Tuple timeout, the Stream Tuple bound to the MessageId and all Tuples derived from it have been processed by every Bolt they are supposed to reach in the Topology. Storm uses Ackers to track Tuple reliability (a Bolt calls OutputCollector's ack or fail method to tell Storm that the Tuple was processed successfully or failed).

3. Tuple timeout: configured with the parameter "topology.message.timeout.secs"; the default is 30 seconds.

4. Anchoring

    a) Tuples flowing from the Spout through the Bolts form a Tuple tree; taking WordCount as an example:

(figure: Tuple tree of the WordCount example)

    b) Anchoring: once a Tuple is anchored, if it is not acked downstream, the Spout at the root will re-emit it later.

    c) API usage with anchoring:

// Spout
collector.emit(new Values(content1), uniqueMessageId);

// Bolt
collector.emit(tuple, new Values(content2));
collector.ack(tuple);

    d) Without anchoring: if a Tuple is not anchored and not acked downstream, the Spout at the root will not re-emit it.

    e) API usage without anchoring:

// Bolt
collector.emit(new Values(content));
collector.ack(tuple);

    f) Multi-anchoring: an output Tuple can be anchored to more than one input Tuple. Multi-anchoring breaks the tree structure and forms a directed acyclic graph (DAG).

(figure: DAG formed by multi-anchoring)

    g) API usage with multi-anchoring:

// Bolt
List<Tuple> anchors = new ArrayList<>();
anchors.add(tuple1);
anchors.add(tuple2);
collector.emit(anchors, new Values(content));
collector.ack(tuple);

    h) ack and fail: every Tuple must be acked or failed. Storm tracks each Tuple in memory, so if Tuples are never acked or failed, the task will eventually run out of memory.

    i) Acker tasks: a Topology has a group of special Acker tasks that track the Tuple tree or DAG. The number of Acker tasks is configured with the parameter "topology.acker.executors" or "Config.TOPOLOGY_ACKER_EXECUTORS" and defaults to 1; increase it when the processing volume is large.

5. Disabling reliability: if strong reliability is not required, it can be turned off to improve performance (a sketch follows this list).

    a) Option 1: set "Config.TOPOLOGY_ACKER_EXECUTORS" to 0.

    b) Option 2: use the un-anchored API style.
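A minimal sketch of both options, assuming only the standard Config and SpoutOutputCollector APIs:

import org.apache.storm.Config;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.tuple.Values;

public class ReliabilityOff {

    // Option 1: zero Acker executors, so acking is short-circuited and tuples are never replayed.
    public static Config withoutAckers() {
        Config conf = new Config();
        conf.setNumAckers(0);   // sets Config.TOPOLOGY_ACKER_EXECUTORS to 0
        return conf;
    }

    // Option 2: emit without a MessageId (Spout side), so Storm does not track the tuple tree;
    // the Bolt-side equivalent is emitting without anchoring, as shown above.
    public static void emitUntracked(SpoutOutputCollector collector, String content) {
        collector.emit(new Values(content));   // no messageId argument
    }
}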

Message Transport

Since Storm 0.9.0, Netty is used as the messaging layer; ZeroMQ is no longer required.

API

WordCount Example

1. WordCountTopology.java

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WordCountTopology {

    private static final Logger logger = LoggerFactory.getLogger(WordCountTopology.class);

    public static void main(String[] args) throws InterruptedException {
        final String inputFile = "/opt/app/apache-storm-1.1.0/LICENSE";
        final String outputDir = "/opt/workspace/wordcount";

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout(FileReadSpout.class.getSimpleName(), new FileReadSpout(inputFile));
        builder.setBolt(LineSplitBolt.class.getSimpleName(), new LineSplitBolt())
                .shuffleGrouping(FileReadSpout.class.getSimpleName());
        // 4 tasks, so ultimately 4 output files are produced
        builder.setBolt(WordCountBolt.class.getSimpleName(), new WordCountBolt(outputDir), 2)
                .setNumTasks(4)
                .fieldsGrouping(LineSplitBolt.class.getSimpleName(), new Fields("word"));

        Config conf = new Config();
        conf.setDebug(true);
        if (args != null && args.length > 0) {
            try {
                StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
            } catch (AlreadyAliveException | InvalidTopologyException | AuthorizationException e) {
                logger.error("Failed to submit " + WordCountTopology.class.getName() + ".", e);
            }
        } else {
            conf.setDebug(true);
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology(WordCountTopology.class.getSimpleName(), conf, builder.createTopology());
            Thread.sleep(30 * 1000);
            cluster.shutdown();
        }
    }

}

2. FileReadSpout.java

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.Map;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class FileReadSpout extends BaseRichSpout {

    private static final long serialVersionUID = 8543601286964250940L;

    private String inputFile;

    private BufferedReader reader;

    private SpoutOutputCollector collector;

    public FileReadSpout(String inputFile) {
        this.inputFile = inputFile;
    }

    @Override
    @SuppressWarnings("rawtypes")
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        try {
            reader = new BufferedReader(new FileReader(inputFile));
        } catch (FileNotFoundException e) {
            throw new RuntimeException("Cannot find file [" + inputFile + "].", e);
        }
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        try {
            String line = null;
            while ((line = reader.readLine()) != null) {
                collector.emit(new Values(line));
            }
        } catch (IOException e) {
            throw new RuntimeException("Encountered a file read error.", e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("line"));
    }

    @Override
    public void close() {
        if (reader != null) {
            try {
                reader.close();
            } catch (IOException e) {
                // Ignore
            }
        }
        super.close();
    }

}

3. LineSplitBolt.java

import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class LineSplitBolt extends BaseRichBolt {

    private static final long serialVersionUID = -2045688041930588092L;

    private OutputCollector collector;

    @Override
    @SuppressWarnings("rawtypes")
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        String line = tuple.getStringByField("line");
        String[] words = line.split(" ");
        for (String word : words) {
            word = word.trim();
            if (!word.isEmpty()) {
                word = word.toLowerCase();
                collector.emit(new Values(word, 1));
            }
        }

        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }

    @Override
    public void cleanup() {
        super.cleanup();
    }

}

4. WordCountBolt.java

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WordCountBolt extends BaseRichBolt {

    private static final long serialVersionUID = 8239697869626573368L;

    private static final Logger logger = LoggerFactory.getLogger(WordCountBolt.class);

    private String outputDir;

    private OutputCollector collector;

    private Map<String, Integer> wordCounter;

    public WordCountBolt(String outputDir) {
        this.outputDir = outputDir;
    }

    @Override
    @SuppressWarnings("rawtypes")
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        wordCounter = new HashMap<>();
    }

    @Override
    public void execute(Tuple tuple) {
        String word = tuple.getStringByField("word");
        Integer count = tuple.getIntegerByField("count");
        Integer wordCount = wordCounter.get(word);
        if (wordCount == null) {
            wordCounter.put(word, count);
        } else {
            wordCounter.put(word, count + wordCount);
        }

        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
    }

    @Override
    public void cleanup() {
        if (wordCounter != null) {
            outputResult(wordCounter);
            wordCounter.clear();
        }
        super.cleanup();
    }

    private void outputResult(Map<String, Integer> wordCounter) {
        String filePath = outputDir + "/" + UUID.randomUUID().toString();
        RandomAccessFile randomAccessFile = null;
        try {
            randomAccessFile = new RandomAccessFile(filePath, "rw");
            for (Map.Entry<String, Integer> entry : wordCounter.entrySet()) {
                randomAccessFile.writeChars(entry.getKey());
                randomAccessFile.writeChar('\t');
                randomAccessFile.writeChars(String.valueOf(entry.getValue()));
                randomAccessFile.writeChar('\n');
            }
        } catch (IOException e) {
            logger.error("Failed to write file [" + filePath + "].", e);
        } finally {
            if (randomAccessFile != null) {
                try {
                    randomAccessFile.close();
                } catch (IOException e) {
                    logger.warn("Failed to close output stream.", e);
                }
            }
        }
    }

}

Deployment Modes

Application deployment (Topology submission) falls into two categories:

    a) Local mode: simulates a Storm cluster inside a single process; used for development and testing.

    b) Cluster mode: used in production.

Component Interfaces

1. IComponent

package org.apache.storm.topology;

import java.io.Serializable;
import java.util.Map;

/**
 * Common methods for all possible components in a topology. This interface is used
 * when defining topologies using the Java API.
 */
public interface IComponent extends Serializable {

    /**
     * Declare the output schema for all the streams of this topology.
     *
     * @param declarer this is used to declare output stream ids, output fields, and whether or not each output stream is a direct stream
     */
    void declareOutputFields(OutputFieldsDeclarer declarer);

    /**
     * Declare configuration specific to this component. Only a subset of the "topology.*" configs can
     * be overridden. The component configuration can be further overridden when constructing the
     * topology using {@link TopologyBuilder}
     *
     */
    Map<String, Object> getComponentConfiguration();

}

2. ISpout

package org.apache.storm.spout;

import org.apache.storm.task.TopologyContext;
import java.util.Map;
import java.io.Serializable;

/**
 * ISpout is the core interface for implementing spouts. A Spout is responsible
 * for feeding messages into the topology for processing. For every tuple emitted by
 * a spout, Storm will track the (potentially very large) DAG of tuples generated
 * based on a tuple emitted by the spout. When Storm detects that every tuple in
 * that DAG has been successfully processed, it will send an ack message to the Spout.
 *
 * If a tuple fails to be fully processed within the configured timeout for the
 * topology (see {@link org.apache.storm.Config}), Storm will send a fail message to the spout
 * for the message.
 *
 * When a Spout emits a tuple, it can tag the tuple with a message id. The message id
 * can be any type. When Storm acks or fails a message, it will pass back to the
 * spout the same message id to identify which tuple it's referring to. If the spout leaves out
 * the message id, or sets it to null, then Storm will not track the message and the spout
 * will not receive any ack or fail callbacks for the message.
 *
 * Storm executes ack, fail, and nextTuple all on the same thread. This means that an implementor
 * of an ISpout does not need to worry about concurrency issues between those methods. However, it
 * also means that an implementor must ensure that nextTuple is non-blocking: otherwise
 * the method could block acks and fails that are pending to be processed.
 */
public interface ISpout extends Serializable {
    /**
     * Called when a task for this component is initialized within a worker on the cluster.
     * It provides the spout with the environment in which the spout executes.
     *
     * This includes the:
     *
     * @param conf The Storm configuration for this spout. This is the configuration provided to the topology merged in with cluster configuration on this machine.
     * @param context This object can be used to get information about this task's place within the topology, including the task id and component id of this task, input and output information, etc.
     * @param collector The collector is used to emit tuples from this spout. Tuples can be emitted at any time, including the open and close methods. The collector is thread-safe and should be saved as an instance variable of this spout object.
     */
    void open(Map conf, TopologyContext context, SpoutOutputCollector collector);

    /**
     * Called when an ISpout is going to be shutdown. There is no guarentee that close
     * will be called, because the supervisor kill -9's worker processes on the cluster.
     *
     * The one context where close is guaranteed to be called is a topology is
     * killed when running Storm in local mode.
     */
    void close();

    /**
     * Called when a spout has been activated out of a deactivated mode.
     * nextTuple will be called on this spout soon. A spout can become activated
     * after having been deactivated when the topology is manipulated using the
     * `storm` client.
     */
    void activate();

    /**
     * Called when a spout has been deactivated. nextTuple will not be called while
     * a spout is deactivated. The spout may or may not be reactivated in the future.
     */
    void deactivate();

    /**
     * When this method is called, Storm is requesting that the Spout emit tuples to the
     * output collector. This method should be non-blocking, so if the Spout has no tuples
     * to emit, this method should return. nextTuple, ack, and fail are all called in a tight
     * loop in a single thread in the spout task. When there are no tuples to emit, it is courteous
     * to have nextTuple sleep for a short amount of time (like a single millisecond)
     * so as not to waste too much CPU.
     */
    void nextTuple();

    /**
     * Storm has determined that the tuple emitted by this spout with the msgId identifier
     * has been fully processed. Typically, an implementation of this method will take that
     * message off the queue and prevent it from being replayed.
     */
    void ack(Object msgId);

    /**
     * The tuple emitted by this spout with the msgId identifier has failed to be
     * fully processed. Typically, an implementation of this method will put that
     * message back on the queue to be replayed at a later time.
     */
    void fail(Object msgId);
}

3. IBolt

package org.apache.storm.task;

import org.apache.storm.tuple.Tuple;
import java.util.Map;
import java.io.Serializable;

/**
 * An IBolt represents a component that takes tuples as input and produces tuples
 * as output. An IBolt can do everything from filtering to joining to functions
 * to aggregations. It does not have to process a tuple immediately and may
 * hold onto tuples to process later.
 *
 * A bolt's lifecycle is as follows:
 *
 * IBolt object created on client machine. The IBolt is serialized into the topology
 * (using Java serialization) and submitted to the master machine of the cluster (Nimbus).
 * Nimbus then launches workers which deserialize the object, call prepare on it, and then
 * start processing tuples.
 *
 * If you want to parameterize an IBolt, you should set the parameters through its
 * constructor and save the parameterization state as instance variables (which will
 * then get serialized and shipped to every task executing this bolt across the cluster).
 *
 * When defining bolts in Java, you should use the IRichBolt interface which adds
 * necessary methods for using the Java TopologyBuilder API.
 */
public interface IBolt extends Serializable {
    /**
     * Called when a task for this component is initialized within a worker on the cluster.
     * It provides the bolt with the environment in which the bolt executes.
     *
     * This includes the:
     *
     * @param stormConf The Storm configuration for this bolt. This is the configuration provided to the topology merged in with cluster configuration on this machine.
     * @param context This object can be used to get information about this task's place within the topology, including the task id and component id of this task, input and output information, etc.
     * @param collector The collector is used to emit tuples from this bolt. Tuples can be emitted at any time, including the prepare and cleanup methods. The collector is thread-safe and should be saved as an instance variable of this bolt object.
     */
    void prepare(Map stormConf, TopologyContext context, OutputCollector collector);

    /**
     * Process a single tuple of input. The Tuple object contains metadata on it
     * about which component/stream/task it came from. The values of the Tuple can
     * be accessed using Tuple#getValue. The IBolt does not have to process the Tuple
     * immediately. It is perfectly fine to hang onto a tuple and process it later
     * (for instance, to do an aggregation or join).
     *
     * Tuples should be emitted using the OutputCollector provided through the prepare method.
     * It is required that all input tuples are acked or failed at some point using the OutputCollector.
     * Otherwise, Storm will be unable to determine when tuples coming off the spouts
     * have been completed.
     *
     * For the common case of acking an input tuple at the end of the execute method,
     * see IBasicBolt which automates this.
     *
     * @param input The input tuple to be processed.
     */
    void execute(Tuple input);

    /**
     * Called when an IBolt is going to be shutdown. There is no guarentee that cleanup
     * will be called, because the supervisor kill -9's worker processes on the cluster.
     *
     * The one context where cleanup is guaranteed to be called is when a topology
     * is killed when running Storm in local mode.
     */
    void cleanup();
}

4. IRichSpout

package org.apache.storm.topology;

import org.apache.storm.spout.ISpout;

/**
 * When writing topologies using Java, {@link IRichBolt} and {@link IRichSpout} are the main interfaces
 * to use to implement components of the topology.
 *
 */
public interface IRichSpout extends ISpout, IComponent {

}

5. IRichBolt

package org.apache.storm.topology;

import org.apache.storm.task.IBolt;

/**
 * When writing topologies using Java, {@link IRichBolt} and {@link IRichSpout} are the main interfaces
 * to use to implement components of the topology.
 *
 */
public interface IRichBolt extends IBolt, IComponent {

}

6. IBasicBolt

package org.apache.storm.topology;

import org.apache.storm.task.TopologyContext;
import org.apache.storm.tuple.Tuple;
import java.util.Map;

public interface IBasicBolt extends IComponent {
    void prepare(Map stormConf, TopologyContext context);
    /**
     * Process the input tuple and optionally emit new tuples based on the input tuple.
     *
     * All acking is managed for you. Throw a FailedException if you want to fail the tuple.
     */
    void execute(Tuple input, BasicOutputCollector collector);
    void cleanup();
}

7. IStateSpout (unfinished inside Storm)

package org.apache.storm.state;

import org.apache.storm.task.TopologyContext;
import java.io.Serializable;
import java.util.Map;

public interface IStateSpout extends Serializable {
    void open(Map conf, TopologyContext context);
    void close();
    void nextTuple(StateSpoutOutputCollector collector);
    void synchronize(SynchronizeOutputCollector collector);
}

8. IRichStateSpout (unfinished inside Storm)

package org.apache.storm.topology;

import org.apache.storm.state.IStateSpout;


public interface IRichStateSpout extends IStateSpout, IComponent {

}

Component Base Classes

1. BaseComponent

package org.apache.storm.topology.base;

import org.apache.storm.topology.IComponent;
import java.util.Map;

public abstract class BaseComponent implements IComponent {
    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}

2. BaseRichSpout

package org.apache.storm.topology.base;

import org.apache.storm.topology.IRichSpout;

public abstract class BaseRichSpout extends BaseComponent implements IRichSpout {
    @Override
    public void close() {
    }

    @Override
    public void activate() {
    }

    @Override
    public void deactivate() {
    }

    @Override
    public void ack(Object msgId) {
    }

    @Override
    public void fail(Object msgId) {
    }
}

3. BaseRichBolt

package org.apache.storm.topology.base;

import org.apache.storm.topology.IRichBolt;

public abstract class BaseRichBolt extends BaseComponent implements IRichBolt {
    @Override
    public void cleanup() {
    }
}

4. BaseBasicBolt

package org.apache.storm.topology.base;

import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.IBasicBolt;
import java.util.Map;

public abstract class BaseBasicBolt extends BaseComponent implements IBasicBolt {

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
    }

    @Override
    public void cleanup() {
    }
}

Data Connection Patterns

1. Direct Connection

    a) Scenario: best suited when the message emitter is a known device or a known group of devices. A known device is one that is known when the Topology starts and does not change over the Topology's lifetime. For devices that change, a coordinator can notify the Topology to create new Spout connections.

    b) Direct connection architecture:


    c) Architecture for direct connection to a device group:


    d) Coordinator-based direct connection:


2. Enqueued Messages

(figure: a Spout consuming messages from a queue)

Common Topology Patterns

1. BasicBolt

    a) Meaning: Storm automatically acks the input Tuple after the Bolt's execute method returns.

    b) How: implement the org.apache.storm.topology.IBasicBolt interface (a minimal sketch follows).
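A minimal sketch (the class name is illustrative), reusing the word-splitting logic of the LineSplitBolt shown earlier; note that execute never calls ack, because the surrounding BasicBolt machinery does it:

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class BasicLineSplitBolt extends BaseBasicBolt {

    private static final long serialVersionUID = 1L;

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        for (String word : tuple.getStringByField("line").split(" ")) {
            word = word.trim().toLowerCase();
            if (!word.isEmpty()) {
                collector.emit(new Values(word, 1));
            }
        }
        // No collector.ack(tuple): the input tuple is acked automatically after execute returns,
        // or failed if a FailedException is thrown.
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }
}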

2. Stream Join

    a) Meaning: combine two or more data streams into one new stream based on some fields.

    b) How:

1 builder.setBolt("join", new Joiner(), parallelism)2         .fieldGrouping("1", new Field("1-joinfield1", "1-joinfield2"))3         .fieldGrouping("2", new Field("2-joinfield1", "2-joinfield2"))4         .fieldGrouping("3", new Field("3-joinfield1", "3-joinfield2"));

3. Batching

    a) Meaning: process a group of Tuples at a time rather than one by one.

    b) How: hold references to the Tuples in a member variable of the Bolt, and ack the whole batch once it has been processed (a sketch follows).
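A sketch of such a batching Bolt; the batch size and the processBatch body are placeholders:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class BatchingBolt extends BaseRichBolt {

    private static final long serialVersionUID = 1L;

    private static final int BATCH_SIZE = 100;   // illustrative threshold

    private OutputCollector collector;

    private List<Tuple> buffer;

    @Override
    @SuppressWarnings("rawtypes")
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        this.buffer = new ArrayList<>();
    }

    @Override
    public void execute(Tuple tuple) {
        buffer.add(tuple);                 // hold a reference, do not ack yet
        if (buffer.size() >= BATCH_SIZE) {
            processBatch(buffer);          // e.g. one bulk write to a database
            for (Tuple t : buffer) {
                collector.ack(t);          // ack the whole batch after it succeeds
            }
            buffer.clear();
        }
    }

    private void processBatch(List<Tuple> batch) {
        // placeholder for the actual batch operation
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
    }
}

In practice this pattern is often combined with the tick mechanism described below, so that partially filled batches are also flushed periodically.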

4. TopN

    a) Meaning: compute the top N items by some metric (such as occurrence count) and emit the result periodically, e.g. trending topics or most-clicked images on a microblogging site.

    b) How: to handle high-volume streams, several Bolts first compute partial TopN results in parallel, and a downstream Bolt merges them into the global TopN.

1 builder.setBolt("rank", new RankBolt(), parallelism)2         .fieldGrouping("spout", new Fields("count"));3 builder.setBolt("merge_rank", new MergeRank())4         .globalGrouping("rank");

Logging (Cluster Mode)

1. Submission log: "$STORM_HOME/logs/nimbus.log".

2. Runtime logs

    a) Configuring logging: via the Storm UI or "$STORM_HOME/logback/cluster.xml".

    b) Viewing logs: via the Storm UI or "$STORM_HOME/logs/worker-port.log" on each node (port is the actual port number).

3. Logging framework conflicts: Storm uses logback. logback and log4j are two competing slf4j implementations and cannot coexist, so exclude the log4j pulled in by other dependencies in Maven:

<dependency>
    <groupId>...</groupId>
    <artifactId>...</artifactId>
    <version>...</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Setting Parallelism

1. Relationship between components and parallelism:

    a) A running Topology consists of multiple Worker processes spread across the machines of the cluster, and a Worker process belongs to exactly one Topology.

    b) A Worker process contains one or more Executor threads.

    c) An Executor thread runs one or more Tasks; a Task is a Spout or a Bolt.

    d) By default, one Executor runs one Task.

2. Example

    a) Code (parallelism set at submission time)

Config conf = new Config();
conf.setNumWorkers(2); // use two worker processes
topologyBuilder.setSpout("blue-spout", new BlueSpout(), 2);
topologyBuilder.setBolt("green-bolt", new GreenBolt(), 2)
               .setNumTasks(4)
               .shuffleGrouping("blue-spout");
topologyBuilder.setBolt("yellow-bolt", new YellowBolt(), 6)
               .shuffleGrouping("green-bolt");
StormSubmitter.submitTopology(
        "mytopology",
        conf,
        topologyBuilder.createTopology()
    );

    b) Parallelism: mytopology has 2 Worker processes, 10 Executor threads (2 blue-spout + 2 green-bolt + 6 yellow-bolt), and 12 Tasks (2 blue-spout + 4 green-bolt + 6 yellow-bolt); each of the 2 green-bolt Executors runs 2 Bolt Tasks.

(figure: distribution of the executors and tasks across the two workers)

    c) Changing parallelism at runtime: rebalance to 5 Worker processes, 3 blue-spout Executors, and 10 yellow-bolt Executors.

storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10

Tick Timing Mechanism

1. Scenario: business logic that must run periodically, for example computing statistics every 5 minutes and saving the result to a database.

2. How it works: a system component of the Topology sends tick messages at a fixed interval; when a Bolt receives a tick Tuple it triggers the corresponding business logic.

3. Code: the WordCount example modified to output results every 5 seconds.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import org.apache.storm.Config;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.utils.TupleUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WordCountBolt extends BaseRichBolt {

    private static final long serialVersionUID = 8239697869626573368L;

    private static final Logger logger = LoggerFactory.getLogger(WordCountBolt.class);

    private String outputDir;

    private OutputCollector collector;

    private Map<String, Integer> wordCounter;

    public WordCountBolt(String outputDir) {
        this.outputDir = outputDir;
    }

    @Override
    @SuppressWarnings("rawtypes")
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        wordCounter = new HashMap<>();
    }

    @Override
    public void execute(Tuple tuple) {
        if (TupleUtils.isTick(tuple)) {    // tick tuple
            outputResult(wordCounter);
            wordCounter.clear();
        } else {    // normal tuple
            String word = tuple.getStringByField("word");
            Integer count = tuple.getIntegerByField("count");
            Integer wordCount = wordCounter.get(word);
            if (wordCount == null) {
                wordCounter.put(word, count);
            } else {
                wordCounter.put(word, count + wordCount);
            }
        }

        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
    }

    @Override
    public void cleanup() {
        if (wordCounter != null) {
            wordCounter.clear();
        }
        super.cleanup();
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        Config conf = new Config();
        // emit a tick tuple every 5 seconds
        conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 5);
        return conf;
    }

    private void outputResult(Map<String, Integer> wordCounter) {
        String filePath = outputDir + "/" + UUID.randomUUID().toString();
        RandomAccessFile randomAccessFile = null;
        try {
            randomAccessFile = new RandomAccessFile(filePath, "rw");
            for (Map.Entry<String, Integer> entry : wordCounter.entrySet()) {
                randomAccessFile.writeChars(entry.getKey());
                randomAccessFile.writeChar('\t');
                randomAccessFile.writeChars(String.valueOf(entry.getValue()));
                randomAccessFile.writeChar('\n');
            }
        } catch (IOException e) {
            logger.error("Failed to write file [" + filePath + "].", e);
        } finally {
            if (randomAccessFile != null) {
                try {
                    randomAccessFile.close();
                } catch (IOException e) {
                    logger.warn("Failed to close output stream.", e);
                }
            }
        }
    }

}

Serialization

1. Purpose: Storm is a distributed system, so Tuple objects must be serialized and deserialized when passed between tasks.

2. Supported types:

    a) Java primitive types

    b) String

    c) byte[]

    d) ArrayList

    e) HashMap

    f) HashSet

    g) Clojure collections

    h) Custom serializations

3. Serialization framework: Kryo, which is flexible and fast.

4. Serialization style: dynamically typed, i.e. the types of Tuple fields do not need to be declared.

5. Custom serialization: refer to the official documentation when needed.

6. Serialization of unknown types

    a) Storm falls back to Java serialization for classes without a registered serializer; if that is not possible either, an exception is thrown.

    b) The parameter "topology.fall.back.on.java.serialization" can be set to false to disable the Java-serialization fallback (a configuration sketch follows).
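A configuration sketch; MyRecord and MyRecordSerializer are hypothetical classes used only for illustration:

Config conf = new Config();

// Register a custom class; without an explicit serializer, Kryo's default FieldsSerializer is used.
conf.registerSerialization(MyRecord.class);

// Or register a class together with a custom Kryo serializer.
conf.registerSerialization(MyRecord.class, MyRecordSerializer.class);

// Disable the fallback to Java serialization for unregistered classes
// (equivalent to topology.fall.back.on.java.serialization: false).
conf.setFallBackOnJavaSerialization(false);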

Integration with Other Systems

Storm ships with integration APIs for the following systems; refer to the official documentation when needed.

1. Apache Kafka

2. Apache HBase

3. Apache HDFS

4. Apache Hive

5. Apache Solr

6. Apache Cassandra

7. JDBC

8. JMS

9. Redis

10. Event Hubs

11. Elasticsearch

12. MQTT

13. MongoDB

14. OpenTSDB

15. Kinesis

16. Druid

17. Kestrel

Performance Tuning

1. Do not perform time-consuming work in the Spout

    a) Background: the Spout is single-threaded; with acking enabled, nextTuple, ack, and fail all run on that one thread (JStorm, by contrast, runs these three methods on 3 separate threads).

    b) If nextTuple is very slow, the ack or fail calls triggered when the Acker sends Tuples back to the Spout cannot be handled in time; Tuples may then be dropped after the ack timeout and the Spout will treat them as failed.

    c) If ack or fail is very slow, it reduces the rate at which nextTuple can emit data and lowers the Topology's throughput.

2. Watch out for data skew with Fields grouping

If, after grouping by the chosen fields, some field values carry far more data than others, the downstream Bolts receive an unbalanced load and overall performance is limited by the most heavily loaded nodes.

3. Prefer Local or shuffle grouping

    a) Why: with Local or shuffle grouping, transfers inside a Worker only go through the Disruptor queue, with no network or serialization overhead.

    b) Conclusion: when the processing itself is cheap and the network and serialization overhead dominates, use Local or shuffle grouping instead of Shuffle grouping.

4. Set MaxSpoutPending appropriately

    a) Background: with acking enabled, the Spout keeps Tuples that have been emitted but not yet acked in a RotatingMap.

    b) How to set: the maximum number of pending Tuples is configured via the parameter "topology.max.spout.pending" or via the setMaxSpoutPending method of the declarer returned by TopologyBuilder.setSpout.

    c) Tuning: consult further references for concrete values (a configuration sketch follows).
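A sketch of both ways to set it; the value 1000 is purely illustrative, and BlueSpout is the spout from the parallelism example above:

// Topology-wide: topology.max.spout.pending
Config conf = new Config();
conf.setMaxSpoutPending(1000);

// Per-component override via the SpoutDeclarer returned by setSpout
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("blue-spout", new BlueSpout(), 2)
       .setMaxSpoutPending(1000);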

5. Netty tuning

Parameter (default value from defaults.yaml):

    storm.messaging.transport: org.apache.storm.messaging.netty.Context
    storm.messaging.netty.server_worker_threads: 1
    storm.messaging.netty.client_worker_threads: 1
    storm.messaging.netty.buffer_size: 5242880
    storm.messaging.netty.transfer.batch.size: 262144

6. JVM tuning
Use the parameter "worker.childopts", for example:

worker.childopts: "-Xms2g -Xmx2g"

 

Author: netoxi
Source: http://www.cnblogs.com/netoxi
The copyright of this article is shared by the author and cnblogs.com. Reposting is welcome; unless otherwise agreed, this notice must be kept and a clear link to the original must be given on the page. Corrections and discussion are welcome.

 

 

