Introduction
Kafka ships with performance-testing scripts that measure producer-side and consumer-side throughput:
kafka-producer-perf-test.sh
kafka-consumer-perf-test.sh
Both scripts can be found in Kafka's bin directory.
Producer
bin/kafka-producer-perf-test.sh
usage: producer-performance [-h] --topic TOPIC --num-records NUM-RECORDS
                            --record-size RECORD-SIZE --throughput THROUGHPUT
                            [--producer-props PROP-NAME=PROP-VALUE [PROP-NAME=PROP-VALUE ...]]
                            [--producer.config CONFIG-FILE]

This tool is used to verify the producer performance.

optional arguments:
  -h, --help            show this help message and exit
  --topic TOPIC         produce messages to this topic
  --num-records NUM-RECORDS
                        number of messages to produce
  --record-size RECORD-SIZE
                        message size in bytes
  --throughput THROUGHPUT
                        throttle maximum message throughput to *approximately*
                        THROUGHPUT messages/sec
  --producer-props PROP-NAME=PROP-VALUE [PROP-NAME=PROP-VALUE ...]
                        kafka producer related configuration properties like
                        bootstrap.servers, client.id etc. These configs take
                        precedence over those passed via --producer.config.
  --producer.config CONFIG-FILE
                        producer config properties file.
Example:
bin/kafka-producer-perf-test.sh --topic store --record-size 1000 --throughput 2000 --num-records 10000 --producer-props bootstrap.servers=cdh01:9092 client.id=store_client
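Before running, it can be useful to sanity-check what the parameters imply. A small sketch, using the values from the example command above (`--record-size 1000 --throughput 2000 --num-records 10000`):

```python
# Sanity-check the producer perf-test parameters from the example command.
record_size = 1000    # --record-size, bytes per message
throughput = 2000     # --throughput, throttle in messages/sec
num_records = 10000   # --num-records

# The throttle means the run takes at least num_records / throughput seconds.
min_duration_s = num_records / throughput
# Total payload the test will send, and the bandwidth ceiling the throttle implies.
total_payload_mb = num_records * record_size / 1e6
max_bandwidth_mb_s = throughput * record_size / 1e6

print(f"minimum duration: {min_duration_s:.1f} s")        # 5.0 s
print(f"total payload:    {total_payload_mb:.1f} MB")     # 10.0 MB
print(f"bandwidth ceiling: {max_bandwidth_mb_s:.1f} MB/s") # 2.0 MB/s
```

If the reported throughput comes in well under the ceiling, the bottleneck is the broker, network, or producer config rather than the throttle.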
Consumer
bin/kafka-consumer-perf-test.sh
Option                                  Description
------                                  -----------
--batch-size <Integer: size>            Number of messages to write in a
                                          single batch. (default: 200)
--broker-list <host>                    A broker list to use for connecting
                                          if using the new consumer.
--compression-codec <Integer:           If set, messages are sent compressed
  supported codec: NoCompressionCodec     (default: 0)
  as 0, GZIPCompressionCodec as 1,
  SnappyCompressionCodec as 2,
  LZ4CompressionCodec as 3>
--consumer.config <config file>         Consumer config properties file.
--date-format <date format>             The date format to use for formatting
                                          the time field. See java.text.
                                          SimpleDateFormat for options.
                                          (default: yyyy-MM-dd HH:mm:ss:SSS)
--fetch-size <Integer: size>            The amount of data to fetch in a
                                          single request. (default: 1048576)
--from-latest                           If the consumer does not already have
                                          an established offset to consume
                                          from, start with the latest message
                                          present in the log rather than the
                                          earliest message.
--group <gid>                           The group id to consume on. (default:
                                          perf-consumer-77417)
--help                                  Print usage.
--hide-header                           If set, skips printing the header for
                                          the stats
--message-size <Integer: size>          The size of each message. (default:
                                          100)
--messages <Long: count>                REQUIRED: The number of messages to
                                          send or consume
--new-consumer                          Use the new consumer implementation.
--num-fetch-threads <Integer: count>    Number of fetcher threads. (default: 1)
--reporting-interval <Integer:          Interval in milliseconds at which to
  interval_ms>                            print progress info. (default: 5000)
--show-detailed-stats                   If set, stats are reported for each
                                          reporting interval as configured by
                                          reporting-interval
--socket-buffer-size <Integer: size>    The size of the tcp RECV size.
                                          (default: 2097152)
--threads <Integer: count>              Number of processing threads.
                                          (default: 10)
--topic <topic>                         REQUIRED: The topic to consume from.
--zookeeper <urls>                      The connection string for the
                                          zookeeper connection in the form
                                          host:port. Multiple URLS can be
                                          given to allow fail-over. This
                                          option is only used with the old
                                          consumer.
Example:
bin/kafka-consumer-perf-test.sh --topic store --zookeeper cdh01:2181 --messages 10000
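The consumer script prints a comma-separated header line followed by a row of summary stats. A minimal sketch for pulling out the throughput figures; the column names and the sample values here are illustrative assumptions based on older Kafka versions, so adjust them to match your version's actual output:

```python
# Parse the summary line printed by kafka-consumer-perf-test.sh.
# The sample output below is illustrative; real values come from your run.
sample = """\
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
2016-01-01 10:00:00:000, 2016-01-01 10:00:02:000, 0.9537, 0.4768, 10000, 5000.0
"""

# First line is the header, second is the data row; fields are ", "-separated.
header, row = [line.split(", ") for line in sample.strip().splitlines()]
stats = dict(zip(header, row))

print(stats["MB.sec"])    # consumed throughput in MB/s
print(stats["nMsg.sec"])  # consumed throughput in messages/s
```

Mapping the row into a dict keyed by the header makes the script robust to extra columns, as long as the column you need keeps its name.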