Kafka Documentation (14): 0.10.1 Document (6): Kafka Connect Configuration

3.4 Kafka Connect Configs

Below is the configuration of the Kafka Connect framework.

config.storage.topic
  The Kafka topic in which connector configurations are stored.
  Type: string. Importance: high.

group.id
  A unique string that identifies the Connect cluster group this worker belongs to.
  Type: string. Importance: high.

key.converter
  Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
  Type: class. Importance: high.

offset.storage.topic
  The Kafka topic in which connector offsets are stored.
  Type: string. Importance: high.

status.storage.topic
  The Kafka topic used to track connector and task status.
  Type: string. Importance: high.

value.converter
  Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
  Type: class. Importance: high.

internal.key.converter
  Like key.converter, but controls the format of the keys in the internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation.
  Type: class. Importance: low.

internal.value.converter
  Like value.converter, but controls the format of the values in the internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation.
  Type: class. Importance: low.
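
Taken together, the high-importance settings above are the minimum a distributed worker needs before it can start. A minimal sketch of a worker properties file, assuming JSON for both external and internal data; the topic names and group id are placeholders, not defaults:

    # Connect cluster this worker belongs to (placeholder name)
    group.id=connect-cluster-demo
    # Internal topics for connector configs, offsets, and status (placeholder names)
    config.storage.topic=connect-configs
    offset.storage.topic=connect-offsets
    status.storage.topic=connect-status
    # Converters for keys and values of messages written to/read from Kafka
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    # Converters for the framework's internal bookkeeping data
    internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    internal.value.converter=org.apache.kafka.connect.json.JsonConverter

A file like this is what bin/connect-distributed.sh expects as its first argument.
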
bootstrap.servers
  A list of host/port pairs used to establish the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are listed here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. The list should be in the form host1:port1,host2:port2,.... Since these servers are only used for the initial connection to discover the full cluster membership (which may change dynamically), the list need not contain the full set of servers (you may want more than one, though, in case a server is down).
  Type: list. Default: [localhost:9092]. Importance: high.

heartbeat.interval.ms
  The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms, and typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
  Type: int. Default: 3000. Importance: high.

rebalance.timeout.ms
  The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, the worker will be removed from the group, which will cause offset commit failures.
  Type: int. Default: 60000. Importance: high.

session.timeout.ms
  The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before this session timeout expires, the broker removes the worker from the group and initiates a rebalance. Note that the value must be within the allowable range configured on the broker by group.min.session.timeout.ms and group.max.session.timeout.ms.
  Type: int. Default: 10000. Importance: high.
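
These four settings interact: heartbeat.interval.ms must stay below session.timeout.ms (typically no more than a third of it), and rebalance.timeout.ms bounds how long tasks get to flush pending data and commit offsets during a rebalance. A sketch that simply makes the defaults explicit; the broker addresses are placeholders:

    # More than one bootstrap server guards against one host being down at startup
    bootstrap.servers=broker1.example.com:9092,broker2.example.com:9092
    # heartbeat <= session/3; session must lie within the broker's allowed range
    heartbeat.interval.ms=3000
    session.timeout.ms=10000
    # Upper bound on flushing and offset commits while rebalancing
    rebalance.timeout.ms=60000
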
ssl.key.password
  The password of the private key in the key store file. This is optional for clients.
  Type: password. Default: null. Importance: high.

ssl.keystore.location
  The location of the key store file. This is optional for clients and can be used for two-way authentication of the client.
  Type: string. Default: null. Importance: high.

ssl.keystore.password
  The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured.
  Type: password. Default: null. Importance: high.

ssl.truststore.location
  The location of the trust store file.
  Type: string. Default: null. Importance: high.

ssl.truststore.password
  The password for the trust store file.
  Type: password. Default: null. Importance: high.
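
For example, a worker that verifies broker certificates and also presents its own certificate for two-way authentication might combine these five settings as follows (paths and passwords are placeholders, and security.protocol=SSL from further down this table is also required):

    ssl.truststore.location=/var/private/ssl/connect.truststore.jks
    ssl.truststore.password=changeit
    # Keystore entries are only needed for two-way (client) authentication
    ssl.keystore.location=/var/private/ssl/connect.keystore.jks
    ssl.keystore.password=changeit
    ssl.key.password=changeit
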
connections.max.idle.ms
  Close idle connections after the number of milliseconds specified by this config.
  Type: long. Default: 540000. Importance: medium.

receive.buffer.bytes
  The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.
  Type: int. Default: 32768. Valid values: [0,...]. Importance: medium.

request.timeout.ms
  The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary, or fail the request if retries are exhausted.
  Type: int. Default: 40000. Valid values: [0,...]. Importance: medium.
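
These three are plain network-tuning knobs; a sketch that restates the defaults as a starting point for adjustment:

    # Connections idle longer than this are closed
    connections.max.idle.ms=540000
    # TCP receive buffer; -1 defers to the OS default
    receive.buffer.bytes=32768
    # How long to wait for a broker response before retrying or failing
    request.timeout.ms=40000
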
sasl.kerberos.service.name
  The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.
  Type: string. Default: null. Importance: medium.

sasl.mechanism
  SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
  Type: string. Default: GSSAPI. Importance: medium.

security.protocol
  Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
  Type: string. Default: PLAINTEXT. Importance: medium.
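
Combined with the SSL settings above, a worker talking to Kerberos-secured brokers would typically set the following; the service name is a placeholder that must match the principal the brokers run as, and in 0.10.1 the client's own Kerberos credentials come from a JAAS file passed to the JVM (e.g. via -Djava.security.auth.login.config):

    security.protocol=SASL_SSL
    sasl.mechanism=GSSAPI
    # Must match the Kerberos principal name the brokers run as (placeholder)
    sasl.kerberos.service.name=kafka
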
send.buffer.bytes
  The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.
  Type: int. Default: 131072. Valid values: [0,...]. Importance: medium.

ssl.enabled.protocols
  The list of protocols enabled for SSL connections.
  Type: list. Default: [TLSv1.2, TLSv1.1, TLSv1]. Importance: medium.

ssl.keystore.type
  The file format of the key store file. This is optional for clients.
  Type: string. Default: JKS. Importance: medium.

ssl.protocol
  The SSL protocol used to generate the SSLContext. The default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.
  Type: string. Default: TLS. Importance: medium.

ssl.provider
  The name of the security provider used for SSL connections. The default value is the default security provider of the JVM.
  Type: string. Default: null. Importance: medium.

ssl.truststore.type
  The file format of the trust store file.
  Type: string. Default: JKS. Importance: medium.
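
A common hardening step is to narrow the protocol list from its default, for example to TLSv1.2 only on JVMs that support it (a sketch, not a universal recommendation):

    ssl.protocol=TLSv1.2
    ssl.enabled.protocols=TLSv1.2
    # JKS is already the default format for both stores
    ssl.keystore.type=JKS
    ssl.truststore.type=JKS
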
worker.sync.timeout.ms
  When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.
  Type: int. Default: 3000. Importance: medium.

worker.unsync.backoff.ms
  When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.
  Type: int. Default: 300000. Importance: medium.

access.control.allow.methods
  Sets the methods supported for cross-origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross-origin requests for GET, POST and HEAD.
  Type: string. Default: "". Importance: low.

access.control.allow.origin
  Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross-origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.
  Type: string. Default: "". Importance: low.
client.id
  An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
  Type: string. Default: "". Importance: low.

metadata.max.age.ms
  The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, in order to proactively discover any new brokers or partitions.
  Type: long. Default: 300000. Valid values: [0,...]. Importance: low.

metric.reporters
  A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
  Type: list. Default: []. Importance: low.

metrics.num.samples
  The number of samples maintained to compute metrics.
  Type: int. Default: 2. Valid values: [1,...]. Importance: low.

metrics.sample.window.ms
  The window of time a metrics sample is computed over.
  Type: long. Default: 30000. Valid values: [0,...]. Importance: low.
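
The reporter list accepts custom implementations of the MetricReporter interface; the class below is hypothetical and only illustrates the wiring, while the two sample settings restate their defaults:

    # Hypothetical custom reporter class; JmxReporter is always added as well
    metric.reporters=com.example.metrics.GraphiteReporter
    # Two 30-second samples per metric
    metrics.num.samples=2
    metrics.sample.window.ms=30000
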
offset.flush.interval.ms
  Interval at which to try committing offsets for tasks.
  Type: long. Default: 60000. Importance: low.

offset.flush.timeout.ms
  Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt.
  Type: long. Default: 5000. Importance: low.
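
Flushing more often shrinks the window of data a task may reprocess after a failure, at some throughput cost; a sketch that commits six times as often as the default:

    # Commit task offsets every 10s instead of the default 60s
    offset.flush.interval.ms=10000
    # Give each flush up to 5s before it is cancelled and retried later
    offset.flush.timeout.ms=5000
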
reconnect.backoff.ms
  The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.
  Type: long. Default: 50. Valid values: [0,...]. Importance: low.

rest.advertised.host.name
  If this is set, this is the hostname that will be given out to other workers to connect to.
  Type: string. Default: null. Importance: low.

rest.advertised.port
  If this is set, this is the port that will be given out to other workers to connect to.
  Type: int. Default: null. Importance: low.

rest.host.name
  Hostname for the REST API. If this is set, it will only bind to this interface.
  Type: string. Default: null. Importance: low.

rest.port
  Port for the REST API to listen on.
  Type: int. Default: 8083. Importance: low.
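
A worker behind NAT or a proxy can bind the REST API to one interface while advertising a different address to the rest of the cluster; the hostname below is a placeholder:

    # Interface and port the REST API binds to
    rest.host.name=0.0.0.0
    rest.port=8083
    # Address other workers should use to reach this worker
    rest.advertised.host.name=worker1.example.com
    rest.advertised.port=8083
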
retry.backoff.ms
  The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
  Type: long. Default: 100. Valid values: [0,...]. Importance: low.

sasl.kerberos.kinit.cmd
  Kerberos kinit command path.
  Type: string. Default: /usr/bin/kinit. Importance: low.

sasl.kerberos.min.time.before.relogin
  Login thread sleep time between refresh attempts.
  Type: long. Default: 60000. Importance: low.

sasl.kerberos.ticket.renew.jitter
  Percentage of random jitter added to the renewal time.
  Type: double. Default: 0.05. Importance: low.

sasl.kerberos.ticket.renew.window.factor
  Login thread will sleep until the specified window factor of the time from the last refresh to the ticket's expiry has been reached, at which time it will try to renew the ticket.
  Type: double. Default: 0.8. Importance: low.
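
The four Kerberos settings above drive ticket renewal: the login thread sleeps until renew.window.factor of the ticket lifetime (plus up to renew.jitter of random jitter) has elapsed, then invokes the kinit command, never re-attempting sooner than min.time.before.relogin. A sketch restating the defaults:

    sasl.kerberos.kinit.cmd=/usr/bin/kinit
    sasl.kerberos.min.time.before.relogin=60000
    sasl.kerberos.ticket.renew.jitter=0.05
    sasl.kerberos.ticket.renew.window.factor=0.8
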
ssl.cipher.suites
  A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL network protocol. By default all the available cipher suites are supported.
  Type: list. Default: null. Importance: low.

ssl.endpoint.identification.algorithm
  The endpoint identification algorithm used to validate the server hostname against the server certificate.
  Type: string. Default: null. Importance: low.

ssl.keymanager.algorithm
  The algorithm used by the key manager factory for SSL connections. The default value is the key manager factory algorithm configured for the Java Virtual Machine.
  Type: string. Default: SunX509. Importance: low.

ssl.secure.random.implementation
  The SecureRandom PRNG implementation to use for SSL cryptography operations.
  Type: string. Default: null. Importance: low.

ssl.trustmanager.algorithm
  The algorithm used by the trust manager factory for SSL connections. The default value is the trust manager factory algorithm configured for the Java Virtual Machine.
  Type: string. Default: PKIX. Importance: low.

task.shutdown.graceful.timeout.ms
  Amount of time to wait for tasks to shut down gracefully. This is the total amount of time, not per task. Shutdown is triggered for all tasks, and then they are waited on sequentially.
  Type: long. Default: 5000. Importance: low.