Apache Kafka 2.0.0 officially released, a distributed publish-subscribe messaging system
2018-07-31 Source: oschina
Apache Kafka 2.0.0 has been officially released. This is a major release that adds a number of significant new features, along with many important bug fixes and improvements, including fixes for several critical bugs.
Download Apache Kafka 2.0.0 >>> https://kafka.apache.org/downloads#2.0.0
Notable new features
KIP-290 adds support for prefixed ACLs, simplifying access control management in large secure deployments. Bulk access to topics, consumer groups or transactional ids with a prefix can now be granted using a single rule. Access control for topic creation has also been improved to enable access to be granted to create specific topics or topics with a prefix.
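For illustration, a prefixed ACL can be created programmatically through the Java AdminClient; the sketch below calls it from Scala, and the broker address, principal, and topic prefix are placeholders:

```scala
import java.util.{Collections, Properties}

import org.apache.kafka.clients.admin.AdminClient
import org.apache.kafka.common.acl.{AccessControlEntry, AclBinding, AclOperation, AclPermissionType}
import org.apache.kafka.common.resource.{PatternType, ResourcePattern, ResourceType}

object PrefixedAclExample extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // placeholder broker address

  val admin = AdminClient.create(props)

  // One rule granting User:alice read access to every topic whose name starts with "payments-".
  val pattern = new ResourcePattern(ResourceType.TOPIC, "payments-", PatternType.PREFIXED)
  val entry   = new AccessControlEntry("User:alice", "*", AclOperation.READ, AclPermissionType.ALLOW)

  admin.createAcls(Collections.singleton(new AclBinding(pattern, entry))).all().get()
  admin.close()
}
```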
KIP-255 adds a framework for authenticating to Kafka brokers using OAuth2 bearer tokens. The SASL/OAUTHBEARER implementation is customizable using callbacks for token retrieval and validation.
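A minimal sketch of the client-side configuration, assuming you supply your own token-retrieval callback (the com.example class name is hypothetical):

```scala
import java.util.Properties

object OAuthBearerClientConfig {
  val props = new Properties()
  props.put("security.protocol", "SASL_SSL")
  props.put("sasl.mechanism", "OAUTHBEARER")
  props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;")
  // Callback handler that fetches tokens from your OAuth2 provider (hypothetical class name).
  props.put("sasl.login.callback.handler.class", "com.example.OAuthTokenCallbackHandler")
}
```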
Host name verification is now enabled by default for SSL connections to ensure that the default SSL configuration is not susceptible to man-in-the-middle attacks. You can disable this verification if required.
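If you do need to turn it off, for example with certificates that do not carry host names, setting the identification algorithm to an empty string restores the old behaviour:

```scala
import java.util.Properties

object DisableHostnameVerification {
  val props = new Properties()
  // An empty value disables host name verification; the default in 2.0.0 is "https".
  props.put("ssl.endpoint.identification.algorithm", "")
}
```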
You can now dynamically update SSL truststores without broker restart. You can also configure security for broker listeners in ZooKeeper before starting brokers, including SSL keystore and truststore passwords and JAAS configuration for SASL. With this new feature, you can store sensitive password configs in encrypted form in ZooKeeper rather than in cleartext in the broker properties file.
The replication protocol has been improved to avoid log divergence between leader and follower during fast leader failover. We have also improved resilience of brokers by reducing the memory footprint of message down-conversions. By using message chunking, both memory usage and memory reference time have been reduced to avoid OutOfMemory errors in brokers.
Kafka clients are now notified of throttling before any throttling is applied when quotas are enabled. This enables clients to distinguish between network errors and large throttle times when quotas are exceeded.
We have added a configuration option for the Kafka consumer to avoid indefinite blocking in consumer calls.
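This is the new default.api.timeout.ms setting introduced by KIP-266, which also adds a bounded poll(Duration) overload. A minimal consumer sketch, with placeholder broker address, group id, and topic:

```scala
import java.time.Duration
import java.util.Properties

import scala.collection.JavaConverters._

import org.apache.kafka.clients.consumer.KafkaConsumer

object BoundedBlockingConsumer extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // placeholder broker address
  props.put("group.id", "example-group")           // placeholder group id
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  // New in 2.0.0 (KIP-266): upper bound for blocking calls such as commitSync() and position().
  props.put("default.api.timeout.ms", "15000")

  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(Seq("example-topic").asJava)

  // The poll(Duration) overload returns after at most the given time,
  // even if the consumer is still waiting for metadata or a partition assignment.
  val records = consumer.poll(Duration.ofSeconds(1))
  records.asScala.foreach(r => println(s"${r.key} -> ${r.value}"))

  consumer.close()
}
```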
We have dropped support for Java 7 and removed the previously deprecated Scala producer and consumer.
Kafka Connect includes a number of improvements and features. KIP-298 enables you to control how errors in connectors, transformations and converters are handled by enabling automatic retries and controlling the number of errors that are tolerated before the connector is stopped. More contextual information can be included in the logs to help diagnose problems, and problematic messages consumed by sink connectors can be sent to a dead letter queue rather than forcing the connector to stop.
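This behaviour is driven by the new errors.* properties in the connector configuration. A sketch of what a tolerant sink connector configuration might include (the values and the dead letter queue topic name are illustrative):

```scala
object SinkConnectorErrorHandling {
  // KIP-298 error-handling properties for a sink connector configuration (values are illustrative).
  val errorConfig: Map[String, String] = Map(
    "errors.tolerance"                  -> "all",        // keep running past records that fail
    "errors.retry.timeout"              -> "300000",     // retry retriable operations for up to 5 minutes
    "errors.log.enable"                 -> "true",       // log context for failed operations
    "errors.log.include.messages"       -> "true",       // include the failing record in the log entry
    "errors.deadletterqueue.topic.name" -> "example-dlq" // route bad records to a dead letter queue topic
  )
}
```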
KIP-297 adds a new extension point to move secrets out of connector configurations and integrate with any external key management system. The placeholders in connector configurations are only resolved before sending the configuration to the connector, ensuring that secrets are stored and managed securely in your preferred key management system and not exposed over the REST APIs or in log files.
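For example, the FileConfigProvider that ships with Kafka can resolve placeholders of the form ${file:<path>:<key>}: the worker registers the provider, and the connector configuration references the secret indirectly. The file path, key, and "connection.password" property below are illustrative:

```scala
object ExternalSecretsSketch {
  // Worker configuration: register a ConfigProvider under the alias "file".
  val workerConfig: Map[String, String] = Map(
    "config.providers"            -> "file",
    "config.providers.file.class" -> "org.apache.kafka.common.config.provider.FileConfigProvider"
  )

  // Connector configuration: the placeholder is resolved only when the config is handed to the
  // connector, so the actual password never appears in the REST API or in log output.
  val connectorConfig: Map[String, String] = Map(
    "connection.password" -> "${file:/etc/kafka/connect-secrets.properties:db.password}"
  )
}
```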
We have added a thin Scala wrapper API for our Kafka Streams DSL, which provides better type inference and type safety at compile time. Scala users need less boilerplate in their code, in particular around Serdes, thanks to the new implicit Serdes.
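A small word-count topology using the new wrapper, as a sketch (topic names are illustrative); no explicit Consumed, Produced, or Serialized instances are needed because the implicit Serdes are in scope:

```scala
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder

object WordCountTopology {
  // Word-count topology built with the Scala DSL; the implicit Serdes imported above supply
  // the key/value serialization everywhere it is needed.
  val builder = new StreamsBuilder()
  builder
    .stream[String, String]("text-input")
    .flatMapValues(line => line.toLowerCase.split("\\W+"))
    .groupBy((_, word) => word)
    .count()
    .toStream
    .to("word-counts")

  val topology = builder.build()
}
```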
Message headers are now supported in the Kafka Streams Processor API, allowing users to add and manipulate headers read from the source topics and propagate them to the sink topics.
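As a sketch, a low-level Processor can now read and extend the headers of the record being processed through its context; the header name and value below are made up, and the processor would be wired in with Topology#addProcessor:

```scala
import java.nio.charset.StandardCharsets

import org.apache.kafka.streams.processor.AbstractProcessor

// Stamps every record with a custom header before forwarding it downstream
// (header name and value are illustrative).
class StampHeaderProcessor extends AbstractProcessor[String, String] {
  override def process(key: String, value: String): Unit = {
    // The headers of the record currently being processed are exposed on the ProcessorContext.
    context().headers().add("processed-by", "example-app".getBytes(StandardCharsets.UTF_8))
    context().forward(key, value)
  }
}
```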
Windowed aggregation performance in Kafka Streams has been significantly improved (sometimes by an order of magnitude) thanks to the new single-key-fetch API.
We have further improved unit testability of Kafka Streams with the kafka-streams-test-utils artifact.
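A sketch of a test that drives the word-count topology from the Scala DSL example above through TopologyTestDriver, without a running broker (the bootstrap address is a dummy value that is never contacted):

```scala
import java.util.Properties

import org.apache.kafka.common.serialization.{LongDeserializer, StringDeserializer, StringSerializer}
import org.apache.kafka.streams.test.ConsumerRecordFactory
import org.apache.kafka.streams.{StreamsConfig, TopologyTestDriver}

object WordCountTopologyTest extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-test")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234") // never contacted by the test driver

  // Reuses the topology built in the Scala DSL sketch above.
  val driver = new TopologyTestDriver(WordCountTopology.topology, props)

  val factory = new ConsumerRecordFactory[String, String](new StringSerializer, new StringSerializer)
  driver.pipeInput(factory.create("text-input", "key", "hello hello"))

  val output = driver.readOutput("word-counts", new StringDeserializer, new LongDeserializer)
  println(s"${output.key} -> ${output.value}")

  driver.close()
}
```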
As noted above, support for Java 7 has been dropped as of this release, and the previously deprecated Scala producer and consumer have been removed.
For more details, see the release notes.