A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients.
Scalable: Kafka is designed to allow a single cluster to serve as the central data backbone for a large organization. It can be expanded elastically and transparently, without downtime. Data streams are partitioned and spread over a cluster of machines, allowing streams larger than any single machine can handle and supporting clusters of coordinated consumers.
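The partitioning idea can be sketched as follows. Kafka's default partitioner hashes a record's key (with murmur2) and takes it modulo the partition count, so all records with the same key land in the same partition while load spreads across brokers. The sketch below is illustrative only, using stdlib crc32 as a stand-in hash; the names are not Kafka's actual API.

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition index (hash mod partition count).

    Kafka hashes key bytes with murmur2; crc32 here is a stand-in so the
    sketch stays stdlib-only and deterministic.
    """
    return zlib.crc32(key) % num_partitions

NUM_PARTITIONS = 6  # illustrative; in Kafka this is set per topic at creation time

# Records with the same key always land in the same partition, so per-key
# ordering is preserved while different keys spread across the cluster.
events = [(b"user-1", "click"), (b"user-2", "view"), (b"user-1", "purchase")]
placement: dict[int, list[str]] = {}
for key, event in events:
    placement.setdefault(partition_for(key, NUM_PARTITIONS), []).append(event)

print(placement)
```

Because each partition lives on (and is consumed from) its own broker, adding machines and partitions grows capacity without changing the producer-side logic.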
Durable: Messages are persisted on disk and replicated within the cluster to prevent data loss. Each broker can handle terabytes of messages without performance impact.
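The persistence side can be illustrated with a minimal append-only log. Kafka brokers append records to segment files on disk and flush them according to a configurable policy; the class below is a simplified sketch under those assumptions, not Kafka's on-disk format, and the `LogSegment` name is hypothetical.

```python
import os

class LogSegment:
    """Minimal append-only log: each record is length-prefixed so the
    file can be replayed from the start after a restart or crash."""

    def __init__(self, path: str):
        self.path = path
        self._f = open(path, "ab")

    def append(self, message: bytes) -> int:
        offset = self._f.tell()
        self._f.write(len(message).to_bytes(4, "big") + message)
        self._f.flush()
        os.fsync(self._f.fileno())  # force the record to stable storage
        return offset

    def read_all(self) -> list[bytes]:
        out = []
        with open(self.path, "rb") as f:
            while header := f.read(4):
                out.append(f.read(int.from_bytes(header, "big")))
        return out
```

Sequential appends like this are what let a broker hold very large logs cheaply: writes are linear, and durability against machine loss comes from replicating the same log to follower brokers.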
Distributed by Design: Kafka has a modern cluster-centric design that offers strong durability and fault-tolerance guarantees.