Kafka In Production

Kafka is the central component of any system that implements an event-driven architecture.

What do you need to know before deploying Kafka in production?

Kafka is a low-latency component that acts as the broker between producers and consumers.
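
For reference, here is a minimal sketch of a producer handing a record to the broker, using the standard Java client. The broker address, topic name, key, and value are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address is a placeholder for this sketch.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous: records are batched, sent to the broker,
            // and persisted to disk, where consumers later read them.
            producer.send(new ProducerRecord<>("events", "order-42", "created"));
        }
    }
}
```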

Low latency is extremely important, and you’ll need to verify and monitor that the cluster has the following characteristics:

Low-latency disk IO

  • Kafka writes messages from producers to disk, and consumers read those messages back from disk.
  • Kafka accesses data on disk sequentially, which is much faster than random access.
  • Zero copy – Kafka copies data from the local disk directly to the network interface, without passing it through user-space buffers (see the sketch after this list).
  • Disk size per Kafka broker should probably be 6 TB or more, depending on the use case.
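
On the JVM, Kafka’s zero-copy path is built on FileChannel.transferTo, which asks the kernel to move bytes from the page cache straight to the socket (sendfile on Linux). A minimal sketch of the same mechanism, with the file name, host, and port as placeholders:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {
    public static void main(String[] args) throws IOException {
        // File path and destination address are placeholders for this sketch.
        try (FileChannel file = FileChannel.open(Paths.get("segment.log"), StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
            long position = 0;
            long remaining = file.size();
            while (remaining > 0) {
                // transferTo() lets the kernel move bytes from the page cache
                // directly to the socket, skipping the usual copy into a
                // user-space buffer.
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}
```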

RAM

  • RAM is extremely important, as the Kafka broker process runs inside a JVM and uses the Java heap.
  • Page cache – the main disk cache; the Linux kernel uses the page cache as a buffer for reading from and writing to disk.
    • If there is memory available, pages are served from the cache without accessing the disk.
    • This is extremely efficient and a large part of what makes Kafka so fast.
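
Since the broker runs on the Java heap, heap usage is one of the first things worth checking. A minimal sketch reading heap usage through the standard java.lang.management API (this inspects the JVM it runs in; against a broker you would read the same MemoryMXBean remotely over JMX):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        // Heap usage of the current JVM; for a broker, read the same
        // MemoryMXBean remotely over a JMX connection.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("heap used: %d MB of %d MB max%n",
                heap.getUsed() / (1024 * 1024),
                heap.getMax() / (1024 * 1024));
    }
}
```

Note that the page cache lives outside the JVM heap, so a common practice is to keep the broker heap modest and leave the rest of the machine’s RAM to the kernel for caching.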

Network

  • High throughput is required, as the Kafka cluster brokers all of the data flowing between services (see the producer-side tuning sketch below).
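
On the producer side, batching and compression reduce the number of network round trips and the bytes on the wire. A sketch of throughput-oriented producer settings; the values are illustrative starting points, not recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ThroughputProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Wait up to 10 ms to fill larger batches before sending.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);
        // Batch up to 64 KB of records per partition per request.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);
        // Compress batches to cut the bytes sent over the network.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        return props;
    }
}
```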

What do you need to know after deploying Kafka in production?

Tuning and reassigning partitions

Even after your cluster is working as expected in production, you’ll need to keep tuning it with new config options and to reassign partitions as topics, load, and brokers change.
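
Reassignments are usually driven with the kafka-reassign-partitions tool that ships with Kafka, but the same operation is exposed programmatically through the AdminClient. A minimal sketch moving one partition onto a new replica set; the topic name, partition number, and broker IDs are placeholders:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class ReassignPartition {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // Move partition 0 of "events" onto brokers 2, 3, 4 (placeholders).
            TopicPartition partition = new TopicPartition("events", 0);
            NewPartitionReassignment target =
                    new NewPartitionReassignment(Arrays.asList(2, 3, 4));
            admin.alterPartitionReassignments(Map.of(partition, Optional.of(target)))
                 .all().get(); // wait until the request is accepted
        }
    }
}
```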

Monitoring

Monitoring your cluster is extremely important, as metrics reflect its actual status; and since the cluster is the main component of an event-driven system, it should perform at microsecond-to-millisecond latencies.

Monitoring will also make debugging much easier, since Kafka’s metrics show you the current state of the cluster.
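
Kafka brokers expose their metrics over JMX. A minimal sketch reading the broker-wide MessagesInPerSec rate remotely, assuming JMX has been enabled on the broker; the host and port are placeholders:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerMetrics {
    public static void main(String[] args) throws Exception {
        // Host and port are placeholders; the broker must expose JMX.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Broker-wide incoming message rate, one of Kafka's standard MBeans.
            ObjectName mbean = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            Object rate = conn.getAttribute(mbean, "OneMinuteRate");
            System.out.println("messages in per sec (1m rate): " + rate);
        }
    }
}
```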

SpinningOps helps startups improve their system design; contact us HERE and ask what we can do for your application.

