
ziggurat.producer

This namespace defines methods for publishing data to Kafka topics. The methods defined here are essentially wrappers around variants of the `send` methods defined in `org.apache.kafka.clients.producer.KafkaProducer`.

At the time of initialization, an instance of org.apache.kafka.clients.producer.KafkaProducer is constructed using config values provided in resources/config.edn.

A producer can be configured for each of the stream-routes in config.edn. Please see the example below:

```clojure
:stream-router {:default {:application-id "test"
                          :bootstrap-servers    "localhost:9092"
                          :stream-threads-count [1 :int]
                          :origin-topic         "topic"
                          :channels             {:channel-1 {:worker-count [10 :int]
                                                             :retry {:count [5 :int] :enabled [true :bool]}}}
                          :producer             {:bootstrap-servers "localhost:9092"
                                                 :acks "all"
                                                 :retries-config  5
                                                 :max-in-flight-requests-per-connection 5
                                                 :enable-idempotence  false
                                                 :value-serializer  "org.apache.kafka.common.serialization.StringSerializer"
                                                 :key-serializer    "org.apache.kafka.common.serialization.StringSerializer"}}
```

Usage:

Please see `send` for publishing data via Kafka producers.
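For instance, publishing a message through the producer configured under the `:default` stream route might look like the following sketch (the namespace alias, topic name, and payload are illustrative, and the exact arity of `send` should be checked against the Ziggurat version in use):

```clojure
(ns my-app.publisher
  (:require [ziggurat.producer :as producer]))

;; Publish a key/value pair using the producer configured under the
;; :default stream route in config.edn. The topic name, key, and value
;; here are illustrative; `send` delegates to the underlying
;; KafkaProducer's send method.
(producer/send :default "test-topic" "message-key" "message-value")
```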

These are the KafkaProducer configs currently supported in Ziggurat:

  • bootstrap.servers
  • acks
  • retries
  • key.serializer
  • value.serializer
  • max.in.flight.requests.per.connection
  • enable.idempotence

Please see [producer configs](http://kafka.apache.org/documentation.html#producerconfigs) for a complete list of all producer configs available in Kafka.


ziggurat.sentry

Sentry reporting functions used by both Ziggurat and the actor.


ziggurat.tracer

This namespace creates a [tracer](https://opentracing.io/docs/overview/tracers/) that can be used to trace the various stages of the application workflow.

The following flows are traced:

  1. Normal basic consume
  2. Retry via rabbitmq
  3. Produce to rabbitmq channel
  4. Produce to another kafka topic

At the time of initialization, an instance of `io.opentracing.Tracer` is created based on the configuration.

  • If the tracer is disabled, a `NoopTracer` is created, which does nothing.

  • If the tracer is enabled, then by default a [Jaeger](https://www.jaegertracing.io/) tracer will be created based on the environment variables. Please refer to [Jaeger Configuration](https://github.com/jaegertracing/jaeger-client-java/tree/master/jaeger-core#configuration-via-environment) and [Jaeger Architecture](https://www.jaegertracing.io/docs/1.13/architecture/) to set the respective env variables.

    Example Jaeger env config:

    ```
    JAEGER_SERVICE_NAME: "service-name"
    JAEGER_AGENT_HOST: "localhost"
    JAEGER_AGENT_PORT: 6831
    ```

  • A custom tracer can be created by passing the name of a custom tracer provider function as `:custom-provider`. The corresponding function will be executed to create the tracer.

In the event of any errors, a `NoopTracer` will be created.
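A custom tracer provider might look like the following sketch: a zero-argument function that returns an `io.opentracing.Tracer` (the namespace and function names are illustrative; for simplicity this returns a `NoopTracer`, whereas a real provider would construct e.g. a configured Jaeger tracer):

```clojure
(ns my-app.tracer-provider
  (:import [io.opentracing.noop NoopTracerFactory]))

;; Illustrative custom tracer provider. Any function returning an
;; io.opentracing.Tracer can serve as the provider; here we return a
;; NoopTracer for simplicity.
(defn create-tracer []
  (NoopTracerFactory/create))
```

With `:custom-provider "my-app.tracer-provider/create-tracer"` in the `:tracer` config, this function would be resolved and invoked at initialization (the fully qualified name format here is an assumption; check the Ziggurat source for the exact expected format).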

Example tracer configuration:

```clojure
:tracer {:enabled         [true :bool]
         :custom-provider ""}
```

Usage: In `ziggurat.streams/traced-handler-fn`, around the execution of the handler function, a span is started, activated, and finished. If there are trace-id headers in the kafka message, this span will be tied to the same trace; if not, a new trace will be started.

Any part of the handler function execution can be traced as a child span of this activated parent span. Please refer to the [doc](https://github.com/opentracing/opentracing-java#starting-a-new-span) to understand how to create child spans.
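As a sketch, a child span can be created inside a handler via Java interop with the OpenTracing API (this assumes the active tracer is available as `ziggurat.tracer/tracer`; the span name "process-message" is illustrative). By default, `buildSpan` makes the new span a child of the currently active span, i.e. the parent span activated by `traced-handler-fn`:

```clojure
(ns my-app.handler
  (:require [ziggurat.tracer :refer [tracer]]))

(defn handler-fn [message]
  ;; Start a span; it becomes a child of the active parent span
  ;; (the one activated by ziggurat.streams/traced-handler-fn).
  (let [span (-> tracer
                 (.buildSpan "process-message")
                 (.start))]
    (try
      ;; ... process the message here ...
      :success
      (finally
        ;; Always finish the span, even if processing throws.
        (.finish span)))))
```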

The trace ids are propagated to rabbitmq using io.opentracing.contrib.rabbitmq.TracingConnection. Hence rabbitmq flows are also traced.

The trace ids are propagated back to kafka using `io.opentracing.contrib.kafka.TracingKafkaProducer`. Hence the push back to kafka flow is also traced.

