All notable changes to this project will be documented in this file. This change log follows the conventions
of keepachangelog.com.
- Adds support for ACL auth for Kafka Streams.
- Fixes retry-count returning nil if empty; it now returns 0 by default.
- Exposes the Kafka key to stream routes
- Adds Native Prometheus client integration
- Adds Graceful shutdown to the http server
- Improves the publishing logic during consumption via subscribers.
- Upgrades the state management for RabbitMQ subscribers: `rabbitmq-retry-count` is now available in the metadata provided to the user handler function.
- Fixed a bug where Kafka headers with null values threw a NullPointerException when publishing to RabbitMQ
- Publishes a metric to gauge the time taken to send messages to RabbitMQ
- Updated dead-set APIs to replay and delete dead-set messages asynchronously
- Fixed a bug where instantiation of the channel pool led to a NullPointerException when a stream route does not have `:stream-threads-count` defined
- Releasing a new tag because the version 4.7.0 was already present in clojars.
- Added a feature to retry non-recoverable exceptions when publishing messages to RabbitMQ
- Users can provide `:prefetch-count` for RabbitMQ channel threads in the `[:stream-router :channels :<channel_key>]` section of the config (see the sketch below)
- Fixed a bug in overriding the default channel-pool configuration with the user-provided config
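A minimal sketch of where `:prefetch-count` might sit; `:default` and `:channel-1` are placeholder names, and the nesting under a stream route is an assumption based on the entry above:

```clojure
;; Illustrative sketch only: :default and :channel-1 are placeholders,
;; and the exact placement should be confirmed against the README.
{:ziggurat {:stream-router {:default {:channels {:channel-1 {:prefetch-count [20 :int]}}}}}}
```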
- RabbitMQ connections use a DNS IP resolver to resolve DNS-based hosts
- Setting of HA policies from within Ziggurat has been removed
- Fixed a bug where the publish code kept retrying despite getting an exception while borrowing a channel from the pool
- Reduce the maximum number of idle objects in the channel pool
- Implements pooling of RabbitMQ channels
- Update logic of the deadset delete API to just read and ack messages and ignore content
- Add deprecation warnings to sentry usage
- Updates logback-classic version to 1.2.9
- All Kafka SSL and SASL configs can be provided as kebab-case keywords. These configs are automatically applied to all Kafka Streams, Kafka producer and Kafka consumer objects created in Ziggurat. Please refer to the README for examples.
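As a rough, assumed illustration of the kebab-case convention only (the keys below are hypothetical stand-ins, not the documented section; see the README for the real keys and placement):

```clojure
;; Hypothetical keys shown only to illustrate the kebab-case mapping,
;; e.g. :ssl-keystore-location standing in for Kafka's ssl.keystore.location.
{:ssl-keystore-location "/path/to/keystore.jks"
 :ssl-keystore-password "changeit"
 :sasl-mechanism        "SCRAM-SHA-512"}
```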
- Allows channel mapper to push to dead letter queue
- Fixes the stop order of components to facilitate graceful shutdown of business operations
- Update on the UpgradeGuide document
- Removed the flatland dependency
- Allows handler function to push to dead letter queue (does not work if the handler acts on RabbitMQ channels too)
- Enabled structured logging via cambium
- Replaced log4j with logback as the default slf4j implementation.
- Added structured logs in ziggurat codebase as well.
- Changed the code for consuming from RabbitMQ: on an exception during de-serialization, the message is sent to the dead-set queues and NOT re-queued, unlike in previous versions.
- The Kafka metadata is now exposed to the stream handler function along with the message itself: `{:topic <string> :partition <int> :timestamp <long>}`
- The stream handler receives a map containing two keys, `:message` and `:metadata` (see the sketch below)
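A minimal sketch of a stream handler consuming this shape; the handler name and body are illustrative:

```clojure
;; Sketch of a stream handler using the :message / :metadata shape described above.
(defn handler-fn
  [{:keys [message metadata]}]
  ;; metadata is of the form {:topic <string> :partition <int> :timestamp <long>}
  (println "Received" message "from topic" (:topic metadata) "partition" (:partition metadata))
  :success)
```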
- Includes a `StreamsUncaughtExceptionHandler` which shuts down the client in case of an uncaught exception.
- Introduces a new stream-route config `:stream-thread-exception-response` which lets the user control the behaviour of `StreamsUncaughtExceptionHandler` (see the sketch below).
- Restores AoT compilation for various namespaces
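The new `:stream-thread-exception-response` config above might look like this sketch; `:default` is a placeholder stream route and the value shown is an assumption (the README documents the accepted options):

```clojure
;; Illustrative only: the accepted values mirror Kafka Streams'
;; StreamsUncaughtExceptionHandler responses; :shutdown-client is assumed here.
{:ziggurat {:stream-router {:default {:stream-thread-exception-response :shutdown-client}}}}
```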
- Upgrade kafka-streams library to 2.8.0
- Replaced the `:ziggurat :datadog` configuration in favour of `:ziggurat :statsd`
- Removed the default state-store which was created at the time of Kafka Streams initialization
- Uses Kafka Streams client 2.7.0
- Introduces default.api.timeout.ms for Kafka Consumer API
- Uses Kafka Streams client 2.5.0
- Uses Kafka Streams client 2.4.1
- Added support for handling uncaught exceptions in Kafka Streams using `:enable-stream-restart-on-uncaught-exception` (see the sketch below)
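A minimal sketch of how this flag might be set; its exact placement in the config is an assumption, so treat this as illustrative only:

```clojure
;; Assumed placement of the flag named in the entry above;
;; consult the README for where it actually lives in the config.
{:ziggurat {:stream-router {:default {:enable-stream-restart-on-uncaught-exception [true :bool]}}}}
```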
- Strict type checking for the batch handler return type. The application is stopped if the return type does not match the expected one.
- Added support to flatten protobuf-struct during deserialization
- Validation of stream and batch route arguments against the configuration, when starting the application.
- Changed the logic for committing offsets to commit only when a non-zero number of records is polled while consuming via the Kafka Consumer API
- Error reporting is done to New Relic along with Sentry.
- Refactored and simplified the code for retrying, publishing and consuming using RabbitMQ.
- The config `{:messaging {:constructor ...}}` has been removed from the `:ziggurat` config space
- Both `{:ziggurat {:rabbit-mq-connection {:hosts ...}}}` and `{:ziggurat {:rabbit-mq-connection {:host ...}}}` configs are accepted for connecting to RabbitMQ, but `:hosts` is preferred over `:host` and should be used to define cluster hosts (see the sketch below)
- The defrecord `ziggurat.mapper.MessagePayload` has been added back to preserve backward compatibility
- Fixed a bug in the calculation of the exponential backoff timeout where casting the timeout to an integer threw an `IllegalArgumentException`
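For the `:hosts` entry above, a sketch of the cluster form of the connection config; host names, port and credentials are placeholders:

```clojure
;; Illustrative values only; :hosts is assumed here to take a comma-separated
;; list of cluster nodes and is preferred over the older :host key.
{:ziggurat {:rabbit-mq-connection {:hosts    "rmq-node-1,rmq-node-2,rmq-node-3"
                                   :port     [5672 :int]
                                   :username "guest"
                                   :password "guest"}}}
```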
- If there's an exception in the batch handler function, the failure metric is published with a count of the total batch size (which was being processed by the function) instead of just 1, as was done before this change.
- Fixed publishing of metrics for batch consumption
- Fixed the startup logic for batch consumption - only routes provided in Ziggurat init-args will be started
- Standardized naming for Kafka Consumer API configs
- Adds support for consuming Kafka messages in batches using the Kafka Consumer API (see the sketch below)
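A minimal sketch of a batch handler for this API, assuming the handler receives the batch as a sequence of messages and returns a map of messages to retry and to skip; the route wiring is omitted and the exact contract is in the README:

```clojure
;; Sketch of a batch handler: process the batch, then tell Ziggurat which
;; messages to retry and which to skip. The shapes follow the assumption
;; stated above, not a verified contract.
(defn batch-handler
  [batch]
  (println "Processing a batch of" (count batch) "messages")
  {:retry []   ;; messages that should be retried
   :skip  []}) ;; messages that should be skipped
```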
- Fixes the logic during RabbitMQ disconnection: Ziggurat now retries (publishing a message) infinitely till RabbitMQ recovers. This changes the previous behaviour where Kafka Streams were stopped during disconnection from RabbitMQ.
- Log the stream record metadata
- Investigate if transform() affects how stream joins behave
- Fixes issue #56
- Adds functionality to stop and restart KafkaStreams using nREPL.
- Fixes issue #142
- Adds improved error message responses for Deadset API calls
- Releases Stable Ziggurat version with support for RabbitMQ clusters
- Upgraded the Kafka Streams library to 2.3.0
- Moves stream joins behind an alpha feature flag
- Defaults RabbitMQ queue replication to `(n/2) + 1` nodes, where `n` is the number of nodes in the cluster
- Fixes false positive exception thrown by messaging when an actor abnormally terminates
- Remove defrecord wrappers
- Uses default ha-config values if nothing is provided
- Adds RabbitMQMessaging implementation to support connection with RabbitMQ clusters
- Adds support for setting up HA policies for queues and exchanges
- Removes the use of the old protobuf library in favor of the new one
- Makes messaging implementation configurable
- Adds a new protocol for Messaging
- Refactors rabbitmq specific logic to the messaging.rabbitmq package
- Adds unit tests for rabbitmq specific namespaces
- Adds test annotations to messaging integration tests
- Support for Kafka Stream KStream-KStream Join
- Introduces a swagger middleware on the HTTP server.
- Makes metrics library configurable, exposes a metrics interface and provides an
implementation for clj-statsd library.
- Uses clojusc protobuf in place of flatland protobuf for deserializing proto messages
- Adds support for all configurations of kafka-producer (for a more detailed list of changes, look at the 3.3.0-alpha changelog entries)
- Adding support for all configurations supported by Kafka Producer
- Remove `[org.flatland/protobuf "0.8.1"]` from test dependencies
- Fixes Issue
- Adds alpha feature flags
- Makes metrics implementation configurable
- passes "all" metric-name to update timing for all metrics
- Adds docs for MetricsProtocol
- Defines an interface for metrics.
- Changes the dropwizard implementation to use the metrics interface
- Adds a metrics interface implementation for clj-statsd library.
- Fixes metrics initialization
- Refactors Metrics.clj
- Moves dropwizard metrics logic to its own namespace
- Moves statsd state (transport and reporter) to dropwizard namespace
- Removes wrap-with-metrics middleware from HTTP router
- Fixes bug Issue to avoid confusion between datadog and statsd
- Fixes bug for this Issue
- Releases metrics for http requests in a stable release
- Added metrics for recording http requests served
- Changes the exponential backoff config contract.
- Adds a `:type` key to the retry config (see the sketch below)
- Adds a limit on the number of retries possible in exponential backoff
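A sketch of the updated retry config with the new `:type` key; the values are placeholders:

```clojure
;; Illustrative values only: :type selects the backoff strategy
;; (e.g. :exponential) and :count caps the number of retries.
{:ziggurat {:retry {:enabled [true :bool]
                    :type    [:exponential :keyword]
                    :count   [5 :int]}}}
```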
- Releasing exponential backoff as an alpha feature
- Fixes issue where dead-set replay doesn't send the message to the retry-flow
- Fixes issue by updating tools.nrepl dependency to nrepl/nrepl
- Fixes this bug where dead set replay broke on Ziggurat upgrade from 2.x to 3.x.
- Fixes this bug in RabbitMQ message processing flow
- Adds support for exponential backoffs in channels and normal retry flow
- exponential backoffs can be enabled from the config
- Fixes issue where dead-set replay doesn't send the message to the retry-flow
- Fixes issue by updating tools.nrepl dependency to nrepl/nrepl
- Fixes this bug where dead set replay broke on Ziggurat upgrade from 2.x to 3.x.
- Fixes this bug in RabbitMQ message processing flow
- Adds support for exponential backoffs in channels and normal retry flow
- exponential backoffs can be enabled from the config
- Adds tracing support, with Jaeger as the default tracer
- Adds a JSON middleware to parse JSON serialized messages
- Renames report-time to report-histogram and adds deprecation notice on report-time
- Makes metrics backward compatible with 2.x and 3.0.0. Ziggurat now publishes metrics in two formats, similar to versions 2.12.0 and above.
- Fixes metrics publishing for custom metrics (i.e. string metric namespaces). In 2.x, Ziggurat appended the service_name to a string metrics namespace (e.g. "metric" -> "service_name.metric"); we changed the contract in 3.0 by removing the service_name from the metric name and adding a tag for it instead. To be backward compatible with both 2.x and 3.0, we now send metrics in both formats.
- Reintroduces old metrics format (statsd). Ziggurat now pushes metrics in both formats (statsd and prometheus like).
- Reverts the changes for exponential backoff, the current implementation was broken and a new PR is being raised with
the correct approach.
- Renames report-time to report-histogram while being backward compatible
- JSON middleware has been added.
- Adds custom delay (Issue#78)
for processing messages from RabbitMQ channels and
adds exponential backoff strategy (configurable) for channel retries.
- Adds tracing support to the framework.
- Updates kafka streams - 1.1.1 -> 2.1.0
- Changes metrics format
  - Instead of having service name and topic in the metric name, everything is now added to tags
- Middleware
  - Handler-fn will now receive the message as a byte array
  - Channel-fns will now receive the message as a byte array
  - We have provided middlewares that can be used to deserialize the messages (see the sketch below)
  - Deadset-get API will now return serialized messages
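A minimal sketch of wrapping a handler with a deserialization middleware, assuming the protobuf middleware from `ziggurat.middleware.default`; the proto class and stream route keyword are placeholders:

```clojure
(require '[ziggurat.middleware.default :as mw])

;; Because of the middleware, main-fn receives a deserialized message;
;; without it, it would receive the raw byte array.
(defn main-fn
  [message]
  (println message)
  :success)

;; ProtobufGeneratedClass and :stream-id are placeholders.
(def handler-fn
  (-> main-fn
      (mw/protobuf->hash ProtobufGeneratedClass :stream-id)))
```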
- Java functions
  - Java functions are now exposed for all public functions in namespaces
- Dependency simplification
  - Removes dependency overrides.
  - Removes dependency overrides and conflicts
  - Adds pedantic warn to generic profile
  - Adds pedantic abort to uberjar profile
- Fix increment/decrement count to accept both number and map
- Exposes Java methods for init, config, producer, sentry, metrics, fixtures namespaces
- Adds util methods for converting data types between java and Clojure
- Remove old metrics from being sent
- Adds middleware support
- Breaking Change! Mapper-function will now receive the deserialised message if middleware is not applied
- Deadset view will now return serialized messages
- Fixes a bug that incorrectly checked for additional-tags in ziggurat.metrics/merge-tags
- Fixes a bug where calling inc-or-dec count without passing additional tags raised an exception
- Upgrades Kafka Streams to version 2.1. Please refer to this to upgrade
- Fix increment/decrement count to accept both number and map
- Fix functions to either take vector or string as input
- Fixes a bug that incorrectly checked for additional-tags in ziggurat.metrics/merge-tags
- Fixes a bug where calling inc-or-dec count without passing additional tags raised an exception
- Add support for providing a topic-name label in the metrics
- Multiple Kafka producers support in ziggurat (#55)
- Validate stream routes only when modes is not present or it contains stream-server (#59)
- Actor stop fn should stop before the Ziggurat state (#53)
- Running ziggurat in different modes (#46)
- Adds config to change the changelog topic replication factor
- Don't close the channel in the shutdown listener. It is already closed when the connection is broken; closing it prevents topology recovery.
- Catch message production exceptions in the RabbitMQ publisher
- Adds nippy as a dependency instead of carmine
- Adds macro for setting thread-local context params for logs
- Adds deployment stage on CI pipeline
- Initialize reporters before running actor start fn
- Initialize reporters before running actor start fn
- Adds deployment stage on CI pipeline
- Updates changelog for older releases
- Releases using java 8
- This release has been compiled using Java 10. It will not work with older versions of Java.
- Adds oldest-processed-message-in-s config
- Adds capability to filter messages based on timestamp
- Fixes bug in the dead-set API when channels are enabled
- Changes the namespace of `transformer` to `timestamp-transformer`
- Handle Deadset API when retry is disabled
- Fixing message being retried n + 1 times
- Fixing kafka delay calculation
- Upgrades kafka streams to 1.1.1
- Adds stream integration tests
- Adds API to flush messages from dead-letter-queue in RabbitMQ
- Starts sentry-reporter on application initialization
- removes executor dependency as it was not being used
- updates readme and contribution guidelines
- refactors config files to remove gojek specific configs
- Removes sentry dependency and instead uses sentry-clj.async
- Merges lambda commons and adds default configs for missing application specified configs
- Users using `lambda-commons.metrics` should now start using `ziggurat.metrics` to send custom metrics.
- Fixes bug where connection to RabbitMQ fails when stream routes are not passed in mount/args
- Adds support for multipart params over actor routes, moves lein-kibit and eastwood to dev plugins
- Changed the startup order of Ziggurat and the actor: first the config is initialized, then the actor start function runs, and then the Ziggurat start function runs.
- Apps with `(mount/start)` in their `start-fn` will no longer work correctly. Users should start using `mount/only` instead.
- Removes Yggdrasil, bulwark and ESB log entities dependency
- Removes the `make-config` function from the `ziggurat.config` namespace. Users should now use the `config-from-env` function instead.
- Overrides and exposes Kafka Streams configs: buffered.records.per.partition and commit.interval.ms
- Fixes bug where rabbitmq connection is established even when retry is disabled and channels are absent in
consumer/start-subscribers
- Add configuration to read data from earliest offset in kafka
- Fixes rabbitmq queue creation when retries are disabled and channels are present
- Fixes rabbitmq initialization when retry is disabled but channels are present
- Fixes dead set management api to validate the channel names
- Starts up rabbitmq connection when channels are present or retry is enabled
- Fixes bug around reporting execution time for handler fn
- Adds arbitrary channels for long running jobs
- Fix parallelism for retry workers
- Starts sending expiration per message instead of setting it on the queue
- Starts calculating timestamp from kafka metadata
- removes deprecated config variables in kafka-streams
- Upgraded lambda commons library to 0.3.1
- Upgraded lambda commons library
- Adds metrics to skipped and retried messages
- Retry message when the actor raises an exception
- Add support for multi stream routes
- Fixes replay of messages in dead letter queue.
- Bumps up lambda-common version to 0.2.2
- Fixes converting message from kafka to clojure hash
- Fixes converting message from kafka to clojure hash
- Instruments time of execution of mapper function
- Increments the esb-log-entities version to fetch from 3.18.7 and above
- Fixes the consumer to retry the mapper-fn
- Uses WallclockTimestampExtractor as timestamp extractor for the streams
- Always fetches the esb-log-entities version greater than or equal to 3.17.11
- Bumps up the esb log entities version to 3.17.11
- Fetches config from Yggdrasil and, if not found, falls back to env. Configs added:

```clojure
{:ziggurat {:yggdrasil {:host "http://localhost"
                        :port [8080 :int]
                        :connection-timeout-in-ms [1000 :int]}}}
```
- Bumps up the esb log entities version
- Adds ability to pass actor specific routes
- Changes dependency from esb-log-client to esb-log-entities
- Adds metrics to count throughput
- Changes the job name getting pushed to NR
- Adds a `v1/dead_set` API to view the dead set messages
- Bump version of `com.gojek/sentry`
- Fixed a bug in application shutdown: the actor's start-fn was being called instead of the stop-fn.
- Made some functions private.
- Added some docstrings.
- Added Gotchas section to the README.
- Added ziggurat.sentry/report-error to be used by actors.
- Upgrade esb-log-client version to latest (1.103.0).
- Various internal refactorings: removed dead code, fixed some spelling mistakes, made some functions private.
- Flag to enable retries and conditionally start the rabbitmq states depending on this flag.
- Namespace framework configs under `:ziggurat`