This namespace aims to provide `->T`, `(datafy T)`, and `data->T` as a round-tripping of Kafka's (client) record types. Note that for some types, particularly Kafka's `-Result` types, no `->T` constructors are provided, as there are no consumers within the Kafka API for these records; they are merely packed results. For compatibility with Clojure before 1.10.0, a `datafy` function is provided. On 1.10 or later it simply defers to `clojure.datafy/datafy`, but before 1.10 it acts as a backport thereof.
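A minimal round-trip sketch, assuming this namespace's vars are referred in and a Kafka client is on the classpath; the topic name and the shape of the datafied map are illustrative:

```clojure
(let [tp (->TopicPartition {:topic-name "events"} 0)]
  ;; datafy turns the record into plain Clojure data,
  ;; e.g. something like {:topic-name "events", :partition 0} ...
  (datafy tp)
  ;; ... and the map-> constructor inverts it back to a TopicPartition.
  (map->TopicPartition (datafy tp)))
```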
A timestamp type associated with the timestamp being from the record's creation. That is, the record timestamp was user supplied.
A timestamp type associated with the timestamp having been generated by Kafka when the record was produced, not having been specified by the user when the record was created.
A timestamp type associated with... not having a timestamp type.
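These three constants correspond to Kafka's own timestamp-type enum; a sketch of the underlying Java values (that the vars above wrap exactly these is an assumption):

```clojure
(import 'org.apache.kafka.common.record.TimestampType)

TimestampType/CREATE_TIME        ; record timestamp supplied by the user
TimestampType/LOG_APPEND_TIME    ; timestamp assigned by the broker on append
TimestampType/NO_TIMESTAMP_TYPE  ; no timestamp type at all
```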
(->ConfigEntry k v)
The value can be a string; otherwise it must be a map whose `:value` key holds the value.
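A sketch of both accepted value forms; the config key shown is illustrative:

```clojure
(->ConfigEntry "cleanup.policy" "compact")          ; plain string value
(->ConfigEntry "cleanup.policy" {:value "compact"}) ; map form, value taken from :value
```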
(->ConsumerRecord {:keys [:topic-name]} partition offset ts ts-type key-size value-size key value headers)
Given unrolled ctor-style arguments create a Kafka `ConsumerRecord`. Convenient for testing the consumer API and its helpers.
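A fixture-building sketch for a consumer test. Every value below is illustrative, and whether `ts-type` takes a datafied keyword or a `TimestampType` instance is an assumption:

```clojure
(->ConsumerRecord {:topic-name "events"} ; topic
                  0                      ; partition
                  42                     ; offset
                  1595000000000          ; ts (epoch millis)
                  :create-time           ; ts-type (assumed keyword form)
                  -1                     ; key-size, unknown
                  -1                     ; value-size, unknown
                  "user-123"             ; key
                  "login"                ; value
                  nil)                   ; headers
```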
(->ProducerRecord {:keys [topic-name]} value)
(->ProducerRecord {:keys [topic-name]} key value)
(->ProducerRecord {:keys [topic-name]} partition key value)
(->ProducerRecord {:keys [topic-name]} partition timestamp key value)
(->ProducerRecord {:keys [topic-name]} partition timestamp key value headers)
Given unrolled ctor-style arguments creates a Kafka `ProducerRecord`.
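The arities add one field at a time; a sketch with illustrative values:

```clojure
(->ProducerRecord {:topic-name "events"} "v")                         ; topic + value
(->ProducerRecord {:topic-name "events"} "k" "v")                     ; + key
(->ProducerRecord {:topic-name "events"} 0 "k" "v")                   ; + partition
(->ProducerRecord {:topic-name "events"} 0 1595000000000 "k" "v")     ; + timestamp
(->ProducerRecord {:topic-name "events"} 0 1595000000000 "k" "v" nil) ; + headers
```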
(->RecordMetadata t partition offset timestamp key-size value-size)
(->RecordMetadata t partition base-offset relative-offset timestamp key-size value-size)
(->RecordMetadata t partition base-offset relative-offset timestamp checksum key-size value-size)
Given unrolled ctor-style arguments, create a Kafka `RecordMetadata`. Note that as of KIP-31, Kafka actually only stores offsets relative to a message batch on the wire or on disk. In order to maintain the previous abstraction that there's an "offset" field which is absolute, an additional arity is provided which lets the user construct a record with a base offset and a relative offset of 0 so that the metadata's apparent offset is predictable. Note that as the checksum is deprecated, by default it is not required. The third arity allows a user to provide a checksum; this arity may be removed in the future pending further breaking changes to the Kafka APIs.
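A sketch of the first two arities, assuming `t` is a datafied topic map as in the other constructors; all numbers are illustrative:

```clojure
;; 6-arg arity: the offset (42) is given directly.
(->RecordMetadata {:topic-name "events"} 0 42 1595000000000 -1 -1)

;; 7-arg arity: base offset 40 plus relative offset 2,
;; so the apparent offset is again 42.
(->RecordMetadata {:topic-name "events"} 0 40 2 1595000000000 -1 -1)
```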
(->TimestampType kw)
Given a keyword being a datafied Kafka `TimestampType`, return the equivalent `TimestampType` instance.
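Round-tripping through `datafy` avoids guessing the keyword spelling; a sketch, assuming `datafy` is extended to `TimestampType` as the namespace docstring suggests:

```clojure
(import 'org.apache.kafka.common.record.TimestampType)

(= TimestampType/CREATE_TIME
   (->TimestampType (datafy TimestampType/CREATE_TIME)))
;; => true
```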
(->TopicPartition {:keys [:topic-name]} partition)
Given unrolled ctor-style arguments, create a Kafka `TopicPartition`.
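A sketch, equivalent to calling the Java constructor directly:

```clojure
(->TopicPartition {:topic-name "events"} 3)
;; ~ (org.apache.kafka.common.TopicPartition. "events" 3)
```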
(datafy x)
Attempts to return x as data. `datafy` will return the value of `#'clojure.core.protocols/datafy`. If the value has been transformed and the result supports metadata, `:clojure.datafy/obj` will be set on the metadata to the original value of x, and `:clojure.datafy/class` to the name of the class of x, as a symbol.
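A sketch of the metadata left on a transformed value; the printed forms are illustrative:

```clojure
(let [tp (->TopicPartition {:topic-name "events"} 0)]
  (meta (datafy tp)))
;; => {:clojure.datafy/obj   #object[org.apache.kafka.common.TopicPartition ...]
;;     :clojure.datafy/class org.apache.kafka.common.TopicPartition}
```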
(map->ConsumerRecord {:keys [:key :value :headers :partition :timestamp :timestamp-type :offset :serialized-key-size :serialized-value-size] :as m})
Given a `::consumer-record`, build an equivalent `ConsumerRecord`. Inverts `(datafy ^ConsumerRecord cr)`.
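A sketch of the inversion in practice: datafy an existing record, adjust the data, and rebuild (`consumer-record` here is a hypothetical `ConsumerRecord` already in scope):

```clojure
(-> (datafy consumer-record)
    (assoc :value "rewritten")
    (map->ConsumerRecord))
```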
(map->NewTopic {:keys [:topic-name :partition-count :replication-factor :topic-config] :as m})
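This var has no docstring upstream; the shape below is inferred from the destructuring, and the value shape of `:topic-config` is an assumption:

```clojure
(map->NewTopic {:topic-name         "events"
                :partition-count    6
                :replication-factor 3
                :topic-config       {"cleanup.policy" "compact"}})
```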
(map->ProducerRecord {:keys [topic-name key value headers partition timestamp]})
Given a `::producer-record`, build an equivalent `ProducerRecord`. Inverts `(datafy ^ProducerRecord r)`.
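A sketch building a record straight from data; which keys beyond `:topic-name` and `:value` are required is an assumption:

```clojure
(map->ProducerRecord {:topic-name "events"
                      :key        "user-123"
                      :value      "login"})
```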
(map->Properties m)
Given a mapping of keywords to string values, stringify the keys via `#'clojure.walk/stringify-keys` and return a `Properties` object with the transformed keys and unmodified values.
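A sketch of a keyword config map becoming `java.util.Properties`; note that `stringify-keys` uses the keyword's name, so `:bootstrap.servers` becomes `"bootstrap.servers"`:

```clojure
(map->Properties {:bootstrap.servers "localhost:9092"
                  :group.id          "example-group"})
;; => {"bootstrap.servers" "localhost:9092", "group.id" "example-group"}
```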
(map->RecordMetadata {:keys [:partition :timestamp :offset :serialized-key-size :serialized-value-size] :as m})
Given a `::record-metadata`, build an equivalent `RecordMetadata`. Inverts `(datafy ^RecordMetadata rm)`.
(map->TopicPartition {:keys [partition] :as m})
Given a `topic-partition`, build an equivalent `TopicPartition`. Inverts `(datafy ^TopicPartition tp)`.
(Properties->data o)
Consume a `Properties` instance, keywordizing the keys and returning a Clojure mapping of the resulting keys to unmodified values.
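The inverse direction of `map->Properties`; a sketch:

```clojure
(Properties->data
 (doto (java.util.Properties.)
   (.setProperty "bootstrap.servers" "localhost:9092")))
;; => {:bootstrap.servers "localhost:9092"}
```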