Current state: Accepted
Discussion thread: here
Vote thread: here
JIRA: KAFKA-13511
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Currently, the Kafka Connect SMT TimestampConverter can convert timestamps from multiple source types (String, Unix Long, or Date) into different target types (String, Unix Long, or Date).
The problem is that a Unix Long, whether as a source or as a target type, must have millisecond precision.
In many cases, external systems represent Unix time with different precisions: seconds, microseconds, or nanoseconds.
When such a case arises, Kafka Connect can't do anything except pass the Unix Long along and leave the conversion to another layer.
This issue has been raised several times.
TimestampConverter should have a config to define which precision to use when converting from and to a Unix Long timestamp.
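To make the proposal concrete, here is a minimal sketch of how a source Unix Long in a configured precision could be normalized to epoch milliseconds (the precision Connect's Timestamp logical type uses internally). This is an illustration only, assuming the conversion is done with `java.util.concurrent.TimeUnit`; the helper name `toEpochMillis` is hypothetical, not part of the actual SMT.

```java
import java.util.Date;
import java.util.concurrent.TimeUnit;

public class UnixPrecisionSketch {
    // Hypothetical helper: normalize a raw Unix Long in the configured
    // unix.precision to epoch milliseconds. Sub-millisecond components
    // are truncated, which is the precision loss the KIP calls out.
    static long toEpochMillis(long unixLong, String unixPrecision) {
        switch (unixPrecision) {
            case "seconds":      return TimeUnit.SECONDS.toMillis(unixLong);
            case "microseconds": return TimeUnit.MICROSECONDS.toMillis(unixLong);
            case "nanoseconds":  return TimeUnit.NANOSECONDS.toMillis(unixLong);
            default:             return unixLong; // "milliseconds"
        }
    }

    public static void main(String[] args) {
        long micros = 1_638_316_800_123_456L; // epoch microseconds
        long millis = toEpochMillis(micros, "microseconds");
        System.out.println(millis);           // 1638316800123 (sub-millis truncated)
        System.out.println(new Date(millis));
    }
}
```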
| name | description | type | default | valid values | importance |
|---|---|---|---|---|---|
| unix.precision | The desired Unix precision for the timestamp. Used to generate the output when type=unix, or to parse the input if the input is a Long. Note: this SMT causes precision loss during conversions from and to values with sub-millisecond components. | String | milliseconds | seconds, milliseconds, microseconds, nanoseconds | low |
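The output direction works the same way in reverse: when target.type=unix, the internal epoch-millisecond value is scaled to the requested precision. A minimal sketch, again assuming `TimeUnit` and a hypothetical helper name `fromEpochMillis`; it also illustrates why sub-millisecond output digits are always zero:

```java
import java.util.concurrent.TimeUnit;

public class UnixOutputSketch {
    // Hypothetical helper: render the internal epoch-millis value in the
    // precision requested by unix.precision when target.type=unix.
    static long fromEpochMillis(long epochMillis, String unixPrecision) {
        switch (unixPrecision) {
            case "seconds":      return TimeUnit.MILLISECONDS.toSeconds(epochMillis);
            case "microseconds": return TimeUnit.MILLISECONDS.toMicros(epochMillis);
            case "nanoseconds":  return TimeUnit.MILLISECONDS.toNanos(epochMillis);
            default:             return epochMillis; // "milliseconds"
        }
    }

    public static void main(String[] args) {
        long millis = 1_638_316_800_123L;
        // The last six digits of the nanosecond output are always zero,
        // because the internal representation never held sub-millisecond detail.
        System.out.println(fromEpochMillis(millis, "nanoseconds")); // 1638316800123000000
    }
}
```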
Implementation details to be discussed:
Unix Long to Timestamp example:
```
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_long",
"transforms.TimestampConverter.unix.precision": "microseconds",
"transforms.TimestampConverter.target.type": "Timestamp"
```
String to Unix Long nanoseconds example:
```
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_str",
"transforms.TimestampConverter.format": "yyyy-MM-dd'T'HH:mm:ss.SSS",
"transforms.TimestampConverter.target.type": "unix",
"transforms.TimestampConverter.unix.precision": "nanoseconds"
```
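The precision-loss caveat matters here: SimpleDateFormat's SSS pattern only resolves milliseconds, so a nanosecond output is just the parsed millisecond value scaled up. A minimal sketch of that behavior, using the same format string as the example above (the helper `parseToNanos` is illustrative, not the SMT's actual code):

```java
import java.text.SimpleDateFormat;
import java.util.TimeZone;
import java.util.concurrent.TimeUnit;

public class StringToUnixNanosSketch {
    // Illustrative helper: parse a timestamp string, then scale the
    // millisecond result to nanoseconds. True sub-millisecond detail in
    // the input could never survive, since the format stops at SSS.
    static long parseToNanos(String value, String format) throws Exception {
        SimpleDateFormat fmt = new SimpleDateFormat(format);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return TimeUnit.MILLISECONDS.toNanos(fmt.parse(value).getTime());
    }

    public static void main(String[] args) throws Exception {
        long nanos = parseToNanos("2021-12-01T00:00:00.123",
                                  "yyyy-MM-dd'T'HH:mm:ss.SSS");
        System.out.println(nanos); // 1638316800123000000
    }
}
```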
Since these classes can only handle precisions down to the millisecond, it should be noted that:
Systems that produce int32 into Kafka should deliberately chain the Cast SMT and then the TimestampConverter SMT if they want to use this feature.
```
"transforms": "Cast,TimestampConverter",
"transforms.Cast.type": "org.apache.kafka.connect.transforms.Cast$Value",
"transforms.Cast.spec": "event_date_int:int64",
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_int",
"transforms.TimestampConverter.unix.precision": "seconds",
"transforms.TimestampConverter.target.type": "Timestamp"
```
This change does not break backward compatibility.
If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.
epoch.precision
Since an epoch is not a measure but rather a point in time, it can't be associated with a precision. For that reason, it makes more sense to name the field unix.precision.
"seconds" is a unit, but "millis", "micros", and "nanos" are really just prefixes; mixing the two doesn't work well.
s, ms, µs, and ns are valid SI symbols, but µs (or its accepted equivalent us) can be confusing.
For clarity, it was decided to use the plain-text naming convention.