Add TimestampWithOffset canonical extension type #558

serramatutu wants to merge 10 commits into apache:main

Conversation
…48002)

### Rationale for this change

Closes #44248

Arrow has no built-in canonical way of representing the `TIMESTAMP WITH TIME ZONE` SQL type, which is present across multiple different database systems. Not having a native way to represent this forces users either to convert to UTC and drop the time zone, which may have correctness implications, or to use bespoke workarounds. A new `arrow.timestamp_with_offset` extension type would introduce a standard canonical way of representing that information.

Rust implementation: apache/arrow-rs#8743
Go implementation: apache/arrow-go#558
[DISCUSS] thread on the mailing list: https://lists.apache.org/thread/yhbr3rj9l59yoxv92o2s6dqlop16sfnk

### What changes are included in this PR?

Proposal and documentation for the `arrow.timestamp_with_offset` canonical extension type.

### Are these changes tested?

N/A

### Are there any user-facing changes?

Yes, this is an extension to the Arrow format.

* GitHub Issue: #44248

Co-authored-by: David Li <li.davidm96@gmail.com>
Co-authored-by: Joris Van den Bossche <jorisvandenbossche@gmail.com>
Co-authored-by: Felipe Oliveira Carvalho <felipekde@gmail.com>
```go
// valid Dictionary index type.
//
// The error will be populated if the index is not a valid dictionary-encoding index type.
func NewTimestampWithOffsetTypeDictionaryEncoded(unit arrow.TimeUnit, index arrow.DataType) (*TimestampWithOffsetType, error) {
```
Another idea that might be interesting: use generics to avoid the need for the error output. For example:

```go
func NewTimestampWithOffsetTypeDictEncoded[T arrow.IntType | arrow.UintType](unit arrow.TimeUnit) *TimestampWithOffsetType {
	offsetType := &arrow.DictionaryType{
		IndexType: arrow.GetDataType[T](),
		ValueType: arrow.PrimitiveTypes.Int16,
	}
	// ...
}
```

This would be used like `NewTimestampWithOffsetTypeDictEncoded[int8](arrow.Second)`.

Or maybe:

```go
type IntegralType interface {
	*arrow.Int8Type | *arrow.Int16Type | *arrow.Int32Type | *arrow.Int64Type |
		*arrow.Uint8Type | *arrow.Uint16Type | *arrow.Uint32Type | *arrow.Uint64Type
}

func NewTimestampWithOffsetTypeDictEncoded[T IntegralType](unit arrow.TimeUnit, index T) *TimestampWithOffsetType {
	offsetType := &arrow.DictionaryType{
		IndexType: index,
		ValueType: arrow.PrimitiveTypes.Int16,
	}
	// ...
}
```

This could be used like `NewTimestampWithOffsetTypeDictEncoded(arrow.Second, arrow.PrimitiveTypes.Int8)`.

Leveraging generics might simplify usage by enforcing the proper types at compile time. This is just an idea; curious what you think.
I like it. I prefer compile-time errors over runtime errors :)
Commit: cbf8f65
Ok, I fixed this, but I'm still not sure I like the look of the API. It turns out the `arrow.PrimitiveTypes.*` values are all `DataType`s, and the compiler complains that `DataType` is too broad to fit in the generic constraints.
You need to use the concrete type struct manually, like so:

```go
NewTimestampWithOffsetTypeDictEncoded(arrow.Second, &arrow.Int8Type{})
```

The other disadvantage is that I can't just iterate over a list of `DataType` like I was doing in tests before; you need to specify the specialization of the function at compile time. That might not be a problem for users, but I wanted to point it out...
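The constraint behavior described here can be reproduced with a small stdlib-only sketch: a union constraint of concrete pointer types accepts those concrete structs but rejects values typed as the broad interface. The names below (`DataType`, `Int8Type`, `NewDictEncoded`) are stand-ins for the arrow-go types, not the real API.

```go
package main

import "fmt"

// DataType mirrors the broad interface that is "too big" for the union.
type DataType interface{ Name() string }

type Int8Type struct{}

func (*Int8Type) Name() string { return "int8" }

type Int16Type struct{}

func (*Int16Type) Name() string { return "int16" }

// IntegralType restricts the type set to the concrete pointer types while
// still providing the DataType method set.
type IntegralType interface {
	*Int8Type | *Int16Type
	DataType
}

func NewDictEncoded[T IntegralType](index T) string {
	// index is usable as a DataType because the constraint embeds it.
	return "dict<" + index.Name() + ">"
}

func main() {
	// You must pass the concrete type struct; a value typed as DataType
	// does not satisfy the union and would fail to compile:
	//   var dt DataType = &Int8Type{}
	//   NewDictEncoded(dt) // compile error
	fmt.Println(NewDictEncoded(&Int8Type{})) // dict<int8>
}
```

This also shows why iterating over a `[]DataType` no longer works: each call site must pick a concrete specialization at compile time.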
```go
// valid run-ends type.
//
// The error will be populated if runEnds is not a valid run-end encoding run-ends type.
func NewTimestampWithOffsetTypeRunEndEncoded(unit arrow.TimeUnit, runEnds arrow.DataType) (*TimestampWithOffsetType, error) {
```
See the previous comment about potentially leveraging generics for simplification
```go
values := make([]interface{}, a.Len())
a.iterValues(func(i int, timestamp *time.Time) {
	values[i] = timestamp
})
return json.Marshal(values)
```
Suggested change:

```diff
-values := make([]interface{}, a.Len())
-a.iterValues(func(i int, timestamp *time.Time) {
-	values[i] = timestamp
-})
-return json.Marshal(values)
+return json.Marshal(a.Values())
```
I ended up not being able to do this because I changed `iterValues()` to return `time.Time` instead of `*time.Time`. So we need to check whether `Unix() == 0`, meaning the value should actually be serialized as null instead of 1970-01-01.
```go
timestamps.UnsafeAppendBoolToBitmap(false)
offsets.UnsafeAppendBoolToBitmap(false)
```
These are non-nullable, I thought? We should be pushing defaults (0, 0) in the case where `!valids[i]`.
Oops, yeah, this was left over from the old implementation before we all circled back to this being non-nullable again... Good catch :)
As we discussed in Slack, I found an issue when roundtripping to JSON.
It looks like replacing these with UnsafeAppend results in an array with nulls=0 in the inner (non-nullable) field.
If I write the same array to JSON and parse it back with arrow.RecordFromJSON(), the inner array has nulls=1, even if the field is not nullable.
This leads to arrow.RecordEqual() returning false due to original.NullN() != roundtrip.NullN().
There is an ongoing thread on the mailing list about how to resolve this, so I'll hold off on fixing it for now.
```go
timestamps.UnsafeAppendBoolToBitmap(false)
offsets.UnsafeAppendBoolToBitmap(false)
```
Same as above, these are non-nullable according to the spec.
```go
timestamps.UnsafeAppendBoolToBitmap(false)
offsets.UnsafeAppendBoolToBitmap(false)
```
Same as above; these are non-nullable according to the spec.
This commit adds a new `TimestampWithOffset` extension type that can be used to represent timestamps with per-row time zone information. It stores the data in a `struct` with two fields: `timestamp=[T, "UTC"]`, where `T` can be any `arrow.TimeUnit`, and `offset_minutes=int16`, which holds the offset in minutes from the UTC timestamp.
This commit allows `TimestampWithOffset` to be dict-encoded.
- I made `NewTimestampWithOffsetType` take an input `offsetType arrow.DataType`. It returns an error if the data type is not valid.
- I added a new infallible `NewTimestampWithOffsetTypePrimitiveEncoded` to make the encoding explicit.
- I added `NewTimestampWithOffsetTypeDictionaryEncoded`, which returns an error in case the given type is not a valid dictionary key type.
- I made all tests run in a for loop over all allowed encoding types, ensuring every encoding works.
Smartly iterate over offsets if they're run-end encoded instead of doing a binary search at every iteration. This makes the loops O(n) instead of O(n log n).
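The linear pass over a run-end-encoded array can be sketched as follows: walk logical indices once and advance a run pointer whenever the index crosses the current run's end, instead of binary-searching the run for each index. `iterRunEnds` is a hypothetical stdlib-only helper, not the arrow-go implementation.

```go
package main

import "fmt"

// iterRunEnds visits every logical element of a run-end-encoded array in
// a single O(n) pass. runEnds holds the exclusive end index of each run;
// values holds one value per run. The inner loop advances the run pointer
// at most len(runEnds) times in total, so the whole walk is linear,
// versus O(n log n) for a binary search per element.
func iterRunEnds(runEnds []int32, values []int16, f func(i int, v int16)) {
	run := 0
	n := 0
	if len(runEnds) > 0 {
		n = int(runEnds[len(runEnds)-1])
	}
	for i := 0; i < n; i++ {
		for int32(i) >= runEnds[run] {
			run++ // crossed into the next run
		}
		f(i, values[run])
	}
}

func main() {
	// Two runs: offset 60 for logical indices [0,3), offset -300 for [3,5).
	iterRunEnds([]int32{3, 5}, []int16{60, -300}, func(i int, v int16) {
		fmt.Println(i, v)
	})
}
```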
Changed a lot of things based on Matt's suggestions.
These builder implementations were inheriting the default implementation from `Builder`, which does not bump the length of the inner builders. This would leave the builder in an inconsistent state where the top-level builder had the correct length but the inner builders did not.
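The inconsistency fixed by this commit can be sketched with a minimal composite builder (stdlib only; `structBuilder`/`childBuilder` are illustrative names, not arrow-go types):

```go
package main

import "fmt"

// childBuilder stands in for an inner (non-nullable) field builder.
type childBuilder struct{ vals []int64 }

func (b *childBuilder) Append(v int64) { b.vals = append(b.vals, v) }
func (b *childBuilder) Len() int       { return len(b.vals) }

// structBuilder stands in for the top-level extension builder.
type structBuilder struct {
	length int
	child  childBuilder
}

// AppendNull must bump the child too (pushing a default value), or the
// top-level length and the child length drift apart, which is exactly
// the inconsistent state described above.
func (b *structBuilder) AppendNull() {
	b.length++
	b.child.Append(0) // keep inner builder length in sync
}

func main() {
	var b structBuilder
	b.AppendNull()
	b.AppendNull()
	fmt.Println(b.length, b.child.Len()) // 2 2
}
```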
### Which issue does this PR close?

This PR implements the new `arrow.timestamp_with_offset` canonical extension type for `arrow-go`.

### Rationale for this change

Be compatible with the Arrow spec.

### What changes are included in this PR?

This commit adds a new `TimestampWithOffset` extension type. This type represents a timestamp column that stores potentially different time zone offsets per value. The timestamp is stored in UTC alongside the original time zone offset in minutes. The offset in minutes can be primitive-encoded, dictionary-encoded, or run-end encoded.

### Are these changes tested?

Yes.

### Are there any user-facing changes?

Yes, this is a new canonical extension type.