- Added floating-point getters for price fields
- Added new IntelligentCross venues `ASPN`, `ASMT`, and `ASPI`
- Upgraded `thiserror` version to 2.0
- Upgraded `pyo3` version to 0.22.6
- Fixed `pretty_activation` getter in `databento_dbn` returning `expiration` instead of `activation`
- Fixed some `pretty_` getters in `databento_dbn` that didn't correctly handle `UNDEF_PRICE`
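Several entries in this log concern DBN's fixed-precision prices. As a rough illustration (not the crate's actual implementation): DBN stores prices as 64-bit integers with nine decimal places of precision, and `UNDEF_PRICE` (`i64::MAX`) marks an unset price, which the `pretty_` getters must special-case:

```python
# Illustrative sketch of the fixed-price convention behind the pretty_
# getters; not the actual databento_dbn implementation.
FIXED_PRICE_SCALE = 1_000_000_000  # prices are integers with 1e-9 precision
UNDEF_PRICE = 2**63 - 1            # i64::MAX sentinel for an unset price

def pretty_px(raw: int):
    """Format a raw fixed-precision price, returning None when unset."""
    if raw == UNDEF_PRICE:
        return None  # an unset price must not be formatted as a number
    return f"{raw / FIXED_PRICE_SCALE:.9f}"

print(pretty_px(1_500_000_000))  # 1.500000000
print(pretty_px(UNDEF_PRICE))    # None
```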
- Added new `None` `Action` variant that will be gradually rolled out to historical and live `GLBX.MDP3` data
- Added consistent escaping of non-printable and non-ASCII values when text encoding `c_char` fields
- Implemented `Default` for `Action` and `Side`
- Added support for Python 3.13 to `databento_dbn`
- Implemented missing `Serialize` (with `serde` feature enabled) for `Venue`, `Dataset`, `Publisher`, `Compression`, `SType`, `Schema`, and `Encoding`
- Removed support for Python 3.8 in `databento-dbn` due to end of life
- Fixed buffer overrun in `c_chars_to_str` on non-null-terminated input
- Added Python type stubs for Record `__init__` methods
- Combined `_reserved3` and `_reserved4` fields in `CbboMsg`
- Changed setters for `char` Record fields to accept single-character strings
- Changed `rtype` and `length` to no longer be settable from Python. Users should use the Record type `__init__` methods to initialize records
- Added missing Python type stub for `CMBP_1` variant in `Schema`
- Added `--omit-header` option to the `dbn` CLI to skip encoding the header row when encoding CSVs
- Added Python setter for `ts_event` on all records
- Upgraded `pyo3` version to 0.22.3
- Added new consolidated publishers for `XNAS.BASIC` and `DBEQ.MAX`
- Changed handling of the `write_header` parameter for `CsvEncoder` and `DynEncoder`. It now determines whether a header is written automatically in general, not only during instantiation of the encoder. This makes it possible to use `encode_records` and `encode_decoded` without writing a header
  - `CsvEncoder::new` creates an encoder that will always try to write a header. Use the builder with `write_header(false)` to create an encoder that won't write a header row
  - `schema` is now always optional for the `CsvEncoder` builder and no longer returns a `Result`
- Changed the layout of `CbboMsg` to better match `BboMsg`
- Renamed `Schema::Cbbo` to `Schema::Cmbp1`
- Removed `debug_assert!` on `rtype` in `RecordRef::get_unchecked` that was too strict. The method is already marked unsafe, and it's okay to interpret one record type as another as long as the latter type's size is not greater than the former's
- Added `DynAsyncBufWriter` for buffering compressed or uncompressed async output
- Added new publisher values for `XCIS.BBOTRADES` and `XNYS.BBOTRADES`
- Added missing Python type stub for `pretty_ts_ref` in `StatMsg`
- Added new `SType` variants for reference data: `Isin`, `UsCode`, `BbgCompId`, `BbgCompTicker`, `Figi`, and `FigiTicker`
- Added new publisher value for `DBEQ.SUMMARY`
- Renamed `SType::Nasdaq` variant to `SType::NasdaqSymbol`
- Renamed `SType::Cms` variant to `SType::CmsSymbol`
- Fixed issue where `AsyncDynReader` would only decode the first frame of multi-frame Zstandard files
- Updated `rtype_dispatch` and `schema_dispatch` macros for `BboMsg`
- Updated `RecordEnum` and `RecordRefEnum` for `BboMsg`
- Added `BboMsg` record struct for future `bbo-1m` and `bbo-1s` schemas
- Upgraded `pyo3` version to 0.22.1
- Upgraded `json-writer` to 0.4
- Added `Default` trait implementation for `Mbp1Msg` due to it no longer needing to support multiple `rtype` values. The `default_for_schema` function has been removed
- Changed `Bbo1sMsg` and `Bbo1mMsg` to be aliases for `BboMsg`
- Changed the default value of the `side` fields to `Side::None`
- Reordered parameters and added defaults to the Python `Metadata` initializer to match required arguments in Rust
- Fixed issue where DBN encoders would permit symbols in the metadata that left no space for a null terminator
- Updated metadata length calculation to respect the `symbol_cstr_len` field rather than inferring the length from `version`
- Added new `shutdown` method to async encoders to more easily ensure the end of output is written and I/O cleaned up. Previously this required a call to `.get_mut().shutdown().await`
- Changed `AsyncDynWriter` and `AsyncDbnEncoder::with_zstd` to use a zstd checksum like the sync equivalents
- Added new publisher values for `XNAS.BASIC` and `XNAS.NLS`
- Fixed bug where DBN metadata would still be upgraded after passing `AsIs` to `DbnDecoder::set_upgrade_policy` and `AsyncDbnDecoder::set_upgrade_policy`
- Added new `stat_type` for `UncrossingPrice`
- Added new publisher values for `XNAS.BASIC`
- Added new off-market publisher values for `IFEU.IMPACT` and `NDEX.IMPACT`
- Fixed descriptions for `FINN` and `FINY` publishers
- Added links to example usage in documentation
- Added new predicate methods `InstrumentClass::is_option`, `is_future`, and `is_spread` to make it easier to work with multiple instrument class variants
- Implemented `DecodeRecord` for `DbnRecordDecoder`
- Added `new_inferred`, `with_buffer`, `inferred_with_buffer`, `from_file`, `get_mut`, and `get_ref` methods to `AsyncDynReader` for parity with the sync `DynReader`
- Improved documentation enumerating errors returned by functions
- Added new `DBNError` Python exception that's now the primary exception raised by `databento_dbn`
- Improved async performance of decoding DBN files
- Added `StatMsg::ts_in_delta()` method that returns a `time::Duration` for consistency with other records with a `ts_in_delta` field
- Changed type of `flags` in `MboMsg`, `TradeMsg`, `Mbp1Msg`, `Mbp10Msg`, and `CbboMsg` from `u8` to a new `FlagSet` type with predicate methods for the various bit flags as well as setters. The `u8` value can still be obtained by calling the `raw()` method
  - Improved `Debug` formatting
  - Python and encodings are unaffected
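A minimal sketch of what a `FlagSet`-style wrapper looks like, written in Python for brevity; the bit positions follow DBN's published flag constants, but treat the specifics here as assumptions rather than the crate's definitions:

```python
# Hypothetical FlagSet-style wrapper; bit positions assume DBN's
# documented flags (LAST = bit 7, TOB = bit 6, SNAPSHOT = bit 5).
F_LAST = 1 << 7      # last record in the event for an instrument
F_TOB = 1 << 6       # top-of-book message
F_SNAPSHOT = 1 << 5  # sourced from a replay or snapshot

class FlagSet:
    def __init__(self, raw: int = 0) -> None:
        self._raw = raw & 0xFF  # stored as a u8, like the DBN field

    def raw(self) -> int:
        """Return the underlying u8 value, mirroring the Rust raw() method."""
        return self._raw

    def is_last(self) -> bool:
        return bool(self._raw & F_LAST)

    def is_tob(self) -> bool:
        return bool(self._raw & F_TOB)

    def set_last(self) -> "FlagSet":
        self._raw |= F_LAST
        return self

flags = FlagSet(F_TOB)
print(flags.is_last())         # False
print(flags.set_last().raw())  # 192
```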
- Removed `write_dbn_file` function deprecated in version 0.14.0 from the Python interface. Please use `Transcoder` instead
- Switched `DecodeStream` from the `streaming_iterator` crate to `fallible_streaming_iterator` to allow better notification of errors
- Switched `EncodeDbn::encode_stream` from accepting an `impl StreamingIterator` to accepting a `FallibleStreamingIterator` to allow bubbling up of decoding errors
to allow bubbling up of decoding errors - Changed default value for
stype_in
andstype_out
inSymbolMappingMsg
tou8::MAX
to match C++ client and to reflect an unknown value. This also changes the value of these fields when upgrading aSymbolMappingMsgV1
to DBNv2 - Renamed
CbboMsg
toCBBOMsg
in Python for consistency with other schemas - Changed text serialization of
display_factor
to be affected bypretty_px
. While it's not a price, it uses the same fixed-price decimal format as other prices - Changed text serialization of
unit_of_measure_qty
inInstrumentDefMsgV1
to be affected bypretty_px
to match behavior ofInstrumentDefMsgV2
- Added missing Python type stub for
StatusMsg
- Added new record type `CbboMsg`, along with new rtypes and schema types for `Cbbo`, `Cbbo1S`, `Cbbo1M`, `Tcbbo`, `Bbo1S`, and `Bbo1M`
- Added `Volatility` and `Delta` `StatType` variants
- Added `Undefined` and `TimeProRata` `MatchAlgorithm` variants
variants - Exported more enums to Python:
Action
InstrumentClass
MatchAlgorithm
SecurityUpdateAction
Side
StatType
StatUpdateAction
StatusAction
StatusReason
TradingEvent
TriState
UserDefinedInstrument
- Removed `Default` trait implementation for `Mbp1Msg` due to it now having multiple permissible `rtype` values. Users should use `default_for_schema` instead
- Changed the default `match_algorithm` for `InstrumentDefMsg` and `InstrumentDefMsgV1` from `Fifo` to `Undefined`
- Made `Dataset`, `Venue`, and `Publisher` non-exhaustive to allow future additions without breaking changes
- Renamed publishers from deprecated datasets to their respective sources (`XNAS.NLS` and `XNYS.TRADES` respectively)
- Deprecated dataset values `FINN.NLS` and `FINY.TRADES`
- Fixed an issue where the Python `MappingIntervalDict` was not exported
- Fixed Python type stubs for `VersionUpgradePolicy` and `SType`
- Updated `StatusMsg` and made it public in preparation for releasing a status schema
- Added `StatusAction`, `StatusReason`, `TradingEvent`, and `TriState` enums for use in the status schema
enums for use in the status schema - Added
-t
and--tsv
flags to DBN CLI to encode tab-separated values (TSV) - Added
delimiter
method to builders forDynEncoder
andCsvEncoder
to customize the field delimiter character, allowing DBN to be encoded as tab-separated values (TSV) - Documented cancellation safety for
AsyncRecordDecoder::decode_ref
(credit: @yongqli) - Added new publisher values for consolidated DBEQ.MAX
- Added C FFI conversion functions from `ErrorMsgV1` to `ErrorMsg` and from `SystemMsgV1` to `SystemMsg`
- Improved documentation for the `side` field and `Side` enum
- Upgraded `async-compression` to 0.4.6
- Upgraded `strum` to 0.26
- Changed default for `VersionUpgradePolicy` to `Upgrade`
- Changed default `upgrade_policy` for `DbnDecoder`, `AsyncDbnDecoder`, and the Python `DBNDecoder` to `Upgrade` so by default the primary record types can always be used
- Changed fields of the previously-hidden `StatusMsg` record type
- Updated text serialization order of the status schema to match other schemas
- Changed text serialization of `unit_of_measure_qty` to be affected by `pretty_px`. While it's not a price, it uses the same fixed-price decimal format as other prices
- Made `StatType` and `VersionUpgradePolicy` non-exhaustive to allow future additions without breaking changes
- Renamed `_dummy` field in `ImbalanceMsg` and `StatMsg` to `_reserved`
- Added `ts_out` parameter to `RecordDecoder` and `AsyncRecordDecoder` `with_upgrade_policy` methods
- Fixed handling of `ts_out` when upgrading DBNv1 records to version 2
- Added missing `StatType::Vwap` variant used in the ICE datasets
- Fixed an issue with Python stub file distribution
- Fixed missing handling of `ErrorMsgV1` and `SystemMsgV1` in `rtype` dispatch macros
- Fixed an import error in the Python type stub file
- Improved `Debug` implementation for all record types:
  - Prices are formatted as decimals
  - Fixed-length strings are formatted as strings
  - Bit flag fields are formatted as binary
  - Several fields are formatted as enums instead of their raw representations
- Improved `Debug` implementation for `RecordRef` to show the `RecordHeader`
- Added `--schema` option to the `dbn` CLI tool to filter a DBN file to a particular schema. This allows outputting saved live data to CSV
- Allowed passing the `--limit` option to the `dbn` CLI tool with the `--metadata` flag
- Improved performance of decoding uncompressed DBN fragments with the `dbn` CLI tool
- Added builders to `CsvEncoder`, `DynEncoder`, and `JsonEncoder` to assist with the growing number of customizations
  - Added option to write the CSV header as part of creating `CsvEncoder` to make it harder to forget
- Added `-s`/`--map-symbols` flag to the CLI to create a `symbol` field in the output with the text symbol mapped from the instrument ID
- Added `version` param to the Python `Metadata` constructor to choose between DBNv1 and DBNv2
- Implemented `EncodeRecordTextExt` for `DynEncoder`
- Implemented `Deserialize` and `Serialize` for all records and enums (with `serde` feature enabled). This allows serializing records with additional encodings not supported by the DBN crate
- Implemented `Hash` for all record types
- Added new publisher value for OPRA MIAX Sapphire
- Added Python type definition for `Metadata.__init__`
- Added `metadata_mut` method to decoders to get a mutable reference to the decoded metadata
- Improved panic message on `RecordRef::get` when the length doesn't match the expected length so that it's actionable
- Added `encode::ZSTD_COMPRESSION_LEVEL` constant
- Increased size of `SystemMsg` and `ErrorMsg` to provide better messages from the Live gateway
  - Increased length of `err` and `msg` fields for more detailed messages
  - Added `is_last` field to `ErrorMsg` to indicate the last error in a chain
  - Added `code` field to `SystemMsg` and `ErrorMsg`, although currently unused
  - Added new `is_last` parameter to `ErrorMsg::new`
  - Decoding these is backwards-compatible, and records with longer messages won't be sent during the DBN version 2 migration period
  - Renamed previous records to `compat::ErrorMsgV1` and `compat::SystemMsgV1`
- Split `DecodeDbn` trait into `DecodeRecord` and `DbnMetadata` traits for more flexibility. `DecodeDbn` continues to exist as a trait alias
- Moved `decode_stream` out of `DecodeDbn` to its own separate trait `DecodeStream`
- Changed trait bounds of `EncodeDbn::encode_decoded` and `encode_decoded_with_limit` to `DecodeRecordRef + DbnMetadata`
- Fixed panic in `TsSymbolMap` when `start_date` == `end_date`
- Added missing Python `__eq__` and `__ne__` implementations for `BidAskPair`
- Fixed Python `size_hint` return value for `InstrumentDefMsgV1` and `SymbolMappingMsgV1`
- Fixed cases where the `dbn` CLI tool would write a broken pipe error to standard error, such as when piping to `head`
- Fixed bug in sync and async `MetadataEncoder`s where `version` was used to determine the encoded length of fixed-length symbols instead of the `symbol_cstr_len` field
- Added `set_upgrade_policy` setters to `DbnDecoder`, `DbnRecordDecoder`, `AsyncDbnDecoder`, and `AsyncDbnRecordDecoder`
- Added `from_schema` classmethod for the Python `RType` enum
- Renamed the parameter for Python Enum classmethod constructors from `data` to `value`
- Added new trait `compat::SymbolMappingRec` for code reuse when working with both versions of `SymbolMappingMsg`
- Changed `PitSymbolMap::on_symbol_mapping` to accept either version of `SymbolMappingMsg`
- Fixed missing DBNv1 compatibility in `PitSymbolMap::on_record`
- Fixed missing Python export for `VersionUpgradePolicy`
- Fixed missing Python export and methods for `InstrumentDefMsgV1` and `SymbolMappingMsgV1`
- Fixed bug where the Python `DbnDecoder` and `Transcoder` would throw exceptions when attempting to decode partial metadata
- This version begins the transition to DBN version 2 (DBNv2). In this version, the decoders support decoding both versions of DBN, and the DBN encoders default to keeping the version of the input. However, in a future version, decoders will by default convert DBNv1 to DBNv2, and support will be dropped for encoding DBNv1.
  - Affects `SymbolMappingMsg`, `InstrumentDefMsg`, and `Metadata`. All other record types and market data schemas are unchanged
  - Version 1 structs can be converted to version 2 structs with the `From` trait
- Added `symbol_cstr_len` field to `Metadata` to indicate the length of fixed symbol strings
- Added `stype_in` and `stype_out` fields to `SymbolMappingMsg` to provide more context with live symbology updates
- Added smart wrapping to `dbn` CLI help output
- Updated the `rtype_dispatch` family of macros to check record length to handle both versions of records. This is temporary during the transition period
- Added `VersionUpgradePolicy` enum and associated methods to the decoders to allow specifying how to handle decoding records from prior DBN versions
- Added `Metadata::upgrade()` method to update `Metadata` from a prior DBN version to the latest version
- Added `-u`/`--upgrade` flags to the `dbn` CLI that, when passed, upgrade DBN data from previous versions. By default data is decoded as-is
- Made `AsyncDbnDecoder::decode_record`, `AsyncDbnDecoder::decode_record_ref`, `dbn::AsyncRecordDecoder::decode`, and `dbn::AsyncRecordDecoder::decode_ref` cancellation safe. This makes them safe to use within a [`tokio::select!`](https://docs.rs/tokio/latest/tokio/macro.select.html) statement
- Added documentation around cancellation safety for async APIs
- Improved error messages for conversion errors
- Added `TOB` flag to denote top-of-book messages
- Added new publisher values in preparation for IFEU.IMPACT and NDEX.IMPACT datasets
- Added new publisher values for consolidated DBEQ.BASIC and DBEQ.PLUS
- Added `MAX_RECORD_LEN` constant for the length of the largest record type
- Exposed record flag constants in `databento_dbn` with the `F_` prefix
- Added export to Python for `RType`
- The old `InstrumentDefMsg` is now `compat::InstrumentDefMsgV1`
  - `compat::InstrumentDefMsgV2` is now an alias for `InstrumentDefMsg`
- The old `SymbolMappingMsg` is now `compat::SymbolMappingMsgV1`
  - `compat::SymbolMappingMsgV2` is now an alias for `SymbolMappingMsg`
- Changed `SYMBOL_CSTR_LEN` constant to 71. The previous value is now in `compat::SYMBOL_CSTR_V1`
- Changed `DBN_VERSION` constant to 2
- `security_update_action` was converted to a raw `c_char` to safely support adding variants in the future
- Renamed `_dummy` in `InstrumentDefMsg` to `_reserved`
- Removed `_reserved2`, `_reserved3`, and `_reserved5` from `InstrumentDefMsg`
- Removed `_dummy` from `SymbolMappingMsg`
- Moved position of `strike_price` within `InstrumentDefMsg` but left text serialization order unchanged
- Made `Error` non-exhaustive, meaning it can no longer be exhaustively matched against. This allows adding additional error variants in the future without a breaking change
- Added `upgrade_policy` parameter to the `RecordDecoder::with_version` constructor to control whether records of previous versions will be upgraded
- Added `upgrade_policy` parameter to `DynDecoder` constructors to control whether records of previous versions will be upgraded
- Renamed the `symbol_map` parameter for the Python `Transcoder` to `symbol_interval_map` to better reflect the date intervals it contains
- Deprecated unused `write_dbn_file` function from the Python interface. Please use `Transcoder` instead
- Fixed typo in Python type definition for `InstrumentDefMsg.pretty_high_limit_price`
- Fixed type signature for `Metadata.stype_in` and `Metadata.stype_out` Python methods
- Fixed incorrect version in `pyproject.toml`
- Added `SymbolMappingMsgV2::new` method
- Added `Record` trait for all types beginning with a `RecordHeader`
  - Added new `index_ts` and `raw_index_ts` methods to the `Record` trait, which return the primary timestamp for a record
- Added `RecordMut` trait for accessing a mutable reference to a `RecordHeader`
- Implemented `PartialOrd` for all record types, based on `raw_index_ts`
- Loosened `DbnEncodable` from requiring `HasRType` to only requiring `Record`. This means `RecordRef`s and concrete records can be encoded with the same methods
- Split part of `HasRType` into new `Record` and `RecordMut` traits, which are object-safe: they can be used in `Box<dyn>`. `RecordRef` also implements `Record`, so it's easier to write code that works for concrete records as well as `RecordRef`s
- Removed `RecordRef` methods made redundant by it implementing `Record`
- Removed `input_compression` parameter from the Python `Transcoder`
- Deprecated `SymbolIndex::get_for_rec_ref`, which was made redundant by loosening the trait bound on `SymbolIndex::get_for_rec` to accept `RecordRef`s
- Fixed `TsSymbolMap` not always using the correct timestamp for getting the mapped symbol
- Added `map_symbols` support to the Python `Transcoder`
- Added new publisher variants in preparation for the DBEQ.PLUS dataset
- Added `from_dataset_venue` function to `Publisher` to facilitate destructuring
- Implemented `Default` for most records to make testing easier
- Added `from_zstd` function to `AsyncDbnEncoder` to match the synchronous encoder
- Added re-exports for `enums::flags`, `enums::rtype`, `record::BidAskPair`, `record::RecordHeader`, and `record::WithTsOut` to simplify imports
- Added `--fragment` CLI flag for writing DBN without the metadata header
- Added `--input-dbn-version` CLI option for specifying the DBN version of a DBN fragment
- Added `serde::Deserialize` implementations for `Dataset`, `Venue`, and `Publisher`
- Added support for Python 3.12 to `databento_dbn`
- Added `RecordDecoder::with_version` for future use when dealing with compatibility between different DBN versions
- Added new dispatch macros: `rtype_ts_out_method_dispatch`, `rtype_ts_out_async_method_dispatch`, `rtype_method_dispatch`, and `schema_ts_out_method_dispatch`
- Added `InstrumentDefMsgV2` and `SymbolMappingMsgV2` for forward compatibility with a future version of DBN
- Added `TsSymbolMap` and `PitSymbolMap` to aid with both historical and live symbology
  - Added support for inverse symbology, i.e. with `stype_in=InstrumentId`
- Changed `Metadata::symbol_map` to return `TsSymbolMap`
- Changed `Metadata::symbol_map_for_date` to return `PitSymbolMap`
- Changed `Default` implementation for `BidAskPair` by setting prices to `UNDEF_PRICE`
- Added new publisher values in preparation for DBEQ.PLUS
- Added `ts_out` parameter to `encode_header_for_schema` in `CsvEncoder` and `DynEncoder` to allow controlling whether "ts_out" is in the header
- Upgraded `async-compression` to 0.4.3
- Upgraded `csv` to 1.3
- Upgraded `num_enum` to 0.7
- Changed DBN stream detection to ignore the DBN version
- Added new `EncodeRecordTextExt` trait, which is implemented for the CSV and JSON encoders. It adds two methods for encoding a `symbol` field alongside the rest of the record fields, matching the behavior of `map_symbols` in the historical API
- Added `encode_header` and `encode_header_for_schema` methods to `CsvEncoder` and `DynEncoder` to give more flexibility for encoding CSV headers
- Added `from_file` and `from_zstd_file` functions to `AsyncDbnDecoder` to match the synchronous decoder
- Implemented `Copy` for `RecordRef` to make it behave more like a reference
- Added `AsyncDbnEncoder` for simpler DBN encoding and to match the sync API
- Added `RecordEnum` and `RecordRefEnum` to more easily be able to pattern match on records of different types
- Added `ARCX.PILLAR.ARCX` publisher
- Added `From` implementations from DBN records for `RecordRef`
- Added re-exports to the top level of the crate for all enums and records for simpler imports
- Added `ClosePrice` and `NetChange` `StatType`s used in the `OPRA.PILLAR` dataset
- Split `encode_record_ref` into a safe method with no arguments and an unsafe method with a `ts_out` parameter to reduce `unsafe` usage when not working with live data that may contain `ts_out`
- Fixed `dbn` CLI not writing the CSV header when using the `--fragment` and `--zstd-fragment` flags
- Fixed lifetime on return value from `RecordRef::get_unchecked`
- Fixed missing check for `stype_out` before building `Metadata` symbology maps
- Fixed query range checking in `Metadata::symbol_map_for_date`
- Added `debug_assert_eq!` check for alignment in `RecordRef::new`
- Changed `Metadata::symbol_map` and `symbol_map_for_date` to return `String` values instead of `&str`, which made them difficult to use
- Added `start` and `end` getters to `Metadata` that return `time::OffsetDateTime`
- Added `symbol_map` and `symbol_map_for_date` methods to `Metadata` to aid historical symbology mapping from the instrument IDs in records
- Added `DynReader` struct for being agnostic about whether an input stream is zstd-compressed
- Improved safety of `RecordRef::get` by adding a length check
- Added Python DBN `Transcoder` class for converting DBN to JSON and CSV with optional zstd compression
- Added optional `has_metadata` parameter to the Python `DBNDecoder` to allow decoding plain records by passing `False`. By default `DBNDecoder` expects a complete DBN stream, which begins with metadata
- Added `get_ref` methods to `dbn::Decoder` and `dbn::RecordDecoder` which return a reference to the inner reader
- Added `UNDEF_PRICE`, `UNDEF_ORDER_SIZE`, `UNDEF_STAT_QUANTITY`, and `UNDEF_TIMESTAMP` constants to the `databento_dbn` Python package to make it easier to filter null values
- Added `Metadata::builder()` function to create a new builder instance
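To illustrate how those null-value constants are typically used (the record dicts below are hypothetical; the constant values mirror the integer sentinels DBN uses):

```python
# Sentinel constants as used for null filtering; values assume DBN's
# integer sentinels (i64::MAX for prices, u64::MAX for timestamps).
UNDEF_PRICE = 2**63 - 1
UNDEF_TIMESTAMP = 2**64 - 1

# Hypothetical decoded records for illustration only.
records = [
    {"ts_recv": 1_700_000_000_000_000_000, "price": 1_500_000_000},
    {"ts_recv": UNDEF_TIMESTAMP, "price": UNDEF_PRICE},
]
# Keep only records with a real price.
priced = [r for r in records if r["price"] != UNDEF_PRICE]
print(len(priced))  # 1
```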
- Split out `EncodeRecordRef` trait from `EncodeDbn` to have a boxable trait (i.e. `Box<dyn EncodeRecordRef>`) for dynamic encoding
- Split out `EncodeRecord` trait from `EncodeDbn`
- Split out `DecodeRecordRef` trait from `DecodeDbn` to have a boxable trait (i.e. `Box<dyn DecodeRecordRef>`) for dynamic decoding
- Changed `DynWriter` from an enum to a struct with only private fields
- Fixed typo in `BATY.PITCH.BATY` publisher
- Fixed typo in `README.md` (credit: @thomas-k-cameron)
- Added `publisher` method to `RecordHeader` and all record types for converting the `publisher_id` to an enum
- Added getters that return `time::OffsetDateTime` for the following fields: `ts_event`, `ts_recv`, `ts_ref`, `activation`, `expiration`, `start_ts`, `end_ts`, and `ts_out`
- Added getters for `ts_in_delta` that return `time::Duration`
- Fixed missing `raw_instrument_id` field in Python `InstrumentDefMsg`
- Fixed missing `OHLCV_EOD` variant in Python `Schema` type hint
- Added new `OhlcvEod` schema variant for future use with OHLCV bars based around the end of the trading session
- Implemented `std::fmt::Display` for publisher enums (`Publisher`, `Dataset`, and `Venue`)
- Fixed Python type hint for `Encoding.variants()`
- Added `raw_instrument_id` field to `InstrumentDefMsg` (definition schema) for use in future datasets consolidated from multiple publishers
- Added new `OHLCV_EOD` rtype for a future daily OHLCV schema based on the trading session
- Added new `SType::Nasdaq` and `SType::Cms` to support querying US equities datasets using either convention, regardless of the original convention of the dataset
- Relaxed `pyo3`, `tokio`, and `zstd` dependency version requirements
- Added `FIXED_PRICE_SCALE` constant to the `databento_dbn` Python package
- Added generated field metadata for each record type to aid in pandas DataFrame creation
- Changed `size_hint` class method to a class attribute for Python records
- Fixed multi-frame Zstd decoding for async decoders
- Switched from `anyhow::Error` to a custom `dbn::Error` for all public fallible functions and methods. This should make it easier to disambiguate between error types
  - `EncodeDbn::encode_record` and `EncodeDbn::record_record_ref` no longer treat a `BrokenPipe` error differently
- Added `AsyncDbnDecoder`
- Added `pretty::Px` and `pretty::Ts` newtypes to expose price and timestamp formatting logic outside of CSV and JSON encoding
- Added interning for Python strings
- Added `rtype` to encoded JSON and CSV to aid differentiating between different record types. This is particularly important when working with live data
- Added `pretty_` Python attributes for DBN price fields
- Added `pretty_` Python attributes for DBN UTC timestamp fields
- All fallible operations now return a `dbn::Error` instead of an `anyhow::Error`
- Updated serialization order to serialize `ts_recv` and `ts_event` first
- Moved header fields (`rtype`, `publisher_id`, `instrument_id`, and `ts_event`) to a nested object under the key `hd` in JSON encoding to match structure definitions
- Changed JSON encoding of all 64-bit integers to strings to avoid loss of precision
- Updated `MboMsg` serialization order to serialize `action`, `side`, and `channel_id` earlier given their importance
- Updated `Mbp1Msg`, `Mbp10Msg`, and `TradeMsg` serialization order to serialize `action`, `side`, and `depth` earlier given their importance
- Updated `InstrumentDefMsg` serialization order to serialize `raw_symbol`, `security_update_action`, and `instrument_class` earlier given their importance
- Removed `bool` return value from `EncodeDbn::encode_record` and `EncodeDbn::record_record_ref`. These now return `dbn::Result<()>`
- Fixed handling of NUL byte when encoding DBN to CSV and JSON
- Fixed handling of broken pipe in the `dbn` CLI tool
- Added Python `variants` method to return an iterator over the enum variants for `Compression`, `Encoding`, `Schema`, and `SType`
- Improved Python enum conversions for `Compression`, `Encoding`, `Schema`, and `SType`
- Added publishers enums
- Added export to Python for `Compression`, `Encoding`, `SType`, and `Schema` enums
- Improved Python string representation of `ErrorMsg` and `SystemMsg`
- Added async JSON encoder
- Dropped support for Python 3.7
- Fixed pretty timestamp formatting to match API
- Added `--fragment` and `--zstd-fragment` CLI arguments to read DBN streams without metadata
- Added `csv::Decoder::get_ref` that returns a reference to the underlying writer
- Added missing Python getter for `InstrumentDefMsg::group`
- Added dataset constants
- Changed `c_char` fields to be exposed to Python as `str`
- Added `--limit NUM` CLI argument to output only the first `NUM` records
- Added `AsRef<[u8]>` implementation for `RecordRef`
- Added Python `size_hint` classmethod for DBN records
- Improved DBN encoding performance of `RecordRef`s
- Added `use_pretty_px` for price formatting and `use_pretty_ts` for datetime formatting to CSV and JSON encoders
- Added `UNDEF_TIMESTAMP` constant for when timestamp fields are unset
- Renamed `booklevel` MBP field to `levels` for brevity and consistent naming
- Renamed `--pretty-json` CLI flag to `--pretty` and added support for CSV. Passing this flag now also enables `use_pretty_px` and `use_pretty_ts`
- Removed `open_interest_qty` and `cleared_volume` fields that were always unset from the definition schema
- Changed Python `DBNDecoder.decode` to return records with a `ts_out` attribute, instead of a tuple
- Renamed Python `DbnDecoder` to `DBNDecoder`
- Fixed `Action` conversion methods (credit: @thomas-k-cameron)
- Added Fill (`F`) action type for MBO messages
- Added Python type stub for `StatMsg`
- Added support for the Statistics schema
- Added `RType` enum for exhaustive pattern matching
- Added `&str` getters for more `c_char` array record fields
- Changed `DbnDecoder.decode` to always return a list of tuples
- Changed `schema` and `stype_in` to optional in `Metadata` to support live data
- Renamed `SType::ProductId` to `SType::InstrumentId` and `SType::Native` to `SType::RawSymbol`
- Renamed `RecordHeader::product_id` to `instrument_id`
- Renamed `InstrumentDefMsg::symbol` to `raw_symbol`
- Renamed `SymbolMapping::native_symbol` to `raw_symbol`
- Deprecated `SType::Smart`, splitting it into `SType::Parent` and `SType::Continuous`
- Fixed value associated with `Side::None`
- Fixed issue with decoding partial records in Python `DbnDecoder`
- Fixed missing type hint for Metadata bytes support
- Added support for equality comparisons in Python classes
- Fixed typo in Python type stubs
- Fixed support for `ErrorMsg`, `SystemMsg`, and `SymbolMappingMsg` in Python
- Added enums `MatchAlgorithm` and `UserDefinedInstrument`
- Added constants `UNDEF_PRICE` and `UNDEF_ORDER_SIZE`
- Added Python type stubs for the `databento_dbn` package
- Fixed `Metadata.__bytes__` method to return valid DBN
- Fixed panics when decoding invalid records
- Fixed issue with attempting to decode partial records in Python `DbnDecoder`
- Fixed support for `ImbalanceMsg` in Python `DbnDecoder`
- Added support for Imbalance schema
- Updated `InstrumentDefMsg` to include options-related fields and `instrument_class`
- Added support for encoding and decoding `ts_out`
- Added `ts_out` to `Metadata`
- Improved enum API
- Relaxed requirement for the slice passed to `RecordRef::new` to be mutable
- Added error forwarding from `DecodeDbn` methods
- Added `SystemMsg` record
- Exposed constructor and additional methods for DBN records and `Metadata` to Python
- Made `RecordRef` implement `Sync` and `Send`
- Introduced separate rtypes for each OHLCV schema
- Removed `record_count` from `Metadata`
- Changed serialization of `c_char` fields to strings instead of ints
- Renamed `dbn::RecordDecoder::decode_record` to `decode`
- Renamed `dbn::RecordDecoder::decode_record_ref` to `decode_ref`
- Renamed `HasRType::size` to `record_size` to avoid confusion with order size fields
- Stopped serializing `related` and `related_security_id` fields in `InstrumentDefMsg`
- Added records and `Metadata` as exports of the `databento_dbn` Python package
- Improved how `Metadata` appears in Python and added `__repr__`
- Fixed bug where the `dbn` CLI tool didn't truncate existing files
- Added improved Python bindings for decoding DBN
- Standardized documentation for `start`, `end`, and `limit`
- Fixed bug with `encode_metadata` Python function
- Added ability to migrate legacy DBZ to DBN through CLI
- Relaxed requirement that DBN be Zstandard-compressed
- Folded in `databento-defs`
- Added support for async encoding and decoding
- Added billable size calculation to the `dbn` CLI
- Added `MetadataBuilder` to assist with defaults
- Refactored into encoder and decoder types
- Renamed DBZ to DBN
- Renamed Python package to `databento-dbn`
- Moved metadata out of skippable frame
- Added Python DBZ writing example
- Changed databento-defs dependency to crates.io version
- Added interface for writing DBZ files
- Enabled Zstd checksums
- Changed DBZ decoding to use streaming-iterator
- Changed JSON output to NDJSON
- Changed nanosecond timestamps to strings in JSON to avoid loss of precision when parsing
- Initial release