Product roadmap

Submit dataset requests and feature ideas here. For bug reports, use our chat support or issues tracker instead.

Trending
  1. Render API documentation to a single PDF or text-like format

    This request is to make the API documentation available for download so it can be referenced offline.

    Nicholas James Macholl

    0

  2. Real-time and historical index data

    Currently, indices are only supported indirectly through tradable index instruments (CME futures, ETFs, etc.); we don't provide the non-tradable index values themselves. These could be sourced from a feed like the Cboe Global Indices Feed or the NYSE Global Index Feed.

    Tessa Hollinger

    20

  3. WebSocket API for live data

    To extend support to browser-based applications.

    Tessa Hollinger

    3

  4. ICE iMpact: ICE US Futures

    For ICE US futures, including US softs and financials such as cocoa, cotton, sugar, frozen orange juice, canola, world cotton, US grains, precious metals, MSCI indices, all currency pairs, and the US Dollar Index. This was previously tracked in the same ticket as ICE Futures Europe and ICE Endex; however, we've decided to release ICE Futures Europe and ICE Endex first, separately from ICE US.

    Tessa Hollinger

    11

  5. Official Java client library

    Make our historical and live APIs easier to integrate from Java.

    Carter Green

    5

  6. Include line number in OPRA messages

    For error recovery, it is very important to have a sequenced stream, where each message has a monotonically increasing field that can serve as a sequence number. For OPRA messages, the timestamps are only guaranteed to be increasing within each line. Currently, the line that a message came from can be inferred from (root, is_odd_strike, is_put, is_odd_day), but OPRA will be changing the way messages are distributed over lines, breaking this heuristic. After Q4 2026, the only way to know which line a message came from will be to include that information in the message itself. Please see "New OPRA Dynamic Rebalance Symbol Distribution – Go Live Timeframe: Q4, 2026": https://cdn.opraplan.com/documents/OPRA_Dynamic_Symbol_Load_Balancing_Plan_Effective_Q4_2026.pdf
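    The current workaround can be sketched as follows: bucket messages by the (root, is_odd_strike, is_put, is_odd_day) tuple, treat each bucket as a line, and check that timestamps increase within it. This is an illustrative Python sketch with hypothetical dict-based messages, not the actual OPRA wire format:

```python
def line_key(msg):
    # Heuristic line inference described above; OPRA's Q4 2026
    # dynamic rebalance will break this. Field names are hypothetical.
    return (msg["root"], msg["is_odd_strike"], msg["is_put"], msg["is_odd_day"])


def find_sequencing_violations(messages):
    """Return messages whose timestamp regresses within their inferred line.

    Timestamps are only guaranteed to be monotonically increasing
    per line, so each inferred line is checked independently.
    """
    last_ts = {}
    violations = []
    for msg in messages:
        key = line_key(msg)
        if key in last_ts and msg["ts"] < last_ts[key]:
            violations.append(msg)
        last_ts[key] = msg["ts"]
    return violations
```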

    Yongqian L

    3

  7. Status schema for EQUS.MINI

    The EQUS.MINI dataset does not have the status schema in either the Historical or Live API. This schema is needed to determine whether symbols are trading (e.g. open for quoting/trading, or halted or resumed mid-session). As a workaround, users may use the status schema from other US equities datasets, such as XNAS.BASIC or IEXG.TOPS (Nasdaq Basic, IEX Tops), through the Historical API. However, without additional licensing fees, these are only available on a 15-minute delay.

    Zach Banks

    0

  8. Provide support for live intraday replay up to 1 week

    We originally spec'ed intraday replay to exactly 1 week, guaranteeing it spans an exchange maintenance window or scheduled weekend halt, so that the majority of our customers could easily stitch historical and real-time data together unless the venue is truly 24/7 (most "24 hour" traditional venues still have a weekend halt). However, message rates have grown so high this year that we can only guarantee replay up to 24 hours. This means that a customer who needs to stitch history from more than 24 hours ago to real-time data will need to do something tedious like the following:

    1. Subscribe to the real-time feed with start=0 and keep queuing up the real-time data.
    2. Use the first ts_recv timestamp plus some buffer (say, ts_recv + 10 min or ts_recv + 1 min) as the end parameter in an analogous timeseries.get_range request.
    3. Use one of these as the index to join your historical data with the queued real-time data:
       (A) channel_id and sequence_no as a pair. Most correct, but also most tedious.
       (B) ts_recv. Much easier, but could be out of order by a factor of ~1 microsecond at the boundary between historical and queued real-time data due to PTP sync error.
       (C) ts_event. Just as easy as ts_recv and avoids the sync error at the boundary. Works on most current venues, which enforce FIFO and monotonic timestamps, but not guaranteed to be correct on all venues, since not all venues truly respect monotonicity.
    4. Cut over to the real-time stream.

    Note: The choice of A/B/C also depends on your business logic and the schema you're using. Most of our schemas are stateless, so ~1 microsecond of error at the boundary might not matter; most features/alpha signals are probably stable to that. Even with a stateful schema like MBO, most downstream applications and business logic should be robust to ~1 microsecond of out-of-order MBO events at the boundary when using ts_recv.

    The "queue" above may be some kind of message middleware that enforces a sort order on ts_recv. For example (not that this would be a recommended or efficient pattern), you could imagine a database serving as hacky message middleware, enforcing a sort on the ts_recv column.
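    As an illustration of option (B) above, here is a minimal Python sketch of the join on ts_recv, using plain dicts in place of real client records; the record layout is an assumption and no actual API calls are made:

```python
def stitch_on_ts_recv(historical, queued_live):
    """Join historical records with queued real-time records on ts_recv.

    `historical` is assumed to come from a timeseries.get_range request
    whose `end` was set to the first queued ts_recv plus a buffer, so
    the two series overlap at the boundary. Both inputs are assumed
    sorted by ts_recv.
    """
    if not queued_live:
        return list(historical)
    boundary = queued_live[0]["ts_recv"]
    # Keep historical records strictly before the first queued record,
    # then cut over to the queued real-time data.
    stitched = [rec for rec in historical if rec["ts_recv"] < boundary]
    stitched.extend(queued_live)
    # Caveat: within ~1 microsecond of the boundary, ordering may be off
    # due to PTP sync error between the two capture points.
    return stitched
```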

    Tessa Hollinger

    1

  9. Trading calendar information

    This feature would allow the user to request trading calendar information (such as trading session start/end times) via our API. This is especially useful for trading sessions that span multiple UTC dates (and hence the possibility of multiple trading sessions within a single UTC day).

    Renan Gemignani

    5

  10. CFE Book Depth

    Full depth of book feed for Cboe Futures Exchange (CFE). CFE contains volatility futures and corporate bond index futures, such as VIX futures (VX, VXM).

    Zach Banks

    5

  11. Eurex EOBI dataset

    Data for Eurex, including all schemas (MBO, MBP, OHLCV, etc.).

    Renan Gemignani

    11

  12. Consolidated US equities data

    Currently, US equities data is supported via individual proprietary feeds from each venue. While Nasdaq is sufficient for approximating the NBBO most of the time, some users prefer something more in line with the actual NBBO from the SIPs. This feature request tracks three possible modes of consolidation for both historical and live data:

    1. Databento server-side consolidation of multiple proprietary feeds
    2. Consolidated data from a proprietary feed like Nasdaq Basic, in lieu of the SIP
    3. Consolidated data from the CTA/UTP SIPs

    We plan on implementing one or two of these three options.

    Tessa Hollinger

    11

  13. Cboe FX ITCH (forex, foreign exchange)

    All orders plus last look quotes from 35 major banks and non-bank LPs, on one of the largest FX venues.

    Tessa Hollinger

    12

  14. Official C# client library

    An official client library would make all our historical and live features easier to integrate in C# on Windows, Linux, and macOS. C# is already supported through our HTTP API and raw TCP protocol, which are both language-agnostic.

    Tessa Hollinger

    9

  15. Provide snapshots for historical and live data

    This serves as a master list of all other snapshot-like features on our roadmap. The scope of this ticket is potentially very large and ambiguous, so we've broken it down into smaller tickets that you can follow separately:

    1. (Historical only) https://roadmap.databento.com/b/n0o5prm6/feature-ideas/add-historical-endpoint-for-latest-snapshot-of-any-schema. This would allow a user to get the latest published value of any given schema, within the boundaries allowed by licensing/entitlements/historical embargo window. The main benefit of this is for creating ticker tape or latest-quote features, e.g. on a web app, after we start exposing intraday data over the historical/HTTP API (https://roadmap.databento.com/roadmap/expose-intraday-and-current-trading-session-historical-data-over-historical-http-api-and-clients). Likely endpoint names for this would be timeseries.get_last or timeseries.get_snapshot.
    2. (Historical only) https://roadmap.databento.com/b/n0o5prm6/feature-ideas/provide-snapshots-as-of-specified-time-in-historical-api. This allows a user to get the last published value of any given schema as of a specified time. The main benefit of this would be to allow customers to subsample the data on the server side and reduce cost, though the benefit is diminished by feature (5) on this list. Note that this would allow a user to emulate (1) relatively well, since a user could just pass in their current clock time or some time slightly ahead of it; however, the underlying implementations would be different and (1) and (2) would likely be released separately. Likely endpoint names for this would be timeseries.get_last_asof or `timeseries.
    3. (Live only) https://roadmap.databento.com/roadmap/add-periodic-mbo-book-snapshots-to-live-api. This could also be used for recovery of book state after client-side issues or disconnection, but would be less quick than feature (4) on this list.
    4. (Live only) https://roadmap.databento.com/b/n0o5prm6/feature-ideas/allow-live-api-clients-to-request-for-mbo-snapshot-recovery. This provides resilience to gaps or data errors originating from the Databento side.
    5. (Both historical and live) https://roadmap.databento.com/roadmap/fixed-interval-mbp-1-summaries-eg-1-minute-bbo-or-subsampled-bbo. The purpose of this is to provide customers a convenience over fetching or subscribing to MBP-1 and subsampling and forward-filling the data themselves, which could be very expensive given the size of MBP-1 data and the fact that a customer has no idea how far to look back for the "last" MBP-1 update prior to the 1-second or 1-minute refresh interval.

    Some of these are in development, hence the status of this entire ticket; however, you should check each individual one in case the specific feature you're looking for is still in the Considering state.
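    As an illustration of the subsampling and forward-filling that the fixed-interval summaries feature would spare customers, here is a minimal Python sketch using hypothetical (timestamp, bbo) pairs rather than actual MBP-1 records:

```python
def fixed_interval_bbo(updates, start, end, interval):
    """Forward-fill the last BBO update onto a fixed time grid.

    `updates` is a list of (ts, bbo) pairs sorted by ts. Emits one
    (ts, bbo) sample per interval, carrying the last value forward.
    None is emitted until the first update arrives, reflecting the
    "how far to look back" problem described above.
    """
    samples = []
    last = None
    i = 0
    t = start
    while t <= end:
        # Consume every update at or before the current grid point.
        while i < len(updates) and updates[i][0] <= t:
            last = updates[i][1]
            i += 1
        samples.append((t, last))
        t += interval
    return samples
```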

    Tessa Hollinger

    7