SonarX’s high-quality process encodes the rules we discussed in the Data Freshness section into its Reliability Layer, a robust operational paradigm built upon three pillars:

Pillar I

The first pillar of SonarX’s reliability layer is a thorough, multi-period set of checks to detect when actual re-orgs occur. Regardless of any probabilistic assessment of finality, a re-org can manifest at any moment, in any block, even after 1,000 further blocks have been generated. As such, SonarX never stops checking for them, no matter how much time has passed, and implements three different sets of re-org checks:
  1. immediate checks, run with every new ingestion
  2. hourly checks
  3. checks every 4 hours
If a re-org is detected, SonarX goes back and rewrites all the affected historical data so that it reflects the correct new blocks.
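The detect-and-repair step above can be sketched as follows. This is a minimal illustration, not SonarX’s actual implementation: the `Block` type, the in-memory lists standing in for the stored dataset and the node’s canonical chain, and both function names are assumptions made for the example.

```python
# Hypothetical sketch of a re-org check: compare stored block hashes against
# the current canonical chain and rewrite any divergent tail.
from dataclasses import dataclass

@dataclass
class Block:
    height: int
    hash: str
    parent: str

def find_reorg_point(stored, canonical):
    """Return the first height whose stored hash no longer matches the
    canonical chain, or None if no re-org has occurred."""
    canon_by_height = {b.height: b.hash for b in canonical}
    for b in sorted(stored, key=lambda b: b.height):
        if canon_by_height.get(b.height) != b.hash:
            return b.height
    return None

def apply_reorg(stored, canonical):
    """Replace every stored block from the divergence point onward,
    leaving the untouched prefix of the chain intact."""
    point = find_reorg_point(stored, canonical)
    if point is None:
        return stored  # chain intact, nothing to repair
    kept = [b for b in stored if b.height < point]
    replacement = [b for b in canonical if b.height >= point]
    return kept + replacement
```

Running the same comparison on every new ingestion, hourly, and every 4 hours simply means invoking a check like this on windows of increasing depth.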

Pillar II

The second pillar of SonarX’s reliability layer is the creation of two separate datasets, each reflecting a different level of probabilistic finality, thereby enabling it to fulfil customers’ requirements and match their risk aversion (or lack thereof). SonarX handles its Real Time dataset differently from the Full Historical one when it comes to finality and re-orgs.

The Real Time dataset prioritizes speed over immutable finality. It serves customers who are aware of the risk of re-orgs but prefer to manage that risk themselves in exchange for getting the data as fast as possible.

By contrast, the Full Historical dataset leverages the carefully derived P90 Lag Period parameter to minimize the risk of unexpected re-orgs occurring after each batch of data is inserted. It therefore maximizes the likelihood of immutable finality, at the cost of additional latency / lower freshness, and meets the requirements of customers with low or zero tolerance for probabilistic finality. The same considerations apply to all the Specialized Datasets in SonarX Solutions, which are derived from the Full Historical dataset by applying additional analyses, decoding, and bespoke logic.
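The routing between the two datasets can be illustrated with a small sketch. The constant name `P90_LAG_BLOCKS`, its value, and the dataset labels are assumptions for the example; the actual P90 Lag Period is derived per chain, as described elsewhere in this documentation.

```python
# Illustrative ingestion gate for the two datasets: the Real Time dataset
# receives every new block immediately, while the Full Historical dataset
# only receives a block once it is buried at least P90_LAG_BLOCKS deep.
P90_LAG_BLOCKS = 64  # hypothetical P90 Lag Period, expressed in blocks

def eligible_for_full_historical(block_height: int, chain_tip: int) -> bool:
    """A block enters the Full Historical dataset only once it is deep
    enough that a re-org affecting it is very unlikely."""
    return chain_tip - block_height >= P90_LAG_BLOCKS

def route(block_height: int, chain_tip: int) -> list:
    """Decide which datasets receive this block right now."""
    targets = ["real_time"]  # speed first: always ingest immediately
    if eligible_for_full_historical(block_height, chain_tip):
        targets.append("full_historical")
    return targets
```

A block near the tip is thus visible in the Real Time dataset only, and is promoted into the Full Historical dataset once the chain has advanced past the lag window.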

Pillar III

SonarX is well aware of two key facts:
  1. Customers have different degrees of risk tolerance when facing probabilistic finality, and
  2. At any given time, each chain has a different probability of being subject to re-orgs within a particular unit of time after the generation of a block (or at all). Moreover, a chain’s propensity to re-org can change during its history, even significantly.
As such, SonarX’s reliability layer is entirely configuration-based and parametric, allowing fine-tuning for customers’ risk appetite and for the factors that can affect a chain’s likelihood of facing re-orgs over time. When looking at freshness/latency metrics in SonarX, it is therefore essential to distinguish between physical limits and constraints on one side, and the choices and configurations implemented to maximize quality and finality on the other: the former cannot be changed, while the latter are fully configurable.
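A configuration-based, parametric layer of this kind can be sketched as a per-chain record of the finality-related knobs. All field names, chain entries, and values below are illustrative placeholders, not SonarX’s real configuration schema.

```python
# Sketch of a configuration-driven reliability layer: every finality-related
# setting is a per-chain parameter rather than a hard-coded constant, so it
# can be re-tuned as a chain's re-org behaviour evolves over time.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReliabilityConfig:
    chain: str
    p90_lag_blocks: int            # depth required before Full Historical ingestion
    reorg_check_intervals: tuple   # seconds between the periodic re-org checks

# Hypothetical per-chain values; a chain with deeper re-org history
# gets a larger lag, and any entry can be updated without code changes.
CONFIGS = {
    "ethereum": ReliabilityConfig("ethereum", p90_lag_blocks=64,
                                  reorg_check_intervals=(3600, 14400)),
    "bitcoin":  ReliabilityConfig("bitcoin", p90_lag_blocks=6,
                                  reorg_check_intervals=(3600, 14400)),
}

def config_for(chain: str) -> ReliabilityConfig:
    """Look up the tunable reliability parameters for a given chain."""
    return CONFIGS[chain]
```

Because the parameters live in configuration rather than code, tightening or relaxing a chain’s lag period for a particular customer’s risk appetite is a data change, not a deployment.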