Metadata 1 #16

Conversation
Updated metadata documentation with corrections and clarifications.
Updated terminology and corrected size units in metadata documentation. Added sections on conversions and statistics, and improved clarity in various explanations.
Expanded the metadata document to include detailed discussions on metadata purpose, goals, and formats, along with specific short-term and long-term objectives.
Revised the metadata document to improve clarity and consistency in language, including updates to the purpose, schema definitions, and type system descriptions.
Corrected spelling of 'modelling' to 'modeling' throughout the document.
Added comment to clarify hash type options.
Updated metadata document to improve clarity and consistency in terminology, including changes to key definitions and type representations.
Corrected a typo in the metadata documentation regarding chunk summary.
Add comment regarding key range sharding and chunk handling.
One of the key architectural decisions to be made is whether we put the dataset properties (that is, the properties of the data itself) into the metadata, or leave it purely schema-oriented. Introducing the statistics already suggests that we want to be data-aware here. Then, we should also think about where we store:
A previous attempt to design such a location-aware and update/edit-aware metadata file was made in this issue: https://github.com/subsquid/datas3ts/issues/5. There the design was built around the following properties:
Some extra thought and care should be taken in order not to trigger expensive list operations -- similar to how we currently do that by placing the files in a tree-like directory structure in S3 buckets (a sketch of such a layout follows below).
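Roughly what I mean by a tree-like layout, as a minimal sketch; the `chunk_key` helper, the directory widths, and the `top_size` parameter are all illustrative and not the current layout:

```python
def chunk_key(dataset: str, first_block: int, last_block: int, top_size: int = 1_000_000) -> str:
    """Place each chunk under a fixed-width top-level block-range directory."""
    top = first_block // top_size * top_size
    return f"{dataset}/{top:010d}/{first_block:010d}-{last_block:010d}"

# Listing the prefix "my-dataset/0001000000/" enumerates only the chunks in one
# block range instead of every object in the bucket.
print(chunk_key("my-dataset", 1_234_567, 1_240_000))
# -> my-dataset/0001000000/0001234567-0001240000
```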
define-null
left a comment
I did a first pass on this draft and left a bunch of comments and questions.
> - a **modeling facility** to define generic data in the first place.
>
> We are currently working with known and - to some extent - homogeneous data.
> Generic interoperability is significantly harder.
I'd like to better understand the generic interoperability and generic data implications here.
- Are we talking about the raw data for such generic datasets, or is it structured data?
- Do we intend to guarantee atomicity and isolation of the ingestion?
> - Data Generation
>
> - Integration with standard tools
Can you elaborate on which types of tools?
> - Hand-crafted ingestion pipelines
>
> - Validation and Parsing of data for different components
>   (portals, workers, SDKs, the DuckDB extension).
One of the important questions that is not clear to me from the draft is whether the intent is to implement a schema-on-read or a schema-on-write architecture. In the former case we are talking about traditional data lakes with raw or semi-structured data, rather limited correctness checks, and the schema applied when running the query (fewer integrity constraints, enforced on read). In the latter we are aiming for stricter correctness (integrity constraints enforced on write) and consistency.
From that perspective I'm not sure whether https://github.com/subsquid/specs/pull/3/changes is assumed in this document or not.
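To make the distinction concrete, here is a minimal sketch of the two modes; the toy schema, the `ingest`/`query` helpers, and the field names are assumptions for illustration, not anything defined in the draft:

```python
from typing import Any

# Assumed toy schema, for illustration only.
SCHEMA = {"block_number": int, "tx_hash": str}

def validate(record: dict[str, Any]) -> dict[str, Any]:
    for field, typ in SCHEMA.items():
        if not isinstance(record.get(field), typ):
            raise ValueError(f"field {field!r} is not a valid {typ.__name__}")
    return record

storage: list[dict[str, Any]] = []

def ingest(record: dict[str, Any], schema_on_write: bool) -> None:
    # Schema-on-write: reject malformed data at ingestion time.
    # Schema-on-read: store the raw record as-is.
    storage.append(validate(record) if schema_on_write else record)

def query() -> list[dict[str, Any]]:
    # Schema-on-read: the schema is applied (and may fail) only here.
    return [validate(r) for r in storage]
```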
> In the future, we may add statistics to columns or row groups
> to accelerate ingestion and, in particular, retrieval.
If we aim for analytical use cases, statistics would be essential: in such systems the common pattern is to use pruning techniques to further reduce the subset of data involved in query execution. So in my view we should prioritize column and row-group statistics such as min/max, cardinality, null counts, etc. from the start.
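As a rough sketch of what I mean, assuming integer-valued columns; the `ColumnStats`/`ChunkMeta` names and the `prune` helper are illustrative, not taken from the draft:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ColumnStats:
    min_value: Optional[int]       # None when the column is all-null
    max_value: Optional[int]
    null_count: int
    distinct_count: Optional[int]  # may be an estimate (e.g. HyperLogLog)

@dataclass
class ChunkMeta:
    path: str
    stats: dict[str, ColumnStats]  # column name -> statistics

def prune(chunks: list[ChunkMeta], column: str, lo: int, hi: int) -> list[ChunkMeta]:
    """Keep only chunks whose [min, max] range for `column` can overlap [lo, hi]."""
    kept = []
    for chunk in chunks:
        s = chunk.stats.get(column)
        if s is None or s.min_value is None or s.max_value is None:
            kept.append(chunk)  # no usable stats: cannot prune safely
            continue
        if s.max_value >= lo and s.min_value <= hi:
            kept.append(chunk)
    return kept
```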
> Types are distinguished into **primitive types** and **complex types**.
>
> Primitive types are defined in terms of
When it comes to types, in my view it's important to consider several factors (a rough sketch follows the list):
- what is the minimum subset of types that we need for a POC?
- what is the compatibility story with other existing databases and engines?
- what are the conversion rules that we would like to have for those types?
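A possible minimal subset, sketched under the assumption that compatibility is expressed as a mapping onto Arrow logical types; the type names, the Arrow mapping, and the conversion table below are illustrative only:

```python
from enum import Enum

# A guess at a minimal primitive-type subset for a POC.
class PrimitiveType(Enum):
    BOOL = "bool"
    INT64 = "int64"
    UINT64 = "uint64"
    FLOAT64 = "float64"
    STRING = "string"
    BYTES = "bytes"
    TIMESTAMP_MS = "timestamp_ms"

# Compatibility story: map every primitive onto an Arrow logical type,
# which most engines (DuckDB, DataFusion, ...) understand.
ARROW_EQUIVALENT = {
    PrimitiveType.BOOL: "bool",
    PrimitiveType.INT64: "int64",
    PrimitiveType.UINT64: "uint64",
    PrimitiveType.FLOAT64: "float64",
    PrimitiveType.STRING: "utf8",
    PrimitiveType.BYTES: "binary",
    PrimitiveType.TIMESTAMP_MS: "timestamp[ms]",
}

# Conversion rules as an explicit allow-list; each entry would still need
# documented overflow/precision semantics.
ALLOWED_CONVERSIONS = {
    (PrimitiveType.INT64, PrimitiveType.FLOAT64),
    (PrimitiveType.UINT64, PrimitiveType.INT64),
    (PrimitiveType.STRING, PrimitiveType.BYTES),
}
```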
> In other words, chunks remain tied to ingestion time, whereas keys may not.
> The natural partitioning therefore maps **key ranges to ingestion time ranges**.
> Key-range shards should be defined by users: they know their data size, ingestion speed and data skew.
Perhaps we can go with a hybrid strategy, where the sorting criteria are provided by the user while sharding happens automatically? While users commonly understand the shape of their data better, they might have less visibility into what the efficient sharding strategies would be.
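A minimal sketch of that hybrid idea, assuming shard boundaries are derived from a target shard size; `derive_shard_bounds`, its parameters, and the size heuristic are hypothetical:

```python
from typing import Callable, Iterable

def derive_shard_bounds(
    rows: Iterable[dict],
    sort_key: Callable[[dict], int],       # user-provided sorting criterion
    target_rows_per_shard: int = 1_000_000,
) -> list[int]:
    """Return the upper-bound key of each shard, keeping shards near the target size."""
    keys = sorted(sort_key(r) for r in rows)
    if not keys:
        return []
    bounds = [keys[i] for i in range(target_rows_per_shard - 1, len(keys), target_rows_per_shard)]
    if not bounds or bounds[-1] != keys[-1]:
        bounds.append(keys[-1])  # the final shard covers the tail
    return bounds
```

The user would only declare the sort key; boundary placement (and any later re-splitting on skew) would stay on the system side.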
> - B+Tree.
>
> ### Real-Time Data
I'm not quite sure I understand what real-time ingestion means in this context. Could you elaborate with an example of the user use case we are considering here? I would expect batch ingestion, which is preferable for efficiency and thus not real-time.
> Schemas shall also include elements for defining real-time data.
> This may include an endpoint from which data is read,
> and a stored procedure (or equivalent processing step)
> that transforms data and passes it on to an internal API.
Just to confirm that I understand you correctly: are we talking about an ETL pipeline here, with the possibility to specify the transformation part?
> In the future, we may add statistics to columns or row groups
> to accelerate ingestion and, in particular, retrieval.
>
> Integrity Constraints are
It would be great to be a bit more specific here: whether it is suggested to enforce those constraints or not, and if yes, at which phase (read/write). For example, enforcing an FK constraint would add significant ingestion overhead and complexity. Similarly, a uniqueness constraint is not typically enforced in OLAP systems, to my knowledge.
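One way to keep both options open is to attach an explicit enforcement mode to each declared constraint; the `Constraint`/`Enforcement` names and the example columns below are illustrative, not taken from the draft:

```python
from dataclasses import dataclass
from enum import Enum

class Enforcement(Enum):
    ON_WRITE = "on_write"        # checked during ingestion
    ON_READ = "on_read"          # checked (or assumed) at query time
    DECLARED_ONLY = "declared"   # informational; usable by a planner, never checked

@dataclass
class Constraint:
    kind: str                    # "unique", "not_null", "foreign_key", ...
    columns: list[str]
    enforcement: Enforcement

# Cheap checks enforced on write; expensive ones only declared.
constraints = [
    Constraint("not_null", ["block_number"], Enforcement.ON_WRITE),
    Constraint("unique", ["tx_hash"], Enforcement.DECLARED_ONLY),
]
```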
> (maps, assignments, chunks, etc.) in memory in a single portal. We will need
> to **shard datasets across portals**.
>
> We may also want to explore other kinds of indices, for example:
I would suggest we prioritize bitmap and zone indexes; B+trees and radix trees are a better fit for OLTP workloads.
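For instance, a per-chunk bitmap index over a low-cardinality column could look roughly like this; the helper names and the `status` column are purely illustrative:

```python
def build_bitmap_index(values: list[str]) -> dict[str, int]:
    """Map each distinct value to an integer used as a bitset over row positions."""
    index: dict[str, int] = {}
    for row, value in enumerate(values):
        index[value] = index.get(value, 0) | (1 << row)
    return index

def matching_rows(index: dict[str, int], value: str) -> list[int]:
    """Return row positions whose column equals `value`."""
    bits = index.get(value, 0)
    return [row for row in range(bits.bit_length()) if bits >> row & 1]

# Usage: an empty result for `status = 'failed'` means the whole chunk can be skipped.
idx = build_bitmap_index(["ok", "ok", "failed", "ok"])
assert matching_rows(idx, "failed") == [2]
```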
First iteration for metadata with some preliminary ideas.
Highly relevant is #3