Logic for taking any sort of weird MBQL query and normalizing it into a standardized, canonical form. You can think of this like taking any 'valid' MBQL query and rewriting it as if it were written in perfect, up-to-date MBQL in the latest version. There are four main things done here, as four separate steps:

#### NORMALIZING TOKENS

Converting all identifiers to lower-case, lisp-case keywords, e.g. `{"SOURCE_TABLE" 10}` becomes `{:source-table 10}`.

#### CANONICALIZING THE QUERY

Rewriting deprecated MBQL 95/98 syntax and other things that are still supported for backwards compatibility into canonical MBQL 2000 syntax. For example, `{:breakout [:count 10]}` becomes `{:breakout [[:count [:field-id 10]]]}`.

#### WHOLE-QUERY TRANSFORMATIONS

Transformations and cleanup of the query structure as a whole to fix inconsistencies. Whereas the canonicalization phase operates at a lower level, transforming individual clauses, this phase focuses on transformations that affect multiple clauses, such as removing duplicate references to Fields if they are specified in both the `:breakout` and `:fields` clauses.

This is not the only place that does such transformations; several pieces of QP middleware perform similar individual transformations, such as `reconcile-breakout-and-order-by-bucketing`.

#### REMOVING EMPTY CLAUSES

Removing empty clauses like `{:aggregation nil}` or `{:breakout []}`.

Token normalization occurs first, followed by canonicalization, followed by removing empty clauses.
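The staged pipeline described above can be sketched roughly as follows. This is a minimal illustration in Python, not Metabase's actual Clojure implementation: MBQL keywords are modeled as plain strings, the whole-query transformation stage is omitted, and the canonicalization rule shown (wrapping a bare integer Field ID in a `field-id` clause) is a simplified, hypothetical example of the kind of rewrite that stage performs.

```python
def normalize_token(token):
    """Lower-case a string token and convert snake_case to lisp-case,
    e.g. "SOURCE_TABLE" -> "source-table"."""
    return token.lower().replace("_", "-")

def normalize_tokens(form):
    """Recursively normalize all map keys in a query."""
    if isinstance(form, dict):
        return {normalize_token(k): normalize_tokens(v) for k, v in form.items()}
    if isinstance(form, list):
        return [normalize_tokens(x) for x in form]
    return form

def canonicalize(query):
    """One simplified example of canonicalization: a bare integer in
    "breakout" becomes a ["field-id", n] clause."""
    breakout = query.get("breakout")
    if breakout is not None:
        query = dict(query)
        query["breakout"] = [["field-id", x] if isinstance(x, int) else x
                             for x in breakout]
    return query

def remove_empty_clauses(query):
    """Drop clauses whose value is nil/empty, e.g. {:aggregation nil}."""
    return {k: v for k, v in query.items() if v not in (None, [], {})}

def normalize(query):
    """Apply the stages in the documented order: token normalization first,
    then canonicalization, then removal of empty clauses."""
    return remove_empty_clauses(canonicalize(normalize_tokens(query)))
```

For example, `normalize({"SOURCE_TABLE": 10, "BREAKOUT": [7], "AGGREGATION": None})` yields `{"source-table": 10, "breakout": [["field-id", 7]]}`.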
Predicate functions for checking whether something is a valid instance of a given MBQL clause.
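A clause predicate of this kind might look like the following. This is an illustrative Python sketch, not the library's actual API; `is_field_id_clause` is a hypothetical name, and MBQL clauses are modeled as Python lists tagged with a string.

```python
def is_field_id_clause(x):
    """Check whether x looks like a [:field-id <integer>] MBQL clause,
    modeled here as a two-element list whose first element is the tag."""
    return (isinstance(x, list)
            and len(x) == 2
            and x[0] == "field-id"
            and isinstance(x[1], int))
```

So `is_field_id_clause(["field-id", 10])` is true, while `is_field_id_clause(["count", 10])` is false.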
Schema for validating a *normalized* MBQL query. This is also the definitive grammar for MBQL, wow!
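As a rough illustration of what validating a normalized query involves, here is a minimal Python sketch. It is not the actual schema definitions: the function name and the two checks shown (an integer `source-table` and well-formed `breakout` clauses) are hypothetical, simplified stand-ins for a much larger grammar.

```python
def validate_normalized_query(query):
    """Minimal, hypothetical validation of a normalized inner MBQL query:
    require an integer "source-table", and require every "breakout" entry
    to be a tagged clause (a non-empty list whose first element is a string).
    Raises ValueError on the first violation; returns the query if valid."""
    if not isinstance(query.get("source-table"), int):
        raise ValueError("missing or invalid :source-table")
    for clause in query.get("breakout", []):
        if not (isinstance(clause, list) and clause and isinstance(clause[0], str)):
            raise ValueError(f"invalid breakout clause: {clause!r}")
    return query
```

Because the schema only accepts normalized queries, validation is meaningful only after the normalization steps described above have run.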
Utility functions for working with MBQL queries.
Internal implementation of the MBQL `match` and `replace` macros. Don't use these directly.