
metabase.api.common.internal

Internal functions used by `metabase.api.common`. These are primarily used as the internal implementation of `defendpoint`.

metabase.api.email

/api/email endpoints

metabase.api.embed

Various endpoints that use [JSON web tokens](https://jwt.io/introduction/) to fetch Cards and Dashboards. The endpoints are the same as the ones in `api/public/`, and differ only in the way they are authorized.

To use these endpoints:

1.  Set the `embedding-secret-key` Setting to a hexadecimal-encoded 32-byte sequence (i.e., a 64-character string). You can use `/api/util/random_token` to get a cryptographically-secure value for this.
2.  Sign/base-64 encode a JSON Web Token using the secret key and pass it as the relevant part of the URL path to the various endpoints here.

Tokens can have the following fields:

    {:resource {:question  <card-id>
                :dashboard <dashboard-id>}
     :params   <params>}
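The signing step described above can be sketched in Clojure. This is illustrative only; it assumes the buddy-sign library, and the card ID, params, and secret below are hypothetical placeholders:

```clojure
;; Illustrative sketch only; assumes buddy-sign is on the classpath.
;; The card ID, params, and secret are hypothetical.
(require '[buddy.sign.jwt :as jwt])

(let [secret "..." ; the 64-character hex `embedding-secret-key` value
      token  (jwt/sign {:resource {:question 1} :params {}} secret)]
  ;; pass the signed token as the relevant part of the URL path
  (str "/api/embed/card/" token "/query"))
```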

metabase.api.ldap

/api/ldap endpoints

metabase.api.notify

/api/notify/* endpoints which receive inbound ETL server notifications.

metabase.api.preview-embed

Endpoints for previewing how Cards and Dashboards will look when embedding them. These endpoints are basically identical in functionality to the ones in `/api/embed`, but:

1.  Require admin access
2.  Ignore the values of `:enabled_embedding` for Cards/Dashboards
3.  Ignore the `:embed_params` whitelist for Cards/Dashboards, instead using a field called `:_embedding_params` in the JWT token itself.

Refer to the documentation for those endpoints for further details.

metabase.api.setting

/api/setting endpoints

metabase.api.slack

/api/slack endpoints

metabase.api.task

/api/task endpoints

metabase.api.util

Random utility endpoints for things that don't belong anywhere else in particular, e.g. endpoints for certain admin page tasks.

metabase.async.api-response

Handle Ring response maps that contain a core.async chan in the `:body` key:

    {:status 200
     :body (a/chan)}

and send strings (presumably `\n`) as heartbeats to the client until the real results (a seq) are received, then stream those to the client.

No vars found in this namespace.
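The heartbeat behavior described above can be sketched with `core.async` directly. This is an illustrative reconstruction, not the actual implementation; `write!` is a hypothetical fn that sends a string to the client:

```clojure
(require '[clojure.core.async :as a])

;; Illustrative sketch of the heartbeat loop: wait for results on the body
;; channel, sending a newline to the client each time a timeout fires first.
(defn stream-with-heartbeats! [body-chan write!]
  (loop []
    (let [timeout-chan  (a/timeout 1000)
          [result port] (a/alts!! [body-chan timeout-chan])]
      (cond
        ;; timeout fired first: send a heartbeat and keep waiting
        (= port timeout-chan) (do (write! "\n") (recur))
        ;; real results arrived: stream them to the client
        (some? result)        (write! (pr-str result))))))
```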

metabase.async.util

Utility functions for `core.async`-based async logic.

metabase.automagic-dashboards.core

Automatically generate questions and dashboards based on predefined heuristics.

metabase.automagic-dashboards.rules

Validation, transformation to canonical form, and loading of heuristics.

metabase.cmd

Functions for commands that can be run from the command-line with `lein` or the Metabase JAR. These are run as follows:

    <metabase> <command> <options>

for example, running the `migrate` command and passing it `force` can be done in one of the following ways:

    lein run migrate force
    java -jar metabase.jar migrate force

Logic below resolves the command itself to a function marked with `^:command` metadata and calls the function with arguments as appropriate.

You can see what commands are available by running the command `help`. This command uses the docstrings and arglists associated with each command's entrypoint function to generate descriptions for each command.
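A command entrypoint following the pattern described above might look like this sketch (the `migrate` fn here is illustrative, not the actual implementation):

```clojure
;; Illustrative only: a fn marked with ^:command metadata, whose docstring and
;; arglist feed the `help` command's generated description.
(defn ^:command migrate
  "Run database migrations. DIRECTION is `up`, `down`, or `force`."
  [direction]
  (println "Running migrations:" direction))
```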

metabase.cmd.endpoint-dox

Implementation for the `api-documentation` command, which generates the API documentation.

metabase.cmd.load-from-h2

Commands for loading data from an H2 file into another database. Run this with `lein run load-from-h2` or `java -jar metabase.jar load-from-h2`.

Test this as follows:

```
# Postgres
psql -c 'DROP DATABASE IF EXISTS metabase;'
psql -c 'CREATE DATABASE metabase;'
MB_DB_TYPE=postgres MB_DB_HOST=localhost MB_DB_PORT=5432 MB_DB_USER=camsaul MB_DB_DBNAME=metabase lein run load-from-h2

# MySQL
mysql -u root -e 'DROP DATABASE IF EXISTS metabase; CREATE DATABASE metabase;'
MB_DB_TYPE=mysql MB_DB_HOST=localhost MB_DB_PORT=3305 MB_DB_USER=root MB_DB_DBNAME=metabase lein run load-from-h2
```

metabase.core.initialization-status

Code related to tracking the progress of Metabase initialization. This is kept in a separate, tiny namespace so it can be loaded right away when the application launches (and so we don't need to wait for `metabase.core` to load to check the status).

metabase.db

Application database definition, setup logic, and helper functions for interacting with it.

metabase.db.connection-pool

Low-level logic for creating connection pools for a JDBC-based database. Used by both application DB and connected data warehouse DBs.

The aim here is to completely encapsulate the connection pool library we use -- that way we can swap it out if we want to at some point without having to touch any other files. (TODO - this is currently true of everything except for the options, which are c3p0-specific -- consider abstracting those as well?)

metabase.db.metadata-queries

Predefined MBQL queries for getting metadata about an external database.

metabase.db.migrations

Clojure-land data migration definitions and fns for running them. These migrations are all run once when Metabase is first launched, except when transferring data from an existing H2 database. When data is transferred from an H2 database, migrations will already have been run against that data; thus, all of these migrations need to be repeatable, e.g.:

    CREATE TABLE IF NOT EXISTS ... -- Good
    CREATE TABLE ...               -- Bad

metabase.db.spec

Functions for creating JDBC DB specs for a given engine. Only databases that are supported as application DBs should have functions in this namespace; otherwise, similar functions are only needed by drivers, and belong in those namespaces.

metabase.driver

Metabase Drivers handle various things we need to do with connected data warehouse databases, including things like introspecting their schemas and processing and running MBQL queries. Drivers must implement some or all of the multimethods defined below, and register themselves with a call to `register!`.

SQL-based drivers can use the `:sql` driver as a parent, and JDBC-based SQL drivers can use `:sql-jdbc`. Both of these drivers define additional multimethods that child drivers should implement; see `metabase.driver.sql` and `metabase.driver.sql-jdbc` for more details.
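Registering a JDBC-based child driver per the description above might look like this sketch (the `:my-driver` keyword and its `display-name` implementation are hypothetical):

```clojure
;; Illustrative sketch: register a hypothetical driver with :sql-jdbc as its
;; parent, then implement one of the driver multimethods for it.
(require '[metabase.driver :as driver])

(driver/register! :my-driver, :parent :sql-jdbc)

(defmethod driver/display-name :my-driver [_]
  "My Driver")
```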

metabase.driver.h2

No vars found in this namespace.

metabase.driver.mysql

MySQL driver. Builds off of the SQL-JDBC driver.

No vars found in this namespace.

metabase.driver.postgres

Database driver for PostgreSQL databases. Builds on top of the SQL JDBC driver, which implements most functionality for JDBC-based drivers.

No vars found in this namespace.

metabase.driver.sql

Shared code for all drivers that use SQL under the hood.

metabase.driver.sql-jdbc

Shared code for drivers for SQL databases using their respective JDBC drivers under the hood.

metabase.driver.sql-jdbc.connection

Logic for creating and managing connection pools for SQL JDBC drivers. Implementations for connection-related driver multimethods for SQL JDBC drivers.

metabase.driver.sql-jdbc.execute

Code related to actually running a SQL query against a JDBC database (including setting the session timezone when appropriate), and for properly encoding/decoding types going in and out of the database.

metabase.driver.sql-jdbc.sync

Implementations for sync-related driver multimethods for SQL JDBC drivers, using JDBC `DatabaseMetaData`.

metabase.driver.sql.util

Utility functions for writing SQL drivers.

metabase.driver.sql.util.unprepare

Utility functions for converting a prepared statement with `?` into a plain SQL query.

TODO - since this is no longer strictly a 'util' namespace (most `:sql-jdbc` drivers need to implement one or more methods from here) let's rename this `metabase.driver.sql.unprepare` when we get a chance.

metabase.email.messages

Convenience functions for sending templated email messages. Each function here should represent a single email. NOTE: we want to keep this about email formatting, so don't put heavy logic here RE: building data for emails.

metabase.events

Provides a very simple event bus using `core.async` to allow publishing and subscribing to interesting topics happening throughout the Metabase system in a decoupled way.

## Regarding Events Initialization:

The most appropriate way to initialize event listeners in any `metabase.events.*` namespace is to implement the `events-init` function which accepts zero arguments. This function is dynamically resolved and called exactly once when the application goes through normal startup procedures. Inside this function you can do any work needed and add your events subscribers to the bus as usual via `start-event-listener!`.
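The bus described above can be sketched with `core.async` pub/sub; this is an illustrative miniature, not the actual `metabase.events` implementation:

```clojure
(require '[clojure.core.async :as a])

;; Miniature event bus: publish maps tagged with a :topic key, and let
;; subscribers register a handler for the topics they care about.
(def ^:private events-chan (a/chan))
(def ^:private events-pub  (a/pub events-chan :topic))

(defn start-event-listener! [topic handler]
  (let [ch (a/chan)]
    (a/sub events-pub topic ch)
    (a/go-loop []
      (when-let [event (a/<! ch)]
        (handler event)
        (recur)))))

(defn publish-event! [topic event]
  (a/put! events-chan (assoc event :topic topic)))
```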

metabase.events.driver-notifications

Driver notifications are used to let drivers know database details or other relevant information has changed (`:database-update`) or that a Database has been deleted (`:database-delete`). Drivers can choose to be notified of these events by implementing the `metabase.driver/notify-database-updated` multimethod. At the time of this writing, the SQL JDBC driver 'superclass' is the only thing that implements this method, and does so to close connection pools when database details change or when they are deleted.

metabase.handler

Top-level Metabase Ring handler.

metabase.mbql.normalize

Logic for taking any sort of weird MBQL query and normalizing it into a standardized, canonical form. You can think of this like taking any 'valid' MBQL query and rewriting it as if it were written in perfect up-to-date MBQL in the latest version. There are four main things done here, done as four separate steps:

#### NORMALIZING TOKENS

Converting all identifiers to lower-case, lisp-case keywords. e.g. `{"SOURCE_TABLE" 10}` becomes `{:source-table 10}`.

#### CANONICALIZING THE QUERY

Rewriting deprecated MBQL 95/98 syntax and other things that are still supported for backwards-compatibility in canonical MBQL 2000 syntax. For example `{:breakout [:count 10]}` becomes `{:breakout [[:count [:field-id 10]]]}`.

#### WHOLE-QUERY TRANSFORMATIONS

Transformations and cleanup of the query structure as a whole to fix inconsistencies. Whereas the canonicalization phase operates on a lower level, transforming individual clauses, this phase focuses on transformations that affect multiple clauses, such as removing duplicate references to Fields if they are specified in both the `:breakout` and `:fields` clauses.

This is not the only place that does such transformations; several pieces of QP middleware perform similar individual transformations, such as `reconcile-breakout-and-order-by-bucketing`.

#### REMOVING EMPTY CLAUSES

Removing empty clauses like `{:aggregation nil}` or `{:breakout []}`.

Token normalization occurs first, followed by canonicalization, followed by removing empty clauses.
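The token-normalization step, for instance, amounts to something like this sketch (illustrative only; the real implementation handles many more cases):

```clojure
(require '[clojure.string :as str])

;; Lower-case a token and convert underscores to dashes, yielding a
;; lisp-case keyword.
(defn normalize-token [token]
  (-> (name token)
      str/lower-case
      (str/replace \_ \-)
      keyword))

(normalize-token "SOURCE_TABLE") ;; => :source-table
```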

metabase.mbql.predicates

Predicate functions for checking whether something is a valid instance of a given MBQL clause.

metabase.mbql.util.match

Internal implementation of the MBQL `match` and `replace` macros. Don't use these directly.

metabase.metabot.command

Implementations of various MetaBot commands.

metabase.metabot.events

Logic related to handling Slack events, running commands for events that are messages to the MetaBot, and posting the response on Slack.

metabase.metabot.instance

Logic for deciding which Metabase instance in a multi-instance (i.e., horizontally scaled) setup gets to be the MetaBot.

Close your eyes, and imagine a scenario: someone is running multiple Metabase instances in a horizontal cluster. Good for them, but how do we make sure one, and only one, of those instances replies to incoming MetaBot commands? It would certainly be too much if someone ran, say, 4 instances, and typing `metabot kanye` into Slack gave them 4 Kanye West quotes, wouldn't it?

Luckily, we have an "elegant" solution: we'll use the Settings framework to keep track of which instance is currently serving as the MetaBot. We'll have that instance periodically check in; if it doesn't check in for some timeout interval, we'll consider the job of MetaBot up for grabs. Each instance will periodically check if the MetaBot job is open, and, if so, whoever discovers it first will take it.

How do we uniquely identify each instance?

`local-process-uuid` is randomly generated upon launch and used to identify this specific Metabase instance during this specific run. Restarting the server will change this UUID, and each server in a horizontal cluster will have its own ID, making this different from the `site-uuid` Setting. The local process UUID is used to differentiate different horizontally clustered MB instances so we can determine which of them will handle MetaBot duties.

TODO - if we ever want to use this elsewhere, we need to move it to `metabase.config` or somewhere else central like that.

metabase.metabot.slack

Logic related to posting messages [synchronously and asynchronously] to Slack and handling errors.

metabase.metabot.websocket

Logic for managing the websocket MetaBot uses to monitor and reply to Slack messages, specifically a 'monitor thread' that watches the websocket handling thread and disconnects/reconnects it when needed.

metabase.middleware.auth

Middleware related to enforcing authentication/API keys (when applicable). Unlike most other middleware, most of this is not used as part of the normal `app`; it is instead added selectively to appropriate routes.

metabase.middleware.exceptions

Ring middleware for handling Exceptions thrown in API request handler functions.

metabase.middleware.json

Middleware related to parsing JSON requests and generating JSON responses.

metabase.middleware.log

Ring middleware for logging API requests/responses.

metabase.middleware.security

Ring middleware for adding security-related headers to API responses.

metabase.middleware.session

Ring middleware related to session (binding current user and permissions).

metabase.models.alert

No vars found in this namespace.

metabase.models.card

Underlying DB model for what is now most commonly referred to as a 'Question' in most user-facing situations. Card is a historical name, but is the same thing; both terms are used interchangeably in the backend codebase.

metabase.models.collection

Collections are used to organize Cards, Dashboards, and Pulses; as of v0.30, they are the primary way we determine permissions for these objects.

TODO - I think this namespace is too big now! Maybe move the graph stuff into somewhere like `metabase.models.collection.graph`

metabase.models.dependency

Dependencies are used to keep track of objects that depend on other objects, and act as a sort of m2m FK table. For example, a Card might use a Segment; a Dependency object will be used to track this dependency so appropriate actions can take place or be prevented when something changes.

metabase.models.dimension

Dimensions are used to define remappings for Fields handled automatically when those Fields are encountered by the Query Processor. For a more detailed explanation, refer to the documentation in `metabase.query-processor.middleware.add-dimension-projections`.

metabase.models.humanization

Logic related to humanization of table names and other identifiers, e.g. taking an identifier like `my_table` and returning a human-friendly one like `My Table`.

There are currently three implementations of humanization logic: `:advanced`, cost-based logic, which is the default; `:simple`, which merely replaces underscores and dashes with spaces; and `:none`, which predictably is merely an identity function that does nothing to the results. Which implementation is used is determined by the Setting `humanization-strategy`.

The actual algorithm for advanced humanization is in `metabase.util.infer-spaces`. (NOTE: some of the logic is here, such as the `captialize-word` function; maybe we should move that so all the logic is in one place?)
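The `:simple` strategy, for example, amounts to something like this sketch (illustrative, not the actual implementation):

```clojure
(require '[clojure.string :as str])

;; Split on underscores/dashes, capitalize each word, and rejoin with spaces.
(defn simple-humanize [identifier]
  (->> (str/split identifier #"[_-]")
       (map str/capitalize)
       (str/join " ")))

(simple-humanize "my_table") ;; => "My Table"
```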

metabase.models.metric

A Metric is a saved MBQL 'macro' expanding to a combination of `:aggregation` and/or `:filter` clauses. It is passed in as an `:aggregation` clause but is replaced by the `expand-macros` middleware with the appropriate clauses.

metabase.models.metric-important-field

Intersection table for `Metric` and `Field`; this is used to keep track of the top 0-3 important fields for a metric as shown in the Getting Started guide.

metabase.models.params

Utility functions for dealing with parameters for Dashboards and Cards.

metabase.models.permissions-group

A PermissionsGroup is a group (or role) that can be assigned certain permissions. Users can be members of one or more of these groups.

A few 'magic' groups exist: all-users, which predicably contains All Users; admin, which contains all superusers, and metabot, which is used to set permissions for the MetaBot. These groups are 'magic' in the sense that you cannot add users to them yourself, nor can you delete them; they are created automatically. You can, however, set permissions for them.

A `PermissionsGroup` is a group (or role) that can be assigned certain permissions. Users can be members of one or
more of these groups.

A few 'magic' groups exist: `all-users`, which predictably contains All Users; `admin`, which contains all
superusers; and `metabot`, which is used to set permissions for the MetaBot. These groups are 'magic' in the sense
that you cannot add users to them yourself, nor can you delete them; they are created automatically. You can,
however, set permissions for them.
raw docstring

metabase.models.pulse

Notifications are ways to deliver the results of Questions to users without going through the normal Metabase UI. At the time of this writing, there are two delivery mechanisms for Notifications -- email and Slack notifications; these destinations are known as 'Channels'. Notifications themselves are further divided into two categories -- 'Pulses', which are sent at specified intervals, and 'Alerts', which are sent when certain conditions are met (such as a query returning results).

Because 'Pulses' were originally the only type of Notification, this name is still used for the model itself, and in some of the functions below. To keep things clear, try to make sure you use the term 'Notification' for things that work with either type.

One more thing to keep in mind: this code is pretty old and doesn't follow the code patterns used in the other Metabase models. There is a plethora of CRUD functions for working with Pulses that IMO aren't really needed (e.g. functions for fetching a specific Pulse). At some point in the future, we can clean this namespace up and bring the code in line with the rest of the codebase, but for the time being, it probably makes sense to follow the existing patterns in this namespace rather than further confuse things.

Notifications are ways to deliver the results of Questions to users without going through the normal Metabase UI. At
the time of this writing, there are two delivery mechanisms for Notifications -- email and Slack notifications;
these destinations are known as 'Channels'. Notifications themselves are further divided into two categories --
'Pulses', which are sent at specified intervals, and 'Alerts', which are sent when certain conditions are met (such
as a query returning results).

Because 'Pulses' were originally the only type of Notification, this name is still used for the model itself, and in
some of the functions below. To keep things clear, try to make sure you use the term 'Notification' for things that
work with either type.

One more thing to keep in mind: this code is pretty old and doesn't follow the code patterns used in the other
Metabase models. There is a plethora of CRUD functions for working with Pulses that IMO aren't really needed (e.g.
functions for fetching a specific Pulse). At some point in the future, we can clean this namespace up and bring the
code in line with the rest of the codebase, but for the time being, it probably makes sense to follow the existing
patterns in this namespace rather than further confuse things.
raw docstring

metabase.models.query

Functions related to the 'Query' model, which records stuff such as average query execution time.

Functions related to the 'Query' model, which records stuff such as average query execution time.
raw docstring

metabase.models.query-cache

A model used to cache query results in the database.

A model used to cache query results in the database.
raw docstring

metabase.models.query-execution

QueryExecution is a log of every time a query is executed, along with other information such as the User who executed it, run time, context it was executed in, etc.

QueryExecution is a log of every time a query is executed, along with other information such as the User who executed
it, run time, context it was executed in, etc.
raw docstring

metabase.models.query.permissions

Functions used to calculate the permissions needed to run a query based on old-style DATA ACCESS PERMISSIONS. The only thing that is subject to these sorts of checks are ad-hoc queries, i.e. queries that have not yet been saved as a Card. Saved Cards are subject to the permissions of the Collection to which they belong.

Functions used to calculate the permissions needed to run a query based on old-style DATA ACCESS PERMISSIONS. The
only thing that is subject to these sorts of checks are *ad-hoc* queries, i.e. queries that have not yet been saved
as a Card. Saved Cards are subject to the permissions of the Collection to which they belong.
raw docstring

metabase.models.segment

A Segment is a saved MBQL 'macro', expanding to a :filter subclause. It is passed in as a :filter subclause but is replaced by the expand-macros middleware with the appropriate clauses.

A Segment is a saved MBQL 'macro', expanding to a `:filter` subclause. It is passed in as a `:filter` subclause but is
replaced by the `expand-macros` middleware with the appropriate clauses.
raw docstring

metabase.models.setting

Settings are a fast and simple way to create a setting that can be set from the admin page. They are saved to the Database, but intelligently cached internally for super-fast lookups.

Define a new Setting with defsetting (optionally supplying a default value, type, or custom getters & setters):

(defsetting mandrill-api-key "API key for Mandrill")

The setting and docstr will then be auto-magically accessible from the admin page.

You can also set the value via the corresponding env var, which looks like MB_MANDRILL_API_KEY, where the name of the setting is converted to uppercase and dashes to underscores.

The var created with defsetting can be used as a getter/setter, or you can use get and set!:

(require '[metabase.models.setting :as setting])

(setting/get :mandrill-api-key)           ; only returns values set explicitly from SuperAdmin
(mandrill-api-key)                        ; returns value set in SuperAdmin, OR value of corresponding env var,
                                          ; OR the default value, if any (in that order)

(setting/set! :mandrill-api-key "NEW_KEY")
(mandrill-api-key "NEW_KEY")

(setting/set! :mandrill-api-key nil)
(mandrill-api-key nil)

Get a map of all Settings:

(setting/all)

Settings are a fast and simple way to create a setting that can be set from the admin page. They are saved to the
Database, but intelligently cached internally for super-fast lookups.

Define a new Setting with `defsetting` (optionally supplying a default value, type, or custom getters & setters):

   (defsetting mandrill-api-key "API key for Mandrill")

The setting and docstr will then be auto-magically accessible from the admin page.

You can also set the value via the corresponding env var, which looks like `MB_MANDRILL_API_KEY`, where the name of
the setting is converted to uppercase and dashes to underscores.

The var created with `defsetting` can be used as a getter/setter, or you can use `get` and `set!`:

    (require '[metabase.models.setting :as setting])

    (setting/get :mandrill-api-key)           ; only returns values set explicitly from SuperAdmin
    (mandrill-api-key)                        ; returns value set in SuperAdmin, OR value of corresponding env var,
                                              ; OR the default value, if any (in that order)

    (setting/set! :mandrill-api-key "NEW_KEY")
    (mandrill-api-key "NEW_KEY")

    (setting/set! :mandrill-api-key nil)
    (mandrill-api-key nil)

Get a map of all Settings:

   (setting/all)
raw docstring

metabase.models.setting.cache

Settings cache. Cache is a 1:1 mapping of what's in the DB. Cached lookup time is ~60µs, compared to ~1800µs for DB lookup.

Settings cache. Cache is a 1:1 mapping of what's in the DB. Cached lookup time is ~60µs, compared to ~1800µs for DB
lookup.
raw docstring

metabase.models.view-log

The ViewLog is used to log an event where a given User views a given object such as a Table or Card (Question).

The ViewLog is used to log an event where a given User views a given object such as a Table or Card (Question).
raw docstring

metabase.plugins.classloader

Logic for getting and setting the context classloader we'll use for loading Metabase plugins. Use the-classloader to get the Classloader you should use with calls to Class/forName; call it for side effects to ensure the current thread context classloader will have access to JARs we add at runtime before calling require.

The classloader is guaranteed to be an instance of DynamicClassLoader, which means we can add URLs to it at runtime with dynapath; use add-url-to-classpath! to add URLs to the classpath to make sure they are added to the correct classloader.

If you are unfamiliar with ClassLoaders in general, I found this article pretty helpful: https://www.javaworld.com/article/2077344/core-java/find-a-way-out-of-the-classloader-maze.html.

<3 Cam

Logic for getting and setting the context classloader we'll use for loading Metabase plugins. Use `the-classloader`
to get the Classloader you should use with calls to `Class/forName`; call it for side effects to ensure the current
thread context classloader will have access to JARs we add at runtime before calling `require`.

The classloader is guaranteed to be an instance of `DynamicClassLoader`, which means we can add URLs to it at
runtime with dynapath; use `add-url-to-classpath!` to add URLs to the classpath to make sure they are added to the
correct classloader.

If you are unfamiliar with ClassLoaders in general, I found this article pretty helpful:
https://www.javaworld.com/article/2077344/core-java/find-a-way-out-of-the-classloader-maze.html.

<3 Cam
raw docstring

metabase.plugins.files

Low-level file-related functions for implementing Metabase plugin functionality. These use the java.nio.file library rather than the usual java.io stuff because it abstracts better across different filesystems (such as files in a normal directory vs files inside a JAR.)

As much as possible, this namespace aims to abstract away the nio.file library and expose a set of high-level file-manipulation functions for the sorts of operations the plugin system needs to perform.

Low-level file-related functions for implementing Metabase plugin functionality. These use the `java.nio.file`
library rather than the usual `java.io` stuff because it abstracts better across different filesystems (such as
files in a normal directory vs files inside a JAR.)

As much as possible, this namespace aims to abstract away the `nio.file` library and expose a set of high-level
*file-manipulation* functions for the sorts of operations the plugin system needs to perform.
raw docstring

metabase.plugins.init-steps

Logic for performing the init-steps listed in a Metabase plugin's manifest. For driver plugins that specify that we should lazy-load, these steps are lazily performed the first time non-trivial driver methods (such as connecting to a Database) are called; for all other Metabase plugins these are performed during launch.

The entire list of possible init steps is below, as impls for the do-init-step! multimethod.

Logic for performing the `init-steps` listed in a Metabase plugin's manifest. For driver plugins that specify that we
should `lazy-load`, these steps are lazily performed the first time non-trivial driver methods (such as connecting
to a Database) are called; for all other Metabase plugins these are performed during launch.

The entire list of possible init steps is below, as impls for the `do-init-step!` multimethod.
raw docstring

metabase.plugins.initialize

Logic related to initializing plugins, i.e. running the init steps listed in the plugin manifest. This is done when Metabase launches as soon as all dependencies for that plugin are met; for plugins with unmet dependencies, it is retried after other plugins are loaded (e.g. for things like BigQuery which depend on the shared Google driver.)

Note that this is not the same thing as initializing drivers -- drivers are initialized lazily when first needed; this step on the other hand runs at launch time and sets up that lazy load logic.

Logic related to initializing plugins, i.e. running the `init` steps listed in the plugin manifest. This is done when
Metabase launches as soon as all dependencies for that plugin are met; for plugins with unmet dependencies, it is
retried after other plugins are loaded (e.g. for things like BigQuery which depend on the shared Google driver.)

Note that this is not the same thing as initializing *drivers* -- drivers are initialized lazily when first needed;
this step on the other hand runs at launch time and sets up that lazy load logic.
raw docstring

metabase.plugins.jdbc-proxy

JDBC proxy driver used for drivers added at runtime. DriverManager refuses to recognize drivers that weren't loaded by the system classloader, so we need to wrap our drivers loaded at runtime with a proxy class loaded at launch time.

JDBC proxy driver used for drivers added at runtime. DriverManager refuses to recognize drivers that weren't loaded
by the system classloader, so we need to wrap our drivers loaded at runtime with a proxy class loaded at launch time.
raw docstring

metabase.plugins.lazy-loaded-driver

Implementation for a delayed-load driver that implements a few basic driver methods (available?, display-name, and connection-properties) needed for things like setup using the information provided in the plugin manifest. Other methods resolve drivers using driver/the-initialized-driver, which calls initialize!; we'll wait until that call to do more memory-intensive things like registering a JDBC driver or loading the actual driver namespace.

See https://github.com/metabase/metabase/wiki/Metabase-Plugin-Manifest-Reference for all the options allowed for a plugin manifest.

Implementation for a delayed-load driver that implements a few basic driver methods (`available?`, `display-name`,
and `connection-properties`) needed for things like setup using the information provided in the plugin manifest.
Other methods resolve drivers using `driver/the-initialized-driver`, which calls `initialize!`; we'll wait until
that call to do more memory-intensive things like registering a JDBC driver or loading the actual driver namespace.

See https://github.com/metabase/metabase/wiki/Metabase-Plugin-Manifest-Reference for all the options allowed for a
plugin manifest.
raw docstring

metabase.public-settings.metastore

Settings related to checking token validity and accessing the MetaStore.

Settings related to checking token validity and accessing the MetaStore.
raw docstring

metabase.pulse.color

Namespace that uses the Nashorn JavaScript engine to invoke some shared JavaScript code that we use to determine the background color of pulse table cells.

Namespace that uses the Nashorn JavaScript engine to invoke some shared JavaScript code that we use to determine
the background color of pulse table cells.
raw docstring

metabase.query-processor

Preprocessor that does simple transformations to all incoming queries, simplifying the driver-specific implementations.

Preprocessor that does simple transformations to all incoming queries, simplifying the driver-specific
implementations.
raw docstring

metabase.query-processor.async

Async versions of the usual public query processor functions. Instead of blocking while the query is run, these functions all return a core.async channel that can be used to fetch the results when they become available.

Async versions of the usual public query processor functions. Instead of blocking while the query is run, these
functions all return a `core.async` channel that can be used to fetch the results when they become available.
raw docstring

metabase.query-processor.debug

Functions for debugging QP code. Enable QP debugging by binding qp/*debug*; the debug-middleware function below wraps each middleware function for debugging purposes.

Functions for debugging QP code. Enable QP debugging by binding `qp/*debug*`; the `debug-middleware` function below
wraps each middleware function for debugging purposes.
raw docstring

metabase.query-processor.interface

Dynamic variables, constants, and other things used across the query builder namespaces.

Dynamic variables, constants, and other things used across the query builder namespaces.
raw docstring

metabase.query-processor.middleware.add-dimension-projections

Middleware for adding remapping and other dimension related projections. This remaps Fields that have a corresponding Dimension object (which defines a remapping) in two different ways, depending on the :type attribute of the Dimension:

external type Dimensions mean the Field's values will be replaced with corresponding values from a column on a different table, joined via a foreign key. A common use-case would be to replace FK IDs with the name of whatever they reference, for example replacing values of venue.category_id with values of category.name. Actual replacement of values happens on the frontend, so this middleware simply adds the column to be used for replacement (e.g. category.name) to the :fields clause in pre-processing, so the Field will be fetched. Recall that Fields referenced via :fk-> clauses imply that JOINs will take place, which are automatically handled later in the Query Processor pipeline. Additionally, this middleware will swap out any :order-by clauses referencing the original Field with ones referencing the remapped Field (for example, so we would sort by category.name instead of category_id).

internal type Dimensions mean the Field's values are replaced by a user-defined map of values, stored in the human_readable_values column of a corresponding FieldValues object. A common use-case for this scenario would be to replace integer enum values with something more descriptive, for example replacing values of an enum can_type -- 0 becomes Toucan, 1 becomes Pelican, and so forth. This is handled exclusively in post-processing by adding extra columns and values to the results.

In both cases, to accomplish values replacement on the frontend, the post-processing part of this middleware adds appropriate :remapped_from and :remapped_to attributes in the result :cols in post-processing. :remapped_from and :remapped_to are the names of the columns, e.g. category_id is :remapped_to name, and name is :remapped_from :category_id.

Middleware for adding remapping and other dimension related projections. This remaps Fields that have a corresponding
Dimension object (which defines a remapping) in two different ways, depending on the `:type` attribute of the
Dimension:

`external` type Dimensions mean the Field's values will be replaced with corresponding values from a column on a
different table, joined via a foreign key. A common use-case would be to replace FK IDs with the name of whatever they
reference, for example replacing values of `venue.category_id` with values of `category.name`. Actual replacement
of values happens on the frontend, so this middleware simply adds the column to be used for replacement (e.g.
`category.name`) to the `:fields` clause in pre-processing, so the Field will be fetched. Recall that Fields
referenced via `:fk->` clauses imply that JOINs will take place, which are automatically handled later in the
Query Processor pipeline. Additionally, this middleware will swap out any `:order-by` clauses referencing the
original Field with ones referencing the remapped Field (for example, so we would sort by `category.name` instead of
`category_id`).

`internal` type Dimensions mean the Field's values are replaced by a user-defined map of values, stored in the
`human_readable_values` column of a corresponding `FieldValues` object. A common use-case for this scenario would be
to replace integer enum values with something more descriptive, for example replacing values of an enum `can_type`
-- `0` becomes `Toucan`, `1` becomes `Pelican`, and so forth. This is handled exclusively in post-processing by
adding extra columns and values to the results.

In both cases, to accomplish values replacement on the frontend, the post-processing part of this middleware adds
appropriate `:remapped_from` and `:remapped_to` attributes in the result `:cols` in post-processing.
`:remapped_from` and `:remapped_to` are the names of the columns, e.g. `category_id` is `:remapped_to` `name`, and
`name` is `:remapped_from` `:category_id`.
raw docstring
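As a rough sketch of the `external` case described above (column names are illustrative, not verbatim QP output), remapping `venue.category_id` to `category.name` would produce result `:cols` metadata along these lines:

```clojure
;; Illustrative sketch only -- exact keys and names are assumptions.
[{:name        "category_id"
  :remapped_to "name"}           ; display values come from category.name
 {:name          "name"
  :remapped_from "category_id"}] ; column added so the frontend can remap
```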

metabase.query-processor.middleware.add-implicit-clauses

Middleware for adding implicit :fields and :order-by clauses to certain queries.

Middleware for adding implicit `:fields` and `:order-by` clauses to certain queries.
raw docstring

metabase.query-processor.middleware.add-implicit-joins

Middleware that creates corresponding :joins for Tables referred to by :fk-> clauses and replaces those clauses with :joined-field clauses.

Middleware that creates corresponding `:joins` for Tables referred to by `:fk->` clauses and replaces those clauses
with `:joined-field` clauses.
raw docstring
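A hypothetical before/after sketch of this transformation (all IDs and the join alias are invented for illustration; field 21 is assumed to be the joined table's PK):

```clojure
;; Before: a Field referenced through a foreign key
{:source-table 1
 :fields       [[:fk-> [:field-id 10] [:field-id 20]]]}

;; After: an implicit join is added, and the clause becomes a :joined-field
{:source-table 1
 :fields       [[:joined-field "CATEGORIES__via__CATEGORY_ID" [:field-id 20]]]
 :joins        [{:source-table 2
                 :alias        "CATEGORIES__via__CATEGORY_ID"
                 :fk-field-id  10
                 :condition    [:= [:field-id 10]
                                   [:joined-field "CATEGORIES__via__CATEGORY_ID"
                                    [:field-id 21]]]}]}
```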

metabase.query-processor.middleware.add-row-count-and-status

Middleware for adding :row_count and :status info to QP results.

Middleware for adding `:row_count` and `:status` info to QP results.
raw docstring

metabase.query-processor.middleware.add-settings

Middleware for adding a :settings map to a query before it is processed.

Middleware for adding a `:settings` map to a query before it is processed.
raw docstring

metabase.query-processor.middleware.annotate

Middleware for annotating (adding type information to) the results of a query, under the :cols column.

Middleware for annotating (adding type information to) the results of a query, under the `:cols` column.
raw docstring

metabase.query-processor.middleware.async

Middleware for implementing async QP behavior.

Middleware for implementing async QP behavior.
raw docstring

metabase.query-processor.middleware.async-wait

Middleware that limits the number of concurrent queries for each database.

Each connected database is limited to a maximum of 15 simultaneous queries (configurable) using these methods; any additional queries will park the thread. Super-useful for writing high-performance API endpoints. Prefer these methods to the old-school synchronous versions.

How is this achieved? For each Database, we'll maintain a thread pool executor to limit the number of simultaneous queries.

Middleware that limits the number of concurrent queries for each database.

Each connected database is limited to a maximum of 15 simultaneous queries (configurable) using these methods; any
additional queries will park the thread. Super-useful for writing high-performance API endpoints. Prefer these
methods to the old-school synchronous versions.

How is this achieved? For each Database, we'll maintain a thread pool executor to limit the number of simultaneous
queries.
raw docstring

metabase.query-processor.middleware.auto-bucket-datetimes

Middleware for automatically bucketing unbucketed :type/DateTime (but not :type/Time) Fields with :day bucketing. Applies to any unbucketed Field in a breakout, or fields in a filter clause being compared against yyyy-MM-dd format datetime strings.

Middleware for automatically bucketing unbucketed `:type/DateTime` (but not `:type/Time`) Fields with `:day`
bucketing. Applies to any unbucketed Field in a breakout, or fields in a filter clause being compared against
`yyyy-MM-dd` format datetime strings.
raw docstring
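A minimal sketch of the bucketing described above (the field ID is hypothetical):

```clojure
;; Before: an unbucketed :type/DateTime Field in a breakout
{:breakout [[:field-id 100]]}

;; After: the middleware wraps it with :day bucketing
{:breakout [[:datetime-field [:field-id 100] :day]]}
```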

metabase.query-processor.middleware.binning

Middleware that handles binning-strategy Field clauses. This adds a resolved-options map to every binning-strategy clause that contains the information query processors will need in order to perform binning.

Middleware that handles `binning-strategy` Field clauses. This adds a `resolved-options` map to every
`binning-strategy` clause that contains the information query processors will need in order to perform binning.
raw docstring
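As a hedged sketch (the field ID and the exact keys inside the resolved-options map are assumptions, not taken from the actual implementation), the transformation looks roughly like:

```clojure
;; Before: a binning-strategy clause as written in the query
[:binning-strategy [:field-id 200] :num-bins 10]

;; After: the middleware attaches a resolved-options map with the
;; concrete values query processors need to perform the binning
[:binning-strategy [:field-id 200] :num-bins 10
 {:num-bins 10, :bin-width 5.0, :min-value 0.0, :max-value 50.0}]
```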

metabase.query-processor.middleware.cache

Middleware that returns cached results for queries when applicable.

If caching is enabled (enable-query-caching is true) cached results will be returned for Cards if possible. There's a global default TTL defined by the setting query-caching-default-ttl, but individual Cards can override this value with custom TTLs with a value for :cache_ttl.

For all other queries, caching is skipped.

Various caching backends are defined in metabase.query-processor.middleware.cache-backend namespaces. The default backend is db, which uses the application database; this value can be changed by setting the env var MB_QP_CACHE_BACKEND.

Refer to metabase.query-processor.middleware.cache-backend.interface for more details about the cache backends themselves.

Middleware that returns cached results for queries when applicable.

If caching is enabled (`enable-query-caching` is `true`) cached results will be returned for Cards if possible.
There's a global default TTL defined by the setting `query-caching-default-ttl`, but individual Cards can override
this value with custom TTLs with a value for `:cache_ttl`.

For all other queries, caching is skipped.

Various caching backends are defined in `metabase.query-processor.middleware.cache-backend` namespaces. The default
backend is `db`, which uses the application database; this value can be changed by setting the env var
`MB_QP_CACHE_BACKEND`.

Refer to `metabase.query-processor.middleware.cache-backend.interface` for more details about the cache
backends themselves.
raw docstring

metabase.query-processor.middleware.cache-backend.interface

Interface used to define different Query Processor cache backends. Defining a backend is straightforward: define a new namespace with the pattern

metabase.query-processor.middleware.cache-backend.<backend>

Where backend is a key representing the backend, e.g. db, redis, or memcached.

In that namespace, create an object that reifies (or otherwise implements) IQueryProcessorCacheBackend. This object must be stored in a var called instance.

That's it. See metabase.query-processor.middleware.cache-backend.db for a complete example of how this is done.

Interface used to define different Query Processor cache backends.
Defining a backend is straightforward: define a new namespace with the pattern

  metabase.query-processor.middleware.cache-backend.<backend>

Where backend is a key representing the backend, e.g. `db`, `redis`, or `memcached`.

In that namespace, create an object that reifies (or otherwise implements) `IQueryProcessorCacheBackend`.
This object *must* be stored in a var called `instance`.

That's it. See `metabase.query-processor.middleware.cache-backend.db` for a complete example of how this is done.
raw docstring
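A minimal sketch of what such a backend namespace might look like; the protocol method names and signatures below are illustrative assumptions, not copied from the real `IQueryProcessorCacheBackend` definition:

```clojure
(ns metabase.query-processor.middleware.cache-backend.my-backend
  "Hypothetical cache backend sketch. Method names/signatures below are
  assumptions for illustration only."
  (:require [metabase.query-processor.middleware.cache-backend.interface :as i]))

(def instance
  "The var the caching middleware looks for, per the docstring above."
  (reify i/IQueryProcessorCacheBackend
    ;; return cached results for a query hash, or nil on a cache miss
    (cached-results [_ query-hash max-age-seconds]
      nil)
    ;; persist results under the query hash
    (save-results! [_ query-hash results]
      nil)))
```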

metabase.query-processor.middleware.catch-exceptions

Middleware for catching exceptions thrown by the query processor and returning them in a friendlier format.

Middleware for catching exceptions thrown by the query processor and returning them in a friendlier format.
raw docstring

metabase.query-processor.middleware.constraints

Middleware that adds default constraints to limit the maximum number of rows returned to queries that specify the :add-default-userland-constraints? :middleware option.

Middleware that adds default constraints to limit the maximum number of rows returned to queries that specify the
`:add-default-userland-constraints?` `:middleware` option.
raw docstring
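A query opts into these constraints via the `:middleware` option, e.g. (the table ID here is hypothetical):

```clojure
{:type       :query
 :query      {:source-table 1}
 :middleware {:add-default-userland-constraints? true}}
```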

metabase.query-processor.middleware.cumulative-aggregations

Middleware for handling cumulative count and cumulative sum aggregations.

Middleware for handling cumulative count and cumulative sum aggregations.
raw docstring

metabase.query-processor.middleware.dev

Middleware that's only active in dev and test scenarios. These middleware functions do additional checks of query processor behavior that are undesirable in normal production use.

Middleware that's only active in dev and test scenarios. These middleware functions do additional checks of query
processor behavior that are undesirable in normal production use.
raw docstring

metabase.query-processor.middleware.driver-specific

Middleware that hands off to a driver's implementation of process-query-in-context, if any. If implemented, this effectively lets one inject custom driver-specific middleware for the QP. Drivers can use it to do different things like rewrite queries as needed or perform special permissions checks.

Middleware that hands off to a driver's implementation of `process-query-in-context`, if any.
If implemented, this effectively lets one inject custom driver-specific middleware for the QP.
Drivers can use it to do different things like rewrite queries as needed or perform special permissions checks.
raw docstring

metabase.query-processor.middleware.expand-macros

Middleware for expanding :metric and :segment 'macros' in unexpanded MBQL queries.

(:metric forms are expanded into aggregations and sometimes filter clauses, while :segment forms are expanded into filter clauses.)

TODO - this namespace is ancient and written with MBQL '95 in mind, e.g. it is case-sensitive. At some point this ought to be reworked to be case-insensitive and cleaned up.

Middleware for expanding `:metric` and `:segment` 'macros' in *unexpanded* MBQL queries.

(`:metric` forms are expanded into aggregations and sometimes filter clauses, while `:segment` forms are expanded
into filter clauses.)

 TODO - this namespace is ancient and written with MBQL '95 in mind, e.g. it is case-sensitive.
 At some point this ought to be reworked to be case-insensitive and cleaned up.
raw docstring
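An illustrative sketch of the expansion (Metric/Segment IDs, field IDs, and the stored definitions are all invented):

```clojure
;; Before expansion -- the query references saved macros by ID:
{:aggregation [[:metric 1]]   ; e.g. Metric 1 = "Total Revenue over $10"
 :filter      [:segment 2]}   ; e.g. Segment 2 = "Active venues"

;; After expand-macros runs, the stored definitions are inlined:
{:aggregation [[:sum [:field-id 30]]]
 :filter      [:and
               [:> [:field-id 30] 10]      ; from Metric 1's :filter
               [:= [:field-id 40] true]]}  ; from Segment 2's definition
```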

metabase.query-processor.middleware.fetch-source-query

Middleware responsible for 'hydrating' the source query for queries that use another query as their source. This middleware looks for MBQL queries like

{:source-table "card__1" ; Shorthand for using Card 1 as source query ...}

and resolves the referenced source query, transforming the query to look like the following:

{:source-query {...} ; Query for Card 1 :source-metadata [...] ; metadata about columns in Card 1 ...}

This middleware resolves Card ID :source-tables at all levels of the query, but the top-level query often uses the so-called virtual-id, because the frontend client might not know the original Database; this middleware will replace that ID with the appropriate ID, e.g.

{:database <virtual-id>, :type :query, :query {:source-table "card__1"}} -> {:database 1, :type :query, :query {:source-query {...}, :source-metadata {...}}}

TODO - consider renaming this namespace to metabase.query-processor.middleware.resolve-card-id-source-tables

Middleware responsible for 'hydrating' the source query for queries that use another query as their source. This
middleware looks for MBQL queries like

  {:source-table "card__1" ; Shorthand for using Card 1 as source query
   ...}

and resolves the referenced source query, transforming the query to look like the following:

  {:source-query    {...} ; Query for Card 1
   :source-metadata [...] ; metadata about columns in Card 1
   ...}

This middleware resolves Card ID `:source-table`s at all levels of the query, but the top-level query often uses the
so-called `virtual-id`, because the frontend client might not know the original Database; this middleware will
replace that ID with the appropriate ID, e.g.

  {:database <virtual-id>, :type :query, :query {:source-table "card__1"}}
  ->
  {:database 1, :type :query, :query {:source-query {...}, :source-metadata {...}}}

TODO - consider renaming this namespace to `metabase.query-processor.middleware.resolve-card-id-source-tables`
raw docstring

metabase.query-processor.middleware.format-rows

Middleware that formats the results of a query. Currently, the only thing this does is convert datetime types to ISO-8601 strings in the appropriate timezone.

Middleware that formats the results of a query.
Currently, the only thing this does is convert datetime types to ISO-8601 strings in the appropriate timezone.
raw docstring

metabase.query-processor.middleware.limit

Middleware that handles limiting the maximum number of rows returned by a query.

Middleware that handles limiting the maximum number of rows returned by a query.
raw docstring

metabase.query-processor.middleware.log

Middleware for logging a query before it is processed. (Various other middleware functions log the query as well in different stages.)

Middleware for logging a query before it is processed.
(Various other middleware functions log the query as well in different stages.)
raw docstring

metabase.query-processor.middleware.mbql-to-native

Middleware responsible for converting MBQL queries to native queries (by calling the driver's QP methods) so the query can then be executed.

Middleware responsible for converting MBQL queries to native queries (by calling the driver's QP methods)
so the query can then be executed.
raw docstring

metabase.query-processor.middleware.normalize-query

Middleware that converts a query into a normalized, canonical form.

Middleware that converts a query into a normalized, canonical form.
raw docstring

metabase.query-processor.middleware.parameters

Middleware for substituting parameters in queries.

Middleware for substituting parameters in queries.
raw docstring

metabase.query-processor.middleware.parameters.dates

Shared code for handling datetime parameters, used by both MBQL and native params implementations.

Shared code for handling datetime parameters, used by both MBQL and native params implementations.
raw docstring
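
A relative date parameter like the ones this namespace handles can be sketched as a function from a parameter string to a concrete start/end range. This is an illustrative Python sketch; the `past<N>days` grammar here is a simplified stand-in for the real parameter syntax:

```python
# Hypothetical sketch: turn a relative date parameter into a date range.
from datetime import date, timedelta
import re

def date_range(param, today):
    """Resolve a "past<N>days" parameter to an inclusive (start, end) range."""
    m = re.fullmatch(r"past(\d+)days", param)
    if m:
        n = int(m.group(1))
        # The past N days end yesterday, relative to `today`
        return today - timedelta(days=n), today - timedelta(days=1)
    raise ValueError(f"unsupported parameter: {param}")

start, end = date_range("past30days", date(2019, 1, 31))
# start == date(2019, 1, 1), end == date(2019, 1, 30)
```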

metabase.query-processor.middleware.parameters.mbql

Code for handling parameter substitution in MBQL queries.

Code for handling parameter substitution in MBQL queries.
raw docstring

metabase.query-processor.middleware.parameters.sql

Param substitution for SQL queries. This is a new implementation, fondly referred to as 'SQL parameters 2.0', written for v0.23.0. The new implementation uses prepared statement args instead of substituting them directly into the query, and is much better-organized and better-documented.

Param substitution for *SQL* queries.
This is a new implementation, fondly referred to as 'SQL parameters 2.0', written for v0.23.0.
The new implementation uses prepared statement args instead of substituting them directly into the query,
and is much better-organized and better-documented.
raw docstring
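
The key idea of 'SQL parameters 2.0' -- emitting prepared-statement placeholders plus an args vector instead of splicing values into the SQL text -- can be sketched like this (Python for illustration; the `{{param}}` template syntax matches Metabase's native-query templates, but the function names are hypothetical):

```python
# Hypothetical sketch: each {{param}} becomes a JDBC-style "?" placeholder
# plus an entry in the prepared-statement args vector.
import re

def substitute_params(sql, params):
    args = []
    def repl(match):
        args.append(params[match.group(1)])  # record the value, in order
        return "?"                           # leave a placeholder in the SQL
    text = re.sub(r"\{\{(\w+)\}\}", repl, sql)
    return text, args

sql, args = substitute_params(
    "SELECT * FROM venues WHERE price = {{price}} AND name = {{name}}",
    {"price": 2, "name": "Lucky Pig"})
# sql  == "SELECT * FROM venues WHERE price = ? AND name = ?"
# args == [2, "Lucky Pig"]
```

Keeping values out of the SQL text is what makes this approach safe against SQL injection, which is the main motivation for the rewrite the docstring mentions.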

metabase.query-processor.middleware.permissions

Middleware for checking that the current user has permissions to run the current query.

Middleware for checking that the current user has permissions to run the current query.
raw docstring

metabase.query-processor.middleware.process-userland-query

Middleware related to doing extra steps for queries that are run via API endpoints (i.e., most of them -- as opposed to queries run internally, e.g. as part of the sync process). These include things like saving QueryExecutions and formatting the results.

Middleware related to doing extra steps for queries that are run via API endpoints (i.e., most of them -- as opposed
to queries run internally, e.g. as part of the sync process). These include things like saving QueryExecutions and
formatting the results.
raw docstring

metabase.query-processor.middleware.reconcile-breakout-and-order-by-bucketing

SQL places restrictions when using a GROUP BY clause (MBQL :breakout) in combination with an ORDER BY clause (MBQL :order-by) -- columns that appear in the ORDER BY must appear in the GROUP BY. When we apply datetime or binning bucketing in a breakout, for example cast(x AS DATE) (MBQL :datetime-field clause), we need to apply the same bucketing to instances of that Field in the order-by clause. In other words:

Bad:

SELECT count(*) FROM table GROUP BY CAST(x AS date) ORDER BY x ASC

(MBQL)

{:source-table 1 :breakout [[:datetime-field [:field-id 1] :day]] :order-by [[:asc [:field-id 1]]]}

Good:

SELECT count(*) FROM table GROUP BY CAST(x AS date) ORDER BY CAST(x AS date) ASC

(MBQL)

{:source-table 1 :breakout [[:datetime-field [:field-id 1] :day]] :order-by [[:asc [:datetime-field [:field-id 1] :day]]]}

The frontend, on the rare occasion it generates a query that explicitly specifies an order-by clause, usually will generate one that directly corresponds to the bad example above. This middleware finds these cases and rewrites the query to look like the good example.

SQL places restrictions when using a `GROUP BY` clause (MBQL `:breakout`) in combination with an `ORDER BY`
clause (MBQL `:order-by`) -- columns that appear in the `ORDER BY` must appear in the `GROUP BY`. When we apply
datetime or binning bucketing in a breakout, for example `cast(x AS DATE)` (MBQL `:datetime-field` clause), we need
to apply the same bucketing to instances of that Field in the `order-by` clause. In other words:

Bad:

  SELECT count(*)
  FROM table
  GROUP BY CAST(x AS date)
  ORDER BY x ASC

(MBQL)

   {:source-table 1
    :breakout     [[:datetime-field [:field-id 1] :day]]
    :order-by     [[:asc [:field-id 1]]]}

Good:

  SELECT count(*)
  FROM table
  GROUP BY CAST(x AS date)
  ORDER BY CAST(x AS date) ASC

(MBQL)

  {:source-table 1
   :breakout     [[:datetime-field [:field-id 1] :day]]
   :order-by     [[:asc [:datetime-field [:field-id 1] :day]]]}

The frontend, on the rare occasion it generates a query that explicitly specifies an `order-by` clause, usually will
generate one that directly corresponds to the bad example above. This middleware finds these cases and rewrites the
query to look like the good example.
raw docstring
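
The rewrite from the bad example to the good one can be sketched in Python for illustration (MBQL clauses are modeled as nested tuples; the helper names are hypothetical, not the actual middleware functions):

```python
# Hypothetical sketch: if an order-by entry references a bare field that
# appears bucketed (wrapped in :datetime-field) in the breakout, replace it
# with the bucketed form so GROUP BY and ORDER BY agree.

def field_id(clause):
    """Return the underlying field id of a bare or bucketed field clause."""
    if clause[0] == "field-id":
        return clause[1]
    if clause[0] == "datetime-field":
        return field_id(clause[1])
    return None

def reconcile(query):
    bucketed = {field_id(b): b for b in query["breakout"]
                if b[0] == "datetime-field"}
    new_order_by = []
    for direction, clause in query["order-by"]:
        if clause[0] == "field-id" and clause[1] in bucketed:
            clause = bucketed[clause[1]]  # use the bucketed form instead
        new_order_by.append((direction, clause))
    return {**query, "order-by": new_order_by}

q = {"source-table": 1,
     "breakout": [("datetime-field", ("field-id", 1), "day")],
     "order-by": [("asc", ("field-id", 1))]}
# reconcile(q)["order-by"] ==
#   [("asc", ("datetime-field", ("field-id", 1), "day"))]
```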

metabase.query-processor.middleware.resolve-driver

Middleware for resolving the appropriate driver to use for processing a query.

Middleware for resolving the appropriate driver to use for processing a query.
raw docstring

metabase.query-processor.middleware.resolve-fields

Middleware that resolves the Fields referenced by a query.

Middleware that resolves the Fields referenced by a query.
raw docstring

metabase.query-processor.middleware.resolve-joins

Middleware that fetches tables that will need to be joined, referred to by fk-> clauses, and adds information to the query about what joins should be done and how they should be performed.

Middleware that fetches tables that will need to be joined, referred to by `fk->` clauses, and adds information to
the query about what joins should be done and how they should be performed.
raw docstring

metabase.query-processor.middleware.resolve-source-table

Fetches Tables corresponding to any :source-table IDs anywhere in the query.

Fetches Tables corresponding to any `:source-table` IDs anywhere in the query.
raw docstring

metabase.query-processor.middleware.results-metadata

Middleware that stores metadata about results column types after running a query for a Card, and returns that metadata (which can be passed back to the backend when saving a Card) as well as a checksum in the API response.

Middleware that stores metadata about results column types after running a query for a Card,
and returns that metadata (which can be passed *back* to the backend when saving a Card) as well
as a checksum in the API response.
raw docstring

metabase.query-processor.middleware.store

The store middleware is responsible for initializing a fresh QP Store, which caches resolved objects for the duration of a query execution. See metabase.query-processor.store for more details.

The store middleware is responsible for initializing a fresh QP Store, which caches resolved objects for the duration
of a query execution. See `metabase.query-processor.store` for more details.
raw docstring

metabase.query-processor.middleware.validate

Middleware for checking that a normalized query is valid.

Middleware for checking that a normalized query is valid.
raw docstring

metabase.query-processor.middleware.wrap-value-literals

Middleware that wraps value literals in value/absolute-datetime/etc. clauses containing relevant type information; parses datetime string literals when appropriate.

Middleware that wraps value literals in `value`/`absolute-datetime`/etc. clauses containing relevant type
information; parses datetime string literals when appropriate.
raw docstring
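
The wrapping behavior can be sketched as follows (Python for illustration; the clause shapes are simplified stand-ins for the real MBQL clauses, not their exact form):

```python
# Hypothetical sketch: a bare literal compared against a field gets wrapped
# in a clause carrying the field's type info; datetime strings get parsed.
from datetime import date

def wrap_value(literal, field_type):
    if field_type == "type/Date" and isinstance(literal, str):
        # Parse the string literal and record its bucketing unit
        return ("absolute-datetime", date.fromisoformat(literal), "day")
    return ("value", literal, {"base_type": field_type})

# wrap_value("2019-01-01", "type/Date")
#   -> ("absolute-datetime", date(2019, 1, 1), "day")
# wrap_value(2, "type/Integer")
#   -> ("value", 2, {"base_type": "type/Integer"})
```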

metabase.query-processor.store

The Query Processor Store caches resolved Tables and Fields for the duration of a query execution. Certain middleware handles resolving things like the query's source Table and any Fields that are referenced in a query, and saves the referenced objects in the store; other middleware and driver-specific query processor implementations use functions in the store to fetch those objects as needed.

For example, a driver might be converting a Field ID clause (e.g. [:field-id 10]) to its native query language. It can fetch the underlying Metabase FieldInstance by calling field:

(qp.store/field 10) ;; get Field 10

Of course, it would be entirely possible to call (Field 10) every time you needed information about that Field, but fetching all Fields in a single pass and storing them for reuse is dramatically more efficient than fetching those Fields potentially dozens of times in a single query execution.

The Query Processor Store caches resolved Tables and Fields for the duration of a query execution. Certain middleware
handles resolving things like the query's source Table and any Fields that are referenced in a query, and saves the
referenced objects in the store; other middleware and driver-specific query processor implementations use functions
in the store to fetch those objects as needed.

For example, a driver might be converting a Field ID clause (e.g. `[:field-id 10]`) to its native query language. It
can fetch the underlying Metabase FieldInstance by calling `field`:

  (qp.store/field 10) ;; get Field 10

Of course, it would be entirely possible to call `(Field 10)` every time you needed information about that Field,
but fetching all Fields in a single pass and storing them for reuse is dramatically more efficient than fetching
those Fields potentially dozens of times in a single query execution.
raw docstring
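
The store's resolve-once, look-up-many pattern can be sketched in Python for illustration (`fetch_fields` here is a hypothetical stand-in for a single batched application-DB query):

```python
# Hypothetical sketch of the QP Store idea: resolve Fields once per query
# execution, cache them, and let later code look them up by id.

class QPStore:
    def __init__(self, fetch_fields):
        self._fetch_fields = fetch_fields
        self._fields = {}

    def store_fields(self, field_ids):
        # One batched fetch for all referenced Fields, instead of one
        # query per reference later on.
        self._fields.update(self._fetch_fields(field_ids))

    def field(self, field_id):
        return self._fields[field_id]  # KeyError if it was never resolved

def fake_fetch(ids):
    return {i: {"id": i, "name": f"field_{i}"} for i in ids}

store = QPStore(fake_fetch)
store.store_fields([10, 11])
# store.field(10)["name"] == "field_10"
```

Middleware populates the store up front; drivers then call the lookup side (`field` here, analogous to `qp.store/field` above) as many times as they like without touching the database again.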

metabase.query-processor.util

Utility functions used by the global query processor and middleware functions.

Utility functions used by the global query processor and middleware functions.
raw docstring

metabase.related

Related entities recommendations.

Related entities recommendations.
raw docstring

metabase.routes

Main Compojure routes tables. See https://github.com/weavejester/compojure/wiki/Routes-In-Detail for details about how these work. /api/ routes are in metabase.api.routes.

Main Compojure routes tables. See https://github.com/weavejester/compojure/wiki/Routes-In-Detail for details about
how these work. `/api/` routes are in `metabase.api.routes`.
raw docstring

metabase.routes.index

Logic related to loading various versions of the index.html template. The actual template lives in resources/frontend_client/index_template.html; when the frontend is built (e.g. via ./bin/build frontend) different versions that include the FE app are created as index.html, public.html, and embed.html.

Logic related to loading various versions of the index.html template. The actual template lives in
`resources/frontend_client/index_template.html`; when the frontend is built (e.g. via `./bin/build frontend`)
different versions that include the FE app are created as `index.html`, `public.html`, and `embed.html`.
raw docstring

metabase.sync

Combined functions for running the entire Metabase sync process. This delegates to a few distinct steps, which in turn are broken out even further:

  1. Sync Metadata (metabase.sync.sync-metadata)
  2. Analysis (metabase.sync.analyze)
  3. Cache Field Values (metabase.sync.field-values)

In the near future these steps will be scheduled individually, meaning those functions will be called directly instead of calling the sync-database! function to do all three at once.

Combined functions for running the entire Metabase sync process.
This delegates to a few distinct steps, which in turn are broken out even further:

1.  Sync Metadata      (`metabase.sync.sync-metadata`)
2.  Analysis           (`metabase.sync.analyze`)
3.  Cache Field Values (`metabase.sync.field-values`)

In the near future these steps will be scheduled individually, meaning those functions will
be called directly instead of calling the `sync-database!` function to do all three at once.
raw docstring
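
The three-step pipeline can be sketched as a simple ordered composition (Python for illustration; the step bodies are placeholders for the real sync-metadata, analyze, and cache-field-values implementations):

```python
# Hypothetical sketch of the sync pipeline: three steps, run in order.

def sync_metadata(database):
    return {"step": "sync-metadata", "db": database}

def analyze(database):
    return {"step": "analyze", "db": database}

def cache_field_values(database):
    return {"step": "field-values", "db": database}

def sync_database(database):
    """Run the full sync process: metadata, then analysis, then field values."""
    return [step(database) for step in (sync_metadata, analyze, cache_field_values)]

results = sync_database("sample-db")
# [r["step"] for r in results] == ["sync-metadata", "analyze", "field-values"]
```

The ordering matters: analysis depends on the metadata step having synced the Tables and Fields it inspects, and field-value caching depends on analysis having classified which Fields should have cached values.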

metabase.sync.analyze

Logic responsible for doing deep 'analysis' of the data inside a database. This is significantly more expensive than the basic sync-metadata step, and involves things like running MBQL queries and fetching values to do things like determine Table row counts and infer field special types.

Logic responsible for doing deep 'analysis' of the data inside a database.
This is significantly more expensive than the basic sync-metadata step, and involves things
like running MBQL queries and fetching values to do things like determine Table row counts
and infer field special types.
raw docstring

metabase.sync.analyze.classifiers.category

Classifier that determines whether a Field should be marked as a :type/Category and/or as a list Field based on the number of distinct values it has.

As of Metabase v0.29, the Category type no longer has any use inside of the Metabase backend; it is used only for frontend purposes (e.g. deciding which widget to show). Previously, marking something as a Category meant that its values should be cached and saved in a FieldValues object. With the changes in v0.29, this is instead managed by a column called has_field_values.

A value of list now means the values should be cached. Deciding whether a Field should be a list Field is still determined by the cardinality of the Field, like Category status. Thus it is entirely possible for a Field to be both a Category and a list Field.

Classifier that determines whether a Field should be marked as a `:type/Category` and/or as a `list` Field based on
the number of distinct values it has.

As of Metabase v0.29, the Category type no longer has any use inside of the Metabase backend; it is used
only for frontend purposes (e.g. deciding which widget to show). Previously, marking something as a Category meant
that its values should be cached and saved in a FieldValues object. With the changes in v0.29, this is instead
managed by a column called `has_field_values`.

A value of `list` now means the values should be cached. Deciding whether a Field should be a `list` Field is still
determined by the cardinality of the Field, like Category status. Thus it is entirely possible for a Field to be
both a Category and a `list` Field.
raw docstring

metabase.sync.analyze.classifiers.name

Classifier that infers the special type of a Field based on its name and base type.

Classifier that infers the special type of a Field based on its name and base type.
raw docstring

metabase.sync.analyze.classifiers.no-preview-display

Classifier that decides whether a Field should be marked 'No Preview Display'. (This means Fields are generally not shown in Table results and the like, but still shown in a single-row object detail page.)

Classifier that decides whether a Field should be marked 'No Preview Display'.
(This means Fields are generally not shown in Table results and the like, but
still shown in a single-row object detail page.)
raw docstring

metabase.sync.analyze.classifiers.text-fingerprint

Logic for inferring the special types of Text fields based on their TextFingerprints. These tests only run against Fields that don't have existing special types.

Logic for inferring the special types of *Text* fields based on their TextFingerprints.
These tests only run against Fields that *don't* have existing special types.
raw docstring

metabase.sync.analyze.classify

Analysis sub-step that takes a fingerprint for a Field and infers and saves appropriate information like special type. Each 'classifier' takes the information available to it and decides whether or not to run. We currently have the following classifiers:

  1. name: Looks at the name of a Field and infers a special type if possible
  2. no-preview-display: Looks at average length of text Field recorded in fingerprint and decides whether or not we should hide this Field
  3. category: Looks at the number of distinct values of Field and determines whether it can be a Category
  4. text-fingerprint: Looks at percentages recorded in a text Fields' TextFingerprint and infers a special type if possible

All classifier functions take two arguments, a FieldInstance and a possibly nil Fingerprint, and should return the Field with any appropriate changes (such as a new special type). If no changes are appropriate, a classifier may return nil. Errors are handled by run-classifiers below, so individual classifiers do not need to handle errors themselves.

In the future, we plan to add more classifiers, including ML ones that run offline.

Analysis sub-step that takes a fingerprint for a Field and infers and saves appropriate information like special
type. Each 'classifier' takes the information available to it and decides whether or not to run. We currently have
the following classifiers:

1.  `name`: Looks at the name of a Field and infers a special type if possible
2.  `no-preview-display`: Looks at average length of text Field recorded in fingerprint and decides whether or not we
    should hide this Field
3.  `category`: Looks at the number of distinct values of Field and determines whether it can be a Category
4.  `text-fingerprint`: Looks at percentages recorded in a text Fields' TextFingerprint and infers a special type if
    possible

All classifier functions take two arguments, a `FieldInstance` and a possibly `nil` `Fingerprint`, and should return
the Field with any appropriate changes (such as a new special type). If no changes are appropriate, a classifier may
return nil. Errors are handled by `run-classifiers` below, so individual classifiers do not need to handle errors
themselves.

In the future, we plan to add more classifiers, including ML ones that run offline.
raw docstring
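
The classifier contract described above -- take a Field and a possibly-nil fingerprint, return an updated Field or nil, with errors contained per classifier -- can be sketched in Python for illustration (the classifier bodies here are illustrative stand-ins, not the real classifiers):

```python
# Hypothetical sketch of the run-classifiers contract: each classifier
# returns an updated field or None; one classifier's error must not
# prevent the others from running.

def classify_by_name(field, fingerprint):
    if field["name"].endswith("_email"):
        return {**field, "special_type": "type/Email"}
    return None  # no change appropriate

def failing_classifier(field, fingerprint):
    raise RuntimeError("boom")

def run_classifiers(field, fingerprint, classifiers):
    for classifier in classifiers:
        try:
            result = classifier(field, fingerprint)
            if result is not None:
                field = result  # carry changes forward to later classifiers
        except Exception:
            pass  # errors are contained here, not in each classifier
    return field

out = run_classifiers({"name": "user_email"}, None,
                      [failing_classifier, classify_by_name])
# out == {"name": "user_email", "special_type": "type/Email"}
```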

metabase.sync.analyze.fingerprint

Analysis sub-step that takes a sample of values for a Field and saves a non-identifying fingerprint used for classification. This fingerprint is saved as a column on the Field it belongs to.

Analysis sub-step that takes a sample of values for a Field and saves a non-identifying fingerprint
used for classification. This fingerprint is saved as a column on the Field it belongs to.
raw docstring

metabase.sync.analyze.fingerprint.fingerprinters

Non-identifying fingerprinters for various field types.

Non-identifying fingerprinters for various field types.
raw docstring
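
What "non-identifying" means in practice is that a fingerprint keeps only aggregate statistics, never raw values. A minimal text-field example, sketched in Python for illustration (the stat names are modeled on the fingerprint shape but are not the exact keys used by Metabase):

```python
# Hypothetical sketch of a non-identifying text fingerprint: only aggregate
# statistics leave this function, never the sampled values themselves.

def text_fingerprint(values):
    non_null = [v for v in values if v is not None]
    return {
        "average-length": (sum(len(v) for v in non_null) / len(non_null)
                           if non_null else 0),
        "percent-email": (sum("@" in v for v in non_null) / len(non_null)
                          if non_null else 0),
    }

fp = text_fingerprint(["a@b.com", "hello", None])
# fp == {"average-length": 6.0, "percent-email": 0.5}
```

Stats like `percent-email` are what later classifiers (e.g. the text-fingerprint classifier above) consume to infer special types.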

metabase.sync.analyze.fingerprint.insights

Deeper statistical analysis of results.

Deeper statistical analysis of results.
raw docstring

metabase.sync.analyze.query-results

Analysis similar to what we do as part of the Sync process, but aimed at analyzing and introspecting query results. The current focus of this namespace is column metadata from the results of a query. Going forward this is likely to extend beyond metadata about individual columns to metadata about the query results as a whole.

Analysis similar to what we do as part of the Sync process, but aimed at analyzing and introspecting query
results. The current focus of this namespace is column metadata from the results of a query. Going forward
this is likely to extend beyond metadata about individual columns to metadata about the query results as a
whole.
raw docstring

metabase.sync.analyze.table-row-count

Logic for updating a Table's row count by running appropriate MBQL queries.

Logic for updating a Table's row count by running appropriate MBQL queries.
raw docstring

metabase.sync.fetch-metadata

Fetch metadata functions fetch 'snapshots' of the schema for a data warehouse database, including information about tables, schemas, and fields, and their types. For example, with SQL databases, these functions use the JDBC DatabaseMetaData to get this information.

Fetch metadata functions fetch 'snapshots' of the schema for a data warehouse database, including information about
tables, schemas, and fields, and their types. For example, with SQL databases, these functions use the JDBC
DatabaseMetaData to get this information.
raw docstring

metabase.sync.field-values

Logic for updating cached FieldValues for fields in a database.

Logic for updating cached FieldValues for fields in a database.
raw docstring

metabase.sync.sync-metadata

Logic responsible for syncing the metadata for an entire database. Delegates to different subtasks:

  1. Sync tables (metabase.sync.sync-metadata.tables)
  2. Sync fields (metabase.sync.sync-metadata.fields)
  3. Sync FKs (metabase.sync.sync-metadata.fks)
  4. Sync Metabase Metadata table (metabase.sync.sync-metadata.metabase-metadata)
Logic responsible for syncing the metadata for an entire database.
Delegates to different subtasks:

1.  Sync tables (`metabase.sync.sync-metadata.tables`)
2.  Sync fields (`metabase.sync.sync-metadata.fields`)
3.  Sync FKs    (`metabase.sync.sync-metadata.fks`)
4.  Sync Metabase Metadata table (`metabase.sync.sync-metadata.metabase-metadata`)
raw docstring

metabase.sync.sync-metadata.fields

Logic for updating Metabase Field models from metadata fetched from a physical DB.

The basic idea here is to look at the metadata we get from calling describe-table on a connected database, then construct an identical set of metadata from what we have about that Table in the Metabase DB. Then we iterate over both sets of Metadata and perform whatever steps are needed to make sure the things in the DB match the things that came back from describe-table. These steps are broken out into three main parts:

  • Fetch Metadata - logic is in metabase.sync.sync-metadata.fields.fetch-metadata. Construct a map of metadata from the Metabase application database that matches the form of DB metadata about Fields in a Table. This metadata is used by the next two steps to determine what sync operations need to be performed by comparing the differences in the two sets of Metadata.

  • Sync Field instances -- logic is in metabase.sync.sync-metadata.fields.sync-instances. Make sure the Field instances in the Metabase application database match up with those in the DB metadata, creating new Fields as needed, and marking existing ones as active or inactive as appropriate.

  • Update instance metadata -- logic is in metabase.sync.sync-metadata.fields.sync-metadata. Update metadata properties of Field instances in the application database as needed -- this includes the base type, database type, special type, and comment/remark (description) properties. This primarily affects Fields that were not newly created; newly created Fields are given appropriate metadata when first synced (by sync-instances).

A note on terminology used in metabase.sync.sync-metadata.fields.* namespaces:

  • db-metadata is a set of field-metadata maps coming back from the DB (e.g. from something like JDBC DatabaseMetaData) describing the columns (or equivalent) currently present in the table (or equivalent) that we're syncing.

  • field-metadata is a map of information describing a single column currently present in the table being synced.

  • our-metadata is a set of maps of Field metadata reconstructed from the Metabase application database.

  • metabase-field is a single map of Field metadata reconstructed from the Metabase application database; there is a 1:1 correspondence between this metadata and a row in the Field table. Unlike field-metadata, these entries always have an :id associated with them (because they are present in the Metabase application DB).

Other notes:

  • In general the methods in these namespaces return the number of rows updated; these numbers are summed and used for logging purposes by higher-level sync logic.
Logic for updating Metabase Field models from metadata fetched from a physical DB.

The basic idea here is to look at the metadata we get from calling `describe-table` on a connected database, then
construct an identical set of metadata from what we have about that Table in the Metabase DB. Then we iterate over
both sets of Metadata and perform whatever steps are needed to make sure the things in the DB match the things that
came back from `describe-table`. These steps are broken out into three main parts:

* Fetch Metadata - logic is in `metabase.sync.sync-metadata.fields.fetch-metadata`. Construct a map of metadata from
  the Metabase application database that matches the form of DB metadata about Fields in a Table. This metadata is
  used by the next two steps to determine what sync operations need to be performed by comparing the differences in
  the two sets of Metadata.

* Sync Field instances -- logic is in `metabase.sync.sync-metadata.fields.sync-instances`. Make sure the `Field`
  instances in the Metabase application database match up with those in the DB metadata, creating new Fields as
  needed, and marking existing ones as active or inactive as appropriate.

* Update instance metadata -- logic is in `metabase.sync.sync-metadata.fields.sync-metadata`. Update metadata
  properties of `Field` instances in the application database as needed -- this includes the base type, database type,
  special type, and comment/remark (description) properties. This primarily affects Fields that were not newly
  created; newly created Fields are given appropriate metadata when first synced (by `sync-instances`).

A note on terminology used in `metabase.sync.sync-metadata.fields.*` namespaces:

* `db-metadata` is a set of `field-metadata` maps coming back from the DB (e.g. from something like JDBC
  `DatabaseMetaData`) describing the columns (or equivalent) currently present in the table (or equivalent) that we're
  syncing.

* `field-metadata` is a map of information describing a single column currently present in the table being synced.

* `our-metadata` is a set of maps of Field metadata reconstructed from the Metabase application database.

* `metabase-field` is a single map of Field metadata reconstructed from the Metabase application database; there is
  a 1:1 correspondence between this metadata and a row in the `Field` table. Unlike `field-metadata`, these entries
  always have an `:id` associated with them (because they are present in the Metabase application DB).

Other notes:

* In general the methods in these namespaces return the number of rows updated; these numbers are summed and used
  for logging purposes by higher-level sync logic.
raw docstring
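
The comparison at the heart of this process -- diffing `db-metadata` against `our-metadata` to decide what to create and what to retire -- can be sketched in Python for illustration (matching columns by lowercased name is an assumption of this sketch, and the result keys are hypothetical):

```python
# Hypothetical sketch: columns present in db-metadata but not in our-metadata
# should become new (or reactivated) Fields; Fields we have that the DB no
# longer reports should be marked inactive.

def diff_fields(db_metadata, our_metadata):
    db_names = {m["name"].lower() for m in db_metadata}
    our_names = {m["name"].lower() for m in our_metadata}
    return {"to-create": sorted(db_names - our_names),
            "to-retire": sorted(our_names - db_names)}

db_meta = [{"name": "ID"}, {"name": "price"}]
our_meta = [{"name": "id"}, {"name": "old_col"}]
# diff_fields(db_meta, our_meta) ==
#   {"to-create": ["price"], "to-retire": ["old_col"]}
```

The real implementation compares much richer metadata (types, positions, nested fields), but the shape of the decision -- set differences between the two metadata sets -- is the same.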

metabase.sync.sync-metadata.fields.common

Schemas and functions shared by different metabase.sync.sync-metadata.fields.* namespaces.

Schemas and functions shared by different `metabase.sync.sync-metadata.fields.*` namespaces.
raw docstring

metabase.sync.sync-metadata.fields.fetch-metadata

Logic for constructing a map of metadata from the Metabase application database that matches the form of DB metadata about Fields in a Table, and for fetching the DB metadata itself. This metadata is used by the logic in other metabase.sync.sync-metadata.fields.* namespaces to determine what sync operations need to be performed by comparing the differences in the two sets of Metadata.

Logic for constructing a map of metadata from the Metabase application database that matches the form of DB metadata
about Fields in a Table, and for fetching the DB metadata itself. This metadata is used by the logic in other
`metabase.sync.sync-metadata.fields.*` namespaces to determine what sync operations need to be performed by
comparing the differences in the two sets of Metadata.
raw docstring

metabase.sync.sync-metadata.fields.sync-instances

Logic for syncing the instances of Field in the Metabase application DB with the set of Fields in the DB metadata. Responsible for creating new instances of Field as needed, and marking existing ones as active or inactive as needed. Recursively handles nested Fields.

All nested Fields recursion is handled in one place, by the main entrypoint (sync-instances!) and helper functions sync-nested-field-instances! and sync-nested-fields-of-one-field!. All other functions in this namespace should ignore nested fields entirely; they will be invoked with those Fields as appropriate.

Logic for syncing the instances of `Field` in the Metabase application DB with the set of Fields in the DB metadata.
Responsible for creating new instances of `Field` as needed, and marking existing ones as active or inactive as
needed. Recursively handles nested Fields.

All nested Fields recursion is handled in one place, by the main entrypoint (`sync-instances!`) and helper
functions `sync-nested-field-instances!` and `sync-nested-fields-of-one-field!`. All other functions in this
namespace should ignore nested fields entirely; they will be invoked with those Fields as appropriate.
raw docstring

metabase.sync.sync-metadata.fields.sync-metadata

Logic for updating metadata properties of Field instances in the application database as needed -- this includes the base type, database type, special type, and comment/remark (description) properties. This primarily affects Fields that were not newly created; newly created Fields are given appropriate metadata when first synced.

Logic for updating metadata properties of `Field` instances in the application database as needed -- this includes
the base type, database type, special type, and comment/remark (description) properties. This primarily affects
Fields that were not newly created; newly created Fields are given appropriate metadata when first synced.
raw docstring

metabase.sync.sync-metadata.fks

Logic for updating FK properties of Fields from metadata fetched from a physical DB.

Logic for updating FK properties of Fields from metadata fetched from a physical DB.
raw docstring

metabase.sync.sync-metadata.metabase-metadata

Logic for syncing the special _metabase_metadata table, which is a way for datasets such as the Sample Dataset to specify properties such as special types that should be applied during sync.

Currently, this is only used by the Sample Dataset, but theoretically in the future we could add additional sample datasets and preconfigure them by populating this Table; or 3rd-party applications or users can add this table to their database for an enhanced Metabase experience out of the box.

Logic for syncing the special `_metabase_metadata` table, which is a way for datasets such as the Sample Dataset to
specify properties such as special types that should be applied during sync.

Currently, this is only used by the Sample Dataset, but theoretically in the future we could add additional sample
datasets and preconfigure them by populating this Table; or 3rd-party applications or users can add this table to
their database for an enhanced Metabase experience out of the box.
raw docstring

metabase.sync.sync-metadata.tables

Logic for updating Metabase Table models from metadata fetched from a physical DB.

Logic for updating Metabase Table models from metadata fetched from a physical DB.
raw docstring

metabase.task

Background task scheduling via Quartzite. Individual tasks are defined in metabase.task.*.

Regarding Task Initialization:

The most appropriate way to initialize tasks in any metabase.task.* namespace is to implement the task-init function which accepts zero arguments. This function is dynamically resolved and called exactly once when the application goes through normal startup procedures. Inside this function you can do any work needed and add your task to the scheduler as usual via schedule-task!.

Quartz JavaDoc

Find the JavaDoc for Quartz here: http://www.quartz-scheduler.org/api/2.3.0/index.html

Background task scheduling via Quartzite. Individual tasks are defined in `metabase.task.*`.

## Regarding Task Initialization:

The most appropriate way to initialize tasks in any `metabase.task.*` namespace is to implement the `task-init`
function which accepts zero arguments. This function is dynamically resolved and called exactly once when the
application goes through normal startup procedures. Inside this function you can do any work needed and add your
task to the scheduler as usual via `schedule-task!`.

## Quartz JavaDoc

Find the JavaDoc for Quartz here: http://www.quartz-scheduler.org/api/2.3.0/index.html
raw docstring
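A minimal, self-contained sketch of the initialization mechanism this docstring describes: at startup, each `metabase.task.*` namespace is checked for an arity-0 `task-init` var, which is resolved dynamically and called exactly once. The namespace name and return value here are illustrative, not Metabase's actual startup code.

```clojure
;; Hypothetical task namespace following the `task-init` convention.
(ns metabase.task.example-init)

(defn task-init
  "Called exactly once at startup. In a real task namespace this would
  build a Quartzite job + trigger and register them via `schedule-task!`;
  here it just returns a marker value."
  []
  :scheduled)

;; What the startup machinery effectively does for each task namespace:
;; resolve `task-init` dynamically and invoke it if present.
(when-let [init-fn (ns-resolve 'metabase.task.example-init 'task-init)]
  (init-fn))
```

Resolving the var with `ns-resolve` (rather than requiring a direct call) is what lets task namespaces opt in simply by defining the function.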

metabase.task.follow-up-emails

Tasks which follow up with Metabase users.

Tasks which follow up with Metabase users.
raw docstring

No vars found in this namespace.

metabase.task.send-anonymous-stats

Contains a Metabase task which periodically sends anonymous usage information to the Metabase team.

Contains a Metabase task which periodically sends anonymous usage information to the Metabase team.
raw docstring

No vars found in this namespace.

metabase.task.send-pulses

Tasks related to running Pulses.

Tasks related to running `Pulses`.
raw docstring

No vars found in this namespace.

metabase.task.sync-databases

Scheduled tasks for syncing metadata/analyzing and caching FieldValues for connected Databases.

Scheduled tasks for syncing metadata/analyzing and caching FieldValues for connected Databases.
raw docstring

metabase.task.task-history-cleanup

No vars found in this namespace.

metabase.task.upgrade-checks

Contains a Metabase task which periodically checks for the availability of new Metabase versions.

Contains a Metabase task which periodically checks for the availability of new Metabase versions.
raw docstring

No vars found in this namespace.

metabase.types

The Metabase Hierarchical Type System (MHTS). This is a hierarchy where types derive from one or more parent types, which in turn derive from their own parents. This makes it possible to add new types without needing to add corresponding mappings in the frontend or other places. For example, a Database may want a type called something like :type/CaseInsensitiveText; we can add this type as a derivative of :type/Text and everywhere else can continue to treat it as such until further notice.

The Metabase Hierarchical Type System (MHTS). This is a hierarchy where types derive from one or more parent types,
which in turn derive from their own parents. This makes it possible to add new types without needing to add
corresponding mappings in the frontend or other places. For example, a Database may want a type called something
like `:type/CaseInsensitiveText`; we can add this type as a derivative of `:type/Text` and everywhere else can
continue to treat it as such until further notice.
raw docstring
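The hierarchy idea from the docstring can be sketched with Clojure's built-in `derive`/`isa?` machinery, which is the same mechanism a keyword hierarchy like this rests on. The two type keywords come straight from the example above:

```clojure
;; Make the new type a descendant of :type/Text in the global hierarchy.
(derive :type/CaseInsensitiveText :type/Text)

;; Existing code that asks "is this textual?" keeps working unchanged:
(isa? :type/CaseInsensitiveText :type/Text) ;; => true
```

Because `isa?` walks the hierarchy, no frontend mapping or dispatch table needs to enumerate the new type.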

metabase.util.cron

Utility functions for converting frontend schedule dictionaries to cron strings and vice versa. See http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/crontrigger.html#format for details on cron format.

Utility functions for converting frontend schedule dictionaries to cron strings and vice versa.
See http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/crontrigger.html#format for details on cron
format.
raw docstring
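A hedged sketch of the schedule-map → cron-string direction. The map keys (`:schedule_type`, `:schedule_hour`, `:schedule_day`) mirror the shape of the frontend schedule dictionaries mentioned above, but the exact key set and function name in Metabase may differ; the output follows the 7-field Quartz cron format (seconds minutes hours day-of-month month day-of-week year) linked in the docstring.

```clojure
;; Illustrative conversion; only a few schedule types shown.
(defn schedule-map->cron-string
  [{:keys [schedule_type schedule_hour schedule_day]}]
  (case schedule_type
    "hourly" "0 0 * * * ? *"
    "daily"  (format "0 0 %d * * ? *" schedule_hour)
    ;; Quartz requires '?' in exactly one of day-of-month/day-of-week.
    "weekly" (format "0 0 %d ? * %s *" schedule_hour schedule_day)))

(schedule-map->cron-string {:schedule_type "daily" :schedule_hour 9})
;; => "0 0 9 * * ? *"
```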

metabase.util.embed

Utility functions for public links and embedding.

Utility functions for public links and embedding.
raw docstring

metabase.util.encryption

Utility functions for encrypting and decrypting strings using AES-256 CBC + HMAC SHA-512 and the MB_ENCRYPTION_SECRET_KEY env var.

Utility functions for encrypting and decrypting strings using AES256 CBC + HMAC SHA512 and the
`MB_ENCRYPTION_SECRET_KEY` env var.
raw docstring

metabase.util.infer-spaces

Logic for automatically inferring where spaces should go in table names. Ported from https://stackoverflow.com/questions/8870261/how-to-split-text-without-spaces-into-list-of-words/11642687#11642687.

Logic for automatically inferring where spaces should go in table names. Ported from
https://stackoverflow.com/questions/8870261/how-to-split-text-without-spaces-into-list-of-words/11642687#11642687.
raw docstring
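The linked Stack Overflow answer uses dynamic programming: for each prefix of the input, remember the best split found so far, extending splits whose final segment is a known word. A minimal sketch of that idea, with a tiny dictionary standing in for the word-frequency list (the real algorithm also scores candidate splits by word probability; this version just takes the first valid split):

```clojure
;; Toy dictionary; the real implementation uses word-frequency data.
(def words #{"sample" "data" "set" "dataset"})

(defn split-words
  "Return a map from prefix length to a vector of words splitting that
  prefix of `s`, or nil where no split exists."
  [s]
  (let [n (count s)]
    (reduce (fn [best i]
              (assoc best i
                     (some (fn [j]
                             (when-let [prefix (get best j)]
                               (let [w (subs s j i)]
                                 (when (contains? words w)
                                   (conj prefix w)))))
                           (range i))))
            {0 []} ;; the empty prefix splits trivially
            (range 1 (inc n)))))

(get (split-words "sampledataset") 13)
;; => ["sample" "dataset"]
```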

metabase.util.password

Utility functions for checking passwords against hashes and for making sure passwords match complexity requirements.

Utility functions for checking passwords against hashes and for making sure passwords match complexity requirements.
raw docstring

metabase.util.pretty

Helpers to make it easier to nicely print our custom record types in the REPL or elsewhere.

Helpers to make it easier to nicely print our custom record types in the REPL or elsewhere.
raw docstring

metabase.util.stats

Functions which summarize the usage of an instance.

Functions which summarize the usage of an instance.
raw docstring

metabase.util.ui-logic

This namespace has Clojure implementations of logic currently found in the UI that is also needed by the backend. Ideally, the code here would be refactored so that the logic isn't needed in two places.

This namespace has Clojure implementations of logic currently found in the UI that is also needed by the
backend. Ideally, the code here would be refactored so that the logic isn't needed in two places.
raw docstring

metabase.util.urls

Utility functions for generating the frontend URLs that correspond to various user-facing Metabase objects, like Cards or Dashboards. This is intended as the central place for all such URL-generation activity, so if frontend routes change, only this file need be changed on the backend.

Functions for generating URLs not related to Metabase objects generally do not belong here, unless they are used in many places in the codebase; one-off URL-generation functions should go in the same namespaces or modules where they are used.

Utility functions for generating the frontend URLs that correspond to various user-facing Metabase *objects*, like Cards or Dashboards.
This is intended as the central place for all such URL-generation activity, so if frontend routes change, only this file need be changed
on the backend.

Functions for generating URLs not related to Metabase *objects* generally do not belong here, unless they are used in many places in the
codebase; one-off URL-generation functions should go in the same namespaces or modules where they are used.
raw docstring
