(copy-job service
          sources
          destination
          & {:keys [create-disposition write-disposition]
             :or {create-disposition :needed write-disposition :empty}})
(create-dataset service
                {:keys [project-id dataset-id friendly-name location description
                        table-lifetime-millis]
                 :as dataset})
(create-table service {:keys [project-id dataset-id table-id] :as table} fields)
Fields: a sequence of maps describing the table schema, e.g. [{:name "foo" :type :record :fields [{:name "bar" :type :integer}]}]
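A minimal usage sketch, assuming the functions on this page are referred into the current namespace (project, dataset and table ids are placeholders):

```clojure
;; create a table with a nested record column
(def bq (service {:project-id "my-project"}))
(create-table bq
              {:project-id "my-project" :dataset-id "my_dataset" :table-id "events"}
              [{:name "foo" :type :record :fields [{:name "bar" :type :integer}]}])
```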
(dataset service {:keys [project-id dataset-id] :as dataset})
(datasets service)
(delete-dataset service {:keys [project-id dataset-id delete-contents?] :as dataset})
(delete-table service {:keys [project-id dataset-id table-id] :as table})
(execute-job service job)
(extract-job service
             table
             destination-uri
             & {:keys [format compression] :or {format :json compression :gzip}})
Extracts data from BigQuery into a Google Cloud Storage location. The table argument must be a map with project-id, dataset-id and table-id.
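A minimal sketch relying on the :json and :gzip defaults (bucket and ids are placeholders):

```clojure
;; export a table to Cloud Storage as gzipped JSON (the defaults)
(def bq (service))
(extract-job bq
             {:project-id "my-project" :dataset-id "my_dataset" :table-id "events"}
             "gs://my-bucket/exports/events-*.json.gz")
```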
(insert-all service
            {:keys [project-id dataset-id table-id skip-invalid? template-suffix row-id]
             :as table}
            rows)
Performs a streaming insert of rows. row-id can be a function returning the unique identity of a row (e.g. row-hash). template-suffix can be used to create tables according to a template.
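A minimal sketch using row-hash for de-duplication; the row shape (maps keyed by column name) and the ids are assumptions:

```clojure
;; stream rows with row-hash as the per-row identity
(def bq (service))
(insert-all bq
            {:project-id "my-project"
             :dataset-id "my_dataset"
             :table-id   "events"
             :row-id     row-hash}
            [{"foo" {"bar" 1}}
             {"foo" {"bar" 2}}])
```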
(job service {:keys [project-id job-id] :as job})
(load-job service
          table
          {:keys [format create-disposition write-disposition max-bad-records schema]}
          uris)
Loads data from Cloud Storage URIs into the specified table. The table argument must be a map with project-id, dataset-id and table-id.

Options:
  `create-disposition` controls whether tables are created if necessary, or assumed to have been created already (default).
  `write-disposition` controls whether data should :append (default), :truncate, or :empty to fail if the table already exists.
  `format` :json or :csv.
  `schema` sequence describing the table schema, e.g. [{:name "foo" :type :record :fields [{:name "bar" :type :integer}]}]
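A minimal sketch combining the options above (ids and uris are placeholders):

```clojure
;; load JSON files from Cloud Storage, replacing any existing data
(def bq (service))
(load-job bq
          {:project-id "my-project" :dataset-id "my_dataset" :table-id "events"}
          {:format             :json
           :create-disposition :needed
           :write-disposition  :truncate
           :schema             [{:name "foo" :type :record
                                 :fields [{:name "bar" :type :integer}]}]}
          ["gs://my-bucket/data/part-*.json"])
```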
(query service
       query
       {:keys [max-results dry-run? max-wait-millis use-cache? default-dataset]
        :as query-opts})
Executes a query. BigQuery will create a query job and block for the specified timeout. If the query returns within that time, the results will be returned; otherwise they need to be retrieved separately using query-results. The job's status can be checked using the job function and completed?
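A minimal sketch (dataset and table names are placeholders):

```clojure
;; run a query, waiting up to 10 seconds for results
(def bq (service))
(query bq
       "SELECT foo.bar FROM my_dataset.events"
       {:max-wait-millis 10000
        :use-cache?      true})
```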
(query-job service
           query
           {:keys [create-disposition write-disposition large-results? dry-run?
                   destination-table default-dataset use-cache? flatten-results?
                   use-legacy-sql? priority udfs]})
(query-results service
               {:keys [project-id job-id] :as job}
               & {:keys [max-wait-millis] :as opts})
Retrieves results for a query job. Will throw an exception unless the job has completed successfully; check using the job and completed? functions.
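A minimal sketch guarding the call as the docstring suggests, here via successful? (the job-id is a placeholder):

```clojure
;; fetch results only once the job has finished successfully
(def bq (service))
(let [j {:project-id "my-project" :job-id "job_abc123"}]
  (when (successful? (job bq j))
    (query-results bq j :max-wait-millis 5000)))
```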
(query-results-seq {:keys [results schema] :as query-results})
Takes a query result and coerces the raw row sequences into maps according to the schema, coercing values according to their type. E.g. converts query results of {:results (("500")) :schema ({:name "col" :type :integer})} into ({"col" 500} ...)
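The docstring's example, as a runnable call:

```clojure
(query-results-seq {:results '(("500"))
                    :schema  '({:name "col" :type :integer})})
;; => ({"col" 500})
```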
(row-hash m & {:keys [bits] :or {bits 128}})
Creates a hash suitable for identifying duplicate rows; useful when streaming to avoid inserting duplicates.
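A minimal sketch of both arities (the row map is a placeholder):

```clojure
(row-hash {"foo" 1 "bar" "baz"})          ;; 128-bit hash (the default)
(row-hash {"foo" 1 "bar" "baz"} :bits 64) ;; narrower hash
```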
(running? job)
(service)
(service {:keys [project-id] :as options})
(successful? job)
(table service {:keys [project-id dataset-id table-id] :as table})
(table-id {:keys [project-id dataset-id table-id]})
(tables service {:keys [project-id dataset-id] :as dataset})
Returns a sequence of table-ids. For complete table information (schema, location, size etc.) you'll need to also use the `table` function.
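A sketch of the pattern the docstring describes, assuming each returned table-id is a map accepted by `table` (ids are placeholders):

```clojure
;; list table ids, then fetch full metadata for each
(def bq (service))
(->> (tables bq {:project-id "my-project" :dataset-id "my_dataset"})
     (map (partial table bq)))
```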
(to-clojure _)
(user-defined-function udf)
Creates a User Defined Function suitable for use in BigQuery queries. Can be a Google Cloud Storage uri (e.g. gs://bucket/path), or an inline JavaScript code blob.
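A minimal sketch of both forms (the uri and function body are placeholders):

```clojure
;; from a Cloud Storage uri...
(user-defined-function "gs://my-bucket/udfs/transform.js")
;; ...or an inline JavaScript blob
(user-defined-function "function transform(row, emit) { emit({out: row.col}); }")
```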