
gclouj.bigquery


cell-coercions


copy-job

(copy-job service
          sources
          destination
          &
          {:keys [create-disposition write-disposition]
           :or {create-disposition :needed write-disposition :empty}})
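
A hypothetical invocation, assuming the project, dataset, and table names below, and assuming sources/destination are table maps of :project-id, :dataset-id and :table-id (inferred from the other signatures in this namespace):

```clojure
(require '[gclouj.bigquery :as bq])

(let [service (bq/service {:project-id "my-project"})]
  (bq/copy-job service
               ;; sources: a sequence of table maps
               [{:project-id "my-project" :dataset-id "staging" :table-id "events_raw"}]
               ;; destination: a single table map
               {:project-id "my-project" :dataset-id "warehouse" :table-id "events"}
               :create-disposition :needed     ;; default: create table if necessary
               :write-disposition  :truncate)) ;; overwrite any existing rows
```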

create-dataset

(create-dataset service
                {:keys [project-id dataset-id friendly-name location description
                        table-lifetime-millis]
                 :as dataset})
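
A minimal sketch using the keys named in the signature; the project id, location, and lifetime values are placeholders:

```clojure
(require '[gclouj.bigquery :as bq])

(let [service (bq/service)]
  (bq/create-dataset service
                     {:project-id "my-project"
                      :dataset-id "analytics"
                      :friendly-name "Analytics"
                      :location "EU"
                      :description "Derived analytics tables"
                      ;; expire tables after 30 days
                      :table-lifetime-millis (* 30 24 60 60 1000)}))
```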

create-dispositions


create-table

(create-table service {:keys [project-id dataset-id table-id] :as table} fields)

Fields: sequence of fields representing the table schema. e.g. [{:name "foo" :type :record :fields [{:name "bar" :type :integer}]}]

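Combining the signature with the schema example from the docstring, a sketch (the names are placeholders):

```clojure
(require '[gclouj.bigquery :as bq])

(let [service (bq/service)]
  (bq/create-table service
                   {:project-id "my-project" :dataset-id "analytics" :table-id "users"}
                   ;; schema: a sequence of fields; :record fields nest via :fields
                   [{:name "id"   :type :integer}
                    {:name "name" :type :string}
                    {:name "address" :type :record
                     :fields [{:name "city" :type :string}
                              {:name "zip"  :type :string}]}]))
```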

dataset

(dataset service {:keys [project-id dataset-id] :as dataset})

datasets

(datasets service)

delete-dataset

(delete-dataset service
                {:keys [project-id dataset-id delete-contents?] :as dataset})

delete-table

(delete-table service {:keys [project-id dataset-id table-id] :as table})

execute-job

(execute-job service job)

extract-compression


extract-format


extract-job

(extract-job service
             table
             destination-uri
             &
             {:keys [format compression] :or {format :json compression :gzip}})

Extracts data from BigQuery into a Google Cloud Storage location. Table argument needs to be a map with project-id, dataset-id and table-id.

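A sketch of an export to Cloud Storage; the bucket, path, and table names are placeholders:

```clojure
(require '[gclouj.bigquery :as bq])

(let [service (bq/service)]
  (bq/extract-job service
                  {:project-id "my-project" :dataset-id "analytics" :table-id "users"}
                  ;; a wildcard URI lets BigQuery shard large exports across files
                  "gs://my-bucket/exports/users-*.json"
                  :format      :json   ;; default
                  :compression :gzip)) ;; default
```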

field-value->clojure (multimethod)


insert-all

(insert-all service
            {:keys [project-id dataset-id table-id skip-invalid? template-suffix
                    row-id]
             :as table}
            rows)

Performs a streaming insert of rows. row-id can be a function to return the unique identity of the row (e.g. row-hash). Template suffix can be used to create tables according to a template.

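A hedged sketch: passing row-hash as :row-id follows the docstring's hint, but the string-keyed row maps and the table names are assumptions:

```clojure
(require '[gclouj.bigquery :as bq])

(let [service (bq/service)]
  (bq/insert-all service
                 {:project-id    "my-project"
                  :dataset-id    "analytics"
                  :table-id      "events"
                  :skip-invalid? false
                  ;; derive a stable row identity to de-duplicate retried inserts
                  :row-id        bq/row-hash}
                 [{"user" "alice" "action" "login"}
                  {"user" "bob"   "action" "logout"}]))
```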

job

(job service {:keys [project-id job-id] :as job})

load-job

(load-job service
          table
          {:keys [format create-disposition write-disposition max-bad-records
                  schema]}
          uris)

Loads data from Cloud Storage URIs into the specified table.
Table argument needs to be a map with project-id, dataset-id and table-id.
Options:
:create-disposition  controls whether tables are created if necessary, or assumed to have been created already (default).
:write-disposition   controls whether data should :append (default), :truncate, or :empty to fail if the table exists.
:format              :json or :csv
:schema              sequence describing the table schema, e.g. [{:name "foo" :type :record :fields [{:name "bar" :type :integer}]}]
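A sketch of a load from Cloud Storage using the options above; bucket, table names, and schema are placeholders:

```clojure
(require '[gclouj.bigquery :as bq])

(let [service (bq/service)]
  (bq/load-job service
               {:project-id "my-project" :dataset-id "analytics" :table-id "events"}
               {:format             :json
                :create-disposition :needed   ;; create the table if it doesn't exist
                :write-disposition  :append   ;; default
                :max-bad-records    0
                :schema [{:name "user"   :type :string}
                         {:name "action" :type :string}]}
               ["gs://my-bucket/imports/events-*.json"]))
```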

query

(query service
       query
       {:keys [max-results dry-run? max-wait-millis use-cache? default-dataset]
        :as query-opts})

Executes a query. BigQuery will create a Query Job and block for the specified timeout. If the query returns within that time, the results are returned; otherwise they need to be retrieved separately using query-results. The status of the job can be checked using the job function together with completed?

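A sketch of a blocking query; the SQL, timeout, and the shape of the returned value are placeholders or assumptions:

```clojure
(require '[gclouj.bigquery :as bq])

(let [service (bq/service)]
  ;; blocks for up to 10 seconds; if the job hasn't finished by then,
  ;; fetch results later with query-results
  (bq/query service
            "SELECT user, COUNT(*) AS n FROM [analytics.events] GROUP BY user"
            {:max-wait-millis 10000
             :use-cache?      true}))
```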

query-job

(query-job service
           query
           {:keys [create-disposition write-disposition large-results? dry-run?
                   destination-table default-dataset use-cache? flatten-results?
                   use-legacy-sql? priority udfs]})

query-option (multimethod)


query-results

(query-results service
               {:keys [project-id job-id] :as job}
               &
               {:keys [max-wait-millis] :as opts})

Retrieves results for a Query job. Will throw exceptions unless Job has completed successfully. Check using job and completed? functions.


query-results-seq

(query-results-seq {:keys [results schema] :as query-results})

Takes a query result and coerces the results from raw sequences into maps according to the schema, coercing values according to their type. e.g. converts query results of {:results (("500")) :schema ({:name "col" :type :integer})} into ({"col" 500} ...)
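The docstring's own example, written out as a REPL call:

```clojure
(require '[gclouj.bigquery :as bq])

;; raw string cell is coerced to an integer keyed by the column name
(bq/query-results-seq {:results '(("500"))
                       :schema  '({:name "col" :type :integer})})
;; => ({"col" 500})
```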

row-hash

(row-hash m & {:keys [bits] :or {bits 128}})

Creates a hash suitable for identifying duplicate rows, useful when streaming to avoid inserting duplicate rows.

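A sketch based on the signature; the row maps are placeholders:

```clojure
(require '[gclouj.bigquery :as bq])

;; equal maps hash equally, so a retried insert carries the same row id
(bq/row-hash {"user" "alice" "action" "login"})          ;; 128-bit hash (default)
(bq/row-hash {"user" "alice" "action" "login"} :bits 64) ;; narrower hash
```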

running?

(running? job)

service

(service)
(service {:keys [project-id] :as options})
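
A sketch of both arities; that the no-argument form picks up ambient (application-default) credentials and project is an assumption:

```clojure
(require '[gclouj.bigquery :as bq])

(def default-service (bq/service))                         ;; ambient project/credentials (assumed)
(def scoped-service  (bq/service {:project-id "my-project"}))
```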

successful?

(successful? job)

table

(table service {:keys [project-id dataset-id table-id] :as table})

table-id

(table-id {:keys [project-id dataset-id table-id]})

tables

(tables service {:keys [project-id dataset-id] :as dataset})

Returns a sequence of table-ids. For complete table information (schema, location, size etc.) you'll need to also use the table function.


ToClojure (protocol)

to-clojure

(to-clojure _)

user-defined-function

(user-defined-function udf)

Creates a User Defined Function suitable for use in BigQuery queries. Can be a Google Cloud Storage URI (e.g. gs://bucket/path) or an inline JavaScript code blob.

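A sketch of both forms from the docstring; the bucket path and JavaScript body are placeholders:

```clojure
(require '[gclouj.bigquery :as bq])

;; from a Cloud Storage object
(def gcs-udf    (bq/user-defined-function "gs://my-bucket/udfs/parse.js"))

;; or as an inline JavaScript code blob
(def inline-udf (bq/user-defined-function
                 "function parse(row, emit) { emit({out: row.input}); }"))
```

Either value can then presumably be supplied via query-job's :udfs option.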

write-dispositions
