
cmql-core.operators.stages


add clj/s

(add & fields)

$addFields
No need to type add in cmql:
a map literal used as a pipeline stage means add.
The only situation where calling add explicitly is useful is when the new field
has the same name as an option of the command.
For example
{:allowDiskUse true} would be interpreted as an option, not as an added field.
Don't use a variable that was added in the same add; use a separate add.
Call
{:field1 .. :field2 .. :!field3 ...}
The last one, :!field3, means replace field3 if it already existed.
That replacement happens anyway, but not always (for example add doc on doc => merge).
Using :!field3 I know that the field will be replaced.
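
A minimal sketch of the map-literal form, assuming a numeric :price field (hypothetical) and the *_/+_ operators shown under project below:
(q ...
   {:discounted (*_ :price 0.9)   ;; map literal acts as $addFields
    :!price     (+_ :price 1)})   ;; :!price makes replacing the existing :price explicit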

add-to-root clj/s

(add-to-root e-doc)

newRoot = merge(root + doc) + doc_field
The embedded document's fields are added to the document,
and the embedded document itself also remains.
{:field1 {:field2 value2}}
(add-to-root :field1) ->
{:field1 {:field2 value2}
 :field2 value2}

bucket clj/s

(bucket group-id-field boundaries & args)

$bucket
group  => all members of a group have the same single value
          on the grouping field/fields
bucket => all members of a group have a value inside a range
          of allowed values
bucket lets you define groups over ranges of values
(same range => same group; the buckets are the groups)
[0,18,30,40,60]  buckets =>  [0,18) [18,30) [30,40) [40,60)
& args = default      (optional; the bucket name for values out of range.
                       If not provided and a value is out of range => error)
         accumulators (optional; 1 doc with all the accumulators inside)
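
A minimal sketch, assuming a numeric :age field (hypothetical); the accumulators are passed as one map as described above, the optional default is omitted, and how the optional arguments are distinguished is an assumption:
(bucket :age [0 18 30 40 60]
        {:count   (sum- 1)
         :avg-age (avg- :age)})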

bucket-auto clj/s

(bucket-auto group-id-field buckets-number & args)

$bucketAuto
Same as bucket, but now I give just the number of buckets that I want
and mongo tries to find the right ranges,
so each bucket has as close to the same number of members as possible.
group => all members of a group have the same single value
         on the grouping field/fields
buckets-number => the number of buckets that mongo will auto-make
& args = granularity  (optional; a string picking the way the ranges are made,
                       for example granularity='POWERSOF2', see docs)
         accumulators (optional; 1 doc with all the accumulators inside)
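
A minimal sketch, assuming a numeric :price field (hypothetical); as with bucket, passing the accumulators as one map is an assumption:
(bucket-auto :price 4
             {:count (sum- 1)})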

coll-stats-s clj/s

(coll-stats-s options-map)

count-s clj/s

(count-s)
(count-s ref-e)

$count
Counts all documents in the collection; it is equivalent to
(group nil {:count (sum- 1)})

current-op-s clj/s

(current-op-s options-map)

{ $currentOp: { allUsers: <boolean>, idleConnections: <boolean>, idleCursors: <boolean>, idleSessions: <boolean>, localOps: <boolean> } }
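
A minimal usage sketch, assuming the options map mirrors the MongoDB document above with keyword keys (an assumption):
(current-op-s {:allUsers true :idleConnections false})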

facet clj/s

(facet & fields)

$facet
Run many pipelines in serial, using the same source pipeline for each.
Call
(q ...
   ...
   (facet {:f1 pipeline1
           :f2 pipeline2}))
The result is one document with ONLY the fields :f1 :f2
:f1, :f2 will be arrays with the document results of each pipeline.
Restrictions
 These stages cannot be used inside facet pipelines:
 collStats/facet/geoNear/indexStats/out/merge/planCacheStats
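
A minimal sketch, assuming a numeric :price field (hypothetical) and using stages documented on this page (bucket, limit); each facet value is a pipeline written as a vector of stages:
(q ...
   (facet {:by-price [(bucket :price [0 50 100 200] {:count (sum- 1)})]
           :sampled  [(limit 5)]}))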

glookup clj/s

(glookup coll-name start from to result-field & args)

Recursive lookups inside the 'from' collection.
Start with a field from the document in the pipeline
and do recursive lookups.
Optional fields =
  maxDepth (max recursion; 0 means a single lookup)
  depthField (keep the current depth in a field, like unwind's keep-index option)
  restrictSearchWithMatch => filter with query operators, to allow or not the lookup
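
A minimal sketch of the positional arguments, modeled on MongoDB's employees/reportsTo $graphLookup example; the collection and field names are hypothetical and the optional arguments are omitted:
(glookup :employees            ;; the 'from' collection
         :reportsTo            ;; start: the field whose value starts the recursion
         :reportsTo            ;; from:  the field followed on each recursion step
         :name                 ;; to:    the field matched against on each step
         :reportingHierarchy)  ;; result field that will hold the found documents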

group clj/s

(group e & accumulators)

$group
Groups documents into 1 document per group, using accumulators.
 e = nil      meaning {:_id nil}, and the :_id field is removed afterwards
   = :field   meaning {:_id :field}, and :_id is renamed back to :field afterwards
      For more than 1 field use an edoc like
        {:_id {:field1 .. :field2 .. ...}}
   = {:_id edoc/field/nil}   like the original mongo group
   = {:field edoc/field/nil} like the original mongo group + rename :_id to :field
Accumulators: one or many, like (sum- :field) or (avg- :field) ...
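
A minimal sketch, assuming :cust-id and :amount fields (hypothetical); the accumulators-as-a-map form mirrors the (group nil {:count (sum- 1)}) equivalence shown under count-s:
(group :cust-id
       {:total (sum- :amount)
        :avg   (avg- :amount)})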

group-array clj/s

(group-array array-ref accumulators results-field)
(group-array array-ref group-field accumulators results-field)

Used to reduce an array to another array, because conj- is very slow.
Very fast: uses lookup with pipeline, facet, and group.
Requires a dummy collection with 1 document, set in settings.
Stage operator; results are added to the root document.
For nested arrays, results must be moved to that position manually.
Call
(group-array :myarray  ;; the ref of the array I want to group
             {:myagg1 (conj-each- :myarray) ;; reuse of the :myarray name
              :myagg2 (sum- :myarray)}
             :mygroups)
Result
{
 ...old_fields...
 :mygroups [{:myarray id1 :myagg1 ... :myagg2 ...} {:myarray id2 :myagg1 ... :myagg2 ...}]
}

group-count clj/s

(group-count e)

$group
(group e {'count' {'$sum' 1}})
Provided for convenience because this pattern is common.

group-count-sort clj/s

(group-count-sort e desc?)

Group by e with
{:count {$sum 1}},
then sort by :!count (if desc? is true) or by :count.
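
A minimal usage sketch, assuming a :status field (hypothetical):
(group-count-sort :status true)   ;; one group per :status value, sorted with the largest count first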

if-match clj/s

(if-match fields let-or-when-matched when-not-matched)

$merge
Helper, used only as an argument in merge.

join clj/s

(join foreign-table-field)
(join localfield foreign-table-field)

$sql_join
Like an SQL join: join when equal on a field, and replace the left document
with the merged document.
Call
(join :localfield :foreignTable.foreignField)
(join :foreignTable.foreignField)
  assumes the local field name is the same as the foreign field name

limit clj/s

(limit n)

list-local-sessions clj/s

(list-local-sessions)
(list-local-sessions users-map)

users-map = {} or { allUsers: true } or { users: [ { user: <user>, db: <db> }, ... ] } 

lookup clj/s

(lookup this-field other-coll-field-path join-result-field)

$lookup
Left equality join.
doc = {'a' 'b' 'c' 'd'}
Result doc (the one from the left, the doc I had in the pipeline):
{'a' 'b' 'c' 'd' :joined [joined_doc1 joined_doc2 ...]}
Each joined_doc is like a merge of the 2 joined docs.
The joined field is created even if the array is empty (zero joined).
The fields being joined on can be arrays; in that case we get an array
with the joined documents (like joining on each member).
Call
(lookup :a :coll2.e :joined) ; join if :a == :e

match clj/s

(match e-doc)

$match
No need to use it in cmql unless you want to use query operators.
cmql auto-generates a match stage from filters; if one comes after another,
they are auto-combined with $expr $and.
Call
(q ....
   (=_ :age 25)
   (>_ :weight 50))
Call (if we want to use a query operator)
(match {:views {'$gte' 1000}})

merge-s clj/s

(merge-s coll-namespace)
(merge-s db-namespace if-match-e)

$merge
Update/upsert one collection using what comes from the pipeline.
Merges at document level and at collection level.
Join when I want the matching parts from the other collection.
Merge when I want data from both collections, even if there is no match.
collection -> pipeline -> collection   (normal updates)
any_pipeline -> collection             (merge)
For example keepExisting means keep what the collection had.
It updates that collection and returns nothing (an empty cursor).
Requires a unique index in the right collection on the merged fields.

3 call ways
(merge :mydb.mycoll)

(merge :mydb.mycoll               ;;no variables used
       (if-match [field1 field2]  ;;becomes :on [field1 field2]
         whenMatched              ;;can also be a pipeline
         whenNotMatched))

(merge :mydb.mycoll
       (if-match [field1 field2]
         (let- [:v1- :f1 :v2- :f2 ...] ; to refer to pipeline doc fields
           whenMatched                 ;;can also be a pipeline
           whenNotMatched)))

whenMatched
 replace      (keep the pipeline's documents)
 keepExisting (keep the collection's documents)
 merge    (merge old+new document, like mergeObjects)
 fail     (stops in the middle if it happens, no rollback)
 pipeline (used like an update pipeline => I can use only
           $addFields=$set  $project=$unset  $replaceRoot=$replaceWith)

whenNotMatched
 insert  (insert the pipeline's documents)
 discard (ignore the pipeline's documents)
 fail    (if the pipeline has any document that does not match, fail, but with no rollback)

move-to-root clj/s

(move-to-root e-doc)

newRoot = merge(root + doc) - doc_field
Add to root and remove the embedded doc => as if it moved to the root

out clj/s

(out db-namespace)

pipeline clj/s

(pipeline & args)

(pipeline stage1 stage2 ..) = [stage1 stage2 ...]
Used optionally to avoid confusion

plookup clj/s

(plookup join-info pipeline join-result-field)
(plookup join-info let-vars pipeline join-result-field)

$lookup
Lookup with a pipeline, to allow more join criteria (not just equality on 2 fields).
The pipeline also allows the joined docs to have any shape (not just a merge).
Returns like lookup:
{'a' 'b' 'c' 'd' :joined [joined_doc1 joined_doc2 ...]}
:joined is an array with the result of the pipeline.
Inside the pipeline, references refer to the right doc;
to refer to the left doc from the pipeline, use variables.
Using variables and coll2 references I can make complex join criteria,
and with the pipeline I can give the joined docs any shape.
Call
(plookup  :coll2 or [this-field :coll2.other-field-path]
          [:v1- :afield ...] ; optional
          [stage1
           stage2]
          :joined)
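
A minimal sketch with hypothetical collection and field names; the :oqty- variable follows the :v1- convention above, and wrapping the filter in match (so it becomes a condition on the :inventory docs) is an assumption:
(plookup :inventory                       ;; the 'from' collection
         [:oqty- :quantity]               ;; bind the pipeline doc's :quantity to the variable :oqty-
         [(match (>_ :instock :oqty-))]   ;; plain references (:instock) are fields of :inventory
         :joined)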

project clj/s

(project & fields)

$project
In cmql, [...] inside a pipeline is a project stage (except nested stages).
{:f 1} means {:f (literal- 1)}, so don't use that MQL-style notation for project inside [].
If you want to use that notation, use MQL directly: {'$project' ....}
Call
1) add those that I want to keep (and optionally :!_id to remove the id)
   [:!_id :f1 {:f3 (+_ :a 1)} {:!f4 (*_ :a 1)}]   (all others will be removed)
   {:!f4 ..} means replace the old f4
   (the replace happens anyway (without {:! ..}), but not always)
2) add those that I want to remove
   [:!a :!b]         (all others will be kept)
*Never mix keep/remove, except for :!_id

redact clj/s

(redact condition-DESCEND-PRUNE-KEEP)

$redact
Keep or delete the root document, or embedded documents, based on a condition,
instead of doing it by hand with paths; it auto-visits all embedded documents.
Stage => argument = 1 doc from the pipeline.
I start at level 0: {field0 {field1 {field2 ..}}}
if condition   $$DESCEND/$$PRUNE/$$KEEP
else           $$DESCEND/$$PRUNE/$$KEEP

If $$PRUNE, I delete that document and everything inside it.
If $$KEEP, I keep that document and everything inside it.
If $$DESCEND, I keep the document but re-run the condition
  at the next level; in this case I repeat on {field1 {field2 ..}}

$$DESCEND allows checking all levels, so when it is done I know that
all the documents that remain satisfy the condition
(no matter what embed level they are at).

Arg
Any expression that evaluates to one of the 3 system variables;
normally it is a condition using field references.
(If I use references and $$DESCEND, I have to make sure that
 they exist in all embedded documents, or check inside the condition
 what to do when they don't exist.)
$$DESCEND (keep the embedded document, but search its embedded ones separately)
$$PRUNE   (remove the embedded document; don't search further at this level)
$$KEEP    (keep the embedded document; don't search its embedded ones)
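
A sketch of the call shape only; `if-` (a cmql conditional operator), the :level field, and the string form of the system variables are all assumptions for illustration:
(redact (if- (>_ :level 3)
             "$$PRUNE"       ;; drop this (sub)document and everything inside it
             "$$DESCEND"))   ;; keep it, but re-check the condition on its embedded documents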

reduce-array clj/s

(reduce-array array-ref accumulators)

Used to reduce an array to another array, because conj- is very slow.
Very fast: uses lookup with pipeline, facet, and group.
Requires a dummy collection with 1 document, set in settings.
Stage operator; results are added to the root document.
For nested arrays, results must be moved to that position manually.
Call
(reduce-array :myarray   ;; the ref of the array I want to reduce
              {:myagg1 (conj-each- :myarray) ;; reuse of the :myarray ref
               :myagg2 (sum- :myarray)})
Result (I could add a group-field and get a result like group-array's, but that is not useful if there is no grouping)
{
 ...old_fields...
 :myagg1 .....
 :myagg2 .....
}

replace-root clj/s

(replace-root e-doc)

$replaceRoot
The embedded document fully replaces the document, including the :_id
Call
doc = {:field1 {:field2 value2}}
(replace-root :field1)
outdoc = {:field2 value2}

replace-with clj/s

(replace-with e-doc)

$replaceRoot
Alias of replace-root

sample clj/s

(sample number)

Useful for very big collections.
If sample is the first stage in the pipeline,
   number < 5% of the collection size,
   and the collection has > 100 documents,
I will get number random documents;
else: random sort, a full collection scan, and number documents selected.

set-s clj/s

(set-s & fields)

$set
set is an alias of add, used in updates.
Like add-fields, set is not typed out: a map literal means add-fields.

skip clj/s

(skip n)

sort clj/s

(sort & fields)

$sort
Call
(sort- :a :!b)   ;; ascending on :a, descending on :b (:! means descending)

union-s clj/s

(union-s coll-name & stages)

$unionWith
Reads from the collection and adds its documents to the pipeline.
No processing is done; duplicates can be added as well.
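
A minimal sketch with a hypothetical collection name; passing extra stages that run on the unioned collection (mapping to $unionWith's pipeline option) is an assumption based on the & stages argument:
(union-s :orders-archive
         (limit 100))   ;; add at most 100 documents from :orders-archive to the pipeline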

unset clj/s

(unset & fields)

Like project using only :!field.
Not that useful: the :! notation is better, and project alone covers it.

unwind clj/s

(unwind field-reference & options)

$unwind
1 document with an array of n members becomes n documents;
those n documents are like the old document,
with the array field now holding the individual array member.
Example
{ :field1  value1 :field2 [1 2]}
->
{ :field1  value1  :field2 1}
{ :field1  value1  :field2 2}
Normally: 1 document with an array of N members => N documents
Options
 Include one field to keep the index the member had in the
 initial array (default: no index)
 {:includeArrayIndex  string}

 Used in special case 2 (default false)
 {:preserveNullAndEmptyArrays true/false}
Array-field special cases
 1) a single value (not an array, not null) =>
    unwinds to itself (1 document)
    (if includeArrayIndex was used, the index will have the value null)
 2) null / empty array / missing field =>
    disappears if {:preserveNullAndEmptyArrays false} (default);
    unwinds to itself (1 document) if {:preserveNullAndEmptyArrays true}
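
A minimal sketch of passing the options, following the per-option map shapes above (the exact form of the & options arguments is an assumption); the field name is hypothetical:
(unwind :field2
        {:includeArrayIndex "idx"}            ;; keep the member's original position in :idx
        {:preserveNullAndEmptyArrays true})   ;; keep docs whose :field2 is null/empty/missing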

unwind-add-to-root clj/s

(unwind-add-to-root doc-e)

newRoot = doc + member
1 array with n members => n documents added to the root
Keeps the unwinded field as well
{'a' 'b'
 :myarray [doc1 doc2]}
is replaced by the 2 docs
{'a' 'b' :myarray doc1}
{'a' 'b' :myarray doc2}

unwind-move-to-root clj/s

(unwind-move-to-root doc-e)

newRoot = doc + member_fields
1 array with n members => n documents added to the root
The unwinded field itself is removed (its fields move to the root)
{'a' 'b'
 :myarray [doc1 doc2]}
is replaced by the 2 docs, as if each member doc moved to the root
(merge {'a' 'b'} doc1)
(merge {'a' 'b'} doc2)

unwind-replace-root clj/s

(unwind-replace-root doc-e)

newRoot = doc-member
1 array with n members => n documents as roots
Like replacing the collection with the array members
{:a 'b'
 :myarray [doc1 doc2]}
is replaced by the 2 docs
doc1
doc2

wfields clj/s

(wfields & args)

$setWindowFields
The 'current' string means the current document position in the output.
The 'unbounded' string means the first or last document position in the partition.
An integer means a position relative to the current document: use a negative integer for a position before the current document,
a positive integer for a position after it; 0 is the current document position.

It's like $set, but it adds to each document the result of a group (if the partition is missing, the whole collection is 1 group).
(In the past we could do a similar thing with group and an unwind afterwards.)
Sort is optional inside the group.
output = the fields to append
window operator = the accumulator on the group, or { $rank: { } } etc
  documents (based on sort order)
  ['unbounded','current'] (accumulate from the first of the group up to the current document)
  [-1 0] means the previous and the current document only
range (based on the value of the field, like a range of -10 to +10 days)
  again unbounded/current or numbers
  range can also take a unit

Call example
(wfields :state               ;;partition
         (sort :orderDate)    ;;if in the q environment no need for the namespace
         {:cumulativeQuantityForState (sum :quantity)
          :documents ["unbounded" "current"]})
(wfields :state
         (sort :orderDate)
         {:cumulativeQuantityForState (dense-rank)})
