
tech-ml version: "6.00-beta-10"

Introduction

tech.ml.dataset is a great and fast library which brings columnar datasets to Clojure. Chris Nuernberger has been working on this library for the last year as part of the bigger tech.ml stack.

I’ve started testing the library and helping to fix uncovered bugs. My main goal was to compare its functionality with the standards from other platforms. I focused on R solutions: dplyr, tidyr and data.table.

While converting the examples I came up with a way to reorganize existing tech.ml.dataset functions into a simple-to-use API. The main goals were:

  • Focus on dataset manipulation functionality, leaving out other parts of tech.ml like pipelines, datatypes, readers, ML, etc.
  • Single entry point for common operations - one function dispatching on given arguments.
  • group-by results in a special kind of dataset - a dataset containing subsets created after grouping as a column.
  • Most operations recognize regular and grouped datasets and process data accordingly.
  • A single function form that enables thread-first (->) operations on a dataset.

If you want to know more about tech.ml.dataset and dtype-next, please refer to their documentation:


Join the discussion on Zulip

Let’s require main namespace and define dataset used in most examples:

(require '[tablecloth.api :as api]
         '[tech.v3.datatype.functional :as dfn])
(def DS (api/dataset {:V1 (take 9 (cycle [1 2]))
                      :V2 (range 1 10)
                      :V3 (take 9 (cycle [0.5 1.0 1.5]))
                      :V4 (take 9 (cycle ["A" "B" "C"]))}))
DS

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   2 |   2 | 1.0 |   B |
|   1 |   3 | 1.5 |   C |
|   2 |   4 | 0.5 |   A |
|   1 |   5 | 1.0 |   B |
|   2 |   6 | 1.5 |   C |
|   1 |   7 | 0.5 |   A |
|   2 |   8 | 1.0 |   B |
|   1 |   9 | 1.5 |   C |

Functionality

Dataset

Dataset is a special type which can be considered a map of columns, implemented on top of the tech.ml.datatype library. Each column can be considered a named sequence of typed data. Supported types include integers, floats, strings, booleans, date/time, objects, etc.

Dataset creation

Dataset can be created from various types of Clojure structures and files:

  • single values
  • sequence of maps
  • map of sequences or values
  • sequence of columns (taken from other dataset or created manually)
  • sequence of pairs: [string column-data] or [keyword column-data]
  • array of native arrays
  • file types: raw/gzipped csv/tsv, json, xls(x) taken from local file system or URL
  • input stream
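The "sequence of columns" case listed above can be sketched as follows. This is a hedged example: DS2 is a helper dataset introduced here for illustration, and it relies on the fact (shown later in this document) that a dataset acts as a function of its column names.

```clojure
(require '[tablecloth.api :as api])

;; helper dataset (an assumption for this sketch)
(def DS2 (api/dataset {:a [1 2 3] :b ["x" "y" "z"]}))

;; build a new dataset from columns taken from DS2;
;; (DS2 :a) returns the column named :a
(api/dataset [(DS2 :a) (DS2 :b)])
```

Column names travel with the columns, so the new dataset keeps :a and :b.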

api/dataset accepts:

  • data
  • options (see documentation of tech.ml.dataset/->dataset function for full list):
    • :dataset-name - name of the dataset
    • :num-rows - number of rows to read from file
    • :header-row? - indication if first row in file is a header
    • :key-fn - function applied to column names (e.g. keyword, to convert column names to keywords)
    • :separator - column separator
    • :single-value-column-name - name of the column when single value is provided
    • :column-names - in case you want to name columns - only works for sequential input (arrays)
    • :layout - for numerical, native array of arrays - treat entries :as-rows or :as-columns (default)

api/let-dataset accepts bindings symbol-column-data to simulate R’s tibble function. Each binding is converted into a column. You can refer to previous columns in later bindings (as in let).


Empty dataset.

(api/dataset)
_unnamed [0 0]

Dataset from single value.

(api/dataset 999)

_unnamed [1 1]:

| :$value |
|--------:|
|     999 |

Set column name for single value. Also set the dataset name.

(api/dataset 999 {:single-value-column-name "my-single-value"})
(api/dataset 999 {:single-value-column-name ""
                  :dataset-name "Single value"})

_unnamed [1 1]:

| my-single-value |
|----------------:|
|             999 |

Single value [1 1]:

| 0   |
|----:|
| 999 |

Sequence of pairs (first = column name, second = value(s)).

(api/dataset [[:A 33] [:B 5] [:C :a]])

_unnamed [1 3]:

| :A | :B | :C |
|---:|---:|----|
| 33 |  5 | :a |


Non-sequential values are repeated row-count times.

(api/dataset [[:A [1 2 3 4 5 6]] [:B "X"] [:C :a]])

_unnamed [6 3]:

| :A | :B | :C |
|---:|----|----|
|  1 |  X | :a |
|  2 |  X | :a |
|  3 |  X | :a |
|  4 |  X | :a |
|  5 |  X | :a |
|  6 |  X | :a |


Dataset created from a map (keys = column names, vals = value(s)). Works the same as a sequence of pairs.

(api/dataset {:A 33})
(api/dataset {:A [1 2 3]})
(api/dataset {:A [3 4 5] :B "X"})

_unnamed [1 1]:

| :A |
|---:|
| 33 |

_unnamed [3 1]:

| :A |
|---:|
|  1 |
|  2 |
|  3 |

_unnamed [3 2]:

| :A | :B |
|---:|----|
|  3 |  X |
|  4 |  X |
|  5 |  X |


You can put any value inside a column.

(api/dataset {:A [[3 4 5] [:a :b]] :B "X"})

_unnamed [2 2]:

| :A      | :B |
|---------|----|
| [3 4 5] |  X |
| [:a :b] |  X |


Sequence of maps

(api/dataset [{:a 1 :b 3} {:b 2 :a 99}])
(api/dataset [{:a 1 :b [1 2 3]} {:a 2 :b [3 4]}])

_unnamed [2 2]:

| :b | :a |
|---:|---:|
|  3 |  1 |
|  2 | 99 |

_unnamed [2 2]:

| :b      | :a |
|---------|---:|
| [1 2 3] |  1 |
| [3 4]   |  2 |


Missing values are marked by nil

(api/dataset [{:a nil :b 1} {:a 3 :b 4} {:a 11}])

_unnamed [3 2]:

| :b | :a |
|---:|---:|
|  1 |    |
|  4 |  3 |
|    | 11 |


Reading from arrays, by default :as-columns

(-> (map int-array [[1 2] [3 4] [5 6]])
    (into-array)
    (api/dataset))

_unnamed [2 3]:

| 0 | 1 | 2 |
|--:|--:|--:|
| 1 | 3 | 5 |
| 2 | 4 | 6 |

:as-rows

(-> (map int-array [[1 2] [3 4] [5 6]])
    (into-array)
    (api/dataset {:layout :as-rows}))

_unnamed [3 2]:

| 0 | 1 |
|--:|--:|
| 1 | 2 |
| 3 | 4 |
| 5 | 6 |

:as-rows with names

(-> (map int-array [[1 2] [3 4] [5 6]])
    (into-array)
    (api/dataset {:layout :as-rows
                  :column-names [:a :b]}))

_unnamed [3 2]:

| :a | :b |
|---:|---:|
|  1 |  2 |
|  3 |  4 |
|  5 |  6 |


Create a dataset using the let-dataset macro, which simulates R’s tibble function. Each binding is converted into a column.

(api/let-dataset [x (range 1 6)
                  y 1
                  z (dfn/+ x y)])

_unnamed [5 3]:

| :x | :y | :z |
|---:|---:|---:|
|  1 |  1 |  2 |
|  2 |  1 |  3 |
|  3 |  1 |  4 |
|  4 |  1 |  5 |
|  5 |  1 |  6 |


Import CSV file

(api/dataset "data/family.csv")

data/family.csv [5 5]:

| family | dob_child1 | dob_child2 | gender_child1 | gender_child2 |
|-------:|------------|------------|--------------:|--------------:|
|      1 | 1998-11-26 | 2000-01-29 |             1 |             2 |
|      2 | 1996-06-22 |            |             2 |               |
|      3 | 2002-07-11 | 2004-04-05 |             2 |             2 |
|      4 | 2004-10-10 | 2009-08-27 |             1 |             1 |
|      5 | 2000-12-05 | 2005-02-28 |             2 |             1 |

Import from URL

(defonce ds (api/dataset "https://vega.github.io/vega-lite/examples/data/seattle-weather.csv"))
ds

https://vega.github.io/vega-lite/examples/data/seattle-weather.csv [1461 6]:

| date       | precipitation | temp_max | temp_min | wind | weather |
|------------|--------------:|---------:|---------:|-----:|---------|
| 2012-01-01 |           0.0 |     12.8 |      5.0 |  4.7 | drizzle |
| 2012-01-02 |          10.9 |     10.6 |      2.8 |  4.5 | rain    |
| 2012-01-03 |           0.8 |     11.7 |      7.2 |  2.3 | rain    |
| 2012-01-04 |          20.3 |     12.2 |      5.6 |  4.7 | rain    |
| 2012-01-05 |           1.3 |      8.9 |      2.8 |  6.1 | rain    |
| 2012-01-06 |           2.5 |      4.4 |      2.2 |  2.2 | rain    |
| 2012-01-07 |           0.0 |      7.2 |      2.8 |  2.3 | rain    |
| 2012-01-08 |           0.0 |     10.0 |      2.8 |  2.0 | sun     |
| 2012-01-09 |           4.3 |      9.4 |      5.0 |  3.4 | rain    |
| 2012-01-10 |           1.0 |      6.1 |      0.6 |  3.4 | rain    |
| 2012-01-11 |           0.0 |      6.1 |     -1.1 |  5.1 | sun     |
| 2012-01-12 |           0.0 |      6.1 |     -1.7 |  1.9 | sun     |
| 2012-01-13 |           0.0 |      5.0 |     -2.8 |  1.3 | sun     |
| 2012-01-14 |           4.1 |      4.4 |      0.6 |  5.3 | snow    |
| 2012-01-15 |           5.3 |      1.1 |     -3.3 |  3.2 | snow    |
| 2012-01-16 |           2.5 |      1.7 |     -2.8 |  5.0 | snow    |
| 2012-01-17 |           8.1 |      3.3 |      0.0 |  5.6 | snow    |
| 2012-01-18 |          19.8 |      0.0 |     -2.8 |  5.0 | snow    |
| 2012-01-19 |          15.2 |     -1.1 |     -2.8 |  1.6 | snow    |
| 2012-01-20 |          13.5 |      7.2 |     -1.1 |  2.3 | snow    |
| 2012-01-21 |           3.0 |      8.3 |      3.3 |  8.2 | rain    |
| 2012-01-22 |           6.1 |      6.7 |      2.2 |  4.8 | rain    |
| 2012-01-23 |           0.0 |      8.3 |      1.1 |  3.6 | rain    |
| 2012-01-24 |           8.6 |     10.0 |      2.2 |  5.1 | rain    |
| 2012-01-25 |           8.1 |      8.9 |      4.4 |  5.4 | rain    |

Saving

Exporting a dataset to a file or an output stream can be done by calling api/write!. The function accepts:

  • dataset
  • file name with one of the extensions: .csv, .tsv, .csv.gz and .tsv.gz or output stream
  • options:
    • :separator - string or separator char.
(api/write! ds "output.tsv.gz")
(.exists (clojure.java.io/file "output.tsv.gz"))
nil
true
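The :separator option from the list above can be used, for example, to write a semicolon-separated file. A sketch; the file name is arbitrary and chosen for this example:

```clojure
(require '[tablecloth.api :as api]
         '[clojure.java.io :as io])

(def DS (api/dataset {:V1 [1 2] :V2 ["A" "B"]}))

;; write with a custom separator (char or string)
(api/write! DS "output-semicolon.csv" {:separator \;})

;; the file should now exist
(.exists (io/file "output-semicolon.csv"))
```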
Nippy
(api/write! DS "output.nippy.gz")
nil
(api/dataset "output.nippy.gz")

output.nippy.gz [9 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   2 |   2 | 1.0 |   B |
|   1 |   3 | 1.5 |   C |
|   2 |   4 | 0.5 |   A |
|   1 |   5 | 1.0 |   B |
|   2 |   6 | 1.5 |   C |
|   1 |   7 | 0.5 |   A |
|   2 |   8 | 1.0 |   B |
|   1 |   9 | 1.5 |   C |

Dataset related functions

Summary functions about the dataset, like the number of rows and columns and basic stats.


Number of rows

(api/row-count ds)
1461

Number of columns

(api/column-count ds)
6

Shape of the dataset, [row count, column count]

(api/shape ds)
[1461 6]

General info about the dataset. There are three variants:

  • default - containing information about columns with basic statistics
  • :basic - just the name, row and column count, and whether the dataset is a result of a group-by operation
  • :columns - columns’ metadata
(api/info ds)
(api/info ds :basic)
(api/info ds :columns)

https://vega.github.io/vega-lite/examples/data/seattle-weather.csv: descriptive-stats [6 12]:

| :col-name     | :datatype          | :n-valid | :n-missing | :min       | :mean      | :mode | :max       | :standard-deviation | :skew           | :first     | :last      |
|---------------|--------------------|---------:|-----------:|-----------:|-----------:|-------|-----------:|--------------------:|----------------:|------------|------------|
| date          | :packed-local-date |     1461 |          0 | 2012-01-01 | 2013-12-31 |       | 2015-12-31 |      3.64520463E+10 |  1.30606880E-15 | 2012-01-01 | 2015-12-31 |
| precipitation | :float64           |     1461 |          0 |      0.000 |      3.029 |       |      55.90 |      6.68019432E+00 |  3.50564372E+00 |      0.000 |      0.000 |
| temp_max      | :float64           |     1461 |          0 |     -1.600 |      16.44 |       |      35.60 |      7.34975810E+00 |  2.80929992E-01 |      12.80 |      5.600 |
| temp_min      | :float64           |     1461 |          0 |     -7.100 |      8.235 |       |      18.30 |      5.02300418E+00 | -2.49458552E-01 |      5.000 |     -2.100 |
| weather       | :string            |     1461 |          0 |            |            | rain  |            |                     |                 | drizzle    | sun        |
| wind          | :float64           |     1461 |          0 |     0.4000 |      3.241 |       |      9.500 |      1.43782506E+00 |  8.91667519E-01 |      4.700 |      3.500 |

https://vega.github.io/vega-lite/examples/data/seattle-weather.csv :basic info [1 4]:

| :name | :grouped? | :rows | :columns |
|-------|-----------|------:|---------:|
| https://vega.github.io/vega-lite/examples/data/seattle-weather.csv | false | 1461 | 6 |

https://vega.github.io/vega-lite/examples/data/seattle-weather.csv :column info [6 4]:

| :n-elems | :name         | :categorical? | :datatype          |
|---------:|---------------|---------------|--------------------|
|     1461 | date          |               | :packed-local-date |
|     1461 | precipitation |               | :float64           |
|     1461 | temp_max      |               | :float64           |
|     1461 | temp_min      |               | :float64           |
|     1461 | wind          |               | :float64           |
|     1461 | weather       | true          | :string            |

Getting a dataset name

(api/dataset-name ds)
"https://vega.github.io/vega-lite/examples/data/seattle-weather.csv"

Setting a dataset name (operation is immutable).

(->> "seattle-weather"
     (api/set-dataset-name ds)
     (api/dataset-name))
"seattle-weather"

Columns and rows

Get columns and rows as sequences. column, columns and rows treat a grouped dataset as a regular one. See Groups to read more about grouped datasets.

Possible result types:

  • :as-seq or :as-seqs - sequence of sequences (default)
  • :as-maps - sequence of maps (rows)
  • :as-map - map of sequences (columns)
  • :as-double-arrays - array of double arrays

Select column.

(ds "wind")
(api/column ds "date")
#tech.v3.dataset.column<float64>[1461]
wind
[4.700, 4.500, 2.300, 4.700, 6.100, 2.200, 2.300, 2.000, 3.400, 3.400, 5.100, 1.900, 1.300, 5.300, 3.200, 5.000, 5.600, 5.000, 1.600, 2.300...]
#tech.v3.dataset.column<packed-local-date>[1461]
date
[2012-01-01, 2012-01-02, 2012-01-03, 2012-01-04, 2012-01-05, 2012-01-06, 2012-01-07, 2012-01-08, 2012-01-09, 2012-01-10, 2012-01-11, 2012-01-12, 2012-01-13, 2012-01-14, 2012-01-15, 2012-01-16, 2012-01-17, 2012-01-18, 2012-01-19, 2012-01-20...]

Columns as sequence

(take 2 (api/columns ds))
(#tech.v3.dataset.column<packed-local-date>[1461]
date
[2012-01-01, 2012-01-02, 2012-01-03, 2012-01-04, 2012-01-05, 2012-01-06, 2012-01-07, 2012-01-08, 2012-01-09, 2012-01-10, 2012-01-11, 2012-01-12, 2012-01-13, 2012-01-14, 2012-01-15, 2012-01-16, 2012-01-17, 2012-01-18, 2012-01-19, 2012-01-20...] #tech.v3.dataset.column<float64>[1461]
precipitation
[0.000, 10.90, 0.8000, 20.30, 1.300, 2.500, 0.000, 0.000, 4.300, 1.000, 0.000, 0.000, 0.000, 4.100, 5.300, 2.500, 8.100, 19.80, 15.20, 13.50...])

Columns as map

(keys (api/columns ds :as-map))
("date" "precipitation" "temp_max" "temp_min" "wind" "weather")

Rows as sequence of sequences

(take 2 (api/rows ds))
([#object[java.time.LocalDate 0x463c2fb7 "2012-01-01"] 0.0 12.8 5.0 4.7 "drizzle"] [#object[java.time.LocalDate 0x296b30d1 "2012-01-02"] 10.9 10.6 2.8 4.5 "rain"])

Select rows/columns as double-double-array

(-> ds
    (api/select-columns :type/numerical)
    (api/head)
    (api/rows :as-double-arrays))
#object["[[D" 0x501f786b "[[D@501f786b"]
(-> ds
    (api/select-columns :type/numerical)
    (api/head)
    (api/columns :as-double-arrays))
#object["[[D" 0x27791c2 "[[D@27791c2"]

Rows as sequence of maps

(clojure.pprint/pprint (take 2 (api/rows ds :as-maps)))
({"date" #object[java.time.LocalDate 0x1438f8ca "2012-01-01"],
  "precipitation" 0.0,
  "temp_min" 5.0,
  "weather" "drizzle",
  "temp_max" 12.8,
  "wind" 4.7}
 {"date" #object[java.time.LocalDate 0x5a1433f7 "2012-01-02"],
  "precipitation" 10.9,
  "temp_min" 2.8,
  "weather" "rain",
  "temp_max" 10.6,
  "wind" 4.5})

Printing

A dataset is printed using the dataset->str or print-dataset functions. Options are the same as in tech.ml.dataset/dataset-data->str. The most important one is :print-line-policy, which can be one of: :single, :repl or :markdown.

(api/print-dataset (api/group-by DS :V1) {:print-line-policy :markdown})
_unnamed [2 3]:

| :group-id | :name |                                                                                                                                                                                                                                                             :data |
|----------:|------:|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|         0 |     1 | Group: 1 [5 4]:<br><br>\| :V1 \| :V2 \| :V3 \| :V4 \|<br>\|----:\|----:\|----:\|-----\|<br>\|   1 \|   1 \| 0.5 \|   A \|<br>\|   1 \|   3 \| 1.5 \|   C \|<br>\|   1 \|   5 \| 1.0 \|   B \|<br>\|   1 \|   7 \| 0.5 \|   A \|<br>\|   1 \|   9 \| 1.5 \|   C \| |
|         1 |     2 |                                   Group: 2 [4 4]:<br><br>\| :V1 \| :V2 \| :V3 \| :V4 \|<br>\|----:\|----:\|----:\|-----\|<br>\|   2 \|   2 \| 1.0 \|   B \|<br>\|   2 \|   4 \| 0.5 \|   A \|<br>\|   2 \|   6 \| 1.5 \|   C \|<br>\|   2 \|   8 \| 1.0 \|   B \| |
(api/print-dataset (api/group-by DS :V1) {:print-line-policy :repl})
_unnamed [2 3]:

| :group-id | :name |                          :data |
|----------:|------:|--------------------------------|
|         0 |     1 | Group: 1 [5 4]:                |
|           |       |                                |
|           |       | \| :V1 \| :V2 \| :V3 \| :V4 \| |
|           |       | \|----:\|----:\|----:\|-----\| |
|           |       | \|   1 \|   1 \| 0.5 \|   A \| |
|           |       | \|   1 \|   3 \| 1.5 \|   C \| |
|           |       | \|   1 \|   5 \| 1.0 \|   B \| |
|           |       | \|   1 \|   7 \| 0.5 \|   A \| |
|           |       | \|   1 \|   9 \| 1.5 \|   C \| |
|         1 |     2 | Group: 2 [4 4]:                |
|           |       |                                |
|           |       | \| :V1 \| :V2 \| :V3 \| :V4 \| |
|           |       | \|----:\|----:\|----:\|-----\| |
|           |       | \|   2 \|   2 \| 1.0 \|   B \| |
|           |       | \|   2 \|   4 \| 0.5 \|   A \| |
|           |       | \|   2 \|   6 \| 1.5 \|   C \| |
|           |       | \|   2 \|   8 \| 1.0 \|   B \| |
(api/print-dataset (api/group-by DS :V1) {:print-line-policy :single})
_unnamed [2 3]:

| :group-id | :name |           :data |
|----------:|------:|-----------------|
|         0 |     1 | Group: 1 [5 4]: |
|         1 |     2 | Group: 2 [4 4]: |

Group-by

Grouping is an operation which splits a dataset into subdatasets and packs them into a new, special type of… dataset. I distinguish two types of dataset: a regular dataset and a grouped dataset. The latter is the result of grouping.

A grouped dataset is annotated by the :grouped? meta tag and consists of the following columns:

  • :name - group name or structure
  • :group-id - integer assigned to the group
  • :data - groups as datasets

Almost all functions recognize the type of the dataset (grouped or not) and operate accordingly.

You can’t apply reshaping or join/concat functions on grouped datasets.

Grouping

Grouping is done by calling group-by function with arguments:

  • ds - dataset
  • grouping-selector - what to use for grouping
  • options:
    • :result-type - what to return:
      • :as-dataset (default) - return grouped dataset
      • :as-indexes - return rows ids (row number from original dataset)
      • :as-map - return map with group names as keys and subdataset as values
      • :as-seq - return sequence of subdatasets
    • :select-keys - list of the columns passed to a grouping selector function

All subdatasets (groups) have their name set to the group name; additionally, the group-id is stored in the metadata.
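This can be inspected directly. A sketch; the exact printed name format (e.g. "Group: 1") and the :group-id metadata key are assumptions based on the outputs shown later in this section:

```clojure
(require '[tablecloth.api :as api])

(def DS (api/dataset {:V1 [1 2 1 2] :V2 (range 4)}))

;; each group is a dataset named after its group value;
;; :group-id should be available in its metadata
(->> (api/group-by DS :V1 {:result-type :as-seq})
     (map (juxt api/dataset-name meta)))
```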

Grouping can be done by:

  • single column name
  • seq of column names
  • map of keys (group names) and row indexes
  • value returned by function taking row as map (limited to :select-keys)

Note: currently a dataset inside a dataset is printed recursively, so it renders poorly from markdown. I will use the :as-seq result type to show just group names and groups.


List of columns in grouped dataset

(-> DS
    (api/group-by :V1)
    (api/column-names))
(:V1 :V2 :V3 :V4)

List of columns in grouped dataset treated as regular dataset

(-> DS
    (api/group-by :V1)
    (api/as-regular-dataset)
    (api/column-names))
(:group-id :name :data)

Content of the grouped dataset

(api/columns (api/group-by DS :V1) :as-map)
{:group-id #tech.v3.dataset.column<int64>[2]
:group-id
[0, 1], :name #tech.v3.dataset.column<int64>[2]
:name
[1, 2], :data #tech.v3.dataset.column<dataset>[2]
:data
[Group: 1 [5 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   1 |   3 | 1.5 |   C |
|   1 |   5 | 1.0 |   B |
|   1 |   7 | 0.5 |   A |
|   1 |   9 | 1.5 |   C |
, Group: 2 [4 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   2 | 1.0 |   B |
|   2 |   4 | 0.5 |   A |
|   2 |   6 | 1.5 |   C |
|   2 |   8 | 1.0 |   B |
]}

Grouped dataset as map

(keys (api/group-by DS :V1 {:result-type :as-map}))
(1 2)
(vals (api/group-by DS :V1 {:result-type :as-map}))

(Group: 1 [5 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   1 |   3 | 1.5 |   C |
|   1 |   5 | 1.0 |   B |
|   1 |   7 | 0.5 |   A |
|   1 |   9 | 1.5 |   C |

Group: 2 [4 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   2 | 1.0 |   B |
|   2 |   4 | 0.5 |   A |
|   2 |   6 | 1.5 |   C |
|   2 |   8 | 1.0 |   B |

)


Group dataset as map of indexes (row ids)

(api/group-by DS :V1 {:result-type :as-indexes})
{1 #list<int32>[5]
[0, 2, 4, 6, 8], 2 #list<int32>[4]
[1, 3, 5, 7]}

Grouped datasets are printed as follows by default.

(api/group-by DS :V1)

_unnamed [2 3]:

| :group-id | :name | :data           |
|----------:|------:|-----------------|
|         0 |     1 | Group: 1 [5 4]: |
|         1 |     2 | Group: 2 [4 4]: |

Getting groups as a sequence or a map can be done from a grouped dataset using the groups->seq and groups->map functions.

Groups as a seq can also be obtained by just accessing the :data column.

I will use temporary dataset here.

(let [ds (-> {"a" [1 1 2 2]
              "b" ["a" "b" "c" "d"]}
             (api/dataset)
             (api/group-by "a"))]
  (seq (ds :data))) ;; seq is not necessary but Markdown treats `:data` as command here

(Group: 1 [2 2]:

| a | b |
|--:|---|
| 1 | a |
| 1 | b |

Group: 2 [2 2]:

| a | b |
|--:|---|
| 2 | c |
| 2 | d |

)

(-> {"a" [1 1 2 2]
     "b" ["a" "b" "c" "d"]}
    (api/dataset)
    (api/group-by "a")
    (api/groups->seq))

(Group: 1 [2 2]:

| a | b |
|--:|---|
| 1 | a |
| 1 | b |

Group: 2 [2 2]:

| a | b |
|--:|---|
| 2 | c |
| 2 | d |

)


Groups as map

(-> {"a" [1 1 2 2]
     "b" ["a" "b" "c" "d"]}
    (api/dataset)
    (api/group-by "a")
    (api/groups->map))

{1 Group: 1 [2 2]:

| a | b |
|--:|---|
| 1 | a |
| 1 | b |

, 2 Group: 2 [2 2]:

| a | b |
|--:|---|
| 2 | c |
| 2 | d |

}


Grouping by more than one column. You can see that group names are maps. When ungrouping is done, these maps are used to restore column names.

(api/group-by DS [:V1 :V3] {:result-type :as-seq})

(Group: {:V3 0.5, :V1 1} [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   1 |   7 | 0.5 |   A |

Group: {:V3 1.0, :V1 2} [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   2 | 1.0 |   B |
|   2 |   8 | 1.0 |   B |

Group: {:V3 1.5, :V1 1} [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   3 | 1.5 |   C |
|   1 |   9 | 1.5 |   C |

Group: {:V3 0.5, :V1 2} [1 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   4 | 0.5 |   A |

Group: {:V3 1.0, :V1 1} [1 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   5 | 1.0 |   B |

Group: {:V3 1.5, :V1 2} [1 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   6 | 1.5 |   C |

)


Grouping can be done by providing just row indexes. This way you can assign the same row to more than one group.

(api/group-by DS {"group-a" [1 2 1 2]
                  "group-b" [5 5 5 1]} {:result-type :as-seq})

(Group: group-a [4 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   2 | 1.0 |   B |
|   1 |   3 | 1.5 |   C |
|   2 |   2 | 1.0 |   B |
|   1 |   3 | 1.5 |   C |

Group: group-b [4 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   6 | 1.5 |   C |
|   2 |   6 | 1.5 |   C |
|   2 |   6 | 1.5 |   C |
|   2 |   2 | 1.0 |   B |

)


You can group by the result of a grouping function, which receives a row as a map and should return a group name. When a map is used as a group name, ungrouping restores the original column names.

(api/group-by DS (fn [row] (* (:V1 row)
                             (:V3 row))) {:result-type :as-seq})

(Group: 0.5 [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   1 |   7 | 0.5 |   A |

Group: 2.0 [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   2 | 1.0 |   B |
|   2 |   8 | 1.0 |   B |

Group: 1.5 [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   3 | 1.5 |   C |
|   1 |   9 | 1.5 |   C |

Group: 1.0 [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   4 | 0.5 |   A |
|   1 |   5 | 1.0 |   B |

Group: 3.0 [1 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   6 | 1.5 |   C |

)


You can use any predicate on a column to split a dataset into two groups.

(api/group-by DS (comp #(< % 1.0) :V3) {:result-type :as-seq})

(Group: true [3 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   2 |   4 | 0.5 |   A |
|   1 |   7 | 0.5 |   A |

Group: false [6 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   2 | 1.0 |   B |
|   1 |   3 | 1.5 |   C |
|   1 |   5 | 1.0 |   B |
|   2 |   6 | 1.5 |   C |
|   2 |   8 | 1.0 |   B |
|   1 |   9 | 1.5 |   C |

)


juxt is also helpful.

(api/group-by DS (juxt :V1 :V3) {:result-type :as-seq})

(Group: [1 0.5] [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   1 |   7 | 0.5 |   A |

Group: [2 1.0] [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   2 | 1.0 |   B |
|   2 |   8 | 1.0 |   B |

Group: [1 1.5] [2 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   3 | 1.5 |   C |
|   1 |   9 | 1.5 |   C |

Group: [2 0.5] [1 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   4 | 0.5 |   A |

Group: [1 1.0] [1 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   5 | 1.0 |   B |

Group: [2 1.5] [1 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   6 | 1.5 |   C |

)


tech.ml.dataset provides an option to limit which columns are passed to grouping functions. It’s done for performance purposes.

(api/group-by DS identity {:result-type :as-seq
                           :select-keys [:V1]})

(Group: {:V1 1} [5 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   1 |   3 | 1.5 |   C |
|   1 |   5 | 1.0 |   B |
|   1 |   7 | 0.5 |   A |
|   1 |   9 | 1.5 |   C |

Group: {:V1 2} [4 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   2 |   2 | 1.0 |   B |
|   2 |   4 | 0.5 |   A |
|   2 |   6 | 1.5 |   C |
|   2 |   8 | 1.0 |   B |

)

Ungrouping

Ungrouping simply concats all the groups into a dataset. The following options are possible:

  • :order? - order groups by group name, ascending (or descending when set to :desc). Default: false
  • :add-group-as-column - should the group name become a column? If yes, the column is created with the provided name (or :$group-name if the argument is true). Default: nil.
  • :add-group-id-as-column - should the group id become a column? If yes, the column is created with the provided name (or :$group-id if the argument is true). Default: nil.
  • :dataset-name - name of the resulting dataset. Default: nil (_unnamed)

If a group name is a map, it will be split into separate columns. Make sure that the groups (subdatasets) don’t already contain the same columns.

If a group name is a vector, it will be split into separate columns. If you want to name them, set a vector of target column names as the :add-group-as-column argument.

After ungrouping, the order of the rows is kept within the groups, but the groups themselves are ordered according to the internal storage.


Grouping and ungrouping.

(-> DS
    (api/group-by :V3)
    (api/ungroup))

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   2 |   4 | 0.5 |   A |
|   1 |   7 | 0.5 |   A |
|   2 |   2 | 1.0 |   B |
|   1 |   5 | 1.0 |   B |
|   2 |   8 | 1.0 |   B |
|   1 |   3 | 1.5 |   C |
|   2 |   6 | 1.5 |   C |
|   1 |   9 | 1.5 |   C |

Groups sorted by group name and named.

(-> DS
    (api/group-by :V3)
    (api/ungroup {:order? true
                  :dataset-name "Ordered by V3"}))

Ordered by V3 [9 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   1 | 0.5 |   A |
|   2 |   4 | 0.5 |   A |
|   1 |   7 | 0.5 |   A |
|   2 |   2 | 1.0 |   B |
|   1 |   5 | 1.0 |   B |
|   2 |   8 | 1.0 |   B |
|   1 |   3 | 1.5 |   C |
|   2 |   6 | 1.5 |   C |
|   1 |   9 | 1.5 |   C |

Groups sorted descending by group name and named.

(-> DS
    (api/group-by :V3)
    (api/ungroup {:order? :desc
                  :dataset-name "Ordered by V3 descending"}))

Ordered by V3 descending [9 4]:

| :V1 | :V2 | :V3 | :V4 |
|----:|----:|----:|-----|
|   1 |   3 | 1.5 |   C |
|   2 |   6 | 1.5 |   C |
|   1 |   9 | 1.5 |   C |
|   2 |   2 | 1.0 |   B |
|   1 |   5 | 1.0 |   B |
|   2 |   8 | 1.0 |   B |
|   1 |   1 | 0.5 |   A |
|   2 |   4 | 0.5 |   A |
|   1 |   7 | 0.5 |   A |

Let’s add group name and id as additional columns

(-> DS
    (api/group-by (comp #(< % 4) :V2))
    (api/ungroup {:add-group-as-column true
                  :add-group-id-as-column true}))

_unnamed [9 6]:

| :$group-name | :$group-id | :V1 | :V2 | :V3 | :V4 |
|--------------|-----------:|----:|----:|----:|-----|
| true         |          0 |   1 |   1 | 0.5 |   A |
| true         |          0 |   2 |   2 | 1.0 |   B |
| true         |          0 |   1 |   3 | 1.5 |   C |
| false        |          1 |   2 |   4 | 0.5 |   A |
| false        |          1 |   1 |   5 | 1.0 |   B |
| false        |          1 |   2 |   6 | 1.5 |   C |
| false        |          1 |   1 |   7 | 0.5 |   A |
| false        |          1 |   2 |   8 | 1.0 |   B |
| false        |          1 |   1 |   9 | 1.5 |   C |


Let’s assign different column names

(-> DS
    (api/group-by (comp #(< % 4) :V2))
    (api/ungroup {:add-group-as-column "Is V2 less than 4?"
                  :add-group-id-as-column "group id"}))

_unnamed [9 6]:

| Is V2 less than 4? | group id | :V1 | :V2 | :V3 | :V4 |
|--------------------|---------:|----:|----:|----:|-----|
| true               |        0 |   1 |   1 | 0.5 |   A |
| true               |        0 |   2 |   2 | 1.0 |   B |
| true               |        0 |   1 |   3 | 1.5 |   C |
| false              |        1 |   2 |   4 | 0.5 |   A |
| false              |        1 |   1 |   5 | 1.0 |   B |
| false              |        1 |   2 |   6 | 1.5 |   C |
| false              |        1 |   1 |   7 | 0.5 |   A |
| false              |        1 |   2 |   8 | 1.0 |   B |
| false              |        1 |   1 |   9 | 1.5 |   C |

If we group by map, we can automatically create new columns out of group names.

(-> DS
    (api/group-by (fn [row] {"V1 and V3 multiplied" (* (:V1 row)
                                                      (:V3 row))
                            "V4 as lowercase" (clojure.string/lower-case (:V4 row))}))
    (api/ungroup {:add-group-as-column true}))

_unnamed [9 6]:

| V4 as lowercase | V1 and V3 multiplied | :V1 | :V2 | :V3 | :V4 |
|-----------------|---------------------:|----:|----:|----:|-----|
| a               |                  0.5 |   1 |   1 | 0.5 |   A |
| a               |                  0.5 |   1 |   7 | 0.5 |   A |
| b               |                  2.0 |   2 |   2 | 1.0 |   B |
| b               |                  2.0 |   2 |   8 | 1.0 |   B |
| c               |                  1.5 |   1 |   3 | 1.5 |   C |
| c               |                  1.5 |   1 |   9 | 1.5 |   C |
| a               |                  1.0 |   2 |   4 | 0.5 |   A |
| b               |                  1.0 |   1 |   5 | 1.0 |   B |
| c               |                  3.0 |   2 |   6 | 1.5 |   C |

We can add group names without separation.

(-> DS
    (api/group-by (fn [row] {"V1 and V3 multiplied" (* (:V1 row)
                                                      (:V3 row))
                            "V4 as lowercase" (clojure.string/lower-case (:V4 row))}))
    (api/ungroup {:add-group-as-column "just map"
                  :separate? false}))

_unnamed [9 5]:

| just map                                            | :V1 | :V2 | :V3 | :V4 |
|-----------------------------------------------------|----:|----:|----:|-----|
| {"V1 and V3 multiplied" 0.5, "V4 as lowercase" "a"} |   1 |   1 | 0.5 |   A |
| {"V1 and V3 multiplied" 0.5, "V4 as lowercase" "a"} |   1 |   7 | 0.5 |   A |
| {"V1 and V3 multiplied" 2.0, "V4 as lowercase" "b"} |   2 |   2 | 1.0 |   B |
| {"V1 and V3 multiplied" 2.0, "V4 as lowercase" "b"} |   2 |   8 | 1.0 |   B |
| {"V1 and V3 multiplied" 1.5, "V4 as lowercase" "c"} |   1 |   3 | 1.5 |   C |
| {"V1 and V3 multiplied" 1.5, "V4 as lowercase" "c"} |   1 |   9 | 1.5 |   C |
| {"V1 and V3 multiplied" 1.0, "V4 as lowercase" "a"} |   2 |   4 | 0.5 |   A |
| {"V1 and V3 multiplied" 1.0, "V4 as lowercase" "b"} |   1 |   5 | 1.0 |   B |
| {"V1 and V3 multiplied" 3.0, "V4 as lowercase" "c"} |   2 |   6 | 1.5 |   C |

The same applies to group names as sequences.

(-> DS
    (api/group-by (juxt :V1 :V3))
    (api/ungroup {:add-group-as-column "abc"}))

_unnamed [9 6]:

| :abc-0 | :abc-1 | :V1 | :V2 | :V3 | :V4 |
|-------:|-------:|----:|----:|----:|-----|
|      1 |    0.5 |   1 |   1 | 0.5 |   A |
|      1 |    0.5 |   1 |   7 | 0.5 |   A |
|      2 |    1.0 |   2 |   2 | 1.0 |   B |
|      2 |    1.0 |   2 |   8 | 1.0 |   B |
|      1 |    1.5 |   1 |   3 | 1.5 |   C |
|      1 |    1.5 |   1 |   9 | 1.5 |   C |
|      2 |    0.5 |   2 |   4 | 0.5 |   A |
|      1 |    1.0 |   1 |   5 | 1.0 |   B |
|      2 |    1.5 |   2 |   6 | 1.5 |   C |

Let’s provide column names

(-> DS
    (api/group-by (juxt :V1 :V3))
    (api/ungroup {:add-group-as-column ["v1" "v3"]}))

_unnamed [9 6]:

| v1 | v3  | :V1 | :V2 | :V3 | :V4 |
|---:|----:|----:|----:|----:|-----|
|  1 | 0.5 |   1 |   1 | 0.5 |   A |
|  1 | 0.5 |   1 |   7 | 0.5 |   A |
|  2 | 1.0 |   2 |   2 | 1.0 |   B |
|  2 | 1.0 |   2 |   8 | 1.0 |   B |
|  1 | 1.5 |   1 |   3 | 1.5 |   C |
|  1 | 1.5 |   1 |   9 | 1.5 |   C |
|  2 | 0.5 |   2 |   4 | 0.5 |   A |
|  1 | 1.0 |   1 |   5 | 1.0 |   B |
|  2 | 1.5 |   2 |   6 | 1.5 |   C |


We can also suppress separation.

(-> DS
    (api/group-by (juxt :V1 :V3))
    (api/ungroup {:separate? false
                  :add-group-as-column true}))

_unnamed [9 5]:

| :$group-name | :V1 | :V2 | :V3 | :V4 |
|--------------|----:|----:|----:|-----|
| [1 0.5]      |   1 |   1 | 0.5 |   A |
| [1 0.5]      |   1 |   7 | 0.5 |   A |
| [2 1.0]      |   2 |   2 | 1.0 |   B |
| [2 1.0]      |   2 |   8 | 1.0 |   B |
| [1 1.5]      |   1 |   3 | 1.5 |   C |
| [1 1.5]      |   1 |   9 | 1.5 |   C |
| [2 0.5]      |   2 |   4 | 0.5 |   A |
| [1 1.0]      |   1 |   5 | 1.0 |   B |
| [2 1.5]      |   2 |   6 | 1.5 |   C |

Other functions

To check if a dataset is grouped or not, just use the grouped? function.

(api/grouped? DS)
nil
(api/grouped? (api/group-by DS :V1))
true

If you want to remove the grouping annotation (to make all the functions work as with a regular dataset), you can use the unmark-group or as-regular-dataset (alias) functions.

It can be important when you want to remove some groups (rows) from a grouped dataset, using drop-rows or similar functions.

(-> DS
    (api/group-by :V1)
    (api/as-regular-dataset)
    (api/grouped?))
nil

You can also operate on a grouped dataset as a regular one, in case you want to access its columns, using the without-grouping-> threading macro.

(-> DS
    (api/group-by [:V4 :V1])
    (api/without-grouping->
     (api/order-by (comp (juxt :V4 :V1) :name))))

_unnamed [6 3]:

| :group-id | :name            | :data                          |
|----------:|------------------|--------------------------------|
|         0 | {:V4 "A", :V1 1} | Group: {:V4 "A", :V1 1} [2 4]: |
|         3 | {:V4 "A", :V1 2} | Group: {:V4 "A", :V1 2} [1 4]: |
|         4 | {:V4 "B", :V1 1} | Group: {:V4 "B", :V1 1} [1 4]: |
|         1 | {:V4 "B", :V1 2} | Group: {:V4 "B", :V1 2} [2 4]: |
|         2 | {:V4 "C", :V1 1} | Group: {:V4 "C", :V1 1} [2 4]: |
|         5 | {:V4 "C", :V1 2} | Group: {:V4 "C", :V1 2} [1 4]: |

This is considered internal.

If you want to implement your own mapping function on a grouped dataset, you can call process-group-data and pass a function operating on datasets. The result should be a dataset to keep ungrouping working.

(-> DS
    (api/group-by :V1)
    (api/process-group-data #(str "Shape: " (vector (api/row-count %) (api/column-count %))))
    (api/as-regular-dataset))

_unnamed [2 3]:

| :group-id | :name | :data        |
|----------:|------:|--------------|
|         0 |     1 | Shape: [5 4] |
|         1 |     2 | Shape: [4 4] |

Columns

Column is a special tech.ml.dataset structure based on the tech.ml.datatype library. For our purposes we can treat columns as typed and named sequences bound to a particular dataset.

Type of the data is inferred from a sequence during column creation.
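This inference can be observed through column metadata; a minimal sketch (DS3 is a throwaway dataset, and the assumption is that integer and floating-point sequences map to :int64 and :float64 on a 64-bit JVM, as the selector examples below suggest):

```clojure
(require '[tablecloth.api :as api])

(def DS3 (api/dataset {:ints [1 2 3] :floats [0.5 1.0 1.5]}))

;; each column carries its inferred :datatype in its metadata
(map (comp :datatype meta) (api/columns DS3))
```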

Names

To select dataset columns or column names, a columns-selector is used. columns-selector can be one of the following:

  • :all keyword - selects all columns
  • column name - for single column
  • sequence of column names - for collection of columns
  • regex - to apply pattern on column names or datatype
  • filter predicate - to filter column names or datatype
  • type namespaced keyword for specific datatype or group of datatypes

Column name can be anything.

column-names function returns names according to columns-selector and an optional meta-field. meta-field is one of the following:

  • :name (default) - to operate on column names
  • :datatype - to operate on column types
  • :all - if you want to process all metadata

Datatype groups are:

  • :type/numerical - any numerical type
  • :type/float - floating point number (:float32 and :float64)
  • :type/integer - any integer
  • :type/datetime - any datetime type

If a qualified keyword starts with :!type, the complement set is used.


To select all column names you can use column-names function.

(api/column-names DS)
(:V1 :V2 :V3 :V4)

or

(api/column-names DS :all)
(:V1 :V2 :V3 :V4)

In case you want to select a column which has the name :all (or is a sequence or a map), put it into a vector. The code below returns an empty sequence since there is no such column in the dataset.

(api/column-names DS [:all])
()

Obviously, selecting a single name returns that name if it exists

(api/column-names DS :V1)
(api/column-names DS "no such column")
(:V1)
()

Select sequence of column names.

(api/column-names DS [:V1 "V2" :V3 :V4 :V5])
(:V1 :V3 :V4)

Select names based on a regex; columns ending with 1 or 4

(api/column-names DS #".*[14]")
(:V1 :V4)

Select names based on a regex operating on the type of the column (to check the column types, call (api/info DS :columns)). Here we want to get integer columns only.

(api/column-names DS #"^:int.*" :datatype)
(:V1 :V2)

or

(api/column-names DS :type/integer)
(:V1 :V2)

And finally, we can use a predicate to select names. Let’s select double precision columns.

(api/column-names DS #{:float64} :datatype)
(:V3)

or

(api/column-names DS :type/float64)
(:V3)

If you want to select all columns but the given ones, use the complement function. It works only on a predicate.

(api/column-names DS (complement #{:V1}))
(api/column-names DS (complement #{:float64}) :datatype)
(api/column-names DS :!type/float64)
(:V2 :V3 :V4)
(:V1 :V2 :V4)
(:V1 :V2 :V4)

You can select column names based on all column metadata at once by using the :all metadata selector. Below we want to select column names ending with 1 which have the long datatype.

(api/column-names DS (fn [meta]
                       (and (= :int64 (:datatype meta))
                            (clojure.string/ends-with? (:name meta) "1"))) :all)
(:V1)

Select

select-columns creates a dataset with the columns selected by columns-selector as described above. The function works on regular and grouped datasets.


Select only float64 columns

(api/select-columns DS #(= :float64 %) :datatype)

_unnamed [9 1]:

| :V3 |
|----:|
| 0.5 |
| 1.0 |
| 1.5 |
| 0.5 |
| 1.0 |
| 1.5 |
| 0.5 |
| 1.0 |
| 1.5 |

or

(api/select-columns DS :type/float64)

_unnamed [9 1]:

| :V3 |
|----:|
| 0.5 |
| 1.0 |
| 1.5 |
| 0.5 |
| 1.0 |
| 1.5 |
| 0.5 |
| 1.0 |
| 1.5 |

Select all but :V1 columns

(api/select-columns DS (complement #{:V1}))

_unnamed [9 3]:

| :V2 | :V3 | :V4 |
|----:|----:|-----|
|   1 | 0.5 |   A |
|   2 | 1.0 |   B |
|   3 | 1.5 |   C |
|   4 | 0.5 |   A |
|   5 | 1.0 |   B |
|   6 | 1.5 |   C |
|   7 | 0.5 |   A |
|   8 | 1.0 |   B |
|   9 | 1.5 |   C |

If we have a grouped dataset, column selection is applied to every group separately.

(-> DS
    (api/group-by :V1)
    (api/select-columns [:V2 :V3])
    (api/groups->map))

{1 Group: 1 [5 2]:

:V2:V3
10.5
31.5
51.0
70.5
91.5

, 2 Group: 2 [4 2]:

:V2:V3
21.0
40.5
61.5
81.0

}

Drop

drop-columns creates dataset with removed columns.


Drop float64 columns

(api/drop-columns DS #(= :float64 %) :datatype)

_unnamed [9 3]:

:V1:V2:V4
11A
22B
13C
24A
15B
26C
17A
28B
19C

or

(api/drop-columns DS :type/float64)

_unnamed [9 3]:

:V1:V2:V4
11A
22B
13C
24A
15B
26C
17A
28B
19C

Drop all columns but :V1 and :V2

(api/drop-columns DS (complement #{:V1 :V2}))

_unnamed [9 2]:

:V1:V2
11
22
13
24
15
26
17
28
19

If we have a grouped dataset, the drop is applied to every group separately.

(-> DS
    (api/group-by :V1)
    (api/drop-columns [:V2 :V3])
    (api/groups->map))

{1 Group: 1 [5 2]:

:V1:V4
1A
1C
1B
1A
1C

, 2 Group: 2 [4 2]:

:V1:V4
2B
2A
2C
2B

}

Rename

If you want to rename columns use rename-columns and pass a map where keys are old names and values are new ones.

You can also pass a mapping function with an optional columns-selector.

(api/rename-columns DS {:V1 "v1"
                        :V2 "v2"
                        :V3 [1 2 3]
                        :V4 (Object.)})

_unnamed [9 4]:

| v1 | v2 | [1 2 3] | java.lang.Object@6308e442 |
| -: | -: | ------: | ------------------------- |
| 1 | 1 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 1 | 5 | 1.0 | B |
| 2 | 6 | 1.5 | C |
| 1 | 7 | 0.5 | A |
| 2 | 8 | 1.0 | B |
| 1 | 9 | 1.5 | C |


Map all names with function

(api/rename-columns DS (comp str second name))

_unnamed [9 4]:

| 1 | 2 | 3 | 4 |
| -: | -: | --: | - |
| 1 | 1 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 1 | 5 | 1.0 | B |
| 2 | 6 | 1.5 | C |
| 1 | 7 | 0.5 | A |
| 2 | 8 | 1.0 | B |
| 1 | 9 | 1.5 | C |


Map selected names with function

(api/rename-columns DS [:V1 :V3] (comp str second name))

_unnamed [9 4]:

| 1 | :V2 | 3 | :V4 |
| -: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 1 | 5 | 1.0 | B |
| 2 | 6 | 1.5 | C |
| 1 | 7 | 0.5 | A |
| 2 | 8 | 1.0 | B |
| 1 | 9 | 1.5 | C |


Function works on grouped dataset

(-> DS
    (api/group-by :V1)
    (api/rename-columns {:V1 "v1"
                         :V2 "v2"
                         :V3 [1 2 3]
                         :V4 (Object.)})
    (api/groups->map))

{1 Group: 1 [5 4]:

| v1 | v2 | [1 2 3] | java.lang.Object@1d90cf18 |
| -: | -: | ------: | ------------------------- |
| 1 | 1 | 0.5 | A |
| 1 | 3 | 1.5 | C |
| 1 | 5 | 1.0 | B |
| 1 | 7 | 0.5 | A |
| 1 | 9 | 1.5 | C |

, 2 Group: 2 [4 4]:

| v1 | v2 | [1 2 3] | java.lang.Object@1d90cf18 |
| -: | -: | ------: | ------------------------- |
| 2 | 2 | 1.0 | B |
| 2 | 4 | 0.5 | A |
| 2 | 6 | 1.5 | C |
| 2 | 8 | 1.0 | B |

}

Add or update

To add (or replace an existing) column call the add-column function. It accepts:

  • ds - a dataset
  • column-name - if it’s existing column name, column will be replaced
  • column - can be a column (from another dataset), a sequence, a single value or a function. Columns longer than the dataset row count are always trimmed; shorter ones are cycled or extended with missing values (according to the size-strategy argument)
  • size-strategy (optional) - when the new column is shorter than the dataset row count, one of the following strategies is applied:
  • :cycle (default) - repeat data
  • :na - append missing values
  • :strict - throws an exception when sizes mismatch

Function works on grouped dataset.


Add single value as column

(api/add-column DS :V5 "X")

_unnamed [9 5]:

:V1:V2:V3:V4:V5
110.5AX
221.0BX
131.5CX
240.5AX
151.0BX
261.5CX
170.5AX
281.0BX
191.5CX

Replace one column (column is trimmed)

(api/add-column DS :V1 (repeatedly rand))

_unnamed [9 4]:

:V1:V2:V3:V4
0.2350253010.5A
0.3876410121.0B
0.1840216331.5C
0.9196067340.5A
0.5951914151.0B
0.1502741961.5C
0.3123756470.5A
0.7086663781.0B
0.3540590191.5C

Copy column

(api/add-column DS :V5 (DS :V1))

_unnamed [9 5]:

:V1:V2:V3:V4:V5
110.5A1
221.0B2
131.5C1
240.5A2
151.0B1
261.5C2
170.5A1
281.0B2
191.5C1

When a function is used, its argument is the whole dataset and the result should be a column, a sequence or a single value.

(api/add-column DS :row-count api/row-count)

_unnamed [9 5]:

:V1:V2:V3:V4:row-count
110.5A9
221.0B9
131.5C9
240.5A9
151.0B9
261.5C9
170.5A9
281.0B9
191.5C9

The example above, run on a grouped dataset, applies the function to each group separately.

(-> DS
    (api/group-by :V1)
    (api/add-column :row-count api/row-count)
    (api/ungroup))

_unnamed [9 5]:

:V1:V2:V3:V4:row-count
110.5A5
131.5C5
151.0B5
170.5A5
191.5C5
221.0B4
240.5A4
261.5C4
281.0B4

When the added column is longer than the dataset row count, it is trimmed. When it is shorter, it's cycled or missing values are appended.

(api/add-column DS :V5 [:r :b])

_unnamed [9 5]:

:V1:V2:V3:V4:V5
110.5A:r
221.0B:b
131.5C:r
240.5A:b
151.0B:r
261.5C:b
170.5A:r
281.0B:b
191.5C:r
(api/add-column DS :V5 [:r :b] :na)

_unnamed [9 5]:

:V1:V2:V3:V4:V5
110.5A:r
221.0B:b
131.5C
240.5A
151.0B
261.5C
170.5A
281.0B
191.5C

An exception is thrown when the :strict strategy is used and the column size does not equal the row count.

(try
  (api/add-column DS :V5 [:r :b] :strict)
  (catch Exception e (str "Exception caught: "(ex-message e))))
"Exception caught: Column size (2) should be exactly the same as dataset row count (9)"

The same applies to a grouped dataset.

(-> DS
    (api/group-by :V3)
    (api/add-column :V5 [:r :b] :na)
    (api/ungroup))

_unnamed [9 5]:

:V1:V2:V3:V4:V5
110.5A:r
240.5A:b
170.5A
221.0B:r
151.0B:b
281.0B
131.5C:r
261.5C:b
191.5C

Let’s use other column to fill groups

(-> DS
    (api/group-by :V3)
    (api/add-column :V5 (DS :V2))
    (api/ungroup))

_unnamed [9 5]:

:V1:V2:V3:V4:V5
110.5A1
240.5A2
170.5A3
221.0B1
151.0B2
281.0B3
131.5C1
261.5C2
191.5C3

In case you want to add or update several columns you can call add-columns and provide a map where keys are column names and vals are columns (or values, or functions of the dataset).

(api/add-columns DS {:V1 #(map inc (% :V1))
                     :V5 #(map (comp keyword str) (% :V4))
                     :V6 11})

_unnamed [9 6]:

:V1:V2:V3:V4:V5:V6
210.5A:A11
321.0B:B11
231.5C:C11
340.5A:A11
251.0B:B11
361.5C:C11
270.5A:A11
381.0B:B11
291.5C:C11

Update

If you want to modify specific column(s) you can call update-columns. Arguments:

  • dataset
  • one of:
    • columns-selector and function (or sequence of functions)
    • map where keys are column names and vals are functions

Functions accept a column and have to return a column or a sequence.


Reverse of columns

(api/update-columns DS :all reverse) 

_unnamed [9 4]:

:V1:V2:V3:V4
191.5C
281.0B
170.5A
261.5C
151.0B
240.5A
131.5C
221.0B
110.5A

Apply dec/inc on numerical columns

(api/update-columns DS :type/numerical [(partial map dec)
                                        (partial map inc)])

_unnamed [9 4]:

:V1:V2:V3:V4
02-0.5A
130.0B
040.5C
15-0.5A
060.0B
170.5C
08-0.5A
190.0B
0100.5C

You can also assign a function to a column by packing operations into the map.

(api/update-columns DS {:V1 reverse
                        :V2 (comp shuffle seq)})

_unnamed [9 4]:

:V1:V2:V3:V4
160.5A
231.0B
191.5C
210.5A
181.0B
221.5C
140.5A
271.0B
151.5C

Map

Another way of creating or updating a column is to map rows as with the regular map function. The arity of the mapping function should match the number of selected columns.

Arguments:

  • ds - dataset
  • column-name - target column name
  • columns-selector - columns selected
  • map-fn - mapping function

Let’s add numerical columns together

(api/map-columns DS
                 :sum-of-numbers
                 (api/column-names DS  #{:int64 :float64} :datatype)
                 (fn [& rows]
                   (reduce + rows)))

_unnamed [9 5]:

:V1:V2:V3:V4:sum-of-numbers
110.5A2.5
221.0B5.0
131.5C5.5
240.5A6.5
151.0B7.0
261.5C9.5
170.5A8.5
281.0B11.0
191.5C11.5

The same works on grouped dataset

(-> DS
    (api/group-by :V4)
    (api/map-columns :sum-of-numbers
                     (api/column-names DS  #{:int64 :float64} :datatype)
                     (fn [& rows]
                       (reduce + rows)))
    (api/ungroup))

_unnamed [9 5]:

:V1:V2:V3:V4:sum-of-numbers
110.5A2.5
240.5A6.5
170.5A8.5
221.0B5.0
151.0B7.0
281.0B11.0
131.5C5.5
261.5C9.5
191.5C11.5

Reorder

To reorder columns use columns selectors to choose which columns go first. The unselected columns are appended at the end.

(api/reorder-columns DS :V4 [:V3 :V2])

_unnamed [9 4]:

:V4:V3:V2:V1
A0.511
B1.022
C1.531
A0.542
B1.051
C1.562
A0.571
B1.082
C1.591

This function doesn't let you select by a meta field, so in such a case you have to call column-names yourself. Below we want to move the integer columns to the end.

(api/reorder-columns DS (api/column-names DS (complement #{:int64}) :datatype))

_unnamed [9 4]:

:V3:V4:V1:V2
0.5A11
1.0B22
1.5C13
0.5A24
1.0B15
1.5C26
0.5A17
1.0B28
1.5C19

Type conversion

Converting a column to a given datatype can be done with the convert-types function. Not all types can be converted automatically, and some conversions require slow parsing (every conversion from a string). In cases where automatic conversion is not possible, you can pass a conversion function.

Arguments:

  • ds - dataset
  • Two options:
    • coltype-map in case when you want to convert several columns, keys are column names, vals are new types
    • column-selector and new-types - column name and new datatype (or datatypes as sequence)

new-types can be:

  • a type like :int64 or :string or sequence of types
  • or a sequence of pairs of datatype and conversion function

After conversion, additional information about problematic values is available (see the unparsed columns in column info).

Another conversion is casting a column into a Java array (->array) of the column's type or of a type provided as an argument. A grouped dataset returns a sequence of arrays.


Basic conversion

(-> DS
    (api/convert-types :V1 :float64)
    (api/info :columns))

_unnamed :column info [4 6]:

:n-elems:name:categorical?:unparsed-indexes:unparsed-data:datatype
9:V1 {}[]:float64
9:V2 :int64
9:V3 :float64
9:V4true :string

Using a custom converter. Let's treat :V4 as hexadecimal values. Note that this way we can map a column to any value.

(-> DS
    (api/convert-types :V4 [[:int16 #(Integer/parseInt % 16)]]))

_unnamed [9 4]:

:V1:V2:V3:V4
110.510
221.011
131.512
240.510
151.011
261.512
170.510
281.011
191.512

You can process several columns at once

(-> DS
    (api/convert-types {:V1 :float64
                        :V2 :object
                        :V3 [:boolean #(< % 1.0)]
                        :V4 :object})
    (api/info :columns))

_unnamed :column info [4 6]:

:n-elems:name:categorical?:unparsed-indexes:unparsed-data:datatype
9:V1 {}[]:float64
9:V2 {}[]:int64
9:V3 {}[]:boolean
9:V4true :object

Convert one type into another

(-> DS
    (api/convert-types :type/numerical :int16)
    (api/info :columns))

_unnamed :column info [4 6]:

:n-elems:name:categorical?:unparsed-indexes:unparsed-data:datatype
9:V1 {}[]:int16
9:V2 {}[]:int16
9:V3 {}[]:int16
9:V4true :string
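The "datatypes as sequence" variant mentioned above can be sketched as follows (an assumption based on the argument description, with types matched positionally to the selected columns):

```clojure
;; convert :V1 and :V2 positionally: :V1 -> :float64, :V2 -> :string
(-> DS
    (api/convert-types [:V1 :V2] [:float64 :string])
    (api/info :columns))
```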

Function works on the grouped dataset

(-> DS
    (api/group-by :V1)
    (api/convert-types :V1 :float32)
    (api/ungroup)
    (api/info :columns))

_unnamed :column info [4 6]:

:n-elems:name:categorical?:unparsed-indexes:unparsed-data:datatype
9:V1 {}[]:float32
9:V2 :int64
9:V3 :float64
9:V4true :string

Double array conversion.

(api/->array DS :V1)
#object["[J" 0x3114de80 "[J@3114de80"]

Function also works on grouped dataset

(-> DS
    (api/group-by :V3)
    (api/->array :V2))
(#object["[J" 0x7da25146 "[J@7da25146"] #object["[J" 0x5aac5a0c "[J@5aac5a0c"] #object["[J" 0x1d5dae43 "[J@1d5dae43"])

You can also cast to another type (if casting is possible):

(api/->array DS :V4 :string)
(api/->array DS :V1 :float32)
#object["[Ljava.lang.String;" 0x124cd651 "[Ljava.lang.String;@124cd651"]
#object["[F" 0x948ffb1 "[F@948ffb1"]

Rows

Rows can be selected or dropped using various selectors:

  • row id(s) - row index as a number or a sequence of numbers (the first row has index 0, the second 1, and so on)
  • sequence of true/false values
  • filter by predicate (argument is row as a map)

When a predicate is used you may want to limit the columns passed to the function (the :select-keys option).

Additionally you may want to precalculate some values which will be visible to the predicate as additional columns. It's done internally by calling add-columns on the dataset; :pre holds the column definitions.
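The :select-keys option can be sketched like this (a minimal example; assuming the option takes a sequence of column names, the predicate then sees only those keys in each row map):

```clojure
;; the predicate sees only the :V1 key of each row
(api/select-rows DS
                 (fn [row] (= 1 (:V1 row)))
                 {:select-keys [:V1]})
```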

Select

Select fifth row

(api/select-rows DS 4)

_unnamed [1 4]:

:V1:V2:V3:V4
151.0B

Select 3 rows

(api/select-rows DS [1 4 5])

_unnamed [3 4]:

:V1:V2:V3:V4
221.0B
151.0B
261.5C

Select rows using sequence of true/false values

(api/select-rows DS [true nil nil true])

_unnamed [2 4]:

:V1:V2:V3:V4
110.5A
240.5A

Select rows using predicate

(api/select-rows DS (comp #(< % 1) :V3))

_unnamed [3 4]:

:V1:V2:V3:V4
110.5A
240.5A
170.5A

The same works on a grouped dataset; let's select the first row from every group.

(-> DS
    (api/group-by :V1)
    (api/select-rows 0)
    (api/ungroup))

_unnamed [2 4]:

:V1:V2:V3:V4
110.5A
221.0B

If you want to select :V2 values which are lower than or equal to the mean in a grouped dataset, you have to precalculate the mean using :pre.

(-> DS
    (api/group-by :V4)
    (api/select-rows (fn [row] (<= (:V2 row) (:mean row)))
                     {:pre {:mean #(tech.v3.datatype.functional/mean (% :V2))}})
    (api/ungroup))

_unnamed [6 4]:

:V1:V2:V3:V4
110.5A
240.5A
221.0B
151.0B
131.5C
261.5C

Drop

drop-rows removes rows, and accepts exactly the same parameters as select-rows


Drop values lower than or equal :V2 column mean in grouped dataset.

(-> DS
    (api/group-by :V4)
    (api/drop-rows (fn [row] (<= (:V2 row) (:mean row)))
                   {:pre {:mean #(tech.v3.datatype.functional/mean (% :V2))}})
    (api/ungroup))

_unnamed [3 4]:

:V1:V2:V3:V4
170.5A
281.0B
191.5C

Other

There are several functions to select the first, last or random rows, or to display the head or tail of the dataset. All functions work on grouped datasets.

All random functions accept :seed as an option if you want a reproducible result.


First row

(api/first DS)

_unnamed [1 4]:

:V1:V2:V3:V4
110.5A

Last row

(api/last DS)

_unnamed [1 4]:

:V1:V2:V3:V4
191.5C

Random row (single)

(api/rand-nth DS)

_unnamed [1 4]:

:V1:V2:V3:V4
191.5C

Random row (single) with seed

(api/rand-nth DS {:seed 42})

_unnamed [1 4]:

:V1:V2:V3:V4
261.5C

Random n (default: row count) rows with repetition.

(api/random DS)

_unnamed [9 4]:

:V1:V2:V3:V4
221.0B
221.0B
170.5A
281.0B
110.5A
261.5C
191.5C
281.0B
240.5A

Five random rows with repetition

(api/random DS 5)

_unnamed [5 4]:

:V1:V2:V3:V4
240.5A
151.0B
261.5C
131.5C
281.0B

Five random, non-repeating rows

(api/random DS 5 {:repeat? false})

_unnamed [5 4]:

:V1:V2:V3:V4
191.5C
151.0B
131.5C
170.5A
281.0B

Five random, with seed

(api/random DS 5 {:seed 42})

_unnamed [5 4]:

:V1:V2:V3:V4
261.5C
151.0B
131.5C
110.5A
191.5C

Shuffle dataset

(api/shuffle DS)

_unnamed [9 4]:

:V1:V2:V3:V4
131.5C
191.5C
110.5A
281.0B
261.5C
170.5A
221.0B
151.0B
240.5A

Shuffle with seed

(api/shuffle DS {:seed 42})

_unnamed [9 4]:

:V1:V2:V3:V4
151.0B
221.0B
261.5C
240.5A
281.0B
131.5C
170.5A
110.5A
191.5C

First n rows (default 5)

(api/head DS)

_unnamed [5 4]:

:V1:V2:V3:V4
110.5A
221.0B
131.5C
240.5A
151.0B

Last n rows (default 5)

(api/tail DS)

_unnamed [5 4]:

:V1:V2:V3:V4
151.0B
261.5C
170.5A
281.0B
191.5C

by-rank calculates rank on column(s). It's based on R's rank() with the addition of the :dense (default) tie strategy, which gives consecutive rank numbering.

The :desc? option (default: true) sorts input in descending order, so the top values get rank 0.

rank is zero-based and is defined in the tablecloth.api.utils namespace.


(api/by-rank DS :V3 zero?) ;; most V3 values

_unnamed [3 4]:

:V1:V2:V3:V4
131.5C
261.5C
191.5C
(api/by-rank DS :V3 zero? {:desc? false}) ;; least V3 values

_unnamed [3 4]:

:V1:V2:V3:V4
110.5A
240.5A
170.5A

Rank also works on multiple columns

(api/by-rank DS [:V1 :V3] zero? {:desc? false})

_unnamed [2 4]:

:V1:V2:V3:V4
110.5A
170.5A

Select 5 random rows from each group

(-> DS
    (api/group-by :V4)
    (api/random 5)
    (api/ungroup))

_unnamed [15 4]:

:V1:V2:V3:V4
240.5A
240.5A
170.5A
240.5A
170.5A
221.0B
151.0B
281.0B
221.0B
281.0B
191.5C
191.5C
131.5C
191.5C
261.5C

Aggregate

Aggregating produces a single row out of a dataset.

An aggregator is a function (or a sequence or map of functions) which accepts a dataset as an argument and returns a single value, a sequence of values or a map.

Where a map is given as input or returned as a result, its keys are treated as column names.

A grouped dataset is ungrouped after aggregation. This can be turned off by setting :ungroup? to false. In case you want to pass additional ungrouping parameters, add them to the options.

By default resulting column names are prefixed with the summary prefix (set it with the :default-column-name-prefix option).


Let's calculate the sum of a column.

(api/aggregate DS #(reduce + (% :V2)))

_unnamed [1 1]:

:summary
45

Let’s give resulting column a name.

(api/aggregate DS {:sum-of-V2 #(reduce + (% :V2))})

_unnamed [1 1]:

:sum-of-V2
45

Sequential result is spread into separate columns

(api/aggregate DS #(take 5 (% :V2)))

_unnamed [1 5]:

:summary-0:summary-1:summary-2:summary-3:summary-4
12345

You can combine all variants and rename default prefix

(api/aggregate DS [#(take 3 (% :V2))
                   (fn [ds] {:sum-v1 (reduce + (ds :V1))
                             :prod-v3 (reduce * (ds :V3))})]
               {:default-column-name-prefix "V2-value"})

_unnamed [1 5]:

:V2-value-0-0:V2-value-0-1:V2-value-0-2:V2-value-1-sum-v1:V2-value-1-prod-v3
123130.421875

Processing grouped dataset

(-> DS
    (api/group-by [:V4])
    (api/aggregate [#(take 3 (% :V2))
                    (fn [ds] {:sum-v1 (reduce + (ds :V1))
                              :prod-v3 (reduce * (ds :V3))})]
                   {:default-column-name-prefix "V2-value"}))

_unnamed [3 6]:

:V2-value-0-2:V4:V2-value-1-prod-v3:V2-value-1-sum-v1:V2-value-0-0:V2-value-0-1
7A0.125414
8B1.000525
9C3.375436

The result of aggregating is automatically ungrouped; you can skip this step by setting the :ungroup? option to false.

(-> DS
    (api/group-by [:V3])
    (api/aggregate [#(take 3 (% :V2))
                    (fn [ds] {:sum-v1 (reduce + (ds :V1))
                              :prod-v3 (reduce * (ds :V3))})]
                   {:default-column-name-prefix "V2-value"
                    :ungroup? false}))

_unnamed [3 3]:

:group-id:name:data
0{:V3 0.5}_unnamed [1 5]:
1{:V3 1.0}_unnamed [1 5]:
2{:V3 1.5}_unnamed [1 5]:

Column

You can also aggregate column-wise: aggregate-columns selects columns and applies an aggregating function (or a sequence of functions) to each column separately.

(api/aggregate-columns DS [:V1 :V2 :V3] #(reduce + %))

_unnamed [1 3]:

:V1:V2:V3
13459.0

(api/aggregate-columns DS [:V1 :V2 :V3] [#(reduce + %)
                                         #(reduce max %)
                                         #(reduce * %)])

_unnamed [1 3]:

:V1:V2:V3
1390.421875

(-> DS
    (api/group-by [:V4])
    (api/aggregate-columns [:V1 :V2 :V3] #(reduce + %)))

_unnamed [3 4]:

:V2:V4:V3:V1
12A1.54
15B3.05
18C4.54

Order

Ordering can be done by column(s) or by any function operating on a row. Possible orders are:

  • :asc for ascending order (default)
  • :desc for descending order
  • custom comparator

:select-keys limits the row map provided to ordering functions.
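As a sketch of that option (assuming it is passed in an options map after the orders, analogous to select-rows):

```clojure
;; the ordering function receives rows limited to :V1 and :V2
(api/order-by DS
              (fn [row] (+ (:V1 row) (:V2 row)))
              :desc
              {:select-keys [:V1 :V2]})
```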


Order by single column, ascending

(api/order-by DS :V1)

_unnamed [9 4]:

:V1:V2:V3:V4
110.5A
131.5C
151.0B
170.5A
191.5C
261.5C
240.5A
281.0B
221.0B

Descending order

(api/order-by DS :V1 :desc)

_unnamed [9 4]:

:V1:V2:V3:V4
221.0B
240.5A
261.5C
281.0B
151.0B
131.5C
170.5A
110.5A
191.5C

Order by two columns

(api/order-by DS [:V1 :V2])

_unnamed [9 4]:

:V1:V2:V3:V4
110.5A
131.5C
151.0B
170.5A
191.5C
221.0B
240.5A
261.5C
281.0B

Use different orders for columns

(api/order-by DS [:V1 :V2] [:asc :desc])

_unnamed [9 4]:

:V1:V2:V3:V4
191.5C
170.5A
151.0B
131.5C
110.5A
281.0B
261.5C
240.5A
221.0B
(api/order-by DS [:V1 :V2] [:desc :desc])

_unnamed [9 4]:

:V1:V2:V3:V4
281.0B
261.5C
240.5A
221.0B
191.5C
170.5A
151.0B
131.5C
110.5A
(api/order-by DS [:V1 :V3] [:desc :asc])

_unnamed [9 4]:

:V1:V2:V3:V4
240.5A
221.0B
281.0B
261.5C
110.5A
170.5A
151.0B
131.5C
191.5C

A custom function can be used to provide the ordering key. Here we order by :V4 descending, then by the product of the other columns ascending.

(api/order-by DS [:V4 (fn [row] (* (:V1 row)
                                  (:V2 row)
                                  (:V3 row)))] [:desc :asc])

_unnamed [9 4]:

:V1:V2:V3:V4
131.5C
191.5C
261.5C
221.0B
151.0B
281.0B
110.5A
170.5A
240.5A

A custom comparator can also be used when objects are not comparable by default. Let's define an artificial one: if the Euclidean distance between rows is lower than 2, compare along z, else along x and y. We use the first three columns for that.

(defn dist
  [v1 v2]
  (->> v2
       (map - v1)
       (map #(* % %))
       (reduce +)
       (Math/sqrt)))
#'user/dist
(api/order-by DS [:V1 :V2 :V3] (fn [[x1 y1 z1 :as v1] [x2 y2 z2 :as v2]]
                                 (let [d (dist v1 v2)]
                                   (if (< d 2.0)
                                     (compare z1 z2)
                                     (compare [x1 y1] [x2 y2])))))

_unnamed [9 4]:

:V1:V2:V3:V4
110.5A
151.0B
170.5A
191.5C
221.0B
240.5A
131.5C
261.5C
281.0B

Unique

Remove rows which contain the same data. By default unique-by removes duplicates from the whole dataset. You can also pass a list of columns or functions (similar to group-by) to limit duplicate detection to them. The default strategy is to keep the first row; more strategies are described below.

unique-by works on grouped datasets.


Remove duplicates from whole dataset

(api/unique-by DS)

_unnamed [9 4]:

:V1:V2:V3:V4
110.5A
221.0B
131.5C
240.5A
151.0B
261.5C
170.5A
281.0B
191.5C

Remove duplicates based on a single column.

(api/unique-by DS :V1)

_unnamed [2 4]:

:V1:V2:V3:V4
110.5A
221.0B

Pair of columns

(api/unique-by DS [:V1 :V3])

_unnamed [6 4]:

:V1:V2:V3:V4
110.5A
221.0B
131.5C
240.5A
151.0B
261.5C

A function can also be used; here we split the dataset by :V2 modulo 3.

(api/unique-by DS (fn [m] (mod (:V2 m) 3)))

_unnamed [3 4]:

:V1:V2:V3:V4
110.5A
221.0B
131.5C

The same can be achieved with group-by.

(-> DS
    (api/group-by (fn [m] (mod (:V2 m) 3)))
    (api/first)
    (api/ungroup))

_unnamed [3 4]:

:V1:V2:V3:V4
110.5A
221.0B
131.5C

Grouped dataset

(-> DS
    (api/group-by :V4)
    (api/unique-by :V1)
    (api/ungroup))

_unnamed [6 4]:

:V1:V2:V3:V4
110.5A
240.5A
221.0B
151.0B
131.5C
261.5C

Strategies

There are 4 strategies defined:

  • :first - select first row (default)
  • :last - select last row
  • :random - select random row
  • any function - applied to the columns which are subject to uniqueness

Last

(api/unique-by DS :V1 {:strategy :last})

_unnamed [2 4]:

:V1:V2:V3:V4
281.0B
191.5C

Random

(api/unique-by DS :V1 {:strategy :random})

_unnamed [2 4]:

:V1:V2:V3:V4
221.0B
191.5C

Pack columns into vector

(api/unique-by DS :V4 {:strategy vec})

_unnamed [3 3]:

:V2:V3:V1
[1 4 7][0.5 0.5 0.5][1 2 1]
[2 5 8][1.0 1.0 1.0][2 1 2]
[3 6 9][1.5 1.5 1.5][1 2 1]

Sum columns

(api/unique-by DS :V4 {:strategy (partial reduce +)})

_unnamed [3 3]:

:V2:V3:V1
121.54
153.05
184.54

Group by function and apply functions

(api/unique-by DS (fn [m] (mod (:V2 m) 3)) {:strategy vec})

_unnamed [3 4]:

:V2:V4:V3:V1
[1 4 7][“A” “A” “A”][0.5 0.5 0.5][1 2 1]
[2 5 8][“B” “B” “B”][1.0 1.0 1.0][2 1 2]
[3 6 9][“C” “C” “C”][1.5 1.5 1.5][1 2 1]

Grouped dataset

(-> DS
    (api/group-by :V1)
    (api/unique-by (fn [m] (mod (:V2 m) 3)) {:strategy vec})
    (api/ungroup {:add-group-as-column :from-V1}))

_unnamed [6 5]:

:from-V1:V2:V4:V3:V1
1[1 7][“A” “A”][0.5 0.5][1 1]
1[3 9][“C” “C”][1.5 1.5][1 1]
1[5][“B”][1.0][1]
2[2 8][“B” “B”][1.0 1.0][2 2]
2[4][“A”][0.5][2]
2[6][“C”][1.5][2]

Missing

When dataset contains missing values you can select or drop rows with missing values or replace them using some strategy.

A columns-selector can be used to limit the considered columns.

Let’s define dataset which contains missing values

(def DSm (api/dataset {:V1 (take 9 (cycle [1 2 nil]))
                       :V2 (range 1 10)
                       :V3 (take 9 (cycle [0.5 1.0 nil 1.5]))
                       :V4 (take 9 (cycle ["A" "B" "C"]))}))
DSm

_unnamed [9 4]:

:V1:V2:V3:V4
110.5A
221.0B
3 C
141.5A
250.5B
61.0C
17 A
281.5B
90.5C

Select

Select rows with missing values

(api/select-missing DSm)

_unnamed [4 4]:

:V1:V2:V3:V4
3 C
61.0C
17 A
90.5C

Select rows with missing values in :V1

(api/select-missing DSm :V1)

_unnamed [3 4]:

:V1:V2:V3:V4
3 C
61.0C
90.5C

The same with grouped dataset

(-> DSm
    (api/group-by :V4)
    (api/select-missing :V3)
    (api/ungroup))

_unnamed [2 4]:

:V1:V2:V3:V4
17 A
3 C

Drop

Drop rows with missing values

(api/drop-missing DSm)

_unnamed [5 4]:

:V1:V2:V3:V4
110.5A
221.0B
141.5A
250.5B
281.5B

Drop rows with missing values in :V1

(api/drop-missing DSm :V1)

_unnamed [6 4]:

:V1:V2:V3:V4
110.5A
221.0B
141.5A
250.5B
17 A
281.5B

The same with grouped dataset

(-> DSm
    (api/group-by :V4)
    (api/drop-missing :V1)
    (api/ungroup))

_unnamed [6 4]:

:V1:V2:V3:V4
110.5A
141.5A
17 A
221.0B
250.5B
281.5B

Replace

Missing values can be replaced using several strategies. replace-missing accepts:

  • dataset
  • column selector, default: :all
  • strategy, default: :nearest
  • value (optional)
    • single value
    • sequence of values (cycled)
    • a function, applied to the column(s) with missing values stripped

Strategies are:

  • :value - replace with given value
  • :up - copy values up and then down for missing values at the end
  • :down - copy values down and then up for missing values at the beginning
  • :mid or :nearest - copy values around known values
  • :midpoint - use average value from previous and next non-missing
  • :lerp - tries to linearly interpolate values; works for numbers and datetimes, otherwise applies :nearest. For numbers it always results in a float datatype.

Let’s define special dataset here:

(def DSm2 (api/dataset {:a [nil nil nil 1.0 2  nil nil nil nil  nil 4   nil  11 nil nil]
                        :b [2   2   2 nil nil nil nil nil nil 13   nil   3  4  5 5]}))
DSm2

_unnamed [15 2]:

| :a | :b |
| ---: | -: |
| | 2 |
| | 2 |
| | 2 |
| 1.0 | |
| 2.0 | |
| | |
| | |
| | |
| | |
| | 13 |
| 4.0 | |
| | 3 |
| 11.0 | 4 |
| | 5 |
| | 5 |


Replace missing with default strategy for all columns

(api/replace-missing DSm2)

_unnamed [15 2]:

| :a | :b |
| ---: | -: |
| 1.0 | 2 |
| 1.0 | 2 |
| 1.0 | 2 |
| 1.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 13 |
| 2.0 | 13 |
| 4.0 | 13 |
| 4.0 | 13 |
| 4.0 | 13 |
| 4.0 | 3 |
| 11.0 | 4 |
| 11.0 | 5 |
| 11.0 | 5 |


Replace missing with single value in whole dataset

(api/replace-missing DSm2 :all :value 999)

_unnamed [15 2]:

:a:b
999.02
999.02
999.02
1.0999
2.0999
999.0999
999.0999
999.0999
999.0999
999.013
4.0999
999.03
11.04
999.05
999.05

Replace missing with single value in :a column

(api/replace-missing DSm2 :a :value 999)

_unnamed [15 2]:

| :a | :b |
| ----: | -: |
| 999.0 | 2 |
| 999.0 | 2 |
| 999.0 | 2 |
| 1.0 | |
| 2.0 | |
| 999.0 | |
| 999.0 | |
| 999.0 | |
| 999.0 | |
| 999.0 | 13 |
| 4.0 | |
| 999.0 | 3 |
| 11.0 | 4 |
| 999.0 | 5 |
| 999.0 | 5 |


Replace missing with sequence in :a column

(api/replace-missing DSm2 :a :value [-999 -998 -997])

_unnamed [15 2]:

| :a | :b |
| ------: | -: |
| -999.0 | 2 |
| -998.0 | 2 |
| -997.0 | 2 |
| 1.0 | |
| 2.0 | |
| -999.0 | |
| -998.0 | |
| -997.0 | |
| -999.0 | |
| -998.0 | 13 |
| 4.0 | |
| -997.0 | 3 |
| 11.0 | 4 |
| -999.0 | 5 |
| -998.0 | 5 |


Replace missing with a function (mean)

(api/replace-missing DSm2 :a :value tech.v3.datatype.functional/mean)

_unnamed [15 2]:

| :a | :b |
| ---: | -: |
| 4.5 | 2 |
| 4.5 | 2 |
| 4.5 | 2 |
| 1.0 | |
| 2.0 | |
| 4.5 | |
| 4.5 | |
| 4.5 | |
| 4.5 | |
| 4.5 | 13 |
| 4.0 | |
| 4.5 | 3 |
| 11.0 | 4 |
| 4.5 | 5 |
| 4.5 | 5 |


The :down strategy fills gaps with values from above. You can see that if missing values are at the beginning, they are filled with the first non-missing value.

(api/replace-missing DSm2 [:a :b] :down)

_unnamed [15 2]:

| :a | :b |
| ---: | -: |
| 1.0 | 2 |
| 1.0 | 2 |
| 1.0 | 2 |
| 1.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 13 |
| 4.0 | 13 |
| 4.0 | 3 |
| 11.0 | 4 |
| 11.0 | 5 |
| 11.0 | 5 |


To change this behavior you can provide a value.

(api/replace-missing DSm2 [:a :b] :down 999)

_unnamed [15 2]:

| :a | :b |
| ----: | -: |
| 999.0 | 2 |
| 999.0 | 2 |
| 999.0 | 2 |
| 1.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 13 |
| 4.0 | 13 |
| 4.0 | 3 |
| 11.0 | 4 |
| 11.0 | 5 |
| 11.0 | 5 |


The same applies to the :up strategy, which works in the opposite direction.

(api/replace-missing DSm2 [:a :b] :up)

_unnamed [15 2]:

| :a | :b |
| ---: | -: |
| 1.0 | 2 |
| 1.0 | 2 |
| 1.0 | 2 |
| 1.0 | 13 |
| 2.0 | 13 |
| 4.0 | 13 |
| 4.0 | 13 |
| 4.0 | 13 |
| 4.0 | 13 |
| 4.0 | 13 |
| 4.0 | 3 |
| 11.0 | 3 |
| 11.0 | 4 |
| 11.0 | 5 |
| 11.0 | 5 |


The :midpoint strategy fills gaps with the average of the surrounding non-missing values.

(api/replace-missing DSm2 [:a :b] :midpoint)

_unnamed [15 2]:

:a:b
1.02.0
1.02.0
1.02.0
1.07.5
2.07.5
3.07.5
3.07.5
3.07.5
3.07.5
3.013.0
4.08.0
7.53.0
11.04.0
11.05.0
11.05.0

We can also pass a function; it is applied to fill the values which remain missing after :up or :down.

(api/replace-missing DSm2 [:a :b] :down tech.v3.datatype.functional/mean)

_unnamed [15 2]:

| :a | :b |
| ---: | -: |
| 4.5 | 2 |
| 4.5 | 2 |
| 4.5 | 2 |
| 1.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 2 |
| 2.0 | 13 |
| 4.0 | 13 |
| 4.0 | 3 |
| 11.0 | 4 |
| 11.0 | 5 |
| 11.0 | 5 |


Lerp tries to apply linear interpolation of the values

(api/replace-missing DSm2 [:a :b] :lerp)

_unnamed [15 2]:

:a:b
1.000000002.00000000
1.000000002.00000000
1.000000002.00000000
1.000000003.57142857
2.000000005.14285714
2.333333336.71428571
2.666666678.28571429
3.000000009.85714286
3.3333333311.42857143
3.6666666713.00000000
4.000000008.00000000
7.500000003.00000000
11.000000004.00000000
11.000000005.00000000
11.000000005.00000000

Lerp works also on dates

(-> (api/dataset {:dt [(java.time.LocalDateTime/of 2020 1 1 11 22 33)
                       nil nil nil nil nil nil nil
                       (java.time.LocalDateTime/of 2020 10 1 1 1 1)]})
    (api/replace-missing :lerp))

_unnamed [9 1]:

:dt
2020-01-01T11:22:33
2020-02-04T16:04:51.500
2020-03-09T20:47:10
2020-04-13T01:29:28.500
2020-05-17T06:11:47
2020-06-20T10:54:05.500
2020-07-24T15:36:24
2020-08-27T20:18:42.500
2020-10-01T01:01:01

Inject

When your column does not contain a continuous range of data you can fill in the lacking values with fill-range-replace. Arguments:

  • dataset
  • column name
  • expected step (max-span, milliseconds in case of datetime column)
  • (optional) missing-strategy - how to replace missing, default :down (set to nil if none)
  • (optional) missing-value - optional value for replace missing

(-> (api/dataset {:a [1 2 9]
                  :b [:a :b :c]})
    (api/fill-range-replace :a 1))

_unnamed [9 2]:

| :a | :b |
| --: | -- |
| 1.0 | :a |
| 2.0 | :b |
| 3.0 | :b |
| 4.0 | :b |
| 5.0 | :b |
| 6.0 | :b |
| 7.0 | :b |
| 8.0 | :b |
| 9.0 | :c |
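The optional missing-strategy and missing-value arguments can be sketched like this (an assumption based on the argument list above, with the arguments passed positionally; :filled is a hypothetical marker value):

```clojure
;; replace injected missing values with an explicit value instead of :down
(-> (api/dataset {:a [1 2 9]
                  :b [:a :b :c]})
    (api/fill-range-replace :a 1 :value :filled))
```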

Join/Separate Columns

Joining or separating columns are operations which can help to tidy a messy dataset.

  • join-columns joins content of the columns (as string concatenation or other structure) and stores it in new column
  • separate-column splits content of the columns into set of new columns

Join

join-columns accepts:

  • dataset
  • column selector (as in select-columns)
  • options
    • :separator (default "-")
    • :drop-columns? - whether to drop source columns or not (default true)
    • :result-type
      • :map - packs data into map
      • :seq - packs data into sequence
      • :string - join strings with separator (default)
      • or custom function which gets row as a vector
    • :missing-subst - substitution for missing value

Default usage. Create :joined column out of other columns.

(api/join-columns DSm :joined [:V1 :V2 :V4])

_unnamed [9 2]:

| :V3 | :joined |
| --: | ------- |
| 0.5 | 1-1-A |
| 1.0 | 2-2-B |
|     | 3-C |
| 1.5 | 1-4-A |
| 0.5 | 2-5-B |
| 1.0 | 6-C |
|     | 1-7-A |
| 1.5 | 2-8-B |
| 0.5 | 9-C |

Without dropping source columns.

(api/join-columns DSm :joined [:V1 :V2 :V4] {:drop-columns? false})

_unnamed [9 5]:

| :V1 | :V2 | :V3 | :V4 | :joined |
| --: | --: | --: | --- | ------- |
| 1 | 1 | 0.5 | A | 1-1-A |
| 2 | 2 | 1.0 | B | 2-2-B |
|   | 3 |     | C | 3-C |
| 1 | 4 | 1.5 | A | 1-4-A |
| 2 | 5 | 0.5 | B | 2-5-B |
|   | 6 | 1.0 | C | 6-C |
| 1 | 7 |     | A | 1-7-A |
| 2 | 8 | 1.5 | B | 2-8-B |
|   | 9 | 0.5 | C | 9-C |

Let’s replace missing value with “NA” string.

(api/join-columns DSm :joined [:V1 :V2 :V4] {:missing-subst "NA"})

_unnamed [9 2]:

| :V3 | :joined |
| --: | ------- |
| 0.5 | 1-1-A |
| 1.0 | 2-2-B |
|     | NA-3-C |
| 1.5 | 1-4-A |
| 0.5 | 2-5-B |
| 1.0 | NA-6-C |
|     | 1-7-A |
| 1.5 | 2-8-B |
| 0.5 | NA-9-C |

We can use custom separator.

(api/join-columns DSm :joined [:V1 :V2 :V4] {:separator "/"
                                             :missing-subst "."})

_unnamed [9 2]:

| :V3 | :joined |
| --: | ------- |
| 0.5 | 1/1/A |
| 1.0 | 2/2/B |
|     | ./3/C |
| 1.5 | 1/4/A |
| 0.5 | 2/5/B |
| 1.0 | ./6/C |
|     | 1/7/A |
| 1.5 | 2/8/B |
| 0.5 | ./9/C |

Or even sequence of separators.

(api/join-columns DSm :joined [:V1 :V2 :V4] {:separator ["-" "/"]
                                             :missing-subst "."})

_unnamed [9 2]:

| :V3 | :joined |
| --: | ------- |
| 0.5 | 1-1/A |
| 1.0 | 2-2/B |
|     | .-3/C |
| 1.5 | 1-4/A |
| 0.5 | 2-5/B |
| 1.0 | .-6/C |
|     | 1-7/A |
| 1.5 | 2-8/B |
| 0.5 | .-9/C |

The other types of results, map:

(api/join-columns DSm :joined [:V1 :V2 :V4] {:result-type :map})

_unnamed [9 2]:

| :V3 | :joined |
| --: | ------- |
| 0.5 | {:V1 1, :V2 1, :V4 "A"} |
| 1.0 | {:V1 2, :V2 2, :V4 "B"} |
|     | {:V1 nil, :V2 3, :V4 "C"} |
| 1.5 | {:V1 1, :V2 4, :V4 "A"} |
| 0.5 | {:V1 2, :V2 5, :V4 "B"} |
| 1.0 | {:V1 nil, :V2 6, :V4 "C"} |
|     | {:V1 1, :V2 7, :V4 "A"} |
| 1.5 | {:V1 2, :V2 8, :V4 "B"} |
| 0.5 | {:V1 nil, :V2 9, :V4 "C"} |

Sequence

(api/join-columns DSm :joined [:V1 :V2 :V4] {:result-type :seq})

_unnamed [9 2]:

| :V3 | :joined |
| --: | ------- |
| 0.5 | (1 1 "A") |
| 1.0 | (2 2 "B") |
|     | (nil 3 "C") |
| 1.5 | (1 4 "A") |
| 0.5 | (2 5 "B") |
| 1.0 | (nil 6 "C") |
|     | (1 7 "A") |
| 1.5 | (2 8 "B") |
| 0.5 | (nil 9 "C") |

Custom function: calculate a hash

(api/join-columns DSm :joined [:V1 :V2 :V4] {:result-type hash})

_unnamed [9 2]:

| :V3 | :joined |
| --: | ----------: |
| 0.5 | 535226087 |
| 1.0 | 1128801549 |
|     | -1842240303 |
| 1.5 | 2022347171 |
| 0.5 | 1884312041 |
| 1.0 | -1555412370 |
|     | 1640237355 |
| 1.5 | -967279152 |
| 0.5 | 1128367958 |

Grouped dataset

(-> DSm
    (api/group-by :V4)
    (api/join-columns :joined [:V1 :V2 :V4])
    (api/ungroup))

_unnamed [9 2]:

| :V3 | :joined |
| --: | ------- |
| 0.5 | 1-1-A |
| 1.5 | 1-4-A |
|     | 1-7-A |
| 1.0 | 2-2-B |
| 0.5 | 2-5-B |
| 1.5 | 2-8-B |
|     | 3-C |
| 1.0 | 6-C |
| 0.5 | 9-C |

Tidyr examples

source

(def df (api/dataset {:x ["a" "a" nil nil]
                      :y ["b" nil "b" nil]}))
#'user/df
df

_unnamed [4 2]:

| :x | :y |
| -- | -- |
| a  | b  |
| a  |    |
|    | b  |
|    |    |


(api/join-columns df "z" [:x :y] {:drop-columns? false
                                  :missing-subst "NA"
                                  :separator "_"})

_unnamed [4 3]:

| :x | :y | z |
| -- | -- | ----- |
| a  | b  | a_b |
| a  |    | a_NA |
|    | b  | NA_b |
|    |    | NA_NA |


(api/join-columns df "z" [:x :y] {:drop-columns? false
                                  :separator "_"})

_unnamed [4 3]:

| :x | :y | z |
| -- | -- | --- |
| a  | b  | a_b |
| a  |    | a |
|    | b  | b |
|    |    |   |

Separate

A column can also be separated into several other columns, using a string, a regex or a custom function as the separator. Arguments:

  • dataset
  • source column
  • target columns - can be nil or :infer if separator returns map
  • separator as:
    • string - it’s converted to regular expression and passed to clojure.string/split function
    • regex
    • or custom function (default: identity)
  • options
    • :drop-columns? - whether to drop the source column(s) or not (default: true, or :all in case of empty target-columns). When set to :all only the separation result is kept.
    • :missing-subst - values which should be treated as missing, can be set, sequence, value or function (default: "")

A custom separator function should return a sequence of values (or a map) for a given value.
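As noted in the argument list, target columns can also be :infer when the separator returns a map; a minimal sketch of that variant (names here are illustrative):

```clojure
;; sketch: when the separator fn returns a map, passing :infer as
;; target columns derives the new column names from the map keys
(api/separate-column DS :V3 :infer (fn [^double v]
                                     {:int-part  (int (quot v 1.0))
                                      :frac-part (mod v 1.0)}))
```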


Separate a float into integer and fractional parts

(api/separate-column DS :V3 [:int-part :frac-part] (fn [^double v]
                                                     [(int (quot v 1.0))
                                                      (mod v 1.0)]))

_unnamed [9 5]:

| :V1 | :V2 | :int-part | :frac-part | :V4 |
| --: | --: | --------: | ---------: | --- |
| 1 | 1 | 0 | 0.5 | A |
| 2 | 2 | 1 | 0.0 | B |
| 1 | 3 | 1 | 0.5 | C |
| 2 | 4 | 0 | 0.5 | A |
| 1 | 5 | 1 | 0.0 | B |
| 2 | 6 | 1 | 0.5 | C |
| 1 | 7 | 0 | 0.5 | A |
| 2 | 8 | 1 | 0.0 | B |
| 1 | 9 | 1 | 0.5 | C |

Source column can be kept

(api/separate-column DS :V3 [:int-part :frac-part] (fn [^double v]
                                                     [(int (quot v 1.0))
                                                      (mod v 1.0)]) {:drop-column? false})

_unnamed [9 6]:

| :V1 | :V2 | :V3 | :int-part | :frac-part | :V4 |
| --: | --: | --: | --------: | ---------: | --- |
| 1 | 1 | 0.5 | 0 | 0.5 | A |
| 2 | 2 | 1.0 | 1 | 0.0 | B |
| 1 | 3 | 1.5 | 1 | 0.5 | C |
| 2 | 4 | 0.5 | 0 | 0.5 | A |
| 1 | 5 | 1.0 | 1 | 0.0 | B |
| 2 | 6 | 1.5 | 1 | 0.5 | C |
| 1 | 7 | 0.5 | 0 | 0.5 | A |
| 2 | 8 | 1.0 | 1 | 0.0 | B |
| 1 | 9 | 1.5 | 1 | 0.5 | C |

We can treat 0 or 0.0 as a missing value

(api/separate-column DS :V3 [:int-part :frac-part] (fn [^double v]
                                                     [(int (quot v 1.0))
                                                      (mod v 1.0)]) {:missing-subst [0 0.0]})

_unnamed [9 5]:

| :V1 | :V2 | :int-part | :frac-part | :V4 |
| --: | --: | --------: | ---------: | --- |
| 1 | 1 |   | 0.5 | A |
| 2 | 2 | 1 |     | B |
| 1 | 3 | 1 | 0.5 | C |
| 2 | 4 |   | 0.5 | A |
| 1 | 5 | 1 |     | B |
| 2 | 6 | 1 | 0.5 | C |
| 1 | 7 |   | 0.5 | A |
| 2 | 8 | 1 |     | B |
| 1 | 9 | 1 | 0.5 | C |

Works on grouped dataset

(-> DS
    (api/group-by :V4)
    (api/separate-column :V3 [:int-part :fract-part] (fn [^double v]
                                                       [(int (quot v 1.0))
                                                        (mod v 1.0)]))
    (api/ungroup))

_unnamed [9 5]:

| :V1 | :V2 | :int-part | :fract-part | :V4 |
| --: | --: | --------: | ----------: | --- |
| 1 | 1 | 0 | 0.5 | A |
| 2 | 4 | 0 | 0.5 | A |
| 1 | 7 | 0 | 0.5 | A |
| 2 | 2 | 1 | 0.0 | B |
| 1 | 5 | 1 | 0.0 | B |
| 2 | 8 | 1 | 0.0 | B |
| 1 | 3 | 1 | 0.5 | C |
| 2 | 6 | 1 | 0.5 | C |
| 1 | 9 | 1 | 0.5 | C |

Separate using a separator returning a map; in this case we drop all the other columns.

(api/separate-column DS :V3 (fn [^double v]
                              {:int-part (int (quot v 1.0))
                               :fract-part (mod v 1.0)}))

_unnamed [9 2]:

| :int-part | :fract-part |
| --------: | ----------: |
| 0 | 0.5 |
| 1 | 0.0 |
| 1 | 0.5 |
| 0 | 0.5 |
| 1 | 0.0 |
| 1 | 0.5 |
| 0 | 0.5 |
| 1 | 0.0 |
| 1 | 0.5 |

Keeping all columns

(api/separate-column DS :V3 nil (fn [^double v]
                                  {:int-part (int (quot v 1.0))
                                   :fract-part (mod v 1.0)}) {:drop-column? false})

_unnamed [9 6]:

| :V1 | :V2 | :V3 | :int-part | :fract-part | :V4 |
| --: | --: | --: | --------: | ----------: | --- |
| 1 | 1 | 0.5 | 0 | 0.5 | A |
| 2 | 2 | 1.0 | 1 | 0.0 | B |
| 1 | 3 | 1.5 | 1 | 0.5 | C |
| 2 | 4 | 0.5 | 0 | 0.5 | A |
| 1 | 5 | 1.0 | 1 | 0.0 | B |
| 2 | 6 | 1.5 | 1 | 0.5 | C |
| 1 | 7 | 0.5 | 0 | 0.5 | A |
| 2 | 8 | 1.0 | 1 | 0.0 | B |
| 1 | 9 | 1.5 | 1 | 0.5 | C |

Join and separate together.

(-> DSm
    (api/join-columns :joined [:V1 :V2 :V4] {:result-type :map})
    (api/separate-column :joined [:v1 :v2 :v4] (juxt :V1 :V2 :V4)))

_unnamed [9 4]:

| :V3 | :v1 | :v2 | :v4 |
| --: | --: | --: | --- |
| 0.5 | 1 | 1 | A |
| 1.0 | 2 | 2 | B |
|     |   | 3 | C |
| 1.5 | 1 | 4 | A |
| 0.5 | 2 | 5 | B |
| 1.0 |   | 6 | C |
|     | 1 | 7 | A |
| 1.5 | 2 | 8 | B |
| 0.5 |   | 9 | C |
(-> DSm
    (api/join-columns :joined [:V1 :V2 :V4] {:result-type :seq})
    (api/separate-column :joined [:v1 :v2 :v4] identity))

_unnamed [9 4]:

| :V3 | :v1 | :v2 | :v4 |
| --: | --: | --: | --- |
| 0.5 | 1 | 1 | A |
| 1.0 | 2 | 2 | B |
|     |   | 3 | C |
| 1.5 | 1 | 4 | A |
| 0.5 | 2 | 5 | B |
| 1.0 |   | 6 | C |
|     | 1 | 7 | A |
| 1.5 | 2 | 8 | B |
| 0.5 |   | 9 | C |
Tidyr examples

separate source extract source

(def df-separate (api/dataset {:x [nil "a.b" "a.d" "b.c"]}))
(def df-separate2 (api/dataset {:x ["a" "a b" nil "a b c"]}))
(def df-separate3 (api/dataset {:x ["a?b" nil "a.b" "b:c"]}))
(def df-extract (api/dataset {:x [nil "a-b" "a-d" "b-c" "d-e"]}))
#'user/df-separate
#'user/df-separate2
#'user/df-separate3
#'user/df-extract
df-separate

_unnamed [4 1]:

| :x |
| --- |
|     |
| a.b |
| a.d |
| b.c |
df-separate2

_unnamed [4 1]:

| :x |
| ----- |
| a |
| a b |
|     |
| a b c |
df-separate3

_unnamed [4 1]:

| :x |
| --- |
| a?b |
|     |
| a.b |
| b:c |
df-extract

_unnamed [5 1]:

| :x |
| --- |
|     |
| a-b |
| a-d |
| b-c |
| d-e |

(api/separate-column df-separate :x [:A :B] "\\.")

_unnamed [4 2]:

| :A | :B |
| -- | -- |
|    |    |
| a  | b  |
| a  | d  |
| b  | c  |


You can drop a column after separation by setting nil as its name. We need only the second value here.

(api/separate-column df-separate :x [nil :B] "\\.")

_unnamed [4 1]:

| :B |
| -- |
|    |
| b  |
| d  |
| c  |


Extra data is dropped

(api/separate-column df-separate2 :x ["a" "b"] " ")

_unnamed [4 2]:

| a | b |
| - | - |
| a |   |
| a | b |
|   |   |
| a | b |


Split with regular expression

(api/separate-column df-separate3 :x ["a" "b"] "[?\\.:]")

_unnamed [4 2]:

| a | b |
| - | - |
| a | b |
|   |   |
| a | b |
| b | c |


Or just a regular expression to extract values

(api/separate-column df-separate3 :x ["a" "b"] #"(.).(.)")

_unnamed [4 2]:

| a | b |
| - | - |
| a | b |
|   |   |
| a | b |
| b | c |


Extract first value only

(api/separate-column df-extract :x ["A"] "-")

_unnamed [5 1]:

| A |
| - |
|   |
| a |
| a |
| b |
| d |


Split with regex

(api/separate-column df-extract :x ["A" "B"] #"(\p{Alnum})-(\p{Alnum})")

_unnamed [5 2]:

| A | B |
| - | - |
|   |   |
| a | b |
| a | d |
| b | c |
| d | e |


Only a,b,c,d strings

(api/separate-column df-extract :x ["A" "B"] #"([a-d]+)-([a-d]+)")

_unnamed [5 2]:

| A | B |
| - | - |
|   |   |
| a | b |
| a | d |
| b | c |
|   |   |

Fold/Unroll Rows

To pack or unpack data into a single value you can use the fold-by and unroll functions.

fold-by groups the dataset and packs column data from each group separately into a desired data structure (like a vector or sequence). unroll does the opposite.

Fold-by

Group-by and pack columns into vector

(api/fold-by DS [:V3 :V4 :V1])

_unnamed [6 4]:

| :V4 | :V3 | :V1 | :V2 |
| --- | --: | --: | ----- |
| A | 0.5 | 1 | [1 7] |
| B | 1.0 | 2 | [2 8] |
| C | 1.5 | 1 | [3 9] |
| A | 0.5 | 2 | [4] |
| B | 1.0 | 1 | [5] |
| C | 1.5 | 2 | [6] |

You can pack several columns at once.

(api/fold-by DS [:V4])

_unnamed [3 4]:

| :V4 | :V2 | :V3 | :V1 |
| --- | ------- | ------------- | ------- |
| A | [1 4 7] | [0.5 0.5 0.5] | [1 2 1] |
| B | [2 5 8] | [1.0 1.0 1.0] | [2 1 2] |
| C | [3 6 9] | [1.5 1.5 1.5] | [1 2 1] |

You can use a custom packing function

(api/fold-by DS [:V4] seq)

_unnamed [3 4]:

| :V4 | :V2 | :V3 | :V1 |
| --- | ------- | ------------- | ------- |
| A | (1 4 7) | (0.5 0.5 0.5) | (1 2 1) |
| B | (2 5 8) | (1.0 1.0 1.0) | (2 1 2) |
| C | (3 6 9) | (1.5 1.5 1.5) | (1 2 1) |

or

(api/fold-by DS [:V4] set)

_unnamed [3 4]:

| :V4 | :V2 | :V3 | :V1 |
| --- | -------- | ------ | ------ |
| A | #{7 1 4} | #{0.5} | #{1 2} |
| B | #{2 5 8} | #{1.0} | #{1 2} |
| C | #{6 3 9} | #{1.5} | #{1 2} |

This works also on grouped dataset

(-> DS
    (api/group-by :V1)
    (api/fold-by :V4)
    (api/ungroup))

_unnamed [6 4]:

| :V4 | :V2 | :V3 | :V1 |
| --- | ----- | --------- | ----- |
| A | [1 7] | [0.5 0.5] | [1 1] |
| C | [3 9] | [1.5 1.5] | [1 1] |
| B | [5] | [1.0] | [1] |
| B | [2 8] | [1.0 1.0] | [2 2] |
| A | [4] | [0.5] | [2] |
| C | [6] | [1.5] | [2] |

Unroll

unroll unfolds sequences stored in columns, duplicating the other values as necessary. You can unroll more than one column at once (the folded data should have the same size!).

Options:

  • :indexes? - if true (or a column name), information about the index within the unrolled sequence is added.
  • :datatypes - a map of datatypes which should be applied to the restored columns

Unroll one column

(api/unroll (api/fold-by DS [:V4]) [:V1])

_unnamed [9 4]:

| :V4 | :V2 | :V3 | :V1 |
| --- | ------- | ------------- | --: |
| A | [1 4 7] | [0.5 0.5 0.5] | 1 |
| A | [1 4 7] | [0.5 0.5 0.5] | 2 |
| A | [1 4 7] | [0.5 0.5 0.5] | 1 |
| B | [2 5 8] | [1.0 1.0 1.0] | 2 |
| B | [2 5 8] | [1.0 1.0 1.0] | 1 |
| B | [2 5 8] | [1.0 1.0 1.0] | 2 |
| C | [3 6 9] | [1.5 1.5 1.5] | 1 |
| C | [3 6 9] | [1.5 1.5 1.5] | 2 |
| C | [3 6 9] | [1.5 1.5 1.5] | 1 |

Unroll all folded columns

(api/unroll (api/fold-by DS [:V4]) [:V1 :V2 :V3])

_unnamed [9 4]:

| :V4 | :V1 | :V2 | :V3 |
| --- | --: | --: | --: |
| A | 1 | 1 | 0.5 |
| A | 2 | 4 | 0.5 |
| A | 1 | 7 | 0.5 |
| B | 2 | 2 | 1.0 |
| B | 1 | 5 | 1.0 |
| B | 2 | 8 | 1.0 |
| C | 1 | 3 | 1.5 |
| C | 2 | 6 | 1.5 |
| C | 1 | 9 | 1.5 |

Unrolling one by one leads to a Cartesian product

(-> DS
    (api/fold-by [:V4 :V1])
    (api/unroll [:V2])
    (api/unroll [:V3]))

_unnamed [15 4]:

| :V4 | :V1 | :V2 | :V3 |
| --- | --: | --: | --: |
| A | 1 | 1 | 0.5 |
| A | 1 | 1 | 0.5 |
| A | 1 | 7 | 0.5 |
| A | 1 | 7 | 0.5 |
| B | 2 | 2 | 1.0 |
| B | 2 | 2 | 1.0 |
| B | 2 | 8 | 1.0 |
| B | 2 | 8 | 1.0 |
| C | 1 | 3 | 1.5 |
| C | 1 | 3 | 1.5 |
| C | 1 | 9 | 1.5 |
| C | 1 | 9 | 1.5 |
| A | 2 | 4 | 0.5 |
| B | 1 | 5 | 1.0 |
| C | 2 | 6 | 1.5 |

You can add indexes

(api/unroll (api/fold-by DS [:V1]) [:V4 :V2 :V3] {:indexes? true})

_unnamed [9 5]:

| :V1 | :indexes | :V4 | :V2 | :V3 |
| --: | -------: | --- | --: | --: |
| 1 | 0 | A | 1 | 0.5 |
| 1 | 1 | C | 3 | 1.5 |
| 1 | 2 | B | 5 | 1.0 |
| 1 | 3 | A | 7 | 0.5 |
| 1 | 4 | C | 9 | 1.5 |
| 2 | 0 | B | 2 | 1.0 |
| 2 | 1 | A | 4 | 0.5 |
| 2 | 2 | C | 6 | 1.5 |
| 2 | 3 | B | 8 | 1.0 |
(api/unroll (api/fold-by DS [:V1]) [:V4 :V2 :V3] {:indexes? "vector idx"})

_unnamed [9 5]:

| :V1 | vector idx | :V4 | :V2 | :V3 |
| --: | ---------: | --- | --: | --: |
| 1 | 0 | A | 1 | 0.5 |
| 1 | 1 | C | 3 | 1.5 |
| 1 | 2 | B | 5 | 1.0 |
| 1 | 3 | A | 7 | 0.5 |
| 1 | 4 | C | 9 | 1.5 |
| 2 | 0 | B | 2 | 1.0 |
| 2 | 1 | A | 4 | 0.5 |
| 2 | 2 | C | 6 | 1.5 |
| 2 | 3 | B | 8 | 1.0 |

You can also force datatypes

(-> DS
    (api/fold-by [:V1])
    (api/unroll [:V4 :V2 :V3] {:datatypes {:V4 :string
                                           :V2 :int16
                                           :V3 :float32}})
    (api/info :columns))

_unnamed :column info [4 4]:

| :n-elems | :name | :categorical? | :datatype |
| -------: | ----- | ------------- | --------- |
| 9 | :V1 |      | :int64 |
| 9 | :V4 | true | :string |
| 9 | :V2 |      | :int16 |
| 9 | :V3 |      | :float32 |

This works also on grouped dataset

(-> DS
    (api/group-by :V1)
    (api/fold-by [:V1 :V4])
    (api/unroll :V3 {:indexes? true})
    (api/ungroup))

_unnamed [9 5]:

| :V4 | :V1 | :V2 | :indexes | :V3 |
| --- | --: | ----- | -------: | --: |
| A | 1 | [1 7] | 0 | 0.5 |
| A | 1 | [1 7] | 1 | 0.5 |
| C | 1 | [3 9] | 0 | 1.5 |
| C | 1 | [3 9] | 1 | 1.5 |
| B | 1 | [5] | 0 | 1.0 |
| B | 2 | [2 8] | 0 | 1.0 |
| B | 2 | [2 8] | 1 | 1.0 |
| A | 2 | [4] | 0 | 0.5 |
| C | 2 | [6] | 0 | 1.5 |

Reshape

Reshaping data provides two types of operations:

  • pivot->longer - converting columns to rows
  • pivot->wider - converting rows to columns

Both functions are inspired by the tidyr R package and provide almost the same functionality.

All examples are taken from the documentation mentioned above.

Both functions work only on regular dataset.

Longer

pivot->longer converts columns to rows. Column names are treated as data.

Arguments:

  • dataset
  • columns selector
  • options:
    • :target-columns - names of the columns created or columns pattern (see below) (default: :$column)
    • :value-column-name - name of the column for values (default: :$value)
    • :splitter - string, regular expression or function which splits source column names into data
    • :drop-missing? - remove rows with missing values? (default: true)
    • :datatypes - map of target columns data types

:target-columns - can be:

  • column name - source column names are put there as data
  • column names as a sequence - source column names, after splitting, are put separately into :target-columns as data
  • pattern - a sequence of names, where some of the names are nil. nil is replaced by a name taken from the splitter and such a column is used for values.
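As a minimal sketch of the simplest variant, using the DS dataset defined earlier (the :measure column name is illustrative):

```clojure
;; single target column: the source column names (:V1, :V2, :V3)
;; become data in :measure, their values go to the default :$value column
(api/pivot->longer DS [:V1 :V2 :V3] {:target-columns :measure})
```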

Create rows from all columns but "religion".

(def relig-income (api/dataset "data/relig_income.csv"))
relig-income

data/relig_income.csv [18 11]:

| religion | <$10k | $10-20k | $20-30k | $30-40k | $40-50k | $50-75k | $75-100k | $100-150k | >150k | Don’t know/refused |
| -------- | ----: | ------: | ------: | ------: | ------: | ------: | -------: | --------: | ----: | -----------------: |
| Agnostic | 27 | 34 | 60 | 81 | 76 | 137 | 122 | 109 | 84 | 96 |
| Atheist | 12 | 27 | 37 | 52 | 35 | 70 | 73 | 59 | 74 | 76 |
| Buddhist | 27 | 21 | 30 | 34 | 33 | 58 | 62 | 39 | 53 | 54 |
| Catholic | 418 | 617 | 732 | 670 | 638 | 1116 | 949 | 792 | 633 | 1489 |
| Don’t know/refused | 15 | 14 | 15 | 11 | 10 | 35 | 21 | 17 | 18 | 116 |
| Evangelical Prot | 575 | 869 | 1064 | 982 | 881 | 1486 | 949 | 723 | 414 | 1529 |
| Hindu | 1 | 9 | 7 | 9 | 11 | 34 | 47 | 48 | 54 | 37 |
| Historically Black Prot | 228 | 244 | 236 | 238 | 197 | 223 | 131 | 81 | 78 | 339 |
| Jehovah’s Witness | 20 | 27 | 24 | 24 | 21 | 30 | 15 | 11 | 6 | 37 |
| Jewish | 19 | 19 | 25 | 25 | 30 | 95 | 69 | 87 | 151 | 162 |
| Mainline Prot | 289 | 495 | 619 | 655 | 651 | 1107 | 939 | 753 | 634 | 1328 |
| Mormon | 29 | 40 | 48 | 51 | 56 | 112 | 85 | 49 | 42 | 69 |
| Muslim | 6 | 7 | 9 | 10 | 9 | 23 | 16 | 8 | 6 | 22 |
| Orthodox | 13 | 17 | 23 | 32 | 32 | 47 | 38 | 42 | 46 | 73 |
| Other Christian | 9 | 7 | 11 | 13 | 13 | 14 | 18 | 14 | 12 | 18 |
| Other Faiths | 20 | 33 | 40 | 46 | 49 | 63 | 46 | 40 | 41 | 71 |
| Other World Religions | 5 | 2 | 3 | 4 | 2 | 7 | 3 | 4 | 4 | 8 |
| Unaffiliated | 217 | 299 | 374 | 365 | 341 | 528 | 407 | 321 | 258 | 597 |
(api/pivot->longer relig-income (complement #{"religion"}))

data/relig_income.csv [180 3]:

| religion | :$column | :$value |
| -------- | -------- | ------: |
| Agnostic | <$10k | 27 |
| Atheist | <$10k | 12 |
| Buddhist | <$10k | 27 |
| Catholic | <$10k | 418 |
| Don’t know/refused | <$10k | 15 |
| Evangelical Prot | <$10k | 575 |
| Hindu | <$10k | 1 |
| Historically Black Prot | <$10k | 228 |
| Jehovah’s Witness | <$10k | 20 |
| Jewish | <$10k | 19 |
| Mainline Prot | <$10k | 289 |
| Mormon | <$10k | 29 |
| Muslim | <$10k | 6 |
| Orthodox | <$10k | 13 |
| Other Christian | <$10k | 9 |
| Other Faiths | <$10k | 20 |
| Other World Religions | <$10k | 5 |
| Unaffiliated | <$10k | 217 |
| Agnostic | Don’t know/refused | 96 |
| Atheist | Don’t know/refused | 76 |
| Buddhist | Don’t know/refused | 54 |
| Catholic | Don’t know/refused | 1489 |
| Don’t know/refused | Don’t know/refused | 116 |
| Evangelical Prot | Don’t know/refused | 1529 |
| Hindu | Don’t know/refused | 37 |


Convert only columns starting with "wk" and pack them into :week column, values go to :rank column

(def bilboard (-> (api/dataset "data/billboard.csv.gz")
                  (api/drop-columns :type/boolean))) ;; drop some boolean columns, tidyr just skips them
(->> bilboard
     (api/column-names)
     (take 13)
     (api/select-columns bilboard))

data/billboard.csv.gz [317 13]:

| artist | track | date.entered | wk1 | wk2 | wk3 | wk4 | wk5 | wk6 | wk7 | wk8 | wk9 | wk10 |
| ------ | ----- | ------------ | --: | --: | --: | --: | --: | --: | --: | --: | --: | ---: |
| 2 Pac | Baby Don’t Cry (Keep… | 2000-02-26 | 87 | 82 | 72 | 77 | 87 | 94 | 99 | | | |
| 2Ge+her | The Hardest Part Of … | 2000-09-02 | 91 | 87 | 92 | | | | | | | |
| 3 Doors Down | Kryptonite | 2000-04-08 | 81 | 70 | 68 | 67 | 66 | 57 | 54 | 53 | 51 | 51 |
| 3 Doors Down | Loser | 2000-10-21 | 76 | 76 | 72 | 69 | 67 | 65 | 55 | 59 | 62 | 61 |
| 504 Boyz | Wobble Wobble | 2000-04-15 | 57 | 34 | 25 | 17 | 17 | 31 | 36 | 49 | 53 | 57 |
| 98^0 | Give Me Just One Nig… | 2000-08-19 | 51 | 39 | 34 | 26 | 26 | 19 | 2 | 2 | 36 | |
| A*Teens | Dancing Queen | 2000-07-08 | 97 | 97 | 96 | 95 | 100 | | | | | |
| Aaliyah | I Don’t Wanna | 2000-01-29 | 84 | 62 | 51 | 41 | 38 | 35 | 35 | 38 | 38 | 36 |
| Aaliyah | Try Again | 2000-03-18 | 59 | 53 | 38 | 28 | 21 | 18 | 16 | 14 | 12 | 10 |
| Adams, Yolanda | Open My Heart | 2000-08-26 | 76 | 76 | 74 | 69 | 68 | 67 | 61 | 58 | 57 | 59 |
| Adkins, Trace | More | 2000-04-29 | 84 | 84 | 75 | 73 | 73 | 69 | 68 | 65 | 73 | 83 |
| Aguilera, Christina | Come On Over Baby (A… | 2000-08-05 | 57 | 47 | 45 | 29 | 23 | 18 | 11 | 9 | 9 | 11 |
| Aguilera, Christina | I Turn To You | 2000-04-15 | 50 | 39 | 30 | 28 | 21 | 19 | 20 | 17 | 17 | 17 |
| Aguilera, Christina | What A Girl Wants | 1999-11-27 | 71 | 51 | 28 | 18 | 13 | 13 | 11 | 1 | 1 | 2 |
| Alice Deejay | Better Off Alone | 2000-04-08 | 79 | 65 | 53 | 48 | 45 | 36 | 34 | 29 | 27 | 30 |
| Allan, Gary | Smoke Rings In The D… | 2000-01-22 | 80 | 78 | 76 | 77 | 92 | | | | | |
| Amber | Sexual | 1999-07-17 | 99 | 99 | 96 | 96 | 100 | 93 | 93 | 96 | | |
| Anastacia | I’m Outta Love | 2000-04-01 | 92 | | 95 | | | | | | | |
| Anthony, Marc | My Baby You | 2000-09-16 | 82 | 76 | 76 | 70 | 82 | 81 | 74 | 80 | 76 | 76 |
| Anthony, Marc | You Sang To Me | 2000-02-26 | 77 | 54 | 50 | 43 | 30 | 27 | 21 | 18 | 15 | 13 |
| Avant | My First Love | 2000-11-04 | 70 | 62 | 56 | 43 | 39 | 33 | 26 | 26 | 26 | 31 |
| Avant | Separated | 2000-04-29 | 62 | 32 | 30 | 23 | 26 | 30 | 35 | 32 | 32 | 25 |
| BBMak | Back Here | 2000-04-29 | 99 | 86 | 60 | 52 | 38 | 34 | 28 | 21 | 18 | 18 |
| Backstreet Boys, The | Shape Of My Heart | 2000-10-14 | 39 | 25 | 24 | 15 | 12 | 12 | 10 | 9 | 10 | 12 |
| Backstreet Boys, The | Show Me The Meaning … | 2000-01-01 | 74 | 62 | 55 | 25 | 16 | 14 | 12 | 10 | 12 | 9 |
(api/pivot->longer bilboard #(clojure.string/starts-with? % "wk") {:target-columns :week
                                                                   :value-column-name :rank})

data/billboard.csv.gz [5307 5]:

| artist | track | date.entered | :week | :rank |
| ------ | ----- | ------------ | ----- | ----: |
| 3 Doors Down | Kryptonite | 2000-04-08 | wk35 | 4 |
| Braxton, Toni | He Wasn’t Man Enough | 2000-03-18 | wk35 | 34 |
| Creed | Higher | 1999-09-11 | wk35 | 22 |
| Creed | With Arms Wide Open | 2000-05-13 | wk35 | 5 |
| Hill, Faith | Breathe | 1999-11-06 | wk35 | 8 |
| Joe | I Wanna Know | 2000-01-01 | wk35 | 5 |
| Lonestar | Amazed | 1999-06-05 | wk35 | 14 |
| Vertical Horizon | Everything You Want | 2000-01-22 | wk35 | 27 |
| matchbox twenty | Bent | 2000-04-29 | wk35 | 33 |
| Creed | Higher | 1999-09-11 | wk55 | 21 |
| Lonestar | Amazed | 1999-06-05 | wk55 | 22 |
| 3 Doors Down | Kryptonite | 2000-04-08 | wk19 | 18 |
| 3 Doors Down | Loser | 2000-10-21 | wk19 | 73 |
| 98^0 | Give Me Just One Nig… | 2000-08-19 | wk19 | 93 |
| Aaliyah | I Don’t Wanna | 2000-01-29 | wk19 | 83 |
| Aaliyah | Try Again | 2000-03-18 | wk19 | 3 |
| Adams, Yolanda | Open My Heart | 2000-08-26 | wk19 | 79 |
| Aguilera, Christina | Come On Over Baby (A… | 2000-08-05 | wk19 | 23 |
| Aguilera, Christina | I Turn To You | 2000-04-15 | wk19 | 29 |
| Aguilera, Christina | What A Girl Wants | 1999-11-27 | wk19 | 18 |
| Alice Deejay | Better Off Alone | 2000-04-08 | wk19 | 79 |
| Amber | Sexual | 1999-07-17 | wk19 | 95 |
| Anthony, Marc | My Baby You | 2000-09-16 | wk19 | 91 |
| Anthony, Marc | You Sang To Me | 2000-02-26 | wk19 | 9 |
| Avant | My First Love | 2000-11-04 | wk19 | 81 |

We can create a numerical column out of the column names

(api/pivot->longer bilboard #(clojure.string/starts-with? % "wk") {:target-columns :week
                                                                   :value-column-name :rank
                                                                   :splitter #"wk(.*)"
                                                                   :datatypes {:week :int16}})

data/billboard.csv.gz [5307 5]:

| artist | track | date.entered | :week | :rank |
| ------ | ----- | ------------ | ----: | ----: |
| 3 Doors Down | Kryptonite | 2000-04-08 | 46 | 21 |
| Creed | Higher | 1999-09-11 | 46 | 7 |
| Creed | With Arms Wide Open | 2000-05-13 | 46 | 37 |
| Hill, Faith | Breathe | 1999-11-06 | 46 | 31 |
| Lonestar | Amazed | 1999-06-05 | 46 | 5 |
| 3 Doors Down | Kryptonite | 2000-04-08 | 51 | 42 |
| Creed | Higher | 1999-09-11 | 51 | 14 |
| Hill, Faith | Breathe | 1999-11-06 | 51 | 49 |
| Lonestar | Amazed | 1999-06-05 | 51 | 12 |
| 2 Pac | Baby Don’t Cry (Keep… | 2000-02-26 | 6 | 94 |
| 3 Doors Down | Kryptonite | 2000-04-08 | 6 | 57 |
| 3 Doors Down | Loser | 2000-10-21 | 6 | 65 |
| 504 Boyz | Wobble Wobble | 2000-04-15 | 6 | 31 |
| 98^0 | Give Me Just One Nig… | 2000-08-19 | 6 | 19 |
| Aaliyah | I Don’t Wanna | 2000-01-29 | 6 | 35 |
| Aaliyah | Try Again | 2000-03-18 | 6 | 18 |
| Adams, Yolanda | Open My Heart | 2000-08-26 | 6 | 67 |
| Adkins, Trace | More | 2000-04-29 | 6 | 69 |
| Aguilera, Christina | Come On Over Baby (A… | 2000-08-05 | 6 | 18 |
| Aguilera, Christina | I Turn To You | 2000-04-15 | 6 | 19 |
| Aguilera, Christina | What A Girl Wants | 1999-11-27 | 6 | 13 |
| Alice Deejay | Better Off Alone | 2000-04-08 | 6 | 36 |
| Amber | Sexual | 1999-07-17 | 6 | 93 |
| Anthony, Marc | My Baby You | 2000-09-16 | 6 | 81 |
| Anthony, Marc | You Sang To Me | 2000-02-26 | 6 | 27 |

When column names contain observation data, such column names can be split and the data restored into separate columns.

(def who (api/dataset "data/who.csv.gz"))
(->> who
     (api/column-names)
     (take 10)
     (api/select-columns who))

data/who.csv.gz [7240 10]:

| country | iso2 | iso3 | year | new_sp_m014 | new_sp_m1524 | new_sp_m2534 | new_sp_m3544 | new_sp_m4554 | new_sp_m5564 |
| ------- | ---- | ---- | ---: | ----------: | -----------: | -----------: | -----------: | -----------: | -----------: |
| Afghanistan | AF | AFG | 1980 | | | | | | |
| Afghanistan | AF | AFG | 1981 | | | | | | |
| Afghanistan | AF | AFG | 1982 | | | | | | |
| Afghanistan | AF | AFG | 1983 | | | | | | |
| Afghanistan | AF | AFG | 1984 | | | | | | |
| Afghanistan | AF | AFG | 1985 | | | | | | |
| Afghanistan | AF | AFG | 1986 | | | | | | |
| Afghanistan | AF | AFG | 1987 | | | | | | |
| Afghanistan | AF | AFG | 1988 | | | | | | |
| Afghanistan | AF | AFG | 1989 | | | | | | |
| Afghanistan | AF | AFG | 1990 | | | | | | |
| Afghanistan | AF | AFG | 1991 | | | | | | |
| Afghanistan | AF | AFG | 1992 | | | | | | |
| Afghanistan | AF | AFG | 1993 | | | | | | |
| Afghanistan | AF | AFG | 1994 | | | | | | |
| Afghanistan | AF | AFG | 1995 | | | | | | |
| Afghanistan | AF | AFG | 1996 | | | | | | |
| Afghanistan | AF | AFG | 1997 | 0 | 10 | 6 | 3 | 5 | 2 |
| Afghanistan | AF | AFG | 1998 | 30 | 129 | 128 | 90 | 89 | 64 |
| Afghanistan | AF | AFG | 1999 | 8 | 55 | 55 | 47 | 34 | 21 |
| Afghanistan | AF | AFG | 2000 | 52 | 228 | 183 | 149 | 129 | 94 |
| Afghanistan | AF | AFG | 2001 | 129 | 379 | 349 | 274 | 204 | 139 |
| Afghanistan | AF | AFG | 2002 | 90 | 476 | 481 | 368 | 246 | 241 |
| Afghanistan | AF | AFG | 2003 | 127 | 511 | 436 | 284 | 256 | 288 |
| Afghanistan | AF | AFG | 2004 | 139 | 537 | 568 | 360 | 358 | 386 |
(api/pivot->longer who #(clojure.string/starts-with? % "new") {:target-columns [:diagnosis :gender :age]
                                                               :splitter #"new_?(.*)_(.)(.*)"
                                                               :value-column-name :count})

data/who.csv.gz [76046 8]:

| country | iso2 | iso3 | year | :diagnosis | :gender | :age | :count |
| ------- | ---- | ---- | ---: | ---------- | ------- | ---- | -----: |
| Albania | AL | ALB | 2013 | rel | m | 1524 | 60 |
| Algeria | DZ | DZA | 2013 | rel | m | 1524 | 1021 |
| Andorra | AD | AND | 2013 | rel | m | 1524 | 0 |
| Angola | AO | AGO | 2013 | rel | m | 1524 | 2992 |
| Anguilla | AI | AIA | 2013 | rel | m | 1524 | 0 |
| Antigua and Barbuda | AG | ATG | 2013 | rel | m | 1524 | 1 |
| Argentina | AR | ARG | 2013 | rel | m | 1524 | 1124 |
| Armenia | AM | ARM | 2013 | rel | m | 1524 | 116 |
| Australia | AU | AUS | 2013 | rel | m | 1524 | 105 |
| Austria | AT | AUT | 2013 | rel | m | 1524 | 44 |
| Azerbaijan | AZ | AZE | 2013 | rel | m | 1524 | 958 |
| Bahamas | BS | BHS | 2013 | rel | m | 1524 | 2 |
| Bahrain | BH | BHR | 2013 | rel | m | 1524 | 13 |
| Bangladesh | BD | BGD | 2013 | rel | m | 1524 | 14705 |
| Barbados | BB | BRB | 2013 | rel | m | 1524 | 0 |
| Belarus | BY | BLR | 2013 | rel | m | 1524 | 162 |
| Belgium | BE | BEL | 2013 | rel | m | 1524 | 63 |
| Belize | BZ | BLZ | 2013 | rel | m | 1524 | 8 |
| Benin | BJ | BEN | 2013 | rel | m | 1524 | 301 |
| Bermuda | BM | BMU | 2013 | rel | m | 1524 | 0 |
| Bhutan | BT | BTN | 2013 | rel | m | 1524 | 180 |
| Bolivia (Plurinational State of) | BO | BOL | 2013 | rel | m | 1524 | 1470 |
| Bonaire, Saint Eustatius and Saba | BQ | BES | 2013 | rel | m | 1524 | 0 |
| Bosnia and Herzegovina | BA | BIH | 2013 | rel | m | 1524 | 57 |
| Botswana | BW | BWA | 2013 | rel | m | 1524 | 423 |

When data contains multiple observations per row, we can use a splitter and a pattern for target columns to create new columns and put values there. In the following dataset we have two observations (dob and gender) for each of two children. We want to put the child identifier into a column and leave dob and gender as values.

(def family (api/dataset "data/family.csv"))
family

data/family.csv [5 5]:

| family | dob_child1 | dob_child2 | gender_child1 | gender_child2 |
| -----: | ---------- | ---------- | ------------: | ------------: |
| 1 | 1998-11-26 | 2000-01-29 | 1 | 2 |
| 2 | 1996-06-22 |            | 2 |   |
| 3 | 2002-07-11 | 2004-04-05 | 2 | 2 |
| 4 | 2004-10-10 | 2009-08-27 | 1 | 1 |
| 5 | 2000-12-05 | 2005-02-28 | 2 | 1 |
(api/pivot->longer family (complement #{"family"}) {:target-columns [nil :child]
                                                    :splitter "_"
                                                    :datatypes {"gender" :int16}})

data/family.csv [9 4]:

| family | :child | dob | gender |
| -----: | ------ | ---------- | -----: |
| 1 | child1 | 1998-11-26 | 1 |
| 2 | child1 | 1996-06-22 | 2 |
| 3 | child1 | 2002-07-11 | 2 |
| 4 | child1 | 2004-10-10 | 1 |
| 5 | child1 | 2000-12-05 | 2 |
| 1 | child2 | 2000-01-29 | 2 |
| 3 | child2 | 2004-04-05 | 2 |
| 4 | child2 | 2009-08-27 | 1 |
| 5 | child2 | 2005-02-28 | 1 |

Similarly here, we have two observations (x and y) in four groups.

(def anscombe (api/dataset "data/anscombe.csv"))
anscombe

data/anscombe.csv [11 8]:

| x1 | x2 | x3 | x4 | y1 | y2 | y3 | y4 |
| -: | -: | -: | -: | ----: | ---: | ----: | ----: |
| 10 | 10 | 10 | 8 | 8.04 | 9.14 | 7.46 | 6.58 |
| 8 | 8 | 8 | 8 | 6.95 | 8.14 | 6.77 | 5.76 |
| 13 | 13 | 13 | 8 | 7.58 | 8.74 | 12.74 | 7.71 |
| 9 | 9 | 9 | 8 | 8.81 | 8.77 | 7.11 | 8.84 |
| 11 | 11 | 11 | 8 | 8.33 | 9.26 | 7.81 | 8.47 |
| 14 | 14 | 14 | 8 | 9.96 | 8.10 | 8.84 | 7.04 |
| 6 | 6 | 6 | 8 | 7.24 | 6.13 | 6.08 | 5.25 |
| 4 | 4 | 4 | 19 | 4.26 | 3.10 | 5.39 | 12.50 |
| 12 | 12 | 12 | 8 | 10.84 | 9.13 | 8.15 | 5.56 |
| 7 | 7 | 7 | 8 | 4.82 | 7.26 | 6.42 | 7.91 |
| 5 | 5 | 5 | 8 | 5.68 | 4.74 | 5.73 | 6.89 |

(api/pivot->longer anscombe :all {:splitter #"(.)(.)"
                                  :target-columns [nil :set]})

data/anscombe.csv [44 3]:

| :set | x | y |
| ---: | -: | ----: |
| 1 | 10 | 8.04 |
| 1 | 8 | 6.95 |
| 1 | 13 | 7.58 |
| 1 | 9 | 8.81 |
| 1 | 11 | 8.33 |
| 1 | 14 | 9.96 |
| 1 | 6 | 7.24 |
| 1 | 4 | 4.26 |
| 1 | 12 | 10.84 |
| 1 | 7 | 4.82 |
| 1 | 5 | 5.68 |
| 2 | 10 | 9.14 |
| 2 | 8 | 8.14 |
| 2 | 13 | 8.74 |
| 2 | 9 | 8.77 |
| 2 | 11 | 9.26 |
| 2 | 14 | 8.10 |
| 2 | 6 | 6.13 |
| 2 | 4 | 3.10 |
| 2 | 12 | 9.13 |
| 2 | 7 | 7.26 |
| 2 | 5 | 4.74 |
| 3 | 10 | 7.46 |
| 3 | 8 | 6.77 |
| 3 | 13 | 12.74 |


(def pnl (api/dataset {:x [1 2 3 4]
                       :a [1 1 0 0]
                       :b [0 1 1 1]
                       :y1 (repeatedly 4 rand)
                       :y2 (repeatedly 4 rand)
                       :z1 [3 3 3 3]
                       :z2 [-2 -2 -2 -2]}))
pnl

_unnamed [4 7]:

| :x | :a | :b | :y1 | :y2 | :z1 | :z2 |
| -: | -: | -: | ---------: | ---------: | --: | --: |
| 1 | 1 | 0 | 0.31370559 | 0.27224674 | 3 | -2 |
| 2 | 1 | 1 | 0.67838148 | 0.09148037 | 3 | -2 |
| 3 | 0 | 1 | 0.02716565 | 0.08897719 | 3 | -2 |
| 4 | 0 | 1 | 0.23729872 | 0.61929921 | 3 | -2 |

(api/pivot->longer pnl [:y1 :y2 :z1 :z2] {:target-columns [nil :times]
                                          :splitter #":(.)(.)"})

_unnamed [8 6]:

| :x | :a | :b | :times | y | z |
| -: | -: | -: | -----: | ---------: | --: |
| 1 | 1 | 0 | 1 | 0.31370559 | 3 |
| 2 | 1 | 1 | 1 | 0.67838148 | 3 |
| 3 | 0 | 1 | 1 | 0.02716565 | 3 |
| 4 | 0 | 1 | 1 | 0.23729872 | 3 |
| 1 | 1 | 0 | 2 | 0.27224674 | -2 |
| 2 | 1 | 1 | 2 | 0.09148037 | -2 |
| 3 | 0 | 1 | 2 | 0.08897719 | -2 |
| 4 | 0 | 1 | 2 | 0.61929921 | -2 |

Wider

pivot->wider converts rows to columns.

Arguments:

  • dataset
  • columns-selector - values from selected columns are converted to new columns
  • value-columns - what are values

When multiple columns are used as the columns selector, names are joined using the :concat-columns-with option. :concat-columns-with can be a string or a function (default: "_"). The function accepts a sequence of names.

When the columns-selector creates a non-unique set of values, they are folded using the :fold-fn option (default: vec).

When value-columns is a sequence, multiple observation columns are created by appending the value column names to the new column names. Names are joined using the :concat-value-with option. :concat-value-with can be a string or a function (default: "-"). The function accepts the current column name and value.
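As a minimal sketch on the DS dataset defined earlier: the values of :V4 become new columns, and since the remaining columns do not identify rows uniquely, the :V2 values are folded.

```clojure
;; values of :V4 (A, B, C) become column names; non-unique rows
;; per new column are folded with :fold-fn (default vec)
(api/pivot->wider DS :V4 :V2 {:fold-fn vec})
```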


Use station as a name source for columns and seen for values

(def fish (api/dataset "data/fish_encounters.csv"))
fish

data/fish_encounters.csv [114 3]:

| fish | station | seen |
| ---: | ------- | ---: |
| 4842 | Release | 1 |
| 4842 | I80_1 | 1 |
| 4842 | Lisbon | 1 |
| 4842 | Rstr | 1 |
| 4842 | Base_TD | 1 |
| 4842 | BCE | 1 |
| 4842 | BCW | 1 |
| 4842 | BCE2 | 1 |
| 4842 | BCW2 | 1 |
| 4842 | MAE | 1 |
| 4842 | MAW | 1 |
| 4843 | Release | 1 |
| 4843 | I80_1 | 1 |
| 4843 | Lisbon | 1 |
| 4843 | Rstr | 1 |
| 4843 | Base_TD | 1 |
| 4843 | BCE | 1 |
| 4843 | BCW | 1 |
| 4843 | BCE2 | 1 |
| 4843 | BCW2 | 1 |
| 4843 | MAE | 1 |
| 4843 | MAW | 1 |
| 4844 | Release | 1 |
| 4844 | I80_1 | 1 |
| 4844 | Lisbon | 1 |
(api/pivot->wider fish "station" "seen" {:drop-missing? false})

data/fish_encounters.csv [19 12]:

fishReleaseI80_1LisbonRstrBase_TDBCEBCWBCE2BCW2MAEMAW
484211111111111
484311111111111
484411111111111
485811111111111
486111111111111
4857111111111
4862111111111
485011 1111
484511111
485511111
485911111
48481111
4847111
4865111
484911
485111
485411
486311
486411

If selected columns contain multiple values, such values should be folded.

(def warpbreaks (api/dataset "data/warpbreaks.csv"))
warpbreaks

data/warpbreaks.csv [54 3]:

| breaks | wool | tension |
| -----: | ---- | ------- |
| 26 | A | L |
| 30 | A | L |
| 54 | A | L |
| 25 | A | L |
| 70 | A | L |
| 52 | A | L |
| 51 | A | L |
| 26 | A | L |
| 67 | A | L |
| 18 | A | M |
| 21 | A | M |
| 29 | A | M |
| 17 | A | M |
| 12 | A | M |
| 18 | A | M |
| 35 | A | M |
| 30 | A | M |
| 36 | A | M |
| 36 | A | H |
| 21 | A | H |
| 24 | A | H |
| 18 | A | H |
| 10 | A | H |
| 43 | A | H |
| 28 | A | H |

Let’s see how many values there are for each wool and tension group

(-> warpbreaks
    (api/group-by ["wool" "tension"])
    (api/aggregate {:n api/row-count}))

_unnamed [6 3]:

| tension | wool | :n |
| ------- | ---- | -: |
| L | A | 9 |
| M | A | 9 |
| H | A | 9 |
| L | B | 9 |
| M | B | 9 |
| H | B | 9 |

(-> warpbreaks
    (api/reorder-columns ["wool" "tension" "breaks"])
    (api/pivot->wider "wool" "breaks" {:fold-fn vec}))

data/warpbreaks.csv [3 3]:

| A | tension | B |
| ---------------------------- | ------- | ---------------------------- |
| [26 30 54 25 70 52 51 26 67] | L | [27 14 29 19 29 31 41 20 44] |
| [18 21 29 17 12 18 35 30 36] | M | [42 26 19 16 39 28 21 39 29] |
| [36 21 24 18 10 43 28 15 26] | H | [20 21 24 17 13 15 15 16 28] |

We can also calculate the mean (aggregate values)

(-> warpbreaks
    (api/reorder-columns ["wool" "tension" "breaks"])
    (api/pivot->wider "wool" "breaks" {:fold-fn tech.v3.datatype.functional/mean}))

data/warpbreaks.csv [3 3]:

| A | tension | B |
| ----------: | ------- | ----------: |
| 44.55555556 | L | 28.22222222 |
| 24.00000000 | M | 28.77777778 |
| 24.55555556 | H | 18.77777778 |

Multiple source columns, joined with default separator.

(def production (api/dataset "data/production.csv"))
production

data/production.csv [45 4]:

| product | country | year | production |
| ------- | ------- | ---: | ---------: |
| A | AI | 2000 | 1.63727158 |
| A | AI | 2001 | 0.15870784 |
| A | AI | 2002 | -1.56797745 |
| A | AI | 2003 | -0.44455509 |
| A | AI | 2004 | -0.07133701 |
| A | AI | 2005 | 1.61183090 |
| A | AI | 2006 | -0.70434682 |
| A | AI | 2007 | -1.53550542 |
| A | AI | 2008 | 0.83907155 |
| A | AI | 2009 | -0.37424110 |
| A | AI | 2010 | -0.71158926 |
| A | AI | 2011 | 1.12805634 |
| A | AI | 2012 | 1.45718247 |
| A | AI | 2013 | -1.55934101 |
| A | AI | 2014 | -0.11695838 |
| B | AI | 2000 | -0.02617661 |
| B | AI | 2001 | -0.68863576 |
| B | AI | 2002 | 0.06248741 |
| B | AI | 2003 | -0.72339686 |
| B | AI | 2004 | 0.47248952 |
| B | AI | 2005 | -0.94173861 |
| B | AI | 2006 | -0.34782108 |
| B | AI | 2007 | 0.52425284 |
| B | AI | 2008 | 1.83230937 |
| B | AI | 2009 | 0.10706491 |
(api/pivot->wider production ["product" "country"] "production")

data/production.csv [15 4]:

| year | A_AI | B_AI | B_EI |
| ---: | ----------: | ----------: | ----------: |
| 2000 | 1.63727158 | -0.02617661 | 1.40470848 |
| 2001 | 0.15870784 | -0.68863576 | -0.59618369 |
| 2002 | -1.56797745 | 0.06248741 | -0.26568579 |
| 2003 | -0.44455509 | -0.72339686 | 0.65257808 |
| 2004 | -0.07133701 | 0.47248952 | 0.62564999 |
| 2005 | 1.61183090 | -0.94173861 | -1.34530299 |
| 2006 | -0.70434682 | -0.34782108 | -0.97184975 |
| 2007 | -1.53550542 | 0.52425284 | -1.69715821 |
| 2008 | 0.83907155 | 1.83230937 | 0.04556128 |
| 2009 | -0.37424110 | 0.10706491 | 1.19315043 |
| 2010 | -0.71158926 | -0.32903664 | -1.60557503 |
| 2011 | 1.12805634 | -1.78319121 | -0.77235497 |
| 2012 | 1.45718247 | 0.61125798 | -2.50262738 |
| 2013 | -1.55934101 | -0.78526092 | -1.62753769 |
| 2014 | -0.11695838 | 0.97843635 | 0.03329645 |

Joined with custom function

(api/pivot->wider production ["product" "country"] "production" {:concat-columns-with vec})

data/production.csv [15 4]:

| year | ["A" "AI"] | ["B" "AI"] | ["B" "EI"] |
| ---: | ---------: | ---------: | ---------: |
| 2000 | 1.63727158 | -0.02617661 | 1.40470848 |
| 2001 | 0.15870784 | -0.68863576 | -0.59618369 |
| 2002 | -1.56797745 | 0.06248741 | -0.26568579 |
| 2003 | -0.44455509 | -0.72339686 | 0.65257808 |
| 2004 | -0.07133701 | 0.47248952 | 0.62564999 |
| 2005 | 1.61183090 | -0.94173861 | -1.34530299 |
| 2006 | -0.70434682 | -0.34782108 | -0.97184975 |
| 2007 | -1.53550542 | 0.52425284 | -1.69715821 |
| 2008 | 0.83907155 | 1.83230937 | 0.04556128 |
| 2009 | -0.37424110 | 0.10706491 | 1.19315043 |
| 2010 | -0.71158926 | -0.32903664 | -1.60557503 |
| 2011 | 1.12805634 | -1.78319121 | -0.77235497 |
| 2012 | 1.45718247 | 0.61125798 | -2.50262738 |
| 2013 | -1.55934101 | -0.78526092 | -1.62753769 |
| 2014 | -0.11695838 | 0.97843635 | 0.03329645 |

Multiple value columns

(def income (api/dataset "data/us_rent_income.csv"))
income

data/us_rent_income.csv [104 5]:

GEOIDNAMEvariableestimatemoe
1Alabamaincome24476136
1Alabamarent7473
2Alaskaincome32940508
2Alaskarent120013
4Arizonaincome27517148
4Arizonarent9724
5Arkansasincome23789165
5Arkansasrent7095
6Californiaincome29454109
6Californiarent13583
8Coloradoincome32401109
8Coloradorent11255
9Connecticutincome35326195
9Connecticutrent11235
10Delawareincome31560247
10Delawarerent107610
11District of Columbiaincome43198681
11District of Columbiarent142417
12Floridaincome2595270
12Floridarent10773
13Georgiaincome27024106
13Georgiarent9273
15Hawaiiincome32453218
15Hawaiirent150718
16Idahoincome25298208
(api/pivot->wider income "variable" ["estimate" "moe"] {:drop-missing? false})

data/us_rent_income.csv [52 6]:

GEOIDNAMEincome-estimateincome-moerent-estimaterent-moe
1Alabama244761367473
2Alaska32940508120013
4Arizona275171489724
5Arkansas237891657095
6California2945410913583
8Colorado3240110911255
9Connecticut3532619511235
10Delaware31560247107610
11District of Columbia43198681142417
12Florida259527010773
13Georgia270241069273
15Hawaii32453218150718
16Idaho252982087927
17Illinois30684839523
18Indiana272471177823
19Iowa300021437404
20Kansas291262088015
21Kentucky247021597134
22Louisiana250861558254
23Maine268411878087
24Maryland3714715213115
25Massachusetts3449819911735
26Michigan26987828243
27Minnesota327341899064
28Mississippi227661947405

Values concatenated by a custom function

(api/pivot->wider income "variable" ["estimate" "moe"] {:concat-columns-with vec
                                                        :concat-value-with vector
                                                        :drop-missing? false})

data/us_rent_income.csv [52 6]:

GEOIDNAME[“income” “estimate”][“income” “moe”][“rent” “estimate”][“rent” “moe”]
1Alabama244761367473
2Alaska32940508120013
4Arizona275171489724
5Arkansas237891657095
6California2945410913583
8Colorado3240110911255
9Connecticut3532619511235
10Delaware31560247107610
11District of Columbia43198681142417
12Florida259527010773
13Georgia270241069273
15Hawaii32453218150718
16Idaho252982087927
17Illinois30684839523
18Indiana272471177823
19Iowa300021437404
20Kansas291262088015
21Kentucky247021597134
22Louisiana250861558254
23Maine268411878087
24Maryland3714715213115
25Massachusetts3449819911735
26Michigan26987828243
27Minnesota327341899064
28Mississippi227661947405

Reshape contact data

(def contacts (api/dataset "data/contacts.csv"))
contacts

data/contacts.csv [6 3]:

fieldvalueperson_id
nameJiena McLellan1
companyToyota1
nameJohn Smith2
companygoogle2
emailjohn@google.com2
nameHuxley Ratcliffe3
(api/pivot->wider contacts "field" "value" {:drop-missing? false})

data/contacts.csv [3 4]:

person_idnamecompanyemail
2John Smithgooglejohn@google.com
1Jiena McLellanToyota
3Huxley Ratcliffe

Reshaping

A couple of tidyr examples of more complex reshaping.


World bank

(def world-bank-pop (api/dataset "data/world_bank_pop.csv.gz"))
(->> world-bank-pop
     (api/column-names)
     (take 8)
     (api/select-columns world-bank-pop))

data/world_bank_pop.csv.gz [1056 8]:

countryindicator200020012002200320042005
ABWSP.URB.TOTL4.24440000E+044.30480000E+044.36700000E+044.42460000E+044.46690000E+044.48890000E+04
ABWSP.URB.GROW1.18263237E+001.41302122E+001.43455953E+001.31036044E+009.51477684E-014.91302715E-01
ABWSP.POP.TOTL9.08530000E+049.28980000E+049.49920000E+049.70170000E+049.87370000E+041.00031000E+05
ABWSP.POP.GROW2.05502678E+002.22593013E+002.22905605E+002.10935434E+001.75735287E+001.30203884E+00
AFGSP.URB.TOTL4.43629900E+064.64805500E+064.89295100E+065.15568600E+065.42677000E+065.69182300E+06
AFGSP.URB.GROW3.91222846E+004.66283822E+005.13467454E+005.23045853E+005.12439302E+004.76864700E+00
AFGSP.POP.TOTL2.00937560E+072.09664630E+072.19799230E+072.30648510E+072.41189790E+072.50707980E+07
AFGSP.POP.GROW3.49465874E+004.25150411E+004.72052846E+004.81804112E+004.46891840E+003.87047016E+00
AGOSP.URB.TOTL8.23476600E+068.70800000E+069.21878700E+069.76519700E+061.03435060E+071.09494240E+07
AGOSP.URB.GROW5.43749411E+005.58771954E+005.70013237E+005.75812711E+005.75341450E+005.69279690E+00
AGOSP.POP.TOTL1.64409240E+071.69832660E+071.75726490E+071.82033690E+071.88657160E+071.95525420E+07
AGOSP.POP.GROW3.03294342E+003.24549139E+003.41151529E+003.52630277E+003.57396197E+003.57589970E+00
ALBSP.URB.TOTL1.28939100E+061.29858400E+061.32722000E+061.35484800E+061.38182800E+061.40729800E+06
ALBSP.URB.GROW7.42478629E-017.10442618E-012.18120890E+002.06027418E+001.97179894E+001.82642936E+00
ALBSP.POP.TOTL3.08902700E+063.06017300E+063.05101000E+063.03961600E+063.02693900E+063.01148700E+06
ALBSP.POP.GROW-6.37356834E-01-9.38470428E-01-2.99876697E-01-3.74149169E-01-4.17931378E-01-5.11790116E-01
ANDSP.URB.TOTL6.04170000E+046.19910000E+046.41940000E+046.67470000E+046.91920000E+047.12050000E+04
ANDSP.URB.GROW1.27931383E+002.57186909E+003.49205352E+003.89996041E+003.59758966E+002.86777917E+00
ANDSP.POP.TOTL6.53900000E+046.73410000E+047.00490000E+047.31820000E+047.62440000E+047.88670000E+04
ANDSP.POP.GROW1.57216555E+002.93999221E+003.94257335E+004.37544919E+004.09892348E+003.38241655E+00
ARBSP.URB.TOTL1.49981223E+081.53924351E+081.57985738E+081.62267754E+081.66820459E+081.71813698E+08
ARBSP.URB.GROW2.59956290E+002.62908111E+002.63856042E+002.71038136E+002.80567450E+002.99318143E+00
ARBSP.POP.TOTL2.83832016E+082.89850357E+082.96026575E+083.02434519E+083.09162029E+083.16264728E+08
ARBSP.POP.GROW2.11148598E+002.12038835E+002.13082988E+002.16465160E+002.22445177E+002.29740341E+00
ARESP.URB.TOTL2.53138600E+062.68261100E+062.84320800E+063.04862700E+063.34683000E+063.76723900E+06

Step 1 - convert year columns into values

(def pop2 (api/pivot->longer world-bank-pop (map str (range 2000 2018)) {:drop-missing? false
                                                                         :target-columns ["year"]
                                                                         :value-column-name "value"}))
pop2

data/world_bank_pop.csv.gz [19008 4]:

countryindicatoryearvalue
ABWSP.URB.TOTL20134.43600000E+04
ABWSP.URB.GROW20136.69503994E-01
ABWSP.POP.TOTL20131.03187000E+05
ABWSP.POP.GROW20135.92914005E-01
AFGSP.URB.TOTL20137.73396400E+06
AFGSP.URB.GROW20134.19297967E+00
AFGSP.POP.TOTL20133.17316880E+07
AFGSP.POP.GROW20133.31522413E+00
AGOSP.URB.TOTL20131.61194910E+07
AGOSP.URB.GROW20134.72272270E+00
AGOSP.POP.TOTL20132.59983400E+07
AGOSP.POP.GROW20133.53182419E+00
ALBSP.URB.TOTL20131.60350500E+06
ALBSP.URB.GROW20131.74363937E+00
ALBSP.POP.TOTL20132.89509200E+06
ALBSP.POP.GROW2013-1.83211385E-01
ANDSP.URB.TOTL20137.15270000E+04
ANDSP.URB.GROW2013-2.11923331E+00
ANDSP.POP.TOTL20138.07880000E+04
ANDSP.POP.GROW2013-2.01331401E+00
ARBSP.URB.TOTL20132.18605128E+08
ARBSP.URB.GROW20132.78289395E+00
ARBSP.POP.TOTL20133.81702086E+08
ARBSP.POP.GROW20132.24884429E+00
ARESP.URB.TOTL20137.66126800E+06

Step 2 - separate the "indicator" column

(def pop3 (api/separate-column pop2
                               "indicator" ["area" "variable"]
                               #(rest (clojure.string/split % #"\."))))
pop3

data/world_bank_pop.csv.gz [19008 5]:

countryareavariableyearvalue
ABWURBTOTL20134.43600000E+04
ABWURBGROW20136.69503994E-01
ABWPOPTOTL20131.03187000E+05
ABWPOPGROW20135.92914005E-01
AFGURBTOTL20137.73396400E+06
AFGURBGROW20134.19297967E+00
AFGPOPTOTL20133.17316880E+07
AFGPOPGROW20133.31522413E+00
AGOURBTOTL20131.61194910E+07
AGOURBGROW20134.72272270E+00
AGOPOPTOTL20132.59983400E+07
AGOPOPGROW20133.53182419E+00
ALBURBTOTL20131.60350500E+06
ALBURBGROW20131.74363937E+00
ALBPOPTOTL20132.89509200E+06
ALBPOPGROW2013-1.83211385E-01
ANDURBTOTL20137.15270000E+04
ANDURBGROW2013-2.11923331E+00
ANDPOPTOTL20138.07880000E+04
ANDPOPGROW2013-2.01331401E+00
ARBURBTOTL20132.18605128E+08
ARBURBGROW20132.78289395E+00
ARBPOPTOTL20133.81702086E+08
ARBPOPGROW20132.24884429E+00
AREURBTOTL20137.66126800E+06
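As an aside, the splitter function used above drops the constant `SP` prefix and keeps the remaining two segments of the indicator name:

```clojure
;; split the indicator on dots and drop the first ("SP") segment
(rest (clojure.string/split "SP.URB.TOTL" #"\."))
;; => ("URB" "TOTL")
```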

Step 3 - Make columns based on "variable" values.

(api/pivot->wider pop3 "variable" "value" {:drop-missing? false})

data/world_bank_pop.csv.gz [9504 5]:

countryareayearTOTLGROW
ABWURB20134.43600000E+040.66950399
ABWPOP20131.03187000E+050.59291401
AFGURB20137.73396400E+064.19297967
AFGPOP20133.17316880E+073.31522413
AGOURB20131.61194910E+074.72272270
AGOPOP20132.59983400E+073.53182419
ALBURB20131.60350500E+061.74363937
ALBPOP20132.89509200E+06-0.18321138
ANDURB20137.15270000E+04-2.11923331
ANDPOP20138.07880000E+04-2.01331401
ARBURB20132.18605128E+082.78289395
ARBPOP20133.81702086E+082.24884429
AREURB20137.66126800E+061.55515587
AREPOP20139.00626300E+061.18180499
ARGURB20133.88172560E+071.18764913
ARGPOP20134.25399250E+071.04727675
ARMURB20131.82765600E+060.28102719
ARMPOP20132.89350900E+060.40125198
ASMURB20134.83100000E+040.05797582
ASMPOP20135.53070000E+040.13931989
ATGURB20132.47980000E+040.38383110
ATGPOP20139.78240000E+041.07605830
AUSURB20131.97902080E+071.87536404
AUSPOP20132.31459010E+071.75833808
AUTURB20134.86199100E+060.91956020


Multi-choice

(def multi (api/dataset {:id [1 2 3 4]
                         :choice1 ["A" "C" "D" "B"]
                         :choice2 ["B" "B" nil "D"]
                         :choice3 ["C" nil nil nil]}))
multi

_unnamed [4 4]:

| :id | :choice1 | :choice2 | :choice3 |
| --: | -------- | -------- | -------- |
|   1 | A        | B        | C        |
|   2 | C        | B        |          |
|   3 | D        |          |          |
|   4 | B        | D        |          |

Step 1 - convert all choices into rows and add an artificial column marking all values which are not missing.

(def multi2 (-> multi
                (api/pivot->longer (complement #{:id}))
                (api/add-column :checked true)))
multi2

_unnamed [8 4]:

| :id | :$column | :$value | :checked |
| --: | -------- | ------- | -------- |
|   1 | :choice1 | A       | true     |
|   2 | :choice1 | C       | true     |
|   3 | :choice1 | D       | true     |
|   4 | :choice1 | B       | true     |
|   1 | :choice2 | B       | true     |
|   2 | :choice2 | B       | true     |
|   4 | :choice2 | D       | true     |
|   1 | :choice3 | C       | true     |

Step 2 - Convert back to wide form with actual choices as columns

(-> multi2
    (api/drop-columns :$column)
    (api/pivot->wider :$value :checked {:drop-missing? false})
    (api/order-by :id))

_unnamed [4 5]:

| :id | A    | C    | D    | B    |
| --: | ---- | ---- | ---- | ---- |
|   1 | true | true |      | true |
|   2 |      | true |      | true |
|   3 |      |      | true |      |
|   4 |      |      | true | true |


Construction

(def construction (api/dataset "data/construction.csv"))
(def construction-unit-map {"1 unit" "1"
                            "2 to 4 units" "2-4"
                            "5 units or more" "5+"})
construction

data/construction.csv [9 9]:

YearMonth1 unit2 to 4 units5 units or moreNortheastMidwestSouthWest
2018January859 348114169596339
2018February882 400138160655336
2018March862 356150154595330
2018April797 447144196613304
2018May875 36490169673319
2018June867 34276170610360
2018July829 360108183594310
2018August939 28690205649286
2018September835 304117175560296

Conversion 1 - Group two column types

(-> construction
    (api/pivot->longer #"^[125NWS].*|Midwest" {:target-columns [:units :region]
                                               :splitter (fn [col-name]
                                                           (if (re-matches #"^[125].*" col-name)
                                                             [(construction-unit-map col-name) nil]
                                                             [nil col-name]))
                                               :value-column-name :n
                                               :drop-missing? false}))

data/construction.csv [63 5]:

YearMonth:units:region:n
2018January1 859
2018February1 882
2018March1 862
2018April1 797
2018May1 875
2018June1 867
2018July1 829
2018August1 939
2018September1 835
2018January2-4
2018February2-4
2018March2-4
2018April2-4
2018May2-4
2018June2-4
2018July2-4
2018August2-4
2018September2-4
2018January5+ 348
2018February5+ 400
2018March5+ 356
2018April5+ 447
2018May5+ 364
2018June5+ 342
2018July5+ 360

Conversion 2 - Convert to longer form and back and rename columns

(-> construction
    (api/pivot->longer #"^[125NWS].*|Midwest" {:target-columns [:units :region]
                                               :splitter (fn [col-name]
                                                           (if (re-matches #"^[125].*" col-name)
                                                             [(construction-unit-map col-name) nil]
                                                             [nil col-name]))
                                               :value-column-name :n
                                               :drop-missing? false})
    (api/pivot->wider [:units :region] :n {:drop-missing? false})
    (api/rename-columns (zipmap (vals construction-unit-map)
                                (keys construction-unit-map))))

data/construction.csv [9 9]:

YearMonth1 unit2 to 4 units5 units or moreNortheastMidwestSouthWest
2018January859 348114169596339
2018February882 400138160655336
2018March862 356150154595330
2018April797 447144196613304
2018May875 36490169673319
2018June867 34276170610360
2018July829 360108183594310
2018August939 28690205649286
2018September835 304117175560296
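The rename step in the pipeline above relies on inverting `construction-unit-map` by zipping its values with its keys:

```clojure
;; invert the {long-name short-name} map into {short-name long-name}
(zipmap (vals construction-unit-map) (keys construction-unit-map))
;; => {"1" "1 unit", "2-4" "2 to 4 units", "5+" "5 units or more"}
```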

Various operations on stocks, examples taken from gather and spread manuals.

(def stocks-tidyr (api/dataset "data/stockstidyr.csv"))
stocks-tidyr

data/stockstidyr.csv [10 4]:

timeXYZ
2009-01-011.30989806-1.89040193-1.77946880
2009-01-02-0.29993804-1.824730902.39892513
2009-01-030.53647501-1.03606860-3.98697977
2009-01-04-1.88390802-0.52178390-2.83065490
2009-01-05-0.96052361-2.216833491.43715171
2009-01-06-1.18528966-2.893509243.39784140
2009-01-07-0.85207056-2.16794818-1.20108258
2009-01-080.25234172-0.32854117-1.53160473
2009-01-090.402571361.96407898-6.80878830
2009-01-10-0.643835002.68618382-2.55909321

Convert to longer form

(def stocks-long (api/pivot->longer stocks-tidyr ["X" "Y" "Z"] {:value-column-name :price
                                                                :target-columns :stocks}))
stocks-long

data/stockstidyr.csv [30 3]:

time:stocks:price
2009-01-01X1.30989806
2009-01-02X-0.29993804
2009-01-03X0.53647501
2009-01-04X-1.88390802
2009-01-05X-0.96052361
2009-01-06X-1.18528966
2009-01-07X-0.85207056
2009-01-08X0.25234172
2009-01-09X0.40257136
2009-01-10X-0.64383500
2009-01-01Y-1.89040193
2009-01-02Y-1.82473090
2009-01-03Y-1.03606860
2009-01-04Y-0.52178390
2009-01-05Y-2.21683349
2009-01-06Y-2.89350924
2009-01-07Y-2.16794818
2009-01-08Y-0.32854117
2009-01-09Y1.96407898
2009-01-10Y2.68618382
2009-01-01Z-1.77946880
2009-01-02Z2.39892513
2009-01-03Z-3.98697977
2009-01-04Z-2.83065490
2009-01-05Z1.43715171

Convert back to wide form

(api/pivot->wider stocks-long :stocks :price)

data/stockstidyr.csv [10 4]:

timeXYZ
2009-01-011.30989806-1.89040193-1.77946880
2009-01-02-0.29993804-1.824730902.39892513
2009-01-030.53647501-1.03606860-3.98697977
2009-01-04-1.88390802-0.52178390-2.83065490
2009-01-05-0.96052361-2.216833491.43715171
2009-01-06-1.18528966-2.893509243.39784140
2009-01-07-0.85207056-2.16794818-1.20108258
2009-01-080.25234172-0.32854117-1.53160473
2009-01-090.402571361.96407898-6.80878830
2009-01-10-0.643835002.68618382-2.55909321

Convert to wide form on time column (let’s limit values to a couple of rows)

(-> stocks-long
    (api/select-rows (range 0 30 4))
    (api/pivot->wider "time" :price {:drop-missing? false}))

data/stockstidyr.csv [3 6]:

| :stocks |  2009-01-01 |  2009-01-05 |  2009-01-09 | 2009-01-03 |  2009-01-07 |
| ------- | ----------: | ----------: | ----------: | ---------: | ----------: |
| Y       |             |             |             | -1.0360686 | -2.16794818 |
| X       |  1.30989806 | -0.96052361 |  0.40257136 |            |             |
| Z       | -1.77946880 |  1.43715171 | -6.80878830 |            |             |

Join/Concat Datasets

Dataset join and concatenation functions.

Joins accept a left-side dataset, a right-side dataset and a column selector. Options are the same as in the tech.ml.dataset functions.

The differences from the tech.ml.dataset join functions are: the argument order (datasets first) and the ability to join on multiple columns.

Additionally, set operations are defined: intersect and difference.

To concat two datasets rowwise you can choose:

  • concat - concatenates rows of datasets with matching columns; the sets of columns should be equal.
  • union - like concat but returns unique rows
  • bind - concatenates rows, adding missing columns filled with missing values

To add two datasets columnwise use append. The number of rows should be equal.
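A minimal sketch contrasting the row-wise variants (`xs` and `ys` are hypothetical datasets, assuming the `api` alias required earlier):

```clojure
;; two tiny hypothetical datasets
(def xs (api/dataset {:a [1 2]}))
(def ys (api/dataset {:a [2 3] :b [:p :q]}))

(api/concat xs (api/select-columns ys :a)) ;; rows appended; column sets must match
(api/union  xs (api/select-columns ys :a)) ;; like concat, duplicated rows removed
(api/bind   xs ys)                         ;; :b is missing in xs, filled with missing values
```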

Datasets used in examples:

(def ds1 (api/dataset {:a [1 2 1 2 3 4 nil nil 4]
                       :b (range 101 110)
                       :c (map str "abs tract")}))
(def ds2 (api/dataset {:a [nil 1 2 5 4 3 2 1 nil]
                       :b (range 110 101 -1)
                       :c (map str "datatable")
                       :d (symbol "X")}))
ds1
ds2

_unnamed [9 3]:

| :a |  :b | :c |
| -: | --: | -- |
|  1 | 101 | a  |
|  2 | 102 | b  |
|  1 | 103 | s  |
|  2 | 104 |    |
|  3 | 105 | t  |
|  4 | 106 | r  |
|    | 107 | a  |
|    | 108 | c  |
|  4 | 109 | t  |

_unnamed [9 4]:

| :a |  :b | :c | :d |
| -: | --: | -- | -- |
|    | 110 | d  | X  |
|  1 | 109 | a  | X  |
|  2 | 108 | t  | X  |
|  5 | 107 | a  | X  |
|  4 | 106 | t  | X  |
|  3 | 105 | a  | X  |
|  2 | 104 | b  | X  |
|  1 | 103 | l  | X  |
|    | 102 | e  | X  |

Left

(api/left-join ds1 ds2 :b)

left-outer-join [9 7]:

|  :b | :a | :c | :right.b | :right.a | :right.c | :d |
| --: | -: | -- | -------: | -------: | -------- | -- |
| 109 |  4 | t  |      109 |        1 | a        | X  |
| 108 |    | c  |      108 |        2 | t        | X  |
| 107 |    | a  |      107 |        5 | a        | X  |
| 106 |  4 | r  |      106 |        4 | t        | X  |
| 105 |  3 | t  |      105 |        3 | a        | X  |
| 104 |  2 |    |      104 |        2 | b        | X  |
| 103 |  1 | s  |      103 |        1 | l        | X  |
| 102 |  2 | b  |      102 |          | e        | X  |
| 101 |  1 | a  |          |          |          |    |


(api/left-join ds2 ds1 :b)

left-outer-join [9 7]:

|  :b | :a | :c | :d | :right.b | :right.a | :right.c |
| --: | -: | -- | -- | -------: | -------: | -------- |
| 102 |    | e  | X  |      102 |        2 | b        |
| 103 |  1 | l  | X  |      103 |        1 | s        |
| 104 |  2 | b  | X  |      104 |        2 |          |
| 105 |  3 | a  | X  |      105 |        3 | t        |
| 106 |  4 | t  | X  |      106 |        4 | r        |
| 107 |  5 | a  | X  |      107 |          | a        |
| 108 |  2 | t  | X  |      108 |          | c        |
| 109 |  1 | a  | X  |      109 |        4 | t        |
| 110 |    | d  | X  |          |          |          |


(api/left-join ds1 ds2 [:a :b])

left-outer-join [9 7]:

| :a |  :b | :c | :right.a | :right.b | :right.c | :d |
| -: | --: | -- | -------: | -------: | -------- | -- |
|  4 | 106 | r  |        4 |      106 | t        | X  |
|  3 | 105 | t  |        3 |      105 | a        | X  |
|  2 | 104 |    |        2 |      104 | b        | X  |
|  1 | 103 | s  |        1 |      103 | l        | X  |
|  1 | 101 | a  |          |          |          |    |
|  2 | 102 | b  |          |          |          |    |
|    | 107 | a  |          |          |          |    |
|    | 108 | c  |          |          |          |    |
|  4 | 109 | t  |          |          |          |    |


(api/left-join ds2 ds1 [:a :b])

left-outer-join [9 7]:

| :a |  :b | :c | :d | :right.a | :right.b | :right.c |
| -: | --: | -- | -- | -------: | -------: | -------- |
|  1 | 103 | l  | X  |        1 |      103 | s        |
|  2 | 104 | b  | X  |        2 |      104 |          |
|  3 | 105 | a  | X  |        3 |      105 | t        |
|  4 | 106 | t  | X  |        4 |      106 | r        |
|    | 110 | d  | X  |          |          |          |
|  1 | 109 | a  | X  |          |          |          |
|  2 | 108 | t  | X  |          |          |          |
|  5 | 107 | a  | X  |          |          |          |
|    | 102 | e  | X  |          |          |          |

Right

(api/right-join ds1 ds2 :b)

right-outer-join [9 7]:

|  :b | :a | :c | :right.b | :right.a | :right.c | :d |
| --: | -: | -- | -------: | -------: | -------- | -- |
| 109 |  4 | t  |      109 |        1 | a        | X  |
| 108 |    | c  |      108 |        2 | t        | X  |
| 107 |    | a  |      107 |        5 | a        | X  |
| 106 |  4 | r  |      106 |        4 | t        | X  |
| 105 |  3 | t  |      105 |        3 | a        | X  |
| 104 |  2 |    |      104 |        2 | b        | X  |
| 103 |  1 | s  |      103 |        1 | l        | X  |
| 102 |  2 | b  |      102 |          | e        | X  |
|     |    |    |      110 |          | d        | X  |


(api/right-join ds2 ds1 :b)

right-outer-join [9 7]:

|  :b | :a | :c | :d | :right.b | :right.a | :right.c |
| --: | -: | -- | -- | -------: | -------: | -------- |
| 102 |    | e  | X  |      102 |        2 | b        |
| 103 |  1 | l  | X  |      103 |        1 | s        |
| 104 |  2 | b  | X  |      104 |        2 |          |
| 105 |  3 | a  | X  |      105 |        3 | t        |
| 106 |  4 | t  | X  |      106 |        4 | r        |
| 107 |  5 | a  | X  |      107 |          | a        |
| 108 |  2 | t  | X  |      108 |          | c        |
| 109 |  1 | a  | X  |      109 |        4 | t        |
|     |    |    |    |      101 |        1 | a        |


(api/right-join ds1 ds2 [:a :b])

right-outer-join [9 7]:

| :a |  :b | :c | :right.a | :right.b | :right.c | :d |
| -: | --: | -- | -------: | -------: | -------- | -- |
|  4 | 106 | r  |        4 |      106 | t        | X  |
|  3 | 105 | t  |        3 |      105 | a        | X  |
|  2 | 104 |    |        2 |      104 | b        | X  |
|  1 | 103 | s  |        1 |      103 | l        | X  |
|    |     |    |          |      110 | d        | X  |
|    |     |    |        1 |      109 | a        | X  |
|    |     |    |        2 |      108 | t        | X  |
|    |     |    |        5 |      107 | a        | X  |
|    |     |    |          |      102 | e        | X  |


(api/right-join ds2 ds1 [:a :b])

right-outer-join [9 7]:

| :a |  :b | :c | :d | :right.a | :right.b | :right.c |
| -: | --: | -- | -- | -------: | -------: | -------- |
|  1 | 103 | l  | X  |        1 |      103 | s        |
|  2 | 104 | b  | X  |        2 |      104 |          |
|  3 | 105 | a  | X  |        3 |      105 | t        |
|  4 | 106 | t  | X  |        4 |      106 | r        |
|    |     |    |    |        1 |      101 | a        |
|    |     |    |    |        2 |      102 | b        |
|    |     |    |    |          |      107 | a        |
|    |     |    |    |          |      108 | c        |
|    |     |    |    |        4 |      109 | t        |

Inner

(api/inner-join ds1 ds2 :b)

inner-join [8 6]:

|  :b | :a | :c | :right.a | :right.c | :d |
| --: | -: | -- | -------: | -------- | -- |
| 109 |  4 | t  |        1 | a        | X  |
| 108 |    | c  |        2 | t        | X  |
| 107 |    | a  |        5 | a        | X  |
| 106 |  4 | r  |        4 | t        | X  |
| 105 |  3 | t  |        3 | a        | X  |
| 104 |  2 |    |        2 | b        | X  |
| 103 |  1 | s  |        1 | l        | X  |
| 102 |  2 | b  |          | e        | X  |


(api/inner-join ds2 ds1 :b)

inner-join [8 6]:

|  :b | :a | :c | :d | :right.a | :right.c |
| --: | -: | -- | -- | -------: | -------- |
| 102 |    | e  | X  |        2 | b        |
| 103 |  1 | l  | X  |        1 | s        |
| 104 |  2 | b  | X  |        2 |          |
| 105 |  3 | a  | X  |        3 | t        |
| 106 |  4 | t  | X  |        4 | r        |
| 107 |  5 | a  | X  |          | a        |
| 108 |  2 | t  | X  |          | c        |
| 109 |  1 | a  | X  |        4 | t        |


(api/inner-join ds1 ds2 [:a :b])

inner-join [4 7]:

| :a |  :b | :c | :right.a | :right.b | :right.c | :d |
| -: | --: | -- | -------: | -------: | -------- | -- |
|  4 | 106 | r  |        4 |      106 | t        | X  |
|  3 | 105 | t  |        3 |      105 | a        | X  |
|  2 | 104 |    |        2 |      104 | b        | X  |
|  1 | 103 | s  |        1 |      103 | l        | X  |


(api/inner-join ds2 ds1 [:a :b])

inner-join [4 7]:

| :a |  :b | :c | :d | :right.a | :right.b | :right.c |
| -: | --: | -- | -- | -------: | -------: | -------- |
|  1 | 103 | l  | X  |        1 |      103 | s        |
|  2 | 104 | b  | X  |        2 |      104 |          |
|  3 | 105 | a  | X  |        3 |      105 | t        |
|  4 | 106 | t  | X  |        4 |      106 | r        |

Full

Join keeping all rows

(api/full-join ds1 ds2 :b)

full-join [10 7]:

|  :b | :a | :c | :right.b | :right.a | :right.c | :d |
| --: | -: | -- | -------: | -------: | -------- | -- |
| 109 |  4 | t  |      109 |        1 | a        | X  |
| 108 |    | c  |      108 |        2 | t        | X  |
| 107 |    | a  |      107 |        5 | a        | X  |
| 106 |  4 | r  |      106 |        4 | t        | X  |
| 105 |  3 | t  |      105 |        3 | a        | X  |
| 104 |  2 |    |      104 |        2 | b        | X  |
| 103 |  1 | s  |      103 |        1 | l        | X  |
| 102 |  2 | b  |      102 |          | e        | X  |
| 101 |  1 | a  |          |          |          |    |
|     |    |    |      110 |          | d        | X  |


(api/full-join ds2 ds1 :b)

full-join [10 7]:

|  :b | :a | :c | :d | :right.b | :right.a | :right.c |
| --: | -: | -- | -- | -------: | -------: | -------- |
| 102 |    | e  | X  |      102 |        2 | b        |
| 103 |  1 | l  | X  |      103 |        1 | s        |
| 104 |  2 | b  | X  |      104 |        2 |          |
| 105 |  3 | a  | X  |      105 |        3 | t        |
| 106 |  4 | t  | X  |      106 |        4 | r        |
| 107 |  5 | a  | X  |      107 |          | a        |
| 108 |  2 | t  | X  |      108 |          | c        |
| 109 |  1 | a  | X  |      109 |        4 | t        |
| 110 |    | d  | X  |          |          |          |
|     |    |    |    |      101 |        1 | a        |


(api/full-join ds1 ds2 [:a :b])

full-join [14 7]:

| :a |  :b | :c | :right.a | :right.b | :right.c | :d |
| -: | --: | -- | -------: | -------: | -------- | -- |
|  4 | 106 | r  |        4 |      106 | t        | X  |
|  3 | 105 | t  |        3 |      105 | a        | X  |
|  2 | 104 |    |        2 |      104 | b        | X  |
|  1 | 103 | s  |        1 |      103 | l        | X  |
|  1 | 101 | a  |          |          |          |    |
|  2 | 102 | b  |          |          |          |    |
|    | 107 | a  |          |          |          |    |
|    | 108 | c  |          |          |          |    |
|  4 | 109 | t  |          |          |          |    |
|    |     |    |          |      110 | d        | X  |
|    |     |    |        1 |      109 | a        | X  |
|    |     |    |        2 |      108 | t        | X  |
|    |     |    |        5 |      107 | a        | X  |
|    |     |    |          |      102 | e        | X  |


(api/full-join ds2 ds1 [:a :b])

full-join [14 7]:

| :a |  :b | :c | :d | :right.a | :right.b | :right.c |
| -: | --: | -- | -- | -------: | -------: | -------- |
|  1 | 103 | l  | X  |        1 |      103 | s        |
|  2 | 104 | b  | X  |        2 |      104 |          |
|  3 | 105 | a  | X  |        3 |      105 | t        |
|  4 | 106 | t  | X  |        4 |      106 | r        |
|    | 110 | d  | X  |          |          |          |
|  1 | 109 | a  | X  |          |          |          |
|  2 | 108 | t  | X  |          |          |          |
|  5 | 107 | a  | X  |          |          |          |
|    | 102 | e  | X  |          |          |          |
|    |     |    |    |        1 |      101 | a        |
|    |     |    |    |        2 |      102 | b        |
|    |     |    |    |          |      107 | a        |
|    |     |    |    |          |      108 | c        |
|    |     |    |    |        4 |      109 | t        |

Semi

Return rows from ds1 matching ds2

(api/semi-join ds1 ds2 :b)

semi-join [5 3]:

|  :b | :a | :c |
| --: | -: | -- |
| 109 |  4 | t  |
| 106 |  4 | r  |
| 105 |  3 | t  |
| 104 |  2 |    |
| 103 |  1 | s  |


(api/semi-join ds2 ds1 :b)

semi-join [5 4]:

|  :b | :a | :c | :d |
| --: | -: | -- | -- |
| 103 |  1 | l  | X  |
| 104 |  2 | b  | X  |
| 105 |  3 | a  | X  |
| 106 |  4 | t  | X  |
| 109 |  1 | a  | X  |


(api/semi-join ds1 ds2 [:a :b])

semi-join [4 3]:

| :a |  :b | :c |
| -: | --: | -- |
|  4 | 106 | r  |
|  3 | 105 | t  |
|  2 | 104 |    |
|  1 | 103 | s  |


(api/semi-join ds2 ds1 [:a :b])

semi-join [4 4]:

| :a |  :b | :c | :d |
| -: | --: | -- | -- |
|  1 | 103 | l  | X  |
|  2 | 104 | b  | X  |
|  3 | 105 | a  | X  |
|  4 | 106 | t  | X  |

Anti

Return rows from ds1 not matching ds2

(api/anti-join ds1 ds2 :b)

anti-join [4 3]:

|  :b | :a | :c |
| --: | -: | -- |
| 108 |    | c  |
| 107 |    | a  |
| 102 |  2 | b  |
| 101 |  1 | a  |


(api/anti-join ds2 ds1 :b)

anti-join [4 4]:

|  :b | :a | :c | :d |
| --: | -: | -- | -- |
| 102 |    | e  | X  |
| 107 |  5 | a  | X  |
| 108 |  2 | t  | X  |
| 110 |    | d  | X  |


(api/anti-join ds1 ds2 [:a :b])

anti-join [5 3]:

| :a |  :b | :c |
| -: | --: | -- |
|  1 | 101 | a  |
|  2 | 102 | b  |
|    | 107 | a  |
|    | 108 | c  |
|  4 | 109 | t  |


(api/anti-join ds2 ds1 [:a :b])

anti-join [5 4]:

| :a |  :b | :c | :d |
| -: | --: | -- | -- |
|    | 110 | d  | X  |
|  1 | 109 | a  | X  |
|  2 | 108 | t  | X  |
|  5 | 107 | a  | X  |
|    | 102 | e  | X  |

asof

(def left-ds (api/dataset {:a [1 5 10]
                           :left-val ["a" "b" "c"]}))
(def right-ds (api/dataset {:a [1 2 3 6 7]
                            :right-val [:a :b :c :d :e]}))
left-ds
right-ds

_unnamed [3 2]:

| :a | :left-val |
| -: | --------- |
|  1 | a         |
|  5 | b         |
| 10 | c         |

_unnamed [5 2]:

| :a | :right-val |
| -: | ---------- |
|  1 | :a         |
|  2 | :b         |
|  3 | :c         |
|  6 | :d         |
|  7 | :e         |

(api/asof-join left-ds right-ds :a)

asof-<= [3 4]:

| :a | :left-val | :right.a | :right-val |
| -: | --------- | -------: | ---------- |
|  1 | a         |        1 | :a         |
|  5 | b         |        6 | :d         |
| 10 | c         |          |            |

(api/asof-join left-ds right-ds :a {:asof-op :nearest})

asof-nearest [3 4]:

| :a | :left-val | :right.a | :right-val |
| -: | --------- | -------: | ---------- |
|  1 | a         |        1 | :a         |
|  5 | b         |        6 | :d         |
| 10 | c         |        7 | :e         |

(api/asof-join left-ds right-ds :a {:asof-op :>=})

asof->= [3 4]:

| :a | :left-val | :right.a | :right-val |
| -: | --------- | -------: | ---------- |
|  1 | a         |        1 | :a         |
|  5 | b         |        3 | :c         |
| 10 | c         |        7 | :e         |

Concat

concat joins rows from other datasets

(api/concat ds1)

_unnamed [9 3]:

| :a |  :b | :c |
| -: | --: | -- |
|  1 | 101 | a  |
|  2 | 102 | b  |
|  1 | 103 | s  |
|  2 | 104 |    |
|  3 | 105 | t  |
|  4 | 106 | r  |
|    | 107 | a  |
|    | 108 | c  |
|  4 | 109 | t  |


concat-copying ensures all readers are evaluated.

(api/concat-copying ds1)

_unnamed [9 3]:

| :a |  :b | :c |
| -: | --: | -- |
|  1 | 101 | a  |
|  2 | 102 | b  |
|  1 | 103 | s  |
|  2 | 104 |    |
|  3 | 105 | t  |
|  4 | 106 | r  |
|    | 107 | a  |
|    | 108 | c  |
|  4 | 109 | t  |


(api/concat ds1 (api/drop-columns ds2 :d))

null [18 3]:

| :a |  :b | :c |
| -: | --: | -- |
|  1 | 101 | a  |
|  2 | 102 | b  |
|  1 | 103 | s  |
|  2 | 104 |    |
|  3 | 105 | t  |
|  4 | 106 | r  |
|    | 107 | a  |
|    | 108 | c  |
|  4 | 109 | t  |
|    | 110 | d  |
|  1 | 109 | a  |
|  2 | 108 | t  |
|  5 | 107 | a  |
|  4 | 106 | t  |
|  3 | 105 | a  |
|  2 | 104 | b  |
|  1 | 103 | l  |
|    | 102 | e  |


(apply api/concat (repeatedly 3 #(api/random DS)))

null [27 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
|   1 |   9 | 1.5 | C   |
|   1 |   7 | 0.5 | A   |
|   2 |   4 | 0.5 | A   |
|   1 |   3 | 1.5 | C   |
|   1 |   9 | 1.5 | C   |
|   2 |   6 | 1.5 | C   |
|   1 |   9 | 1.5 | C   |
|   2 |   6 | 1.5 | C   |
|   1 |   9 | 1.5 | C   |
|   2 |   8 | 1.0 | B   |
|   1 |   9 | 1.5 | C   |
|   2 |   8 | 1.0 | B   |
|   2 |   6 | 1.5 | C   |
|   1 |   5 | 1.0 | B   |
|   2 |   2 | 1.0 | B   |
|   1 |   1 | 0.5 | A   |
|   1 |   9 | 1.5 | C   |
|   2 |   4 | 0.5 | A   |
|   1 |   1 | 0.5 | A   |
|   1 |   9 | 1.5 | C   |
|   2 |   8 | 1.0 | B   |
|   1 |   7 | 0.5 | A   |
|   2 |   4 | 0.5 | A   |
|   1 |   9 | 1.5 | C   |
|   2 |   2 | 1.0 | B   |
Concat grouped dataset

Concatenating grouped datasets also results in a grouped dataset.

(api/concat (api/group-by DS [:V3])
            (api/group-by DS [:V4]))

null [6 3]:

| :group-id | :name     | :data                   |
| --------: | --------- | ----------------------- |
|         0 | {:V3 0.5} | Group: {:V3 0.5} [3 4]: |
|         1 | {:V3 1.0} | Group: {:V3 1.0} [3 4]: |
|         2 | {:V3 1.5} | Group: {:V3 1.5} [3 4]: |
|         3 | {:V4 "A"} | Group: {:V4 "A"} [3 4]: |
|         4 | {:V4 "B"} | Group: {:V4 "B"} [3 4]: |
|         5 | {:V4 "C"} | Group: {:V4 "C"} [3 4]: |

Union

The same as concat but returns unique rows

(apply api/union (api/drop-columns ds2 :d) (repeat 10 ds1))

union [18 3]:

| :a |  :b | :c |
| -: | --: | -- |
|    | 110 | d  |
|  1 | 109 | a  |
|  2 | 108 | t  |
|  5 | 107 | a  |
|  4 | 106 | t  |
|  3 | 105 | a  |
|  2 | 104 | b  |
|  1 | 103 | l  |
|    | 102 | e  |
|  1 | 101 | a  |
|  2 | 102 | b  |
|  1 | 103 | s  |
|  2 | 104 |    |
|  3 | 105 | t  |
|  4 | 106 | r  |
|    | 107 | a  |
|    | 108 | c  |
|  4 | 109 | t  |


(apply api/union (repeatedly 10 #(api/random DS)))

union [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
|   2 |   6 | 1.5 | C   |
|   1 |   3 | 1.5 | C   |
|   1 |   5 | 1.0 | B   |
|   2 |   8 | 1.0 | B   |
|   1 |   9 | 1.5 | C   |
|   2 |   2 | 1.0 | B   |
|   2 |   4 | 0.5 | A   |
|   1 |   7 | 0.5 | A   |
|   1 |   1 | 0.5 | A   |

Bind

bind adds empty columns during concat

(api/bind ds1 ds2)

null [18 4]:

| :a |  :b | :c | :d |
| -: | --: | -- | -- |
|  1 | 101 | a  |    |
|  2 | 102 | b  |    |
|  1 | 103 | s  |    |
|  2 | 104 |    |    |
|  3 | 105 | t  |    |
|  4 | 106 | r  |    |
|    | 107 | a  |    |
|    | 108 | c  |    |
|  4 | 109 | t  |    |
|    | 110 | d  | X  |
|  1 | 109 | a  | X  |
|  2 | 108 | t  | X  |
|  5 | 107 | a  | X  |
|  4 | 106 | t  | X  |
|  3 | 105 | a  | X  |
|  2 | 104 | b  | X  |
|  1 | 103 | l  | X  |
|    | 102 | e  | X  |


(api/bind ds2 ds1)

null [18 4]:

| :a |  :b | :c | :d |
| -: | --: | -- | -- |
|    | 110 | d  | X  |
|  1 | 109 | a  | X  |
|  2 | 108 | t  | X  |
|  5 | 107 | a  | X  |
|  4 | 106 | t  | X  |
|  3 | 105 | a  | X  |
|  2 | 104 | b  | X  |
|  1 | 103 | l  | X  |
|    | 102 | e  | X  |
|  1 | 101 | a  |    |
|  2 | 102 | b  |    |
|  1 | 103 | s  |    |
|  2 | 104 |    |    |
|  3 | 105 | t  |    |
|  4 | 106 | r  |    |
|    | 107 | a  |    |
|    | 108 | c  |    |
|  4 | 109 | t  |    |

Append

append concats columns

(api/append ds1 ds2)

_unnamed [9 7]:

| :a |  :b | :c | :a |  :b | :c | :d |
| -: | --: | -- | -: | --: | -- | -- |
|  1 | 101 | a  |    | 110 | d  | X  |
|  2 | 102 | b  |  1 | 109 | a  | X  |
|  1 | 103 | s  |  2 | 108 | t  | X  |
|  2 | 104 |    |  5 | 107 | a  | X  |
|  3 | 105 | t  |  4 | 106 | t  | X  |
|  4 | 106 | r  |  3 | 105 | a  | X  |
|    | 107 | a  |  2 | 104 | b  | X  |
|    | 108 | c  |  1 | 103 | l  | X  |
|  4 | 109 | t  |    | 102 | e  | X  |

Intersection

(api/intersect (api/select-columns ds1 :b)
               (api/select-columns ds2 :b))

intersection [8 1]:

:b
109
108
107
106
105
104
103
102

Difference

(api/difference (api/select-columns ds1 :b)
                (api/select-columns ds2 :b))

difference [1 1]:

:b
101

(api/difference (api/select-columns ds2 :b)
                (api/select-columns ds1 :b))

difference [1 1]:

:b
110

Split into train/test

In the ML world you very often need to test a given model and prepare a collection of train and test datasets. split creates a new dataset with two additional columns:

  • :$split-name - with :train, :test, :split-2, … values
  • :$split-id - id of the split group (for k-fold and repeats)

split-type can be one of the following:

  • :kfold (default) - k-fold strategy, :k defines the number of folds (defaults to 5), produces k splits
  • :bootstrap - :ratio defines the ratio of observations put into the result (defaults to 1.0), produces 1 split
  • :holdout - split into two or more parts with given ratio(s) (defaults to 2/3), produces 1 split
  • :holdouts - splits into two parts for each of the ascending ratios. The range of ratios is given by the steps option
  • :loo - leave one out, produces the same number of splits as the number of observations

:holdout can also accept probabilities or ratios and can split into more than two subdatasets

Additionally you can provide:

  • :seed - for random number generator
  • :shuffle? - turn on/off shuffle of the rows (default: true)
  • :repeats - repeat procedure :repeats times
  • :partition-selector - same as in group-by, for stratified splitting that reflects the dataset structure in the splits
  • :split-names - names of the subdatasets, if different from the default, i.e. [:train :test :split-2 ...]
  • :split-col-name - a column where name of split is stored, either :train or :test values (default: :$split-name)
  • :split-id-col-name - a column where id of the train/test pair is stored (default: :$split-id)

In case of a grouped dataset, each group is processed separately.

See more

(def for-splitting (api/dataset (map-indexed (fn [id v] {:id id
                                                        :partition v
                                                        :group (rand-nth [:g1 :g2 :g3])})
                                             (concat (repeat 20 :a) (repeat 5 :b)))))
for-splitting

_unnamed [25 3]:

| :group | :partition | :id |
| ------ | ---------- | --: |
| :g1    | :a         |   0 |
| :g1    | :a         |   1 |
| :g1    | :a         |   2 |
| :g2    | :a         |   3 |
| :g1    | :a         |   4 |
| :g2    | :a         |   5 |
| :g2    | :a         |   6 |
| :g3    | :a         |   7 |
| :g2    | :a         |   8 |
| :g2    | :a         |   9 |
| :g1    | :a         |  10 |
| :g1    | :a         |  11 |
| :g2    | :a         |  12 |
| :g3    | :a         |  13 |
| :g3    | :a         |  14 |
| :g1    | :a         |  15 |
| :g2    | :a         |  16 |
| :g1    | :a         |  17 |
| :g2    | :a         |  18 |
| :g1    | :a         |  19 |
| :g3    | :b         |  20 |
| :g1    | :b         |  21 |
| :g3    | :b         |  22 |
| :g3    | :b         |  23 |
| :g2    | :b         |  24 |

k-Fold

Returns the default k=5 splits

(-> for-splitting
    (api/split)
    (api/head 30))

_unnamed, (splitted) [30 5]:

| :group | :partition | :id | :$split-name | :$split-id |
| ------ | ---------- | --: | ------------ | ---------: |
| :g2    | :a         |   3 | :train       |          0 |
| :g2    | :a         |   8 | :train       |          0 |
| :g2    | :a         |  12 | :train       |          0 |
| :g2    | :a         |   6 | :train       |          0 |
| :g1    | :a         |  15 | :train       |          0 |
| :g2    | :a         |   9 | :train       |          0 |
| :g3    | :a         |   7 | :train       |          0 |
| :g1    | :a         |   0 | :train       |          0 |
| :g1    | :a         |  19 | :train       |          0 |
| :g1    | :b         |  21 | :train       |          0 |
| :g3    | :a         |  13 | :train       |          0 |
| :g2    | :b         |  24 | :train       |          0 |
| :g2    | :a         |  18 | :train       |          0 |
| :g2    | :a         |  16 | :train       |          0 |
| :g2    | :a         |   5 | :train       |          0 |
| :g3    | :b         |  22 | :train       |          0 |
| :g1    | :a         |  17 | :train       |          0 |
| :g3    | :b         |  23 | :train       |          0 |
| :g1    | :a         |   1 | :train       |          0 |
| :g1    | :a         |  11 | :train       |          0 |
| :g1    | :a         |  10 | :test        |          0 |
| :g1    | :a         |   4 | :test        |          0 |
| :g3    | :a         |  14 | :test        |          0 |
| :g3    | :b         |  20 | :test        |          0 |
| :g1    | :a         |   2 | :test        |          0 |
| :g1    | :a         |  10 | :train       |          1 |
| :g1    | :a         |   4 | :train       |          1 |
| :g3    | :a         |  14 | :train       |          1 |
| :g3    | :b         |  20 | :train       |          1 |
| :g1    | :a         |   2 | :train       |          1 |

Partition according to the :partition column to reflect its distribution

(-> for-splitting
    (api/split :kfold {:partition-selector :partition})
    (api/head 30))

_unnamed, (splitted) [30 5]:

| :group | :partition | :id | :$split-name | :$split-id |
| ------ | ---------- | --: | ------------ | ---------: |
| :g1    | :a         |  10 | :train       |          0 |
| :g2    | :a         |   5 | :train       |          0 |
| :g2    | :a         |   6 | :train       |          0 |
| :g2    | :a         |   9 | :train       |          0 |
| :g1    | :a         |   2 | :train       |          0 |
| :g1    | :a         |   0 | :train       |          0 |
| :g2    | :a         |  12 | :train       |          0 |
| :g1    | :a         |   4 | :train       |          0 |
| :g2    | :a         |  16 | :train       |          0 |
| :g1    | :a         |  17 | :train       |          0 |
| :g1    | :a         |   1 | :train       |          0 |
| :g3    | :a         |  13 | :train       |          0 |
| :g1    | :a         |  15 | :train       |          0 |
| :g2    | :a         |   3 | :train       |          0 |
| :g2    | :a         |   8 | :train       |          0 |
| :g3    | :a         |   7 | :train       |          0 |
| :g1    | :a         |  19 | :test        |          0 |
| :g2    | :a         |  18 | :test        |          0 |
| :g3    | :a         |  14 | :test        |          0 |
| :g1    | :a         |  11 | :test        |          0 |
| :g1    | :a         |  19 | :train       |          1 |
| :g2    | :a         |  18 | :train       |          1 |
| :g3    | :a         |  14 | :train       |          1 |
| :g1    | :a         |  11 | :train       |          1 |
| :g1    | :a         |   2 | :train       |          1 |
| :g1    | :a         |   0 | :train       |          1 |
| :g2    | :a         |  12 | :train       |          1 |
| :g1    | :a         |   4 | :train       |          1 |
| :g2    | :a         |  16 | :train       |          1 |
| :g1    | :a         |  17 | :train       |          1 |

Bootstrap

(api/split for-splitting :bootstrap)

_unnamed, (splitted) [34 5]:

| :group | :partition | :id | :$split-name | :$split-id |
| ------ | ---------- | --: | ------------ | ---------: |
| :g2 | :b | 24 | :train | 0 |
| :g3 | :b | 22 | :train | 0 |
| :g1 | :a | 2 | :train | 0 |
| :g2 | :a | 12 | :train | 0 |
| :g1 | :b | 21 | :train | 0 |
| :g2 | :a | 6 | :train | 0 |
| :g3 | :a | 13 | :train | 0 |
| :g1 | :a | 0 | :train | 0 |
| :g3 | :a | 14 | :train | 0 |
| :g1 | :a | 2 | :train | 0 |
| :g3 | :b | 22 | :train | 0 |
| :g1 | :b | 21 | :train | 0 |
| :g3 | :b | 23 | :train | 0 |
| :g1 | :a | 1 | :train | 0 |
| :g3 | :b | 20 | :train | 0 |
| :g3 | :a | 13 | :train | 0 |
| :g3 | :a | 14 | :train | 0 |
| :g2 | :a | 6 | :train | 0 |
| :g3 | :a | 14 | :train | 0 |
| :g2 | :a | 3 | :train | 0 |
| :g1 | :a | 10 | :train | 0 |
| :g3 | :b | 22 | :train | 0 |
| :g1 | :a | 11 | :train | 0 |
| :g2 | :a | 12 | :train | 0 |
| :g1 | :a | 4 | :train | 0 |

with repeats, to get 100 splits

(-> for-splitting
    (api/split :bootstrap {:repeats 100})
    (:$split-id)
    (distinct)
    (count))

100

Holdout

with small ratio

(api/split for-splitting :holdout {:ratio 0.2})

_unnamed, (splitted) [25 5]:

| :group | :partition | :id | :$split-name | :$split-id |
| ------ | ---------- | --: | ------------ | ---------: |
| :g1 | :b | 21 | :train | 0 |
| :g1 | :a | 2 | :train | 0 |
| :g2 | :a | 6 | :train | 0 |
| :g1 | :a | 15 | :train | 0 |
| :g2 | :a | 8 | :train | 0 |
| :g2 | :b | 24 | :test | 0 |
| :g3 | :a | 13 | :test | 0 |
| :g2 | :a | 5 | :test | 0 |
| :g2 | :a | 12 | :test | 0 |
| :g1 | :a | 19 | :test | 0 |
| :g3 | :b | 22 | :test | 0 |
| :g1 | :a | 10 | :test | 0 |
| :g2 | :a | 18 | :test | 0 |
| :g1 | :a | 11 | :test | 0 |
| :g1 | :a | 0 | :test | 0 |
| :g3 | :b | 23 | :test | 0 |
| :g1 | :a | 1 | :test | 0 |
| :g2 | :a | 16 | :test | 0 |
| :g3 | :a | 7 | :test | 0 |
| :g1 | :a | 17 | :test | 0 |
| :g2 | :a | 3 | :test | 0 |
| :g3 | :a | 14 | :test | 0 |
| :g1 | :a | 4 | :test | 0 |
| :g3 | :b | 20 | :test | 0 |
| :g2 | :a | 9 | :test | 0 |

you can split into more than two subdatasets with holdout

(api/split for-splitting :holdout {:ratio [0.1 0.2 0.3 0.15 0.25]})

_unnamed, (splitted) [25 5]:

| :group | :partition | :id | :$split-name | :$split-id |
| ------ | ---------- | --: | ------------ | ---------: |
| :g2 | :a | 3 | :train | 0 |
| :g2 | :b | 24 | :train | 0 |
| :g3 | :b | 23 | :test | 0 |
| :g1 | :a | 0 | :test | 0 |
| :g1 | :a | 17 | :test | 0 |
| :g2 | :a | 12 | :test | 0 |
| :g1 | :a | 1 | :test | 0 |
| :g1 | :a | 15 | :split-2 | 0 |
| :g2 | :a | 16 | :split-2 | 0 |
| :g2 | :a | 18 | :split-2 | 0 |
| :g1 | :a | 19 | :split-2 | 0 |
| :g2 | :a | 6 | :split-2 | 0 |
| :g2 | :a | 5 | :split-2 | 0 |
| :g1 | :a | 11 | :split-2 | 0 |
| :g1 | :a | 2 | :split-3 | 0 |
| :g1 | :a | 4 | :split-3 | 0 |
| :g3 | :b | 22 | :split-3 | 0 |
| :g2 | :a | 8 | :split-4 | 0 |
| :g1 | :a | 10 | :split-4 | 0 |
| :g3 | :a | 7 | :split-4 | 0 |
| :g3 | :b | 20 | :split-4 | 0 |
| :g3 | :a | 13 | :split-4 | 0 |
| :g3 | :a | 14 | :split-4 | 0 |
| :g1 | :b | 21 | :split-4 | 0 |
| :g2 | :a | 9 | :split-4 | 0 |

you can also use proportions with custom names

(api/split for-splitting :holdout {:ratio [5 3 11 2]
                                   :split-names ["small" "smaller" "big" "the rest"]})

_unnamed, (splitted) [25 5]:

| :group | :partition | :id | :$split-name | :$split-id |
| ------ | ---------- | --: | ------------ | ---------: |
| :g1 | :a | 1 | small | 0 |
| :g1 | :a | 19 | small | 0 |
| :g1 | :a | 15 | small | 0 |
| :g2 | :b | 24 | small | 0 |
| :g3 | :a | 7 | small | 0 |
| :g3 | :a | 14 | smaller | 0 |
| :g2 | :a | 3 | smaller | 0 |
| :g3 | :b | 23 | smaller | 0 |
| :g2 | :a | 12 | big | 0 |
| :g2 | :a | 8 | big | 0 |
| :g1 | :a | 0 | big | 0 |
| :g3 | :a | 13 | big | 0 |
| :g1 | :a | 4 | big | 0 |
| :g1 | :a | 11 | big | 0 |
| :g2 | :a | 18 | big | 0 |
| :g1 | :a | 10 | big | 0 |
| :g2 | :a | 5 | big | 0 |
| :g1 | :a | 17 | big | 0 |
| :g2 | :a | 16 | big | 0 |
| :g3 | :b | 20 | big | 0 |
| :g2 | :a | 6 | big | 0 |
| :g3 | :b | 22 | the rest | 0 |
| :g1 | :a | 2 | the rest | 0 |
| :g1 | :b | 21 | the rest | 0 |
| :g2 | :a | 9 | the rest | 0 |

Holdouts

Ratios stepping from 5% to 95% of the dataset generate 15 splits with an ascending number of rows in the train dataset.

(-> (api/split for-splitting :holdouts {:steps [0.05 0.95 1.5]
                                        :shuffle? false})
    (api/group-by [:$split-id :$split-name]))

_unnamed [30 3]:

:group-id:name:data
0{:$split-name :train, :$split-id 0}Group: {:$split-name :train, :$split-id 0} [1 5]:
1{:$split-name :test, :$split-id 0}Group: {:$split-name :test, :$split-id 0} [24 5]:
2{:$split-name :train, :$split-id 1}Group: {:$split-name :train, :$split-id 1} [2 5]:
3{:$split-name :test, :$split-id 1}Group: {:$split-name :test, :$split-id 1} [23 5]:
4{:$split-name :train, :$split-id 2}Group: {:$split-name :train, :$split-id 2} [4 5]:
5{:$split-name :test, :$split-id 2}Group: {:$split-name :test, :$split-id 2} [21 5]:
6{:$split-name :train, :$split-id 3}Group: {:$split-name :train, :$split-id 3} [5 5]:
7{:$split-name :test, :$split-id 3}Group: {:$split-name :test, :$split-id 3} [20 5]:
8{:$split-name :train, :$split-id 4}Group: {:$split-name :train, :$split-id 4} [7 5]:
9{:$split-name :test, :$split-id 4}Group: {:$split-name :test, :$split-id 4} [18 5]:
10{:$split-name :train, :$split-id 5}Group: {:$split-name :train, :$split-id 5} [8 5]:
11{:$split-name :test, :$split-id 5}Group: {:$split-name :test, :$split-id 5} [17 5]:
12{:$split-name :train, :$split-id 6}Group: {:$split-name :train, :$split-id 6} [10 5]:
13{:$split-name :test, :$split-id 6}Group: {:$split-name :test, :$split-id 6} [15 5]:
14{:$split-name :train, :$split-id 7}Group: {:$split-name :train, :$split-id 7} [11 5]:
15{:$split-name :test, :$split-id 7}Group: {:$split-name :test, :$split-id 7} [14 5]:
16{:$split-name :train, :$split-id 8}Group: {:$split-name :train, :$split-id 8} [13 5]:
17{:$split-name :test, :$split-id 8}Group: {:$split-name :test, :$split-id 8} [12 5]:
18{:$split-name :train, :$split-id 9}Group: {:$split-name :train, :$split-id 9} [14 5]:
19{:$split-name :test, :$split-id 9}Group: {:$split-name :test, :$split-id 9} [11 5]:
20{:$split-name :train, :$split-id 10}Group: {:$split-name :train, :$split-id 10} [16 5]:
21{:$split-name :test, :$split-id 10}Group: {:$split-name :test, :$split-id 10} [9 5]:
22{:$split-name :train, :$split-id 11}Group: {:$split-name :train, :$split-id 11} [17 5]:
23{:$split-name :test, :$split-id 11}Group: {:$split-name :test, :$split-id 11} [8 5]:
24{:$split-name :train, :$split-id 12}Group: {:$split-name :train, :$split-id 12} [19 5]:

Leave One Out

(-> for-splitting
    (api/split :loo)
    (api/head 30))

_unnamed, (splitted) [30 5]:

| :group | :partition | :id | :$split-name | :$split-id |
| ------ | ---------- | --: | ------------ | ---------: |
| :g1 | :a | 10 | :train | 0 |
| :g1 | :a | 2 | :train | 0 |
| :g3 | :b | 22 | :train | 0 |
| :g3 | :b | 20 | :train | 0 |
| :g3 | :a | 7 | :train | 0 |
| :g3 | :b | 23 | :train | 0 |
| :g1 | :a | 19 | :train | 0 |
| :g2 | :a | 9 | :train | 0 |
| :g2 | :a | 12 | :train | 0 |
| :g3 | :a | 13 | :train | 0 |
| :g2 | :a | 5 | :train | 0 |
| :g2 | :a | 16 | :train | 0 |
| :g2 | :a | 8 | :train | 0 |
| :g1 | :a | 17 | :train | 0 |
| :g1 | :a | 15 | :train | 0 |
| :g1 | :a | 4 | :train | 0 |
| :g1 | :b | 21 | :train | 0 |
| :g3 | :a | 14 | :train | 0 |
| :g1 | :a | 1 | :train | 0 |
| :g1 | :a | 0 | :train | 0 |
| :g2 | :a | 18 | :train | 0 |
| :g1 | :a | 11 | :train | 0 |
| :g2 | :b | 24 | :train | 0 |
| :g2 | :a | 3 | :train | 0 |
| :g2 | :a | 6 | :test | 0 |
| :g2 | :a | 6 | :train | 1 |
| :g1 | :a | 2 | :train | 1 |
| :g3 | :b | 22 | :train | 1 |
| :g3 | :b | 20 | :train | 1 |
| :g3 | :a | 7 | :train | 1 |

(-> for-splitting
    (api/split :loo)
    (api/row-count))

625

Grouped dataset with partitioning

(-> for-splitting
    (api/group-by :group)
    (api/split :bootstrap {:partition-selector :partition :seed 11 :ratio 0.8}))

_unnamed [3 3]:

:group-id:name:data
0:g1Group: :g1, (splitted) [10 5]:
1:g2Group: :g2, (splitted) [10 5]:
2:g3Group: :g3, (splitted) [9 5]:

Split as a sequence

To get a sequence of pairs, use the split->seq function

(-> for-splitting
    (api/split->seq :kfold {:partition-selector :partition})
    (first))

{:train Group: 0 [20 3]:

:group:partition:id
:g1:a17
:g3:a14
:g2:a12
:g1:a0
:g1:a4
:g2:a6
:g1:a11
:g1:a1
:g1:a2
:g1:a15
:g2:a16
:g2:a5
:g2:a9
:g1:a19
:g2:a3
:g1:a10
:g3:b20
:g1:b21
:g3:b23
:g3:b22

, :test Group: 0 [5 3]:

:group:partition:id
:g2:a8
:g2:a18
:g3:a7
:g3:a13
:g2:b24

}

(-> for-splitting
    (api/group-by :group)
    (api/split->seq :bootstrap {:partition-selector :partition :seed 11 :ratio 0.8 :repeats 2})
    (first))

[:g1 ({:train Group: 0 [9 3]:

:group:partition:id
:g1:a15
:g1:a0
:g1:a10
:g1:a19
:g1:a1
:g1:a4
:g1:a17
:g1:a11
:g1:b21

, :test Group: 0 [1 3]:

:group:partition:id
:g1:a2

} {:train Group: 1 [9 3]:

:group:partition:id
:g1:a4
:g1:a19
:g1:a11
:g1:a0
:g1:a2
:g1:a19
:g1:a1
:g1:a15
:g1:b21

, :test Group: 1 [2 3]:

:group:partition:id
:g1:a17
:g1:a10

})]

Pipeline

tablecloth.pipeline exports special versions of the API functions which create functions operating on a dataset only. This makes it possible to chain and compose operations easily.

There are two ways to create pipelines:

  • functional, as a composition of functions
  • declarative, separating task declarations and concrete parametrization.

Pipeline operations are prepared to work with the metamorph library. That means the result of a pipeline is wrapped in a map and the dataset is stored under the :metamorph/data key.

(require '[tablecloth.pipeline :as pip])

Functional

To create a composable function, call the API function as defined in the tablecloth.pipeline namespace, without the ds argument.

(pip/select-columns :type/numerical)

#function[tablecloth.pipeline/select-columns/fn--46458]

Calling such a function on a dataset gives the requested result.

((pip/select-columns :type/numerical) DS)

#:metamorph{:data _unnamed [9 3]:

:V1:V2:V3
110.5
221.0
131.5
240.5
151.0
261.5
170.5
281.0
191.5

}
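Since the result is a metamorph context map, the plain dataset can be pulled out with the :metamorph/data key mentioned above (a minimal sketch):

```clojure
;; extract the dataset from the metamorph context map
(:metamorph/data ((pip/select-columns :type/numerical) DS))
```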

Pipeline functions can be composed using the comp function or pipeline. The latter is just a reversed comp, so operations are listed in order from first to last.

(let [p (pip/pipeline (pip/group-by :V1)
                      (pip/fold-by :V4)
                      (pip/ungroup))]
  (p DS))

#:metamorph{:data _unnamed [6 4]:

:V4:V2:V3:V1
A[1 7][0.5 0.5][1 1]
C[3 9][1.5 1.5][1 1]
B[5][1.0][1]
B[2 8][1.0 1.0][2 2]
A[4][0.5][2]
C[6][1.5][2]

}
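For comparison, the same chain can be written with clojure.core/comp by listing the operations in reverse order (a sketch equivalent to the pipeline version above):

```clojure
;; comp applies right-to-left, so the first operation goes last
(let [p (comp (pip/ungroup)
              (pip/fold-by :V4)
              (pip/group-by :V1))]
  (p DS))
```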

Declarative

To create a pipeline in a declarative way, use the ->pipeline function and pass a sequence of definitions. A definition is simply a sequence where the first position holds the name of the operation as a keyword, a symbol or a var. The rest are operation parameters, mostly the same as when calling a function directly.

To help keep the pipeline declaration a pure data structure, you can use namespaced keywords to reference defined symbols. The special ctx namespace refers to values stored in an optional map passed as an argument to ->pipeline.

You can’t use the type and !type namespaces for a symbol reference, because they are reserved for column type selectors.

(def pipeline-declaration [[:pip/group-by :V1]
                           [:pip/unique-by ::unique-by-operation {:strategy :ctx/strategy}]
                           [:pip/ungroup {:add-group-as-column :from-V1}]])

(def unique-by-operation (fn [m] (mod (:V2 m) 3)))

(def pipeline-1 (pip/->pipeline {:strategy vec} pipeline-declaration))
(def pipeline-2 (pip/->pipeline {:strategy set} pipeline-declaration))
(pipeline-1 DS)

#:metamorph{:data _unnamed [6 5]:

:from-V1:V2:V4:V3:V1
1[1 7][“A” “A”][0.5 0.5][1 1]
1[3 9][“C” “C”][1.5 1.5][1 1]
1[5][“B”][1.0][1]
2[2 8][“B” “B”][1.0 1.0][2 2]
2[4][“A”][0.5][2]
2[6][“C”][1.5][2]

}

(pipeline-2 DS)

#:metamorph{:data _unnamed [6 5]:

:from-V1:V2:V4:V3:V1
1#{7 1}#{“A”}#{0.5}#{1}
1#{3 9}#{“C”}#{1.5}#{1}
1#{5}#{“B”}#{1.0}#{1}
2#{2 8}#{“B”}#{1.0}#{2}
2#{4}#{“A”}#{0.5}#{2}
2#{6}#{“C”}#{1.5}#{2}

}

Custom operator

A custom pipeline operator is just a function which returns another function operating on a dataset. If you want it to work with metamorph, you have to use the metamorph lift function.

(defn duplicate-columns
  [column-selector]
  (fn [ds]
    (let [column-names (api/column-names ds column-selector)]
      (reduce (fn [d n]
                (api/add-column d (str n "-copy") (d n))) ds column-names))))

(def pipeline (pip/->pipeline [[::duplicate-columns :type/numerical]]))
(pipeline DS)

_unnamed [9 7]:

:V1:V2:V3:V4:V1-copy:V2-copy:V3-copy
110.5A110.5
221.0B221.0
131.5C131.5
240.5A240.5
151.0B151.0
261.5C261.5
170.5A170.5
281.0B281.0
191.5C191.5

Functions

This API doesn’t provide any statistical, numerical or date/time functions. Use the namespaces below:

| Namespace | Functions |
| --------- | --------- |
| tech.v3.datatype.functional | primitive operations, reducers, statistics |
| tech.v3.datatype.datetime | date/time converters and operations |
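For example, a column mean can be computed with the functional namespace (dfn is the alias required at the top of this document):

```clojure
;; mean of :V2 (the integers 1..9)
(dfn/mean (DS :V2))
;; => 5.0
```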

Other examples

Stocks

(defonce stocks (api/dataset "https://raw.githubusercontent.com/techascent/tech.ml.dataset/master/test/data/stocks.csv" {:key-fn keyword}))
stocks

https://raw.githubusercontent.com/techascent/tech.ml.dataset/master/test/data/stocks.csv [560 3]:

:symbol:date:price
MSFT2000-01-0139.81
MSFT2000-02-0136.35
MSFT2000-03-0143.22
MSFT2000-04-0128.37
MSFT2000-05-0125.45
MSFT2000-06-0132.54
MSFT2000-07-0128.40
MSFT2000-08-0128.40
MSFT2000-09-0124.53
MSFT2000-10-0128.02
MSFT2000-11-0123.34
MSFT2000-12-0117.65
MSFT2001-01-0124.84
MSFT2001-02-0124.00
MSFT2001-03-0122.25
MSFT2001-04-0127.56
MSFT2001-05-0128.14
MSFT2001-06-0129.70
MSFT2001-07-0126.93
MSFT2001-08-0123.21
MSFT2001-09-0120.82
MSFT2001-10-0123.65
MSFT2001-11-0126.12
MSFT2001-12-0126.95
MSFT2002-01-0125.92
(-> stocks
    (api/group-by (fn [row]
                    {:symbol (:symbol row)
                     :year (tech.v3.datatype.datetime/long-temporal-field :years (:date row))}))
    (api/aggregate #(tech.v3.datatype.functional/mean (% :price)))
    (api/order-by [:symbol :year]))

_unnamed [51 3]:

:summary:year:symbol
21.748333332000AAPL
10.175833332001AAPL
9.408333332002AAPL
9.347500002003AAPL
18.723333332004AAPL
48.171666672005AAPL
72.043333332006AAPL
133.353333332007AAPL
138.480833332008AAPL
150.393333332009AAPL
206.566666672010AAPL
43.930833332000AMZN
11.739166672001AMZN
16.723333332002AMZN
39.016666672003AMZN
43.267500002004AMZN
40.187500002005AMZN
36.251666672006AMZN
69.952500002007AMZN
69.015000002008AMZN
90.730833332009AMZN
124.210000002010AMZN
159.476000002004GOOG
286.472500002005GOOG
415.256666672006GOOG
(-> stocks
    (api/group-by (juxt :symbol #(tech.v3.datatype.datetime/long-temporal-field :years (% :date))))
    (api/aggregate #(tech.v3.datatype.functional/mean (% :price)))
    (api/rename-columns {:$group-name-0 :symbol
                         :$group-name-1 :year}))

_unnamed [51 3]:

:summary:symbol:year
29.67333333MSFT2000
25.34750000MSFT2001
21.82666667MSFT2002
20.93416667MSFT2003
22.67416667MSFT2004
23.84583333MSFT2005
24.75833333MSFT2006
29.28416667MSFT2007
25.20833333MSFT2008
22.87250000MSFT2009
28.50666667MSFT2010
43.93083333AMZN2000
11.73916667AMZN2001
16.72333333AMZN2002
39.01666667AMZN2003
43.26750000AMZN2004
40.18750000AMZN2005
36.25166667AMZN2006
69.95250000AMZN2007
69.01500000AMZN2008
90.73083333AMZN2009
124.21000000AMZN2010
96.91416667IBM2000
96.96833333IBM2001
75.12500000IBM2002

data.table

Below you can find a comparison between the functionality of data.table and the Clojure dataset API. I leave it without comments; please refer to the original document for details:

Introduction to data.table

R

library(data.table)
library(knitr)

flights <- fread("https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv")

kable(head(flights))
yearmonthdaydep_delayarr_delaycarrierorigindestair_timedistancehour
2014111413AAJFKLAX35924759
201411-313AAJFKLAX363247511
20141129AAJFKLAX351247519
201411-8-26AALGAPBI15710357
20141121AAJFKLAX350247513
20141140AAEWRLAX339245418

Clojure

(require '[tech.v3.datatype.functional :as dfn]
         '[tech.v3.datatype.argops :as aops]
         '[tech.v3.datatype :as dtype])

(defonce flights (api/dataset "https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv"))
(api/head flights 6)

https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv [6 11]:

yearmonthdaydep_delayarr_delaycarrierorigindestair_timedistancehour
2014111413AAJFKLAX35924759
201411-313AAJFKLAX363247511
20141129AAJFKLAX351247519
201411-8-26AALGAPBI15710357
20141121AAJFKLAX350247513
20141140AAEWRLAX339245418

Basics

Shape of loaded data

R

dim(flights)
[1] 253316     11

Clojure

(api/shape flights)
[253316 11]
What is data.table?

R

DT = data.table(
  ID = c("b","b","b","a","a","c"),
  a = 1:6,
  b = 7:12,
  c = 13:18
)

kable(DT)

| ID | a | b | c |
| :- | -: | -: | -: |
| b | 1 | 7 | 13 |
| b | 2 | 8 | 14 |
| b | 3 | 9 | 15 |
| a | 4 | 10 | 16 |
| a | 5 | 11 | 17 |
| c | 6 | 12 | 18 |

class(DT$ID)
[1] "character"

Clojure

(def DT (api/dataset {:ID ["b" "b" "b" "a" "a" "c"]
                      :a (range 1 7)
                      :b (range 7 13)
                      :c (range 13 19)}))
DT

_unnamed [6 4]:

| :ID | :a | :b | :c |
| --- | -: | -: | -: |
| b | 1 | 7 | 13 |
| b | 2 | 8 | 14 |
| b | 3 | 9 | 15 |
| a | 4 | 10 | 16 |
| a | 5 | 11 | 17 |
| c | 6 | 12 | 18 |

(-> :ID DT meta :datatype)
:string
Get all the flights with “JFK” as the origin airport in the month of June.

R

ans <- flights[origin == "JFK" & month == 6L]
kable(head(ans))
yearmonthdaydep_delayarr_delaycarrierorigindestair_timedistancehour
201461-9-5AAJFKLAX32424758
201461-10-13AAJFKLAX329247512
20146118-1AAJFKLAX32624757
201461-6-16AAJFKLAX320247510
201461-4-45AAJFKLAX326247518
201461-6-23AAJFKLAX329247514

Clojure

(-> flights
    (api/select-rows (fn [row] (and (= (get row "origin") "JFK")
                                   (= (get row "month") 6))))
    (api/head 6))

https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv [6 11]:

yearmonthdaydep_delayarr_delaycarrierorigindestair_timedistancehour
201461-9-5AAJFKLAX32424758
201461-10-13AAJFKLAX329247512
20146118-1AAJFKLAX32624757
201461-6-16AAJFKLAX320247510
201461-4-45AAJFKLAX326247518
201461-6-23AAJFKLAX329247514
Get the first two rows from flights.

R

ans <- flights[1:2]
kable(ans)
yearmonthdaydep_delayarr_delaycarrierorigindestair_timedistancehour
2014111413AAJFKLAX35924759
201411-313AAJFKLAX363247511

Clojure

(api/select-rows flights (range 2))

https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv [2 11]:

yearmonthdaydep_delayarr_delaycarrierorigindestair_timedistancehour
2014111413AAJFKLAX35924759
201411-313AAJFKLAX363247511
Sort flights first by column origin in ascending order, and then by dest in descending order

R

ans <- flights[order(origin, -dest)]
kable(head(ans))
yearmonthdaydep_delayarr_delaycarrierorigindestair_timedistancehour
201415649EVEWRXNA19511318
201416713EVEWRXNA19011318
201417-6-13EVEWRXNA17911318
201418-7-12EVEWRXNA18411318
201419167EVEWRXNA18111318
20141136666EVEWRXNA18811319

Clojure

(-> flights
    (api/order-by ["origin" "dest"] [:asc :desc])
    (api/head 6))

https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv [6 11]:

yearmonthdaydep_delayarr_delaycarrierorigindestair_timedistancehour
201463-6-38EVEWRXNA15411316
2014120-9-17EVEWRXNA17711318
2014319-610EVEWRXNA20111316
201423231268EVEWRXNA184113112
2014425-8-32EVEWRXNA15911316
20142192110EVEWRXNA17611318
Select arr_delay column, but return it as a vector

R

ans <- flights[, arr_delay]
head(ans)
[1]  13  13   9 -26   1   0

Clojure

(take 6 (flights "arr_delay"))
(13 13 9 -26 1 0)
Select arr_delay column, but return as a data.table instead

R

ans <- flights[, list(arr_delay)]
kable(head(ans))
arr_delay
13
13
9
-26
1
0

Clojure

(-> flights
    (api/select-columns "arr_delay")
    (api/head 6))

https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv [6 1]:

arr_delay
13
13
9
-26
1
0
Select both arr_delay and dep_delay columns

R

ans <- flights[, .(arr_delay, dep_delay)]
kable(head(ans))
arr_delaydep_delay
1314
13-3
92
-26-8
12
04

Clojure

(-> flights
    (api/select-columns ["arr_delay" "dep_delay"])
    (api/head 6))

https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv [6 2]:

arr_delaydep_delay
1314
13-3
92
-26-8
12
04
Select both arr_delay and dep_delay columns and rename them to delay_arr and delay_dep

R

ans <- flights[, .(delay_arr = arr_delay, delay_dep = dep_delay)]
kable(head(ans))
delay_arrdelay_dep
1314
13-3
92
-26-8
12
04

Clojure

(-> flights
    (api/select-columns {"arr_delay" "delay_arr"
                         "dep_delay" "delay_dep"})
    (api/head 6))

https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv [6 2]:

delay_arrdelay_dep
1314
13-3
92
-26-8
12
04
How many trips have had total delay < 0?

R

ans <- flights[, sum( (arr_delay + dep_delay) < 0 )]
ans
[1] 141814

Clojure

(->> (dfn/+ (flights "arr_delay") (flights "dep_delay"))
     (aops/argfilter #(< % 0.0))
     (dtype/ecount))
141814

or with pure Clojure functions (much, much slower)

(->> (map + (flights "arr_delay") (flights "dep_delay"))
     (filter neg?)
     (count))
141814
Calculate the average arrival and departure delay for all flights with “JFK” as the origin airport in the month of June

R

ans <- flights[origin == "JFK" & month == 6L,
               .(m_arr = mean(arr_delay), m_dep = mean(dep_delay))]
kable(ans)
m_arrm_dep
5.8393499.807884

Clojure

(-> flights
    (api/select-rows (fn [row] (and (= (get row "origin") "JFK")
                                   (= (get row "month") 6))))
    (api/aggregate {:m_arr #(dfn/mean (% "arr_delay"))
                    :m_dep #(dfn/mean (% "dep_delay"))}))

_unnamed [1 2]:

:m_arr:m_dep
5.839349329.80788411
How many trips have been made in 2014 from “JFK” airport in the month of June?

R

ans <- flights[origin == "JFK" & month == 6L, length(dest)]
ans
[1] 8422

or

ans <- flights[origin == "JFK" & month == 6L, .N]
ans
[1] 8422

Clojure

(-> flights
    (api/select-rows (fn [row] (and (= (get row "origin") "JFK")
                                   (= (get row "month") 6))))
    (api/row-count))
8422
deselect columns using - or !

R

ans <- flights[, !c("arr_delay", "dep_delay")]
kable(head(ans))
yearmonthdaycarrierorigindestair_timedistancehour
201411AAJFKLAX35924759
201411AAJFKLAX363247511
201411AAJFKLAX351247519
201411AALGAPBI15710357
201411AAJFKLAX350247513
201411AAEWRLAX339245418

or

ans <- flights[, -c("arr_delay", "dep_delay")]
kable(head(ans))
yearmonthdaycarrierorigindestair_timedistancehour
201411AAJFKLAX35924759
201411AAJFKLAX363247511
201411AAJFKLAX351247519
201411AALGAPBI15710357
201411AAJFKLAX350247513
201411AAEWRLAX339245418

Clojure

(-> flights
    (api/select-columns (complement #{"arr_delay" "dep_delay"}))
    (api/head 6))

https://raw.githubusercontent.com/Rdatatable/data.table/master/vignettes/flights14.csv [6 9]:

yearmonthdaycarrierorigindestair_timedistancehour
201411AAJFKLAX35924759
201411AAJFKLAX363247511
201411AAJFKLAX351247519
201411AALGAPBI15710357
201411AAJFKLAX350247513
201411AAEWRLAX339245418

Aggregations

How can we get the number of trips corresponding to each origin airport?

R

ans <- flights[, .(.N), by = .(origin)]
kable(ans)
originN
JFK81483
LGA84433
EWR87400

Clojure

(-> flights
    (api/group-by ["origin"])
    (api/aggregate {:N api/row-count}))

_unnamed [3 2]:

:Norigin
81483JFK
84433LGA
87400EWR
How can we calculate the number of trips for each origin airport for carrier code “AA”?

R

ans <- flights[carrier == "AA", .N, by = origin]
kable(ans)
originN
JFK11923
LGA11730
EWR2649

Clojure

(-> flights
    (api/select-rows #(= (get % "carrier") "AA"))
    (api/group-by ["origin"])
    (api/aggregate {:N api/row-count}))

_unnamed [3 2]:

:Norigin
11923JFK
11730LGA
2649EWR
How can we get the total number of trips for each origin, dest pair for carrier code “AA”?

R

ans <- flights[carrier == "AA", .N, by = .(origin, dest)]
kable(head(ans))
origindestN
JFKLAX3387
LGAPBI245
EWRLAX62
JFKMIA1876
JFKSEA298
EWRMIA848

Clojure

(-> flights
    (api/select-rows #(= (get % "carrier") "AA"))
    (api/group-by ["origin" "dest"])
    (api/aggregate {:N api/row-count})
    (api/head 6))

_unnamed [6 3]:

:Norigindest
3387JFKLAX
245LGAPBI
62EWRLAX
1876JFKMIA
298JFKSEA
848EWRMIA
How can we get the average arrival and departure delay for each orig,dest pair for each month for carrier code “AA”?

R

ans <- flights[carrier == "AA",
        .(mean(arr_delay), mean(dep_delay)),
        by = .(origin, dest, month)]
kable(head(ans,10))
origindestmonthV1V2
JFKLAX16.59036114.2289157
LGAPBI1-7.7586210.3103448
EWRLAX11.3666677.5000000
JFKMIA115.72067018.7430168
JFKSEA114.35714330.7500000
EWRMIA111.01123612.1235955
JFKSFO119.25225228.6396396
JFKBOS112.91964315.2142857
JFKORD131.58620740.1724138
JFKIAH128.85714314.2857143

Clojure

(-> flights
    (api/select-rows #(= (get % "carrier") "AA"))
    (api/group-by ["origin" "dest" "month"])
    (api/aggregate [#(dfn/mean (% "arr_delay"))
                    #(dfn/mean (% "dep_delay"))])
    (api/head 10))

_unnamed [10 5]:

month:summary-0:summary-1origindest
16.5903614514.22891566JFKLAX
1-7.758620690.31034483LGAPBI
11.366666677.50000000EWRLAX
115.7206703918.74301676JFKMIA
114.3571428630.75000000JFKSEA
111.0112359612.12359551EWRMIA
119.2522522528.63963964JFKSFO
112.9196428615.21428571JFKBOS
131.5862069040.17241379JFKORD
128.8571428614.28571429JFKIAH
So how can we directly order by all the grouping variables?

R

ans <- flights[carrier == "AA",
        .(mean(arr_delay), mean(dep_delay)),
        keyby = .(origin, dest, month)]
kable(head(ans,10))
origindestmonthV1V2
EWRDFW16.42767310.012579
EWRDFW210.53676511.345588
EWRDFW312.8650318.079755
EWRDFW417.79268312.920732
EWRDFW518.48780518.682927
EWRDFW637.00595238.744048
EWRDFW720.25000021.154762
EWRDFW816.93604622.069767
EWRDFW95.86503113.055215
EWRDFW1018.81366518.894410

Clojure

(-> flights
    (api/select-rows #(= (get % "carrier") "AA"))
    (api/group-by ["origin" "dest" "month"])
    (api/aggregate [#(dfn/mean (% "arr_delay"))
                    #(dfn/mean (% "dep_delay"))])
    (api/order-by ["origin" "dest" "month"])
    (api/head 10))

_unnamed [10 5]:

month:summary-0:summary-1origindest
16.4276729610.01257862EWRDFW
210.5367647111.34558824EWRDFW
312.865030678.07975460EWRDFW
417.7926829312.92073171EWRDFW
518.4878048818.68292683EWRDFW
637.0059523838.74404762EWRDFW
720.2500000021.15476190EWRDFW
816.9360465122.06976744EWRDFW
95.8650306713.05521472EWRDFW
1018.8136646018.89440994EWRDFW
Can by accept expressions as well or does it just take columns?

R

ans <- flights[, .N, .(dep_delay>0, arr_delay>0)]
kable(ans)
dep_delayarr_delayN
TRUETRUE72836
FALSETRUE34583
FALSEFALSE119304
TRUEFALSE26593

Clojure

(-> flights
    (api/group-by (fn [row]
                    {:dep_delay (pos? (get row "dep_delay"))
                     :arr_delay (pos? (get row "arr_delay"))}))
    (api/aggregate {:N api/row-count}))

_unnamed [4 3]:

:arr_delay:dep_delay:N
truetrue72836
truefalse34583
falsefalse119304
falsetrue26593
Do we have to compute mean() for each column individually?

R

kable(DT)

| ID | a | b | c |
| :- | -: | -: | -: |
| b | 1 | 7 | 13 |
| b | 2 | 8 | 14 |
| b | 3 | 9 | 15 |
| a | 4 | 10 | 16 |
| a | 5 | 11 | 17 |
| c | 6 | 12 | 18 |

DT[, print(.SD), by = ID]
   a b  c
1: 1 7 13
2: 2 8 14
3: 3 9 15
   a  b  c
1: 4 10 16
2: 5 11 17
   a  b  c
1: 6 12 18
Empty data.table (0 rows and 1 cols): ID
kable(DT[, lapply(.SD, mean), by = ID])

| ID | a | b | c |
| :- | --: | ---: | ---: |
| b | 2.0 | 8.0 | 14.0 |
| a | 4.5 | 10.5 | 16.5 |
| c | 6.0 | 12.0 | 18.0 |


Clojure

DT

(api/group-by DT :ID {:result-type :as-map})

_unnamed [6 4]:

| :ID | :a | :b | :c |
| --- | -: | -: | -: |
| b | 1 | 7 | 13 |
| b | 2 | 8 | 14 |
| b | 3 | 9 | 15 |
| a | 4 | 10 | 16 |
| a | 5 | 11 | 17 |
| c | 6 | 12 | 18 |

{“b” Group: b [3 4]:

| :ID | :a | :b | :c |
| --- | -: | -: | -: |
| b | 1 | 7 | 13 |
| b | 2 | 8 | 14 |
| b | 3 | 9 | 15 |

, “a” Group: a [2 4]:

| :ID | :a | :b | :c |
| --- | -: | -: | -: |
| a | 4 | 10 | 16 |
| a | 5 | 11 | 17 |

, “c” Group: c [1 4]:

| :ID | :a | :b | :c |
| --- | -: | -: | -: |
| c | 6 | 12 | 18 |

}

(-> DT
    (api/group-by [:ID])
    (api/aggregate-columns (complement #{:ID}) dfn/mean))

_unnamed [3 4]:

:b:c:a:ID
8.014.02.0b
10.516.54.5a
12.018.06.0c
How can we specify just the columns we would like to compute the mean() on?

R

kable(head(flights[carrier == "AA",                         ## Only on trips with carrier "AA"
                   lapply(.SD, mean),                       ## compute the mean
                   by = .(origin, dest, month),             ## for every 'origin,dest,month'
                   .SDcols = c("arr_delay", "dep_delay")])) ## for just those specified in .SDcols
origindestmontharr_delaydep_delay
JFKLAX16.59036114.2289157
LGAPBI1-7.7586210.3103448
EWRLAX11.3666677.5000000
JFKMIA115.72067018.7430168
JFKSEA114.35714330.7500000
EWRMIA111.01123612.1235955

Clojure

(-> flights
    (api/select-rows #(= (get % "carrier") "AA"))
    (api/group-by ["origin" "dest" "month"])
    (api/aggregate-columns ["arr_delay" "dep_delay"] dfn/mean)
    (api/head 6))

_unnamed [6 5]:

arr_delaymonthorigindep_delaydest
6.590361451JFK14.22891566LAX
-7.758620691LGA0.31034483PBI
1.366666671EWR7.50000000LAX
15.720670391JFK18.74301676MIA
14.357142861JFK30.75000000SEA
11.011235961EWR12.12359551MIA
How can we return the first two rows for each month?

R

ans <- flights[, head(.SD, 2), by = month]
kable(head(ans))
monthyeardaydep_delayarr_delaycarrierorigindestair_timedistancehour
1201411413AAJFKLAX35924759
120141-313AAJFKLAX363247511
220141-11AAJFKLAX35824758
220141-53AAJFKLAX358247511
320141-1136AAJFKLAX37524758
320141-314AAJFKLAX368247511

Clojure

(-> flights
    (api/group-by ["month"])
    (api/head 2) ;; head applied on each group
    (api/ungroup)
    (api/head 6))

_unnamed [6 11]:

dep_delayoriginair_timehourarr_delaydestdistanceyearmonthdaycarrier
14JFK359913LAX2475201411AA
-3JFK3631113LAX2475201411AA
-1JFK35881LAX2475201421AA
-5JFK358113LAX2475201421AA
-11JFK375836LAX2475201431AA
-3JFK3681114LAX2475201431AA
How can we concatenate columns a and b for each group in ID?

R

kable(DT[, .(val = c(a,b)), by = ID])

| ID | val |
| :- | --: |
| b | 1 |
| b | 2 |
| b | 3 |
| b | 7 |
| b | 8 |
| b | 9 |
| a | 4 |
| a | 5 |
| a | 10 |
| a | 11 |
| c | 6 |
| c | 12 |


Clojure

(-> DT
    (api/pivot->longer [:a :b] {:value-column-name :val})
    (api/drop-columns [:$column :c]))

_unnamed [12 2]:

:ID:val
b1
b2
b3
a4
a5
c6
b7
b8
b9
a10
a11
c12
What if we would like to have all the values of column a and b concatenated, but returned as a list column?

R

kable(DT[, .(val = list(c(a,b))), by = ID])

| ID | val |
| :- | :--------------- |
| b | 1, 2, 3, 7, 8, 9 |
| a | 4, 5, 10, 11 |
| c | 6, 12 |


Clojure

(-> DT
    (api/pivot->longer [:a :b] {:value-column-name :val})
    (api/drop-columns [:$column :c])
    (api/fold-by :ID))

_unnamed [3 2]:

:ID:val
b[1 2 3 7 8 9]
a[4 5 10 11]
c[6 12]

API tour

The snippets below are taken from A data.table and dplyr tour written by Atrebas (permission granted).

I keep the structure and subtitles but skip the data.table and dplyr examples.

Example data

(def DS (api/dataset {:V1 (take 9 (cycle [1 2]))
                      :V2 (range 1 10)
                      :V3 (take 9 (cycle [0.5 1.0 1.5]))
                      :V4 (take 9 (cycle ["A" "B" "C"]))}))
(api/dataset? DS)
(class DS)
true
tech.v3.dataset.impl.dataset.Dataset
DS

_unnamed [9 4]:

:V1:V2:V3:V4
110.5A
221.0B
131.5C
240.5A
151.0B
261.5C
170.5A
281.0B
191.5C

Basic Operations

Filter rows

Filter rows using indices

(api/select-rows DS [2 3])

_unnamed [2 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |

Discard rows using negative indices

In the Clojure API there is a separate function for that: drop-rows.

(api/drop-rows DS (range 2 7))

_unnamed [4 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 2 | 8 | 1.0 | B |
| 1 | 9 | 1.5 | C |

Filter rows using a logical expression

(api/select-rows DS (comp #(> % 5) :V2))

_unnamed [4 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 2 | 6 | 1.5 | C |
| 1 | 7 | 0.5 | A |
| 2 | 8 | 1.0 | B |
| 1 | 9 | 1.5 | C |

(api/select-rows DS (comp #{"A" "C"} :V4))

_unnamed [6 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 2 | 6 | 1.5 | C |
| 1 | 7 | 0.5 | A |
| 1 | 9 | 1.5 | C |

Filter rows using multiple conditions

(api/select-rows DS #(and (= (:V1 %) 1)
                          (= (:V4 %) "A")))

_unnamed [2 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 1 | 7 | 0.5 | A |

Filter unique rows

(api/unique-by DS)

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 1 | 5 | 1.0 | B |
| 2 | 6 | 1.5 | C |
| 1 | 7 | 0.5 | A |
| 2 | 8 | 1.0 | B |
| 1 | 9 | 1.5 | C |

(api/unique-by DS [:V1 :V4])

_unnamed [6 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 1 | 5 | 1.0 | B |
| 2 | 6 | 1.5 | C |

Discard rows with missing values

(api/drop-missing DS)

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 1 | 5 | 1.0 | B |
| 2 | 6 | 1.5 | C |
| 1 | 7 | 0.5 | A |
| 2 | 8 | 1.0 | B |
| 1 | 9 | 1.5 | C |

Other filters

(api/random DS 3) ;; 3 random rows

_unnamed [3 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 1 | 9 | 1.5 | C |
| 1 | 3 | 1.5 | C |

(api/random DS (/ (api/row-count DS) 2)) ;; fraction of random rows

_unnamed [5 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 3 | 1.5 | C |
| 1 | 1 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 2 | 2 | 1.0 | B |
| 1 | 5 | 1.0 | B |

(api/by-rank DS :V1 zero?) ;; take top n entries

_unnamed [4 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 2 | 2 | 1.0 | B |
| 2 | 4 | 0.5 | A |
| 2 | 6 | 1.5 | C |
| 2 | 8 | 1.0 | B |

Convenience functions

(api/select-rows DS (comp (partial re-matches #"^B") str :V4))

_unnamed [3 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 2 | 2 | 1.0 | B |
| 1 | 5 | 1.0 | B |
| 2 | 8 | 1.0 | B |

(api/select-rows DS (comp #(<= 3 % 5) :V2))

_unnamed [3 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 1 | 5 | 1.0 | B |

(api/select-rows DS (comp #(< 3 % 5) :V2))

_unnamed [1 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 2 | 4 | 0.5 | A |

(api/select-rows DS (comp #(<= 3 % 5) :V2))

_unnamed [3 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 1 | 5 | 1.0 | B |

Last example skipped.

Sort rows

Sort rows by column

(api/order-by DS :V3)

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 2 | 4 | 0.5 | A |
| 1 | 7 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 1 | 5 | 1.0 | B |
| 2 | 8 | 1.0 | B |
| 1 | 3 | 1.5 | C |
| 2 | 6 | 1.5 | C |
| 1 | 9 | 1.5 | C |

Sort rows in decreasing order

(api/order-by DS :V3 :desc)

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 3 | 1.5 | C |
| 2 | 6 | 1.5 | C |
| 1 | 9 | 1.5 | C |
| 1 | 5 | 1.0 | B |
| 2 | 2 | 1.0 | B |
| 2 | 8 | 1.0 | B |
| 1 | 7 | 0.5 | A |
| 2 | 4 | 0.5 | A |
| 1 | 1 | 0.5 | A |

Sort rows based on several columns

(api/order-by DS [:V1 :V2] [:asc :desc])

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 9 | 1.5 | C |
| 1 | 7 | 0.5 | A |
| 1 | 5 | 1.0 | B |
| 1 | 3 | 1.5 | C |
| 1 | 1 | 0.5 | A |
| 2 | 8 | 1.0 | B |
| 2 | 6 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 2 | 2 | 1.0 | B |

Select columns

Select one column using an index (not recommended)

(nth (api/columns DS :as-seq) 2) ;; as column (iterable)
#tech.v3.dataset.column<float64>[9]
:V3
[0.5000, 1.000, 1.500, 0.5000, 1.000, 1.500, 0.5000, 1.000, 1.500]
(api/dataset [(nth (api/columns DS :as-seq) 2)])

_unnamed [9 1]:

:V3
0.5
1.0
1.5
0.5
1.0
1.5
0.5
1.0
1.5

Select one column using column name

(api/select-columns DS :V2) ;; as dataset

_unnamed [9 1]:

:V2
1
2
3
4
5
6
7
8
9
(api/select-columns DS [:V2]) ;; as dataset

_unnamed [9 1]:

:V2
1
2
3
4
5
6
7
8
9
(DS :V2) ;; as column (iterable)
#tech.v3.dataset.column<int64>[9]
:V2
[1, 2, 3, 4, 5, 6, 7, 8, 9]

Select several columns

(api/select-columns DS [:V2 :V3 :V4])

_unnamed [9 3]:

| :V2 | :V3 | :V4 |
| --: | --: | --- |
| 1 | 0.5 | A |
| 2 | 1.0 | B |
| 3 | 1.5 | C |
| 4 | 0.5 | A |
| 5 | 1.0 | B |
| 6 | 1.5 | C |
| 7 | 0.5 | A |
| 8 | 1.0 | B |
| 9 | 1.5 | C |

Exclude columns

(api/select-columns DS (complement #{:V2 :V3 :V4}))

_unnamed [9 1]:

:V1
1
2
1
2
1
2
1
2
1
(api/drop-columns DS [:V2 :V3 :V4])

_unnamed [9 1]:

:V1
1
2
1
2
1
2
1
2
1

Other selections

(->> (range 1 3)
     (map (comp keyword (partial format "V%d")))
     (api/select-columns DS))

_unnamed [9 2]:

| :V1 | :V2 |
| --: | --: |
| 1 | 1 |
| 2 | 2 |
| 1 | 3 |
| 2 | 4 |
| 1 | 5 |
| 2 | 6 |
| 1 | 7 |
| 2 | 8 |
| 1 | 9 |

(api/reorder-columns DS :V4)

_unnamed [9 4]:

| :V4 | :V1 | :V2 | :V3 |
| --- | --: | --: | --: |
| A | 1 | 1 | 0.5 |
| B | 2 | 2 | 1.0 |
| C | 1 | 3 | 1.5 |
| A | 2 | 4 | 0.5 |
| B | 1 | 5 | 1.0 |
| C | 2 | 6 | 1.5 |
| A | 1 | 7 | 0.5 |
| B | 2 | 8 | 1.0 |
| C | 1 | 9 | 1.5 |

(api/select-columns DS #(clojure.string/starts-with? (name %) "V"))

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1 | 1 | 0.5 | A |
| 2 | 2 | 1.0 | B |
| 1 | 3 | 1.5 | C |
| 2 | 4 | 0.5 | A |
| 1 | 5 | 1.0 | B |
| 2 | 6 | 1.5 | C |
| 1 | 7 | 0.5 | A |
| 2 | 8 | 1.0 | B |
| 1 | 9 | 1.5 | C |

(api/select-columns DS #(clojure.string/ends-with? (name %) "3"))

_unnamed [9 1]:

:V3
0.5
1.0
1.5
0.5
1.0
1.5
0.5
1.0
1.5
(api/select-columns DS #"..2") ;; column names are converted to strings (via `str`) and matched against the regex

_unnamed [9 1]:

:V2
1
2
3
4
5
6
7
8
9
(api/select-columns DS #{:V1 "X"})

_unnamed [9 1]:

:V1
1
2
1
2
1
2
1
2
1
(api/select-columns DS #(not (clojure.string/starts-with? (name %) "V2")))

_unnamed [9 3]:

| :V1 | :V3 | :V4 |
| --: | --: | --- |
| 1 | 0.5 | A |
| 2 | 1.0 | B |
| 1 | 1.5 | C |
| 2 | 0.5 | A |
| 1 | 1.0 | B |
| 2 | 1.5 | C |
| 1 | 0.5 | A |
| 2 | 1.0 | B |
| 1 | 1.5 | C |

Summarise data

Summarise one column

(reduce + (DS :V1)) ;; using pure Clojure, as value
13
(api/aggregate-columns DS :V1 dfn/sum) ;; as dataset

_unnamed [1 1]:

:V1
13.0
(api/aggregate DS {:sumV1 #(dfn/sum (% :V1))})

_unnamed [1 1]:

:sumV1
13.0

Summarise several columns

(api/aggregate DS [#(dfn/sum (% :V1))
                   #(dfn/standard-deviation (% :V3))])

_unnamed [1 2]:

| :summary-0 | :summary-1 |
| ---------: | ---------: |
| 13.0 | 0.4330127 |

(api/aggregate-columns DS [:V1 :V3] [dfn/sum
                                     dfn/standard-deviation])

_unnamed [1 2]:

| :V1 | :V3 |
| --: | --: |
| 13.0 | 0.4330127 |

Summarise several columns and assign column names

(api/aggregate DS {:sumv1 #(dfn/sum (% :V1))
                   :sdv3 #(dfn/standard-deviation (% :V3))})

_unnamed [1 2]:

| :sumv1 | :sdv3 |
| -----: | ----: |
| 13.0 | 0.4330127 |

Summarise a subset of rows

(-> DS
    (api/select-rows (range 4))
    (api/aggregate-columns :V1 dfn/sum))

_unnamed [1 1]:

:V1
6.0
Additional helpers
(-> DS
    (api/first)
    (api/select-columns :V3)) ;; select first row from `:V3` column

_unnamed [1 1]:

:V3
0.5
(-> DS
    (api/last)
    (api/select-columns :V3)) ;; select last row from `:V3` column

_unnamed [1 1]:

:V3
1.5
(-> DS
    (api/select-rows 4)
    (api/select-columns :V3)) ;; select the fifth row (index 4) from `:V3` column

_unnamed [1 1]:

:V3
1.0
(-> DS
    (api/select :V3 4)) ;; select the fifth row (index 4) from `:V3` column

_unnamed [1 1]:

:V3
1.0
(-> DS
    (api/unique-by :V4)
    (api/aggregate api/row-count)) ;; number of unique rows in `:V4` column, as dataset

_unnamed [1 1]:

:summary
3
(-> DS
    (api/unique-by :V4)
    (api/row-count)) ;; number of unique rows in `:V4` column, as value
3
(-> DS
    (api/unique-by)
    (api/row-count)) ;; number of unique rows in dataset, as value
9
Add/update/delete columns

Modify a column

(api/map-columns DS :V1 [:V1] #(dfn/pow % 2))

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1.0 | 1 | 0.5 | A |
| 4.0 | 2 | 1.0 | B |
| 1.0 | 3 | 1.5 | C |
| 4.0 | 4 | 0.5 | A |
| 1.0 | 5 | 1.0 | B |
| 4.0 | 6 | 1.5 | C |
| 1.0 | 7 | 0.5 | A |
| 4.0 | 8 | 1.0 | B |
| 1.0 | 9 | 1.5 | C |

(def DS (api/add-column DS :V1 (dfn/pow (DS :V1) 2)))
DS

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1.0 | 1 | 0.5 | A |
| 4.0 | 2 | 1.0 | B |
| 1.0 | 3 | 1.5 | C |
| 4.0 | 4 | 0.5 | A |
| 1.0 | 5 | 1.0 | B |
| 4.0 | 6 | 1.5 | C |
| 1.0 | 7 | 0.5 | A |
| 4.0 | 8 | 1.0 | B |
| 1.0 | 9 | 1.5 | C |

Add one column

(api/map-columns DS :v5 [:V1] dfn/log)

_unnamed [9 5]:

| :V1 | :V2 | :V3 | :V4 | :v5 |
| --: | --: | --: | --- | --: |
| 1.0 | 1 | 0.5 | A | 0.00000000 |
| 4.0 | 2 | 1.0 | B | 1.38629436 |
| 1.0 | 3 | 1.5 | C | 0.00000000 |
| 4.0 | 4 | 0.5 | A | 1.38629436 |
| 1.0 | 5 | 1.0 | B | 0.00000000 |
| 4.0 | 6 | 1.5 | C | 1.38629436 |
| 1.0 | 7 | 0.5 | A | 0.00000000 |
| 4.0 | 8 | 1.0 | B | 1.38629436 |
| 1.0 | 9 | 1.5 | C | 0.00000000 |

(def DS (api/add-column DS :v5 (dfn/log (DS :V1))))
DS

_unnamed [9 5]:

| :V1 | :V2 | :V3 | :V4 | :v5 |
| --: | --: | --: | --- | --: |
| 1.0 | 1 | 0.5 | A | 0.00000000 |
| 4.0 | 2 | 1.0 | B | 1.38629436 |
| 1.0 | 3 | 1.5 | C | 0.00000000 |
| 4.0 | 4 | 0.5 | A | 1.38629436 |
| 1.0 | 5 | 1.0 | B | 0.00000000 |
| 4.0 | 6 | 1.5 | C | 1.38629436 |
| 1.0 | 7 | 0.5 | A | 0.00000000 |
| 4.0 | 8 | 1.0 | B | 1.38629436 |
| 1.0 | 9 | 1.5 | C | 0.00000000 |

Add several columns

(def DS (api/add-columns DS {:v6 (dfn/sqrt (DS :V1))
                             :v7 "X"}))
DS

_unnamed [9 7]:

| :V1 | :V2 | :V3 | :V4 | :v5 | :v6 | :v7 |
| --: | --: | --: | --- | --: | --: | --- |
| 1.0 | 1 | 0.5 | A | 0.00000000 | 1.0 | X |
| 4.0 | 2 | 1.0 | B | 1.38629436 | 2.0 | X |
| 1.0 | 3 | 1.5 | C | 0.00000000 | 1.0 | X |
| 4.0 | 4 | 0.5 | A | 1.38629436 | 2.0 | X |
| 1.0 | 5 | 1.0 | B | 0.00000000 | 1.0 | X |
| 4.0 | 6 | 1.5 | C | 1.38629436 | 2.0 | X |
| 1.0 | 7 | 0.5 | A | 0.00000000 | 1.0 | X |
| 4.0 | 8 | 1.0 | B | 1.38629436 | 2.0 | X |
| 1.0 | 9 | 1.5 | C | 0.00000000 | 1.0 | X |

Create one column and remove the others

(api/dataset {:v8 (dfn/+ (DS :V3) 1)})

_unnamed [9 1]:

:v8
1.5
2.0
2.5
1.5
2.0
2.5
1.5
2.0
2.5

Remove one column

(def DS (api/drop-columns DS :v5))
DS

_unnamed [9 6]:

| :V1 | :V2 | :V3 | :V4 | :v6 | :v7 |
| --: | --: | --: | --- | --: | --- |
| 1.0 | 1 | 0.5 | A | 1.0 | X |
| 4.0 | 2 | 1.0 | B | 2.0 | X |
| 1.0 | 3 | 1.5 | C | 1.0 | X |
| 4.0 | 4 | 0.5 | A | 2.0 | X |
| 1.0 | 5 | 1.0 | B | 1.0 | X |
| 4.0 | 6 | 1.5 | C | 2.0 | X |
| 1.0 | 7 | 0.5 | A | 1.0 | X |
| 4.0 | 8 | 1.0 | B | 2.0 | X |
| 1.0 | 9 | 1.5 | C | 1.0 | X |

Remove several columns

(def DS (api/drop-columns DS [:v6 :v7]))
DS

_unnamed [9 4]:

| :V1 | :V2 | :V3 | :V4 |
| --: | --: | --: | --- |
| 1.0 | 1 | 0.5 | A |
| 4.0 | 2 | 1.0 | B |
| 1.0 | 3 | 1.5 | C |
| 4.0 | 4 | 0.5 | A |
| 1.0 | 5 | 1.0 | B |
| 4.0 | 6 | 1.5 | C |
| 1.0 | 7 | 0.5 | A |
| 4.0 | 8 | 1.0 | B |
| 1.0 | 9 | 1.5 | C |

Remove columns using a vector of colnames

We use a set (acting as a predicate) here.

(def DS (api/select-columns DS (complement #{:V3})))
DS

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1.0 | 1 | A |
| 4.0 | 2 | B |
| 1.0 | 3 | C |
| 4.0 | 4 | A |
| 1.0 | 5 | B |
| 4.0 | 6 | C |
| 1.0 | 7 | A |
| 4.0 | 8 | B |
| 1.0 | 9 | C |

Replace values for rows matching a condition

(def DS (api/map-columns DS :V2 [:V2] #(if (< % 4.0) 0.0 %)))
DS

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1.0 | 0.0 | A |
| 4.0 | 0.0 | B |
| 1.0 | 0.0 | C |
| 4.0 | 4.0 | A |
| 1.0 | 5.0 | B |
| 4.0 | 6.0 | C |
| 1.0 | 7.0 | A |
| 4.0 | 8.0 | B |
| 1.0 | 9.0 | C |

by

By group

(-> DS
    (api/group-by [:V4])
    (api/aggregate {:sumV2 #(dfn/sum (% :V2))}))

_unnamed [3 2]:

| :V4 | :sumV2 |
| --- | -----: |
| A | 11.0 |
| B | 13.0 |
| C | 15.0 |

By several groups

(-> DS
    (api/group-by [:V4 :V1])
    (api/aggregate {:sumV2 #(dfn/sum (% :V2))}))

_unnamed [6 3]:

| :V4 | :V1 | :sumV2 |
| --- | --: | -----: |
| A | 1.0 | 7.0 |
| B | 4.0 | 8.0 |
| C | 1.0 | 9.0 |
| A | 4.0 | 4.0 |
| B | 1.0 | 5.0 |
| C | 4.0 | 6.0 |

Calling a function in by

(-> DS
    (api/group-by (fn [row]
                    (clojure.string/lower-case (:V4 row))))
    (api/aggregate {:sumV1 #(dfn/sum (% :V1))}))

_unnamed [3 2]:

| :sumV1 | :$group-name |
| -----: | ------------ |
| 6.0 | a |
| 9.0 | b |
| 6.0 | c |

Assigning column name in by

(-> DS
    (api/group-by (fn [row]
                    {:abc (clojure.string/lower-case (:V4 row))}))
    (api/aggregate {:sumV1 #(dfn/sum (% :V1))}))

_unnamed [3 2]:

| :abc | :sumV1 |
| ---- | -----: |
| a | 6.0 |
| b | 9.0 |
| c | 6.0 |

(-> DS
    (api/group-by (fn [row]
                    (clojure.string/lower-case (:V4 row))))
    (api/aggregate {:sumV1 #(dfn/sum (% :V1))} {:add-group-as-column :abc}))

_unnamed [3 2]:

| :abc | :sumV1 |
| ---- | -----: |
| a | 6.0 |
| b | 9.0 |
| c | 6.0 |

Using a condition in by

(-> DS
    (api/group-by #(= (:V4 %) "A"))
    (api/aggregate #(dfn/sum (% :V1))))

_unnamed [2 2]:

| :summary | :$group-name |
| -------: | ------------ |
| 6.0 | true |
| 15.0 | false |

By on a subset of rows

(-> DS
    (api/select-rows (range 5))
    (api/group-by :V4)
    (api/aggregate {:sumV1 #(dfn/sum (% :V1))}))

_unnamed [3 2]:

| :sumV1 | :$group-name |
| -----: | ------------ |
| 5.0 | A |
| 5.0 | B |
| 1.0 | C |

Count number of observations for each group

(-> DS
    (api/group-by :V4)
    (api/aggregate api/row-count))

_unnamed [3 2]:

| :summary | :$group-name |
| -------: | ------------ |
| 3 | A |
| 3 | B |
| 3 | C |

Add a column with number of observations for each group

(-> DS
    (api/group-by [:V1])
    (api/add-column :n api/row-count)
    (api/ungroup))

_unnamed [9 4]:

| :V1 | :V2 | :V4 | :n |
| --: | --: | --- | -: |
| 1.0 | 0.0 | A | 5 |
| 1.0 | 0.0 | C | 5 |
| 1.0 | 5.0 | B | 5 |
| 1.0 | 7.0 | A | 5 |
| 1.0 | 9.0 | C | 5 |
| 4.0 | 0.0 | B | 4 |
| 4.0 | 4.0 | A | 4 |
| 4.0 | 6.0 | C | 4 |
| 4.0 | 8.0 | B | 4 |


Retrieve the first/last/nth observation for each group

(-> DS
    (api/group-by [:V4])
    (api/aggregate-columns :V2 first))

_unnamed [3 2]:

| :V2 | :V4 |
| --: | --- |
| 0.0 | A |
| 0.0 | B |
| 0.0 | C |

(-> DS
    (api/group-by [:V4])
    (api/aggregate-columns :V2 last))

_unnamed [3 2]:

| :V2 | :V4 |
| --: | --- |
| 7.0 | A |
| 8.0 | B |
| 9.0 | C |

(-> DS
    (api/group-by [:V4])
    (api/aggregate-columns :V2 #(nth % 1)))

_unnamed [3 2]:

| :V2 | :V4 |
| --: | --- |
| 4.0 | A |
| 5.0 | B |
| 6.0 | C |

Going further

Advanced columns manipulation

Summarise all the columns

;; custom max function which works on every type
(api/aggregate-columns DS :all (fn [col] (first (sort #(compare %2 %1) col))))

_unnamed [1 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 4.0 | 9.0 | C |

Summarise several columns

(api/aggregate-columns DS [:V1 :V2] dfn/mean)

_unnamed [1 2]:

| :V1 | :V2 |
| --: | --: |
| 2.33333333 | 4.33333333 |

Summarise several columns by group

(-> DS
    (api/group-by [:V4])
    (api/aggregate-columns [:V1 :V2] dfn/mean))

_unnamed [3 3]:

| :V2 | :V4 | :V1 |
| --: | --- | --: |
| 3.66666667 | A | 2.0 |
| 4.33333333 | B | 3.0 |
| 5.00000000 | C | 2.0 |

Summarise with more than one function by group

(-> DS
    (api/group-by [:V4])
    (api/aggregate-columns [:V1 :V2] (fn [col]
                                       {:sum (dfn/sum col)
                                        :mean (dfn/mean col)})))

_unnamed [3 5]:

| :V2-sum | :V1-mean | :V4 | :V2-mean | :V1-sum |
| ------: | -------: | --- | -------: | ------: |
| 11.0 | 2.0 | A | 3.66666667 | 6.0 |
| 13.0 | 3.0 | B | 4.33333333 | 9.0 |
| 15.0 | 2.0 | C | 5.00000000 | 6.0 |

Summarise using a condition

(-> DS
    (api/select-columns :type/numerical)
    (api/aggregate-columns :all dfn/mean))

_unnamed [1 2]:

| :V1 | :V2 |
| --: | --: |
| 2.33333333 | 4.33333333 |

Modify all the columns

(api/update-columns DS :all reverse)

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1.0 | 9.0 | C |
| 4.0 | 8.0 | B |
| 1.0 | 7.0 | A |
| 4.0 | 6.0 | C |
| 1.0 | 5.0 | B |
| 4.0 | 4.0 | A |
| 1.0 | 0.0 | C |
| 4.0 | 0.0 | B |
| 1.0 | 0.0 | A |

Modify several columns (dropping the others)

(-> DS
    (api/select-columns [:V1 :V2])
    (api/update-columns :all dfn/sqrt))

_unnamed [9 2]:

| :V1 | :V2 |
| --: | --: |
| 1.0 | 0.00000000 |
| 2.0 | 0.00000000 |
| 1.0 | 0.00000000 |
| 2.0 | 2.00000000 |
| 1.0 | 2.23606798 |
| 2.0 | 2.44948974 |
| 1.0 | 2.64575131 |
| 2.0 | 2.82842712 |
| 1.0 | 3.00000000 |

(-> DS
    (api/select-columns (complement #{:V4}))
    (api/update-columns :all dfn/exp))

_unnamed [9 2]:

| :V1 | :V2 |
| --: | --: |
| 2.71828183 | 1.00000000 |
| 54.59815003 | 1.00000000 |
| 2.71828183 | 1.00000000 |
| 54.59815003 | 54.59815003 |
| 2.71828183 | 148.41315910 |
| 54.59815003 | 403.42879349 |
| 2.71828183 | 1096.63315843 |
| 54.59815003 | 2980.95798704 |
| 2.71828183 | 8103.08392758 |

Modify several columns (keeping the others)

(def DS (api/update-columns DS [:V1 :V2] dfn/sqrt))
DS

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1.0 | 0.00000000 | A |
| 2.0 | 0.00000000 | B |
| 1.0 | 0.00000000 | C |
| 2.0 | 2.00000000 | A |
| 1.0 | 2.23606798 | B |
| 2.0 | 2.44948974 | C |
| 1.0 | 2.64575131 | A |
| 2.0 | 2.82842712 | B |
| 1.0 | 3.00000000 | C |

(def DS (api/update-columns DS (complement #{:V4}) #(dfn/pow % 2)))
DS

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1.0 | 0.0 | A |
| 4.0 | 0.0 | B |
| 1.0 | 0.0 | C |
| 4.0 | 4.0 | A |
| 1.0 | 5.0 | B |
| 4.0 | 6.0 | C |
| 1.0 | 7.0 | A |
| 4.0 | 8.0 | B |
| 1.0 | 9.0 | C |

Modify columns using a condition (dropping the others)

(-> DS
    (api/select-columns :type/numerical)
    (api/update-columns :all #(dfn/- % 1)))

_unnamed [9 2]:

| :V1 | :V2 |
| --: | --: |
| 0.0 | -1.0 |
| 3.0 | -1.0 |
| 0.0 | -1.0 |
| 3.0 | 3.0 |
| 0.0 | 4.0 |
| 3.0 | 5.0 |
| 0.0 | 6.0 |
| 3.0 | 7.0 |
| 0.0 | 8.0 |

Modify columns using a condition (keeping the others)

(def DS (api/convert-types DS :type/numerical :int32))
DS

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1 | 0 | A |
| 4 | 0 | B |
| 1 | 0 | C |
| 4 | 4 | A |
| 1 | 5 | B |
| 4 | 5 | C |
| 1 | 7 | A |
| 4 | 8 | B |
| 1 | 9 | C |

Use a complex expression

(-> DS
    (api/group-by [:V4])
    (api/head 2)
    (api/add-column :V2 "X")
    (api/ungroup))

_unnamed [6 3]:

| :V1 | :V2 | :V4 |
| --: | --- | --- |
| 1 | X | A |
| 4 | X | A |
| 4 | X | B |
| 1 | X | B |
| 1 | X | C |
| 4 | X | C |

Use multiple expressions

(api/dataset (let [x (dfn/+ (DS :V1) (dfn/sum (DS :V2)))]
               (println (seq (DS :V1)))
               (println (api/info (api/select-columns DS :V1)))
               {:A (range 1 (inc (api/row-count DS)))
                :B x}))

(1 4 1 4 1 4 1 4 1)

_unnamed: descriptive-stats [1 11]:

| :col-name | :datatype | :n-valid | :n-missing | :min | :mean | :max | :standard-deviation | :skew | :first | :last |
| --------- | --------- | -------: | ---------: | ---: | ----: | ---: | ------------------: | ----: | -----: | ----: |
| :V1 | :int32 | 9 | 0 | 1.0 | 2.33333333 | 4.0 | 1.58113883 | 0.27105237 | 1 | 1 |

_unnamed [9 2]:

| :A | :B |
| -: | ---: |
| 1 | 39.0 |
| 2 | 42.0 |
| 3 | 39.0 |
| 4 | 42.0 |
| 5 | 39.0 |
| 6 | 42.0 |
| 7 | 39.0 |
| 8 | 42.0 |
| 9 | 39.0 |

Chain expressions

Expression chaining using `->`

(-> DS
    (api/group-by [:V4])
    (api/aggregate {:V1sum #(dfn/sum (% :V1))})
    (api/select-rows #(>= (:V1sum %) 5)))

_unnamed [3 2]:

| :V4 | :V1sum |
| --- | -----: |
| A | 6.0 |
| B | 9.0 |
| C | 6.0 |

(-> DS
    (api/group-by [:V4])
    (api/aggregate {:V1sum #(dfn/sum (% :V1))})
    (api/order-by :V1sum :desc))

_unnamed [3 2]:

| :V4 | :V1sum |
| --- | -----: |
| B | 9.0 |
| A | 6.0 |
| C | 6.0 |

Indexing and Keys

Set the key/index (order)

(def DS (api/order-by DS :V4))
DS

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1 | 0 | A |
| 4 | 4 | A |
| 1 | 7 | A |
| 4 | 0 | B |
| 1 | 5 | B |
| 4 | 8 | B |
| 1 | 0 | C |
| 4 | 5 | C |
| 1 | 9 | C |

Select the matching rows

(api/select-rows DS #(= (:V4 %) "A"))

_unnamed [3 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1 | 0 | A |
| 4 | 4 | A |
| 1 | 7 | A |

(api/select-rows DS (comp #{"A" "C"} :V4))

_unnamed [6 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1 | 0 | A |
| 4 | 4 | A |
| 1 | 7 | A |
| 1 | 0 | C |
| 4 | 5 | C |
| 1 | 9 | C |

Select the first matching row

(-> DS
    (api/select-rows #(= (:V4 %) "B"))
    (api/first))

_unnamed [1 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 4 | 0 | B |

(-> DS
    (api/unique-by :V4)
    (api/select-rows (comp #{"B" "C"} :V4)))

_unnamed [2 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 4 | 0 | B |
| 1 | 0 | C |

Select the last matching row

(-> DS
    (api/select-rows #(= (:V4 %) "A"))
    (api/last))

_unnamed [1 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1 | 7 | A |

Nomatch argument

(api/select-rows DS (comp #{"A" "D"} :V4))

_unnamed [3 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1 | 0 | A |
| 4 | 4 | A |
| 1 | 7 | A |

Apply a function on the matching rows

(-> DS
    (api/select-rows (comp #{"A" "C"} :V4))
    (api/aggregate-columns :V1 (fn [col]
                                 {:sum (dfn/sum col)})))

_unnamed [1 1]:

:V1-sum
12.0

Modify values for matching rows

(def DS (-> DS
            (api/map-columns :V1 [:V1 :V4] #(if (= %2 "A") 0 %1))
            (api/order-by :V4)))
DS

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 0 | 0 | A |
| 0 | 4 | A |
| 0 | 7 | A |
| 4 | 0 | B |
| 1 | 5 | B |
| 4 | 8 | B |
| 1 | 0 | C |
| 4 | 5 | C |
| 1 | 9 | C |

Use keys in by

(-> DS
    (api/select-rows (comp (complement #{"B"}) :V4))
    (api/group-by [:V4])
    (api/aggregate-columns :V1 dfn/sum))

_unnamed [2 2]:

| :V4 | :V1 |
| --- | --: |
| A | 0.0 |
| C | 6.0 |

Set keys/indices for multiple columns (ordered)

(api/order-by DS [:V4 :V1])

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 0 | 0 | A |
| 0 | 4 | A |
| 0 | 7 | A |
| 1 | 5 | B |
| 4 | 0 | B |
| 4 | 8 | B |
| 1 | 0 | C |
| 1 | 9 | C |
| 4 | 5 | C |

Subset using multiple keys/indices

(-> DS
    (api/select-rows #(and (= (:V1 %) 1)
                           (= (:V4 %) "C"))))

_unnamed [2 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1 | 0 | C |
| 1 | 9 | C |

(-> DS
    (api/select-rows #(and (= (:V1 %) 1)
                           (#{"B" "C"} (:V4 %)))))

_unnamed [3 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 1 | 5 | B |
| 1 | 0 | C |
| 1 | 9 | C |

(-> DS
    (api/select-rows #(and (= (:V1 %) 1)
                           (#{"B" "C"} (:V4 %))) {:result-type :as-indexes}))
(4 6 8)
set*() modifications

Replace values

There are no mutating operations in tech.ml.dataset, nor an easy way to set a single value.

(def DS (api/update-columns DS :V2 #(map-indexed (fn [idx v]
                                                   (if (zero? idx) 3 v)) %)))
DS

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 0 | 3 | A |
| 0 | 4 | A |
| 0 | 7 | A |
| 4 | 0 | B |
| 1 | 5 | B |
| 4 | 8 | B |
| 1 | 0 | C |
| 4 | 5 | C |
| 1 | 9 | C |

Reorder rows

(def DS (api/order-by DS [:V4 :V1] [:asc :desc]))
DS

_unnamed [9 3]:

| :V1 | :V2 | :V4 |
| --: | --: | --- |
| 0 | 3 | A |
| 0 | 4 | A |
| 0 | 7 | A |
| 4 | 0 | B |
| 4 | 8 | B |
| 1 | 5 | B |
| 4 | 5 | C |
| 1 | 0 | C |
| 1 | 9 | C |

Modify colnames

(def DS (api/rename-columns DS {:V2 "v2"}))
DS

_unnamed [9 3]:

| :V1 | v2 | :V4 |
| --: | -: | --- |
| 0 | 3 | A |
| 0 | 4 | A |
| 0 | 7 | A |
| 4 | 0 | B |
| 4 | 8 | B |
| 1 | 5 | B |
| 4 | 5 | C |
| 1 | 0 | C |
| 1 | 9 | C |

(def DS (api/rename-columns DS {"v2" :V2})) ;; revert back

Reorder columns

(def DS (api/reorder-columns DS :V4 :V1 :V2))
DS

_unnamed [9 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| A | 0 | 3 |
| A | 0 | 4 |
| A | 0 | 7 |
| B | 4 | 0 |
| B | 4 | 8 |
| B | 1 | 5 |
| C | 4 | 5 |
| C | 1 | 0 |
| C | 1 | 9 |

Advanced use of by

Select first/last/… row by group

(-> DS
    (api/group-by :V4)
    (api/first)
    (api/ungroup))

_unnamed [3 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| A | 0 | 3 |
| B | 4 | 0 |
| C | 4 | 5 |

(-> DS
    (api/group-by :V4)
    (api/select-rows [0 2])
    (api/ungroup))

_unnamed [6 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| A | 0 | 3 |
| A | 0 | 7 |
| B | 4 | 0 |
| B | 1 | 5 |
| C | 4 | 5 |
| C | 1 | 9 |

(-> DS
    (api/group-by :V4)
    (api/tail 2)
    (api/ungroup))

_unnamed [6 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| A | 0 | 4 |
| A | 0 | 7 |
| B | 4 | 8 |
| B | 1 | 5 |
| C | 1 | 0 |
| C | 1 | 9 |

Select rows using a nested query

(-> DS
    (api/group-by :V4)
    (api/order-by :V2)
    (api/first)
    (api/ungroup))

_unnamed [3 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| A | 0 | 3 |
| B | 4 | 0 |
| C | 1 | 0 |

Add a group counter column

(-> DS
    (api/group-by [:V4 :V1])
    (api/ungroup {:add-group-id-as-column :Grp}))

_unnamed [9 4]:

| :Grp | :V4 | :V1 | :V2 |
| ---: | --- | --: | --: |
| 0 | A | 0 | 3 |
| 0 | A | 0 | 4 |
| 0 | A | 0 | 7 |
| 1 | B | 4 | 0 |
| 1 | B | 4 | 8 |
| 2 | B | 1 | 5 |
| 3 | C | 4 | 5 |
| 4 | C | 1 | 0 |
| 4 | C | 1 | 9 |

Get row number of first (and last) observation by group

(-> DS
    (api/add-column :row-id (range))
    (api/select-columns [:V4 :row-id])
    (api/group-by :V4)
    (api/ungroup))

_unnamed [9 2]:

| :V4 | :row-id |
| --- | ------: |
| A | 0 |
| A | 1 |
| A | 2 |
| B | 3 |
| B | 4 |
| B | 5 |
| C | 6 |
| C | 7 |
| C | 8 |

(-> DS
    (api/add-column :row-id (range))
    (api/select-columns [:V4 :row-id])
    (api/group-by :V4)
    (api/first)
    (api/ungroup))

_unnamed [3 2]:

| :V4 | :row-id |
| --- | ------: |
| A | 0 |
| B | 3 |
| C | 6 |

(-> DS
    (api/add-column :row-id (range))
    (api/select-columns [:V4 :row-id])
    (api/group-by :V4)
    (api/select-rows [0 2])
    (api/ungroup))

_unnamed [6 2]:

| :V4 | :row-id |
| --- | ------: |
| A | 0 |
| A | 2 |
| B | 3 |
| B | 5 |
| C | 6 |
| C | 8 |

Handle list-columns by group

(-> DS
    (api/select-columns [:V1 :V4])
    (api/fold-by :V4))

_unnamed [3 2]:

| :V4 | :V1 |
| --- | :------ |
| A | [0 0 0] |
| B | [4 4 1] |
| C | [4 1 1] |

(-> DS    
    (api/group-by :V4)
    (api/unmark-group))

_unnamed [3 3]:

| :group-id | :name | :data |
| --------: | ----- | :--------------- |
| 0 | A | Group: A [3 3]: |
| 1 | B | Group: B [3 3]: |
| 2 | C | Group: C [3 3]: |

Grouping sets (multiple by at once)

Not available.
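There is no single call for grouping sets, but a comparable result can be assembled by hand: aggregate once per grouping and bind the results. A minimal sketch (my own workaround, not part of the API); `api/bind` fills columns missing from one side with nil:

```clojure
;; grouping-sets workaround (assumption: not a built-in feature):
;; aggregate by :V4 and by :V1 separately, then stack the results
(api/bind
 (-> DS
     (api/group-by [:V4])
     (api/aggregate {:sumV2 #(dfn/sum (% :V2))}))
 (-> DS
     (api/group-by [:V1])
     (api/aggregate {:sumV2 #(dfn/sum (% :V2))})))
;; rows coming from the :V4 grouping have nil in :V1, and vice versa
```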

Miscellaneous

Read / Write data

Write data to a csv file

(api/write! DS "DF.csv")
nil

Write data to a tab-delimited file

(api/write! DS "DF.txt" {:separator \tab})
nil

or

(api/write! DS "DF.tsv")
nil

Read a csv / tab-delimited file

(api/dataset "DF.csv" {:key-fn keyword})

DF.csv [9 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| A | 0 | 3 |
| A | 0 | 4 |
| A | 0 | 7 |
| B | 4 | 0 |
| B | 4 | 8 |
| B | 1 | 5 |
| C | 4 | 5 |
| C | 1 | 0 |
| C | 1 | 9 |

(api/dataset "DF.txt" {:key-fn keyword})

DF.txt [9 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| A | 0 | 3 |
| A | 0 | 4 |
| A | 0 | 7 |
| B | 4 | 0 |
| B | 4 | 8 |
| B | 1 | 5 |
| C | 4 | 5 |
| C | 1 | 0 |
| C | 1 | 9 |

(api/dataset "DF.tsv" {:key-fn keyword})

DF.tsv [9 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| A | 0 | 3 |
| A | 0 | 4 |
| A | 0 | 7 |
| B | 4 | 0 |
| B | 4 | 8 |
| B | 1 | 5 |
| C | 4 | 5 |
| C | 1 | 0 |
| C | 1 | 9 |

Read a csv file selecting / dropping columns

(api/dataset "DF.csv" {:key-fn keyword
                       :column-whitelist ["V1" "V4"]})

DF.csv [9 2]:

| :V1 | :V4 |
| --: | --- |
| 0 | A |
| 0 | A |
| 0 | A |
| 4 | B |
| 4 | B |
| 1 | B |
| 4 | C |
| 1 | C |
| 1 | C |

(api/dataset "DF.csv" {:key-fn keyword
                       :column-blacklist ["V4"]})

DF.csv [9 2]:

| :V1 | :V2 |
| --: | --: |
| 0 | 3 |
| 0 | 4 |
| 0 | 7 |
| 4 | 0 |
| 4 | 8 |
| 1 | 5 |
| 4 | 5 |
| 1 | 0 |
| 1 | 9 |

Read and rbind several files

(apply api/concat (map api/dataset ["DF.csv" "DF.csv"]))

null [18 3]:

| V4 | V1 | V2 |
| -- | -: | -: |
| A | 0 | 3 |
| A | 0 | 4 |
| A | 0 | 7 |
| B | 4 | 0 |
| B | 4 | 8 |
| B | 1 | 5 |
| C | 4 | 5 |
| C | 1 | 0 |
| C | 1 | 9 |
| A | 0 | 3 |
| A | 0 | 4 |
| A | 0 | 7 |
| B | 4 | 0 |
| B | 4 | 8 |
| B | 1 | 5 |
| C | 4 | 5 |
| C | 1 | 0 |
| C | 1 | 9 |

Reshape data

Melt data (from wide to long)

(def mDS (api/pivot->longer DS [:V1 :V2] {:target-columns :variable
                                          :value-column-name :value}))
mDS

_unnamed [18 3]:

| :V4 | :variable | :value |
| --- | --------- | -----: |
| A | :V1 | 0 |
| A | :V1 | 0 |
| A | :V1 | 0 |
| B | :V1 | 4 |
| B | :V1 | 4 |
| B | :V1 | 1 |
| C | :V1 | 4 |
| C | :V1 | 1 |
| C | :V1 | 1 |
| A | :V2 | 3 |
| A | :V2 | 4 |
| A | :V2 | 7 |
| B | :V2 | 0 |
| B | :V2 | 8 |
| B | :V2 | 5 |
| C | :V2 | 5 |
| C | :V2 | 0 |
| C | :V2 | 9 |

Cast data (from long to wide)

(-> mDS
    (api/pivot->wider :variable :value {:fold-fn vec})
    (api/update-columns ["V1" "V2"] (partial map count)))

_unnamed [3 3]:

| :V4 | V1 | V2 |
| --- | -: | -: |
| A | 3 | 3 |
| B | 3 | 3 |
| C | 3 | 3 |

(-> mDS
    (api/pivot->wider :variable :value {:fold-fn vec})
    (api/update-columns ["V1" "V2"] (partial map dfn/sum)))

_unnamed [3 3]:

| :V4 | V1 | V2 |
| --- | --: | --: |
| A | 0.0 | 14.0 |
| B | 9.0 | 13.0 |
| C | 6.0 | 14.0 |

(-> mDS
    (api/map-columns :value #(> % 5))
    (api/pivot->wider :value :variable {:fold-fn vec})
    (api/update-columns ["true" "false"] (partial map #(if (sequential? %) (count %) 1))))

_unnamed [3 3]:

| :V4 | false | true |
| --- | ----: | ---: |
| A | 5 | 1 |
| B | 5 | 1 |
| C | 5 | 1 |

Split

(api/group-by DS :V4 {:result-type :as-map})

{"A" Group: A [3 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| A | 0 | 3 |
| A | 0 | 4 |
| A | 0 | 7 |

, "B" Group: B [3 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| B | 4 | 0 |
| B | 4 | 8 |
| B | 1 | 5 |

, "C" Group: C [3 3]:

| :V4 | :V1 | :V2 |
| --- | --: | --: |
| C | 4 | 5 |
| C | 1 | 0 |
| C | 1 | 9 |

}


Split and transpose a vector/column

(-> {:a ["A:a" "B:b" "C:c"]}
    (api/dataset)
    (api/separate-column :a [:V1 :V2] ":"))

_unnamed [3 2]:

| :V1 | :V2 |
| --- | --- |
| A | a |
| B | b |
| C | c |

Other

Skipped

Join/Bind data sets

(def x (api/dataset {"Id" ["A" "B" "C" "C"]
                     "X1" [1 3 5 7]
                     "XY" ["x2" "x4" "x6" "x8"]}))
(def y (api/dataset {"Id" ["A" "B" "B" "D"]
                     "Y1" [1 3 5 7]
                     "XY" ["y1" "y3" "y5" "y7"]}))
x y

_unnamed [4 3]:

| Id | X1 | XY |
| -- | -: | -- |
| A | 1 | x2 |
| B | 3 | x4 |
| C | 5 | x6 |
| C | 7 | x8 |

_unnamed [4 3]:

| Id | Y1 | XY |
| -- | -: | -- |
| A | 1 | y1 |
| B | 3 | y3 |
| B | 5 | y5 |
| D | 7 | y7 |

Join

Join matching rows from y to x

(api/left-join x y "Id")

left-outer-join [5 6]:

| Id | X1 | XY | right.Id | Y1 | right.XY |
| -- | -: | -- | -------- | -: | -------- |
| A | 1 | x2 | A | 1 | y1 |
| B | 3 | x4 | B | 3 | y3 |
| B | 3 | x4 | B | 5 | y5 |
| C | 5 | x6 | | | |
| C | 7 | x8 | | | |


Join matching rows from x to y

(api/right-join x y "Id")

right-outer-join [4 6]:

| Id | X1 | XY | right.Id | Y1 | right.XY |
| -- | -: | -- | -------- | -: | -------- |
| A | 1 | x2 | A | 1 | y1 |
| B | 3 | x4 | B | 3 | y3 |
| B | 3 | x4 | B | 5 | y5 |
| | | | D | 7 | y7 |


Join matching rows from both x and y

(api/inner-join x y "Id")

inner-join [3 5]:

| Id | X1 | XY | Y1 | right.XY |
| -- | -: | -- | -: | -------- |
| A | 1 | x2 | 1 | y1 |
| B | 3 | x4 | 3 | y3 |
| B | 3 | x4 | 5 | y5 |


Join keeping all the rows

(api/full-join x y "Id")

full-join [6 6]:

| Id | X1 | XY | right.Id | Y1 | right.XY |
| -- | -: | -- | -------- | -: | -------- |
| A | 1 | x2 | A | 1 | y1 |
| B | 3 | x4 | B | 3 | y3 |
| B | 3 | x4 | B | 5 | y5 |
| C | 5 | x6 | | | |
| C | 7 | x8 | | | |
| | | | D | 7 | y7 |


Return rows from x matching y

(api/semi-join x y "Id")

semi-join [2 3]:

| Id | X1 | XY |
| -- | -: | -- |
| A | 1 | x2 |
| B | 3 | x4 |


Return rows from x not matching y

(api/anti-join x y "Id")

anti-join [2 3]:

| Id | X1 | XY |
| -- | -: | -- |
| C | 5 | x6 |
| C | 7 | x8 |

More joins

Select columns while joining

(api/right-join (api/select-columns x ["Id" "X1"])
                (api/select-columns y ["Id" "XY"])
                "Id")

right-outer-join [4 4]:

| Id | X1 | right.Id | XY |
| -- | -: | -------- | -- |
| A | 1 | A | y1 |
| B | 3 | B | y3 |
| B | 3 | B | y5 |
| | | D | y7 |

(api/right-join (api/select-columns x ["Id" "XY"])
                (api/select-columns y ["Id" "XY"])
                "Id")

right-outer-join [4 4]:

| Id | XY | right.Id | right.XY |
| -- | -- | -------- | -------- |
| A | x2 | A | y1 |
| B | x4 | B | y3 |
| B | x4 | B | y5 |
| | | D | y7 |

Aggregate columns while joining

(-> y
    (api/group-by ["Id"])
    (api/aggregate {"sumY1" #(dfn/sum (% "Y1"))})
    (api/right-join x "Id")
    (api/add-column "X1Y1" (fn [ds] (dfn/* (ds "sumY1")
                                           (ds "X1"))))
    (api/select-columns ["right.Id" "X1Y1"]))

right-outer-join [4 2]:

| right.Id | X1Y1 |
| -------- | ---: |
| A | 1.0 |
| B | 24.0 |
| C | |
| C | |

Update columns while joining

(-> x
    (api/select-columns ["Id" "X1"])
    (api/map-columns "SqX1" "X1" (fn [x] (* x x)))
    (api/right-join y "Id")
    (api/drop-columns ["X1" "Id"]))

right-outer-join [4 4]:

| SqX1 | right.Id | Y1 | XY |
| ---: | -------- | -: | -- |
| 1 | A | 1 | y1 |
| 9 | B | 3 | y3 |
| 9 | B | 5 | y5 |
| | D | 7 | y7 |


Add a list column with rows from y matching x (nest-join)

(-> (api/left-join x y "Id")
    (api/drop-columns ["right.Id"])
    (api/fold-by (api/column-names x)))

_unnamed [4 5]:

| XY | X1 | Id | right.XY | Y1 |
| -- | -: | -- | :---------- | :---- |
| x2 | 1 | A | ["y1"] | [1] |
| x4 | 3 | B | ["y3" "y5"] | [3 5] |
| x6 | 5 | C | [] | [] |
| x8 | 7 | C | [] | [] |


Some joins are skipped


Cross join

(def cjds (api/dataset {:V1 [[2 1 1]]
                        :V2 [[3 2]]}))
cjds

_unnamed [1 2]:

| :V1 | :V2 |
| :------ | :---- |
| [2 1 1] | [3 2] |

(reduce #(api/unroll %1 %2) cjds (api/column-names cjds))

_unnamed [6 2]:

| :V1 | :V2 |
| --: | --: |
| 2 | 3 |
| 2 | 2 |
| 1 | 3 |
| 1 | 2 |
| 1 | 3 |
| 1 | 2 |

(-> (reduce #(api/unroll %1 %2) cjds (api/column-names cjds))
    (api/unique-by))

_unnamed [4 2]:

| :V1 | :V2 |
| --: | --: |
| 2 | 3 |
| 2 | 2 |
| 1 | 3 |
| 1 | 2 |

Bind
(def x (api/dataset {:V1 [1 2 3]}))
(def y (api/dataset {:V1 [4 5 6]}))
(def z (api/dataset {:V1 [7 8 9]
                     :V2 [0 0 0]}))
x y z

_unnamed [3 1]:

:V1
1
2
3

_unnamed [3 1]:

:V1
4
5
6

_unnamed [3 2]:

| :V1 | :V2 |
| --: | --: |
| 7 | 0 |
| 8 | 0 |
| 9 | 0 |

Bind rows

(api/bind x y)

null [6 1]:

:V1
1
2
3
4
5
6
(api/bind x z)

null [6 2]:

| :V1 | :V2 |
| --: | --: |
| 1 | |
| 2 | |
| 3 | |
| 7 | 0 |
| 8 | 0 |
| 9 | 0 |

Bind rows using a list

(->> [x y]
     (map-indexed #(api/add-column %2 :id (repeat %1)))
     (apply api/bind))

null [6 2]:

| :V1 | :id |
| --: | --: |
| 1 | 0 |
| 2 | 0 |
| 3 | 0 |
| 4 | 1 |
| 5 | 1 |
| 6 | 1 |

Bind columns

(api/append x y)

_unnamed [3 2]:

| :V1 | :V1 |
| --: | --: |
| 1 | 4 |
| 2 | 5 |
| 3 | 6 |

Set operations
(def x (api/dataset {:V1 [1 2 2 3 3]}))
(def y (api/dataset {:V1 [2 2 3 4 4]}))
x y

_unnamed [5 1]:

:V1
1
2
2
3
3

_unnamed [5 1]:

:V1
2
2
3
4
4

Intersection

(api/intersect x y)

intersection [2 1]:

:V1
2
3

Difference

(api/difference x y)

difference [1 1]:

:V1
1

Union

(api/union x y)

union [4 1]:

:V1
1
2
3
4
(api/concat x y)

null [10 1]:

:V1
1
2
2
3
3
2
2
3
4
4

Equality not implemented
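Dataset equality is not implemented as an operation, but since rows can be exported to plain Clojure maps, an equivalent check can be approximated by comparing those. A sketch (assumes `api/rows` with the `:as-maps` result type, available in recent tablecloth versions):

```clojure
;; approximate dataset equality by comparing rows as plain maps
;; (assumes `api/rows` with `:as-maps` exists; check your tablecloth version)
(defn ds-equal? [a b]
  (= (api/rows a :as-maps)
     (api/rows b :as-maps)))

(ds-equal? x x)
(ds-equal? x y)
```

Note that this compares order and values only; column datatypes and dataset names are ignored.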
