Geni (/gɜni/ or "gurney" without the r) is a Clojure library that wraps Apache Spark. The name means "fire" in Javanese.

WARNING! This library is still unstable. Some information here may be outdated. Do not use it in production just yet! See Flambo and Sparkling for more mature alternatives.

Overview

Geni is designed to provide an idiomatic Spark interface for Clojure without the hassle of Java or Scala interop. Geni uses Clojure's -> threading macro as the main way to compose Spark's Dataset and Column operations in place of the usual method chaining in Scala. It also provides a greater degree of dynamism by allowing args of mixed types such as columns, strings and keywords in a single function invocation. See the docs section on Geni semantics for more details.
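
For example, a minimal sketch of this style (the dataframe and its Suburb, Price and Rooms columns refer to the Melbourne dataset used in the examples below):

(require '[zero-one.geni.core :as g])

;; Dataset and Column operations compose with ->, and a column can be
;; referred to as a keyword, a string or a Column object interchangeably.
(-> dataframe
    (g/filter (g/< :Price 1000000))
    (g/select :Suburb "Price" (g/col :Rooms))
    (g/limit 5)
    g/show)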

Resources

Docs:

Geni Cookbook:

  1. Getting Started with Clojure, Geni and Spark
  2. Reading and Creating Datasets
  3. Selecting Rows and Columns
  4. Grouping and Aggregating
  5. Combining Datasets with Joins and Unions
  6. String Operations
  7. Cleaning up Messy Data
  8. Timestamps and Dates
  9. Windowing Functions [TBD]
  10. Loading Data from SQL Databases [TBD]

Basic Examples

All examples below use the Melbourne housing market data available for free on Kaggle.
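
The snippets assume a dataframe var bound to that dataset. A minimal, illustrative setup (the session options and CSV path are assumptions, not part of the original examples):

(require '[zero-one.geni.core :as g])

;; Illustrative only: adjust the path to wherever the Kaggle CSV was downloaded.
(def spark
  (g/create-spark-session {:app-name "melbourne-example" :master "local[*]"}))

(def dataframe
  (g/read-csv! spark "data/melbourne_housing_snapshot.csv"))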

Spark SQL API for grouping and aggregating:

(require '[zero-one.geni.core :as g])

(-> dataframe
    (g/group-by :Suburb)
    g/count
    (g/order-by (g/desc :count))
    (g/limit 5)
    g/show)
; +--------------+-----+
; |Suburb        |count|
; +--------------+-----+
; |Reservoir     |359  |
; |Richmond      |260  |
; |Bentleigh East|249  |
; |Preston       |239  |
; |Brunswick     |222  |
; +--------------+-----+
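
Grouped aggregations other than counting follow the same shape. A sketch using the dataset's Price column (the map form aliases each aggregate; the alias names here are illustrative):

(-> dataframe
    (g/group-by :Suburb)
    (g/agg {:mean-price (g/mean :Price)
            :max-price  (g/max :Price)})
    (g/order-by (g/desc :mean-price))
    (g/limit 5)
    g/show)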

Spark ML example translated from Spark's programming guide:

(require '[zero-one.geni.core :as g])
(require '[zero-one.geni.ml :as ml])

(def training-set
  (g/table->dataset
    spark
    [[0 "a b c d e spark"  1.0]
     [1 "b d"              0.0]
     [2 "spark f g h"      1.0]
     [3 "hadoop mapreduce" 0.0]]
    [:id :text :label]))

(def pipeline
  (ml/pipeline
    (ml/tokenizer {:input-col :text
                   :output-col :words})
    (ml/hashing-tf {:num-features 1000
                    :input-col :words
                    :output-col :features})
    (ml/logistic-regression {:max-iter 10
                             :reg-param 0.001})))

(def model (ml/fit training-set pipeline))

(def test-set
  (g/table->dataset
    spark
    [[4 "spark i j k"]
     [5 "l m n"]
     [6 "spark hadoop spark"]
     [7 "apache hadoop"]]
    [:id :text]))

(-> test-set
    (ml/transform model)
    (g/select :id :text :probability :prediction)
    g/show)
;; +---+------------------+----------------------------------------+----------+
;; |id |text              |probability                             |prediction|
;; +---+------------------+----------------------------------------+----------+
;; |4  |spark i j k       |[0.1596407738787411,0.8403592261212589] |1.0       |
;; |5  |l m n             |[0.8378325685476612,0.16216743145233883]|0.0       |
;; |6  |spark hadoop spark|[0.0692663313297627,0.9307336686702373] |1.0       |
;; |7  |apache hadoop     |[0.9821575333444208,0.01784246665557917]|0.0       |
;; +---+------------------+----------------------------------------+----------+

More detailed examples can be found here. There is also a one-to-one walkthrough of Chapter 5 of NVIDIA's Accelerating Apache Spark 3.x, which can be found here.

Quick Start

Use Leiningen to create a Geni project from the template:

lein new geni <project-name>

Installation

Add the latest Geni release from Clojars to your project.clj dependencies:

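The Clojars coordinate is zero-one/geni; a sketch of the entry, with a placeholder instead of a concrete version (use the latest release shown on Clojars):

:dependencies [[zero-one/geni "x.y.z"]] ;; replace x.y.z with the latest Clojars release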

You will also need to add Spark itself as provided dependencies. For instance, add the following key-value pair to the :profiles map:

:provided
{:dependencies [;; Spark
                [org.apache.spark/spark-avro_2.12 "3.0.0"]
                [org.apache.spark/spark-core_2.12 "3.0.0"]
                [org.apache.spark/spark-hive_2.12 "3.0.0"]
                [org.apache.spark/spark-mllib_2.12 "3.0.0"]
                [org.apache.spark/spark-sql_2.12 "3.0.0"]
                [org.apache.spark/spark-streaming_2.12 "3.0.0"]
                [com.github.fommil.netlib/all "1.1.2" :extension "pom"]
                ;; Optional: Spark XGBoost
                [ml.dmlc/xgboost4j-spark_2.12 "1.0.0"]
                [ml.dmlc/xgboost4j_2.12 "1.0.0"]]}

You may also need to install libgomp1 to train XGBoost4J models. When the optional dependencies are not present, the vars for the corresponding functions (such as ml/xgboost-classifier) are left unbound.
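
One way to guard optional usage at runtime is to check whether the var is bound; a sketch (the parameter map below is illustrative):

(require '[zero-one.geni.ml :as ml])

;; ml/xgboost-classifier is only bound when the optional XGBoost4J
;; dependencies are on the classpath.
(when (bound? #'ml/xgboost-classifier)
  (ml/xgboost-classifier {:max-depth 3 :num-round 10}))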

License

Copyright 2020 Zero One Group.

Geni is licensed under Apache License v2.0, see LICENSE.

Mentions

Some code was taken from:
