beme.alpha.scan.grouper

Token grouping stage: collapses opaque regions (reader conditionals, namespaced maps, syntax-quote with brackets) from flat token sequences into single composite tokens.

This replaces the tokenizer's read-balanced-raw by operating on already-tokenized input where bracket matching is trivial — strings, chars, and comments are already individual tokens.
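The grouping step can be sketched as a walk over a pre-tokenized vector. The token keys (`:type`, `:children`) and the name `group-region` below are illustrative assumptions, not the library's actual API:

```clojure
(defn group-region
  "Collapses the balanced region whose opening-bracket token sits at
  index i into a single composite token. Returns [composite next-index].
  Illustrative sketch; token shapes are assumed, not beme's real schema."
  [tokens i]
  (loop [j (inc i), depth 1, inner []]
    (let [t (nth tokens j)]
      (case (:type t)
        :open  (recur (inc j) (inc depth) (conj inner t))
        :close (if (= depth 1)
                 [{:type :composite :children inner} (inc j)]
                 (recur (inc j) (dec depth) (conj inner t)))
        ;; strings, chars, and comments arrive as single tokens,
        ;; so their contents can never perturb the depth count
        (recur (inc j) depth (conj inner t))))))

(group-region [{:type :open} {:type :string :text "]"} {:type :close}] 0)
;; => [{:type :composite, :children [{:type :string, :text "]"}]} 3]
```

A `]` inside a string literal cannot confuse the depth count, which is exactly why grouping after tokenization is simpler than a character-level balanced scan.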

beme.alpha.scan.source

Scanner-level source-position utilities. Defines the character-level line/col model used by the tokenizer and grouper. Only \n advances the line counter — \r is a regular character that occupies a column. This matches sadvance! in the tokenizer.

Note: this is the scanner line model, not a universal line definition. The error display module (beme.alpha.errors/source-context) uses str/split-lines which has different line-ending semantics (splits on both \n and \r\n). The two models agree for LF sources but diverge for CRLF. See format-error for how the bridge is handled.
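The scanner model reads naturally as a fold over characters. This is a minimal functional sketch (the tokenizer's own sadvance! is stateful, per the name); the function name and the starting position {:line 1 :col 0} are assumptions:

```clojure
;; Sketch of the character-level line/col model described above:
;; only \newline advances the line; \return is an ordinary character
;; that occupies a column. Names are illustrative.
(defn advance
  [{:keys [line col]} ch]
  (if (= ch \newline)
    {:line (inc line) :col 0}
    {:line line :col (inc col)}))

(reduce advance {:line 1 :col 0} "a\r\nb")
;; => {:line 2, :col 1}
;; the \r occupied column 2 on line 1, whereas str/split-lines would
;; treat the whole \r\n pair as one line ending
```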

beme.alpha.scan.tokenizer

beme tokenizer: character scanning and token production. Transforms beme source text into a flat vector of typed tokens.
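As a hedged illustration, a "flat vector of typed tokens" might look like the following; the field names are hypothetical, since the actual token schema is not shown on this page:

```clojure
;; Hypothetical token shape for the source text "(inc 1)".
;; :type and :text are illustrative field names, not beme's real schema.
(def example-tokens
  [{:type :open  :text "("}
   {:type :sym   :text "inc"}
   {:type :num   :text "1"}
   {:type :close :text ")"}])

(every? (every-pred :type :text) example-tokens)
;; => true
```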
