(is-crawlable? robots-txt url user-agent)

Does the given parsed robots.txt permit the given URL to be crawled by the given user-agent?
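A minimal usage sketch, assuming the vars on this page live in a namespace aliased here as robots (the actual namespace name is not shown on this page) and that is-crawlable? accepts a full URL; the return values shown are assumptions:

```clojure
(require '[example.robots :as robots]) ; hypothetical namespace alias

;; Parse once, query many times.
(def parsed
  (robots/parse "User-agent: *\nDisallow: /private/\n"))

;; A URL outside the disallowed prefix should be crawlable.
(robots/is-crawlable? parsed "https://example.com/docs" "examplebot")
;; => true (assumed)

;; A URL under Disallow: /private/ should not be.
(robots/is-crawlable? parsed "https://example.com/private/x" "examplebot")
;; => false (assumed)
```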
(parse content)

Parses the given string (content of a robots.txt file) into data that can be queried.
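A sketch of parsing a live file. The exact shape of the parsed value is not documented on this page, but the arglists below suggest it is a map carrying at least :agent-groups (destructured by query-crawlable) and :raw-content (destructured by stringify-query-result):

```clojure
;; slurp is used purely for brevity; any string source works.
(def parsed
  (robots/parse (slurp "https://example.com/robots.txt")))

;; Keys inferred from the arglists below; exact shape is an assumption.
(select-keys parsed [:agent-groups :raw-content])
```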
(query-crawlable {:keys [agent-groups]} url user-agent)

Determines whether, and explains why, the given parsed robots.txt does or does not permit the given URL to be crawled by the given user-agent.
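A sketch of querying with an explanation. The destructuring in stringify-query-result's arglist suggests the returned value is a map with at least a :because key describing the deciding rule(s); anything beyond that is an assumption:

```clojure
(def result
  (robots/query-crawlable parsed
                          "https://example.com/private/x"
                          "examplebot"))

;; Presumably the portion of the parsed robots.txt that decided the
;; outcome; whether the verdict itself sits under a key such as
;; :crawlable? is an assumption.
(:because result)
```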
(stringify-query-result {:keys [raw-content]}
                        {:keys [because]}
                        & {:keys [context] :or {context 1}})

Creates a user-readable string explanation of a query-crawlable result by providing contextual highlighting of the source robots.txt that produced it.
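Tying the previous two results together, under the same hedged assumptions; judging from the arglist, :context sets how many surrounding lines of the source robots.txt are shown around the highlighted rule, defaulting to 1:

```clojure
;; Print a human-readable explanation, widening the highlight window
;; to two lines of context (exact semantics of :context are assumed).
(println
 (robots/stringify-query-result parsed result :context 2))
```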