Essbase ASO export.

What to expect from input:

* Space separated.
* Quoted member names, non-quoted values.
* Variable column count; the last column is always a value, and every line is a cell.
* First line is a complete POV; other lines do minimal POV updates only.
* So this whole file must be parsed in order...
* ... and all members must be mapped to dimensions properly.

To parse the export file you need to know one thing: the complete member-to-dimension mapping of the storage dimensions.

It's easy to get member names from a data file: just drop the last column in every row.
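Here is a minimal sketch of that in-order walk -- not this library's actual API. It assumes a prebuilt member->dim map from member name to storage dimension and shows how each line patches the running POV:

```clojure
(require '[clojure.string :as str])

;; Hypothetical helpers, not this library's API. `member->dim` is assumed
;; to map every storage-dimension member name to its dimension.
(defn parse-line
  "Split one export line into member names and the trailing value."
  [line]
  (let [fields  (re-seq #"\"[^\"]*\"|\S+" line)
        members (map #(str/replace % "\"" "") (butlast fields))
        value   (Double/parseDouble (last fields))]
    [members value]))

(defn cells
  "Lazy seq of [pov value] pairs; pov is a {dimension member} map.
   Lines must be consumed in order, since each line only patches the POV."
  [member->dim lines]
  (letfn [(step [pov [line & more]]
            (when line
              (let [[members value] (parse-line line)
                    pov' (into pov (map (juxt member->dim identity)) members)]
                (cons [pov' value] (lazy-seq (step pov' more))))))]
    (step {} lines)))
```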
Essbase BSO export.

What to expect from input:

* Space separated.
* Quoted member names, non-quoted values.
* Max file size is 2GB.
* COLUMNS are specified in the first line of the file:
  - List of quoted member names of a single dense dimension, last field is empty.
  - No members from this dimension ever appear in the file again.
  - N members are specified; up to that many figures can appear in data lines.
* POV lines appear periodically and signal a complete POV update:
  - List of quoted member names from distinct sparse dimensions.
* DATA lines consist of both partial POV updates and figures:
  - Quoted members of the remaining dense dimensions (not present in POV lines).
  - Figures are non-quoted #Mi and numeric values, up to N occurrences per line.
  - Last field is always an empty string, so the last figure is followed by a space.
  - Missing values on the left are marked as #Mi; on the right they are skipped.

To parse the export file you need to know two things:

* Number of data storing dimensions in the cube.
* Complete mapping of members to dimensions for storage dimensions.
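A hedged sketch of telling line kinds apart, again not this library's API: since member names are quoted and figures are not, a line of only quoted tokens is a POV update, while anything else is a DATA line whose quoted prefix patches the POV and whose tail aligns positionally with the COLUMNS header.

```clojure
(require '[clojure.string :as str])

(defn tokens [line]
  (re-seq #"\"[^\"]*\"|\S+" line))

(defn quoted? [token]
  (str/starts-with? token "\""))

(defn classify
  "POV lines carry only quoted members; DATA lines end in figures.
   The first line of the file is always the COLUMNS header."
  [line]
  (if (every? quoted? (tokens line)) :pov :data))

(defn split-data
  "Leading quoted members are partial POV updates; the rest are figures,
   positionally aligned with the COLUMNS header (figures missing on the
   right are simply absent)."
  [line]
  (split-with quoted? (tokens line)))
```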
Essbase columns export.

What to expect from input:

* Space separated.
* Quoted member names, non-quoted values.
* Max file size is 2GB.
* COLUMNS are specified in the first line of the file:
  - List of quoted member names of a single dense dimension, last field is empty.
  - No members from this dimension ever appear in the file again.
  - N members are specified; up to that many figures can appear in data lines.
* DATA lines consist of both full POV updates and figures:
  - Quoted members of the non-columns dimensions followed by figures.
  - Dimension order in the data lines should always be the same.
  - Figures are non-quoted #Mi and numeric values, up to N occurrences per line.
  - Last field is always an empty string, so the last figure is followed by a space.
  - Missing values on the left are marked as #Mi; on the right they are skipped.

To parse the export file you need to know two things:

* Number of data storing dimensions in the cube.
* Complete mapping of member name to dimension name in those dimensions.
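For illustration only, here is one way to zip a DATA line against the COLUMNS header; `line->cells` and its shape are assumptions, not this library's functions:

```clojure
(require '[clojure.string :as str])

(defn line->cells
  "Turn one DATA line into cells, given `columns` from the first line.
   `map vector` stops at the shorter seq, which naturally drops figures
   skipped on the right; #Mi figures are dropped explicitly."
  [columns line]
  (let [ts          (re-seq #"\"[^\"]*\"|\S+" line)
        [mems figs] (split-with #(str/starts-with? % "\"") ts)
        pov         (mapv #(str/replace % "\"" "") mems)]
    (for [[col fig] (map vector columns figs)
          :when (not= fig "#Mi")]
      {:pov pov :column col :value (Double/parseDouble fig)})))
```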
Essbase application log and MaxL spool.

# Application logs

A log timestamp line looks like:

[Tue Nov 06 08:50:26 2001]Local/Sample///Info(1013214)

So it's [timestamp]Local/application/database/issuer/type(code), more or less.

And then data follows that looks like this:

Clear Active on User [admin] Instance [1];

So this one contains user info, but it could be Command [.+] or Database [.+] etc.

TODO: break entries into fields, not only headers?

Fields you'll get using AppLog:

* full timestamp, decoded from date
* date: yyyy-mm-dd, decoded from timestamp
* application: String
* database: String
* user: String
* level: Info | Warning | Error | ???
* code: int
* raw: String, full payload of the entry (head + message)

Additional tables of use: Code categories [1].
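The header line is regular enough that a single regex covers it. This is a sketch of the idea, not necessarily how AppLog does it; mapping the issuer segment to :user is an assumption here.

```clojure
(def header-re
  ;; [timestamp]Local/application/database/issuer/type(code)
  #"\[(.+?)\]Local/([^/]*)/([^/]*)/([^/]*)/(\w+)\((\d+)\)")

(defn parse-header [line]
  (when-let [[_ ts app db issuer level code] (re-matches header-re line)]
    {:timestamp ts :application app :database db
     :user issuer               ; assuming issuer is the user, as above
     :level level :code (Long/parseLong code)}))

(parse-header "[Tue Nov 06 08:50:26 2001]Local/Sample///Info(1013214)")
;; => {:timestamp "Tue Nov 06 08:50:26 2001", :application "Sample",
;;     :database "", :user "", :level "Info", :code 1013214}
```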
# MaxL shell spool

And then there's MaxL, which lets you see all the useful system properties and even run MDX queries and get results back, almost like in a real database -- but only if you remember to set the column_width just right (set column_width 256;), or it will truncate those space-padded, fixed-width table outputs.

FIXME: can headers in MaxL output be multiline?

I just set it to 256, and that's the default value here. YMMV.

MaxLSpool will help you extract those tables, remove the padding and pack the columns into hash maps. It will also keep any MaxL output preceding and following the tabular output.

Some 'special' values are resolvable via the maxl-constants map. It's WIP with little P.
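As a rough illustration of the unpadding step -- not MaxLSpool's real implementation, which presumably tracks the actual column widths -- trimming fields and zipping them with header names gives the hash-map shape described above:

```clojure
(require '[clojure.string :as str])

(defn row->map
  "Naive unpadding sketch: split a fixed-width row on runs of 2+ spaces
   and zip with header names. Illustrates the output shape only; real
   fixed-width parsing should use the column boundaries from the header."
  [headers line]
  (zipmap headers (map str/trim (str/split line #"\s{2,}"))))

(row->map [:application :database]
          "Sample        Basic         ")
;; => {:application "Sample", :database "Basic"}
```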
[1] https://docs.oracle.com/cd/E12825_01/epm.111/esb_dbag/dlogs.htm
Essbase XML Outline export.
Provides szew.io/XML processors for dimension extraction and convenience functions consuming the outline export file directly.
Allows both sequencing and zipping over dimensions and members.
Just keep in mind that parsing big, deeply nested XMLs is a memory hog.
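A minimal sketch of pulling [dimension member] name pairs straight out of the export with clojure.data.xml; the :Dimension / :Member tags and :name attribute are placeholder guesses, since outline export schemas vary -- the processors in this namespace are the supported route.

```clojure
(require '[clojure.data.xml :as xml]
         '[clojure.java.io :as io])

(defn dimension-members
  "Walk an outline export, returning [dimension member] name pairs.
   :Dimension / :Member / :name are assumed tag and attribute names."
  [file]
  (with-open [r (io/reader file)]
    (doall   ; realize before the reader closes; big outlines eat memory
      (for [dim    (filter #(= :Dimension (:tag %))
                           (:content (xml/parse r)))
            member (tree-seq :content :content dim)
            :when  (= :Member (:tag member))]
        [(get-in dim [:attrs :name]) (get-in member [:attrs :name])]))))
```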
Dimension specs for otl.
Member specs for otl.
Essbase transaction logs.

ALG file is just pairs of timestamps and transaction descriptions:

* First two lines are the timestamp of when the audit log was enabled.
* Remaining pairs describe the user and the location+length in the ATX file, line-wise.

ATX file holds the data as it was locked and sent:

* Quoted member names, non-quoted values.
* Data chunks are separated by empty lines.

Note: Entries in both files are expected to keep the same order. Because of this, they can be parsed separately and joined via index. This method is beneficial because row-count errors happen in ALG from time to time, and they are easier to spot if you don't follow the ALG declarations when parsing transactions.

This namespace lets you process transaction logs, filter and pack results in a presentable way. It contains some basic predicates to aid that.

IMPORTANT:

Since you will most likely work with these files via a text editor, or interact with people who will, to verify the results -- all indexing is 1-based, just to make your life a bit easier:

* :line-no in header, physical line in ALG file;
* :span in block, physical lines in ATX file;
* :index in both is its order number.

Whatever the ALG declaration of starting position and rows is -- we use that here to avoid any confusion while exchanging this data.
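The index join itself is a one-liner; parse-alg-headers and parse-atx-blocks below are hypothetical stand-ins for whatever yields the two sequences:

```clojure
(defn join-transactions
  "Pair ALG headers with ATX data blocks by order, 1-based as promised.
   `headers` and `blocks` are parsed independently, then zipped."
  [headers blocks]
  (map (fn [index header block]
         (assoc header :index index :block block))
       (rest (range))   ; 1, 2, 3, ...
       headers
       blocks))

;; (join-transactions (parse-alg-headers alg) (parse-atx-blocks atx))
;; parse-alg-headers / parse-atx-blocks are hypothetical names.
```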