Get Started with Analytixus
Follow these steps to set up Analytixus on your system and start accelerating your data projects.
Install prerequisites
Download and install the .NET 8 Runtime.
Download Analytixus
Get the latest release from our Download page as a ZIP file.
Extract it into a folder of your choice, for example: C:\Analytixus\App
Set up your project space
Inside your installation folder, create a subfolder called ProjectData. Each of your projects should have its own subfolder here, for example: C:\Analytixus\ProjectData
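If you prefer, the folder layout can also be created from the command line. A minimal Python sketch; the paths and the project name "Test-DBX" are examples, adjust them to your installation:

```python
from pathlib import Path

# Example paths; adjust to your installation folder.
app_root = Path(r"C:\Analytixus")
project_data = app_root / "ProjectData"

# One subfolder per project, e.g. for a project called "Test-DBX".
(project_data / "Test-DBX").mkdir(parents=True, exist_ok=True)
```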
Run the application
Simply start AnalytixusApp.exe.
Windows protected your PC
If Windows prevents the program from running with this message, you can install the certificate file "Analytixus.cer" from the "Certificates" folder as a “trusted root certificate.” After that, the program should run without any warnings.


Optional: Install SAP drivers
SAP HANA Client
For SAP HANA access, install the SAP HANA Client (x64).
👉 You need a user account to download the software.
Download it from SAP Development Tools, or directly: hanaclient-latest-windows-x64.zip
SAP NCO 3.1 for .NET Core
For SAP ERP (ECC or S/4) or SAP BW access, download the SAP Connector for Microsoft .NET 3.1, compiled for .NET (formerly .NET Core).
👉 You need a user account to download the software.
Download it from SAP Connector for Microsoft .NET, or directly: SAP Connector for Microsoft .NET 3.1.6.0 for Windows 64-bit (x64)
After downloading, unzip the files. Then copy all files with the “.dll” extension into the Analytixus app folder (where AnalytixusApp.exe is located).

Next Steps in Analytixus
After installing and starting Analytixus, here’s how you can set up your first solution and begin working with metadata.
Create your first solution
Click the database symbol and choose the folder C:\Analytixus\ProjectData or another one. Name your solution Test-DBX.


Add a source
Inside the solution, you will see an empty Sources folder. Right-click it and choose New Source — select from CSV, Oracle, PostgreSQL, SQL Server, Databricks, SAP SuccessFactors, and more.
In my example I use the Wide World Importers sample database, which is hosted on an Azure SQL Database.

Configure connection & metadata
Enter the connection details and filters in the dialog.
– Use Test to validate the connection.
– Use Save to store the metadata as so-called unsupervised metadata.
The individual metadata files for each object are displayed in the tree view and can be opened. We call this unsupervised metadata because it is raw metadata: manual modification is pointless, as it would be lost with the next metadata update. On saving, this XML-based metadata is converted into a compact and editable form: DnAML (Data and Analytics Markup Language).
DnAML (Data and Analytics Markup Language) is a domain-specific, JSON-like metadata format used for the semantic description and processing of data sources, structures, and transformations in data analytics solutions. It enables the centralized definition and management of data models across the entire data processing chain—from raw data ingestion to analytical provisioning.
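Because DnAML is JSON-like, standard JSON tooling gives a feel for how such metadata can be read and traversed. The fragment below is purely hypothetical; the real DnAML schema is defined by Analytixus and may differ:

```python
import json

# Hypothetical DnAML-style fragment. The real DnAML schema is defined by
# Analytixus and may differ; this only illustrates the JSON-like format.
dnaml_text = """
{
  "source": "WideWorldImporters",
  "objects": [
    {"name": "Sales.Orders", "type": "table",
     "columns": ["OrderID", "CustomerID", "OrderDate"]}
  ]
}
"""

model = json.loads(dnaml_text)
for obj in model["objects"]:
    print(obj["name"], "->", ", ".join(obj["columns"]))
```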
In the metadata lifecycle, unsupervised metadata is transformed into a DnAML model, which represents the second stage of the process.


Open DnAML-Model Editor
Open the Model (DnAML) to see which metadata has been read in.
The unsupervised metadata of the data source was converted into DnAML, inserted into the model, and stored as supervised metadata (XML and JSON). The corresponding files "Metadata.xml" and "Metadata.json" are stored in the solution folder.

Organize your Solution using a Template
Open the “Template Manager” via the “Extras” menu. Wait until the online templates have loaded. Select your solution from the dropdown menu and install the “Lean Data-Vault for Databricks” template (with Metadata = false). Then close the window to see what has been imported.
The template contains the folders "Bronze", "Silver", and "Gold" for the medallion architecture, as well as other helper folders. It includes various examples of how to generate and manipulate code and metadata using XSLT or Python. Further details on how to use the template can be found in the file “00ReadmeFirst.md” in the folder “Common”.




Build the Code and Metadata with XSLTX-Transformations
Open the "Bronze" folder and right-click on "01_BronzeStore.xsltx". In the context menu that appears, click "Build…". A window will then open and load the artifacts to be created. Clicking the “Build” button starts the code generation process, which in this case also generates metadata for the DnAML model from the generated code.
If the build process was successful, the window will close. Afterwards, open the DnAML Model Editor to see what happened.



XSLT transformation vs. XSLTX transformation:
XSLT transformations are standard XSLT script templates that are suitable for factoring out reusable elements. For example, data type conversions can be kept in a separate file and reused in other XSLTX transformations. XSLTX transformations are also XSLT script templates, with the difference that the build process supplies them with supervised metadata in order to generate code.
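To illustrate the division of labor, here is a minimal sketch of such a reusable XSLT fragment. The file name, template name, and type mappings are illustrative, not taken from the template:

```xml
<!-- DataTypes.xslt: hypothetical reusable fragment mapping source data types
     to target types; an XSLTX transformation could include it via
     <xsl:include href="DataTypes.xslt"/> and call the named template. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template name="map-datatype">
    <xsl:param name="sourceType"/>
    <xsl:choose>
      <xsl:when test="$sourceType = 'nvarchar'">STRING</xsl:when>
      <xsl:when test="$sourceType = 'datetime2'">TIMESTAMP</xsl:when>
      <xsl:otherwise><xsl:value-of select="$sourceType"/></xsl:otherwise>
    </xsl:choose>
  </xsl:template>
</xsl:stylesheet>
```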
Create a second solution to see the possibilities.
Click the database symbol and choose the folder C:\Analytixus\ProjectData or another one. Name your solution Test-DBX-Complete-Sample.
After creating the solution, use the Template Manager and install the template "Lean Data-Vault for Databricks complete sample" (with Metadata = true) into this solution.
Next, open the DnAML Model Editor and navigate through the solution. Also, take a look at the “Best Practice Analyzer” or the documentation in the Preview tab (below).




Everything else can be found in the help.

