XDTL stands for eXtensible Data Transformation Language (see the previous post). It is an XML-based language for describing data transformations, most often used to load data into data warehouses or to build complex data processing tasks consisting of a series of data manipulations. The XDTL language definition (an XML Schema) declares its namespace as follows:

xmlns:xdtl="http://xdtl.org/xdtl"
xsi:schemaLocation="http://xdtl.org/xdtl xdtl.xsd" 

Note: The schema is non-normative and is provided only as a means to validate XDTL instances; as such, it is naturally subject to change as the language evolves.

(1) Runtime Engine interpreting XDTL scripts. XDTL is just a language for building scripts that describe data transformations, so it needs an execution mechanism to (pre)process and run those scripts. An XDTL engine (interpreter) assembles the scripts, mappings and templates into a series of executable commands, basically consisting of file and database operations, and runs them. There can be more than one XDTL runtime, each designed for its own purpose and implementing a specific subset of the language definition. An XDTL runtime could also be embedded into another system to provide the low-level plumbing for an application that has to perform some ELT functions internally.
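To make this concrete, a script handed to a runtime might look roughly like the minimal sketch below. Only the namespace declaration is taken from the language definition; the element names (task, step) and attributes are hypothetical, shown purely to illustrate how a script ties mappings and templates together.

<!-- hypothetical script skeleton; element names are illustrative,
     not taken from the actual XDTL schema -->
<xdtl:task name="load_call"
    xmlns:xdtl="http://xdtl.org/xdtl"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xdtl.org/xdtl xdtl.xsd">
 <!-- a step that merges a SQL template with a mapping and runs the result -->
 <xdtl:step template="insert_select.vm" mapping="call_mapping.xml"/>
</xdtl:task>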

(2) Mappings, stored either in the MMX Repository or directly inside an XDTL script. The mappings concept is based on the ideas laid out in [1]. Mappings express the structural and data dependencies between data 'sources' and 'targets' during the different stages of a transformation process and "...describe all data movement-related dependencies as a set of Mapping instances. One instance represents a "single-stage" link between a set of source Entity instances and a set of target Entity instances where every entity plays only a single role, either source or target." [1] There are three basic types of mapping instances: Filter-Mapping (corresponding to SQL's WHERE and HAVING clauses), Join-Mapping (JOINs) and Aggregate-Mapping (GROUP BYs).

The implementation of the mappings concept in XDTL involves a set of four collections: Sources (the source tables), Target (the target table), Columns (column mappings accompanied by attributes such as IsJoinKey and IsUpdateableColumn) and Conditions (conditions used in JOIN, WHERE and HAVING clauses). Mappings are either imported from the MMX Metadata Repository in XML format during execution, included from an external file, or defined explicitly in the XDTL script. An arbitrary number of mappings can be cascaded to express transformations of very high complexity. Storing mappings in the Repository opens up endless opportunities for reusing the same information in various other applications, e.g. Impact Analysis or Data Quality tools.
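As an illustration, an inline mapping feeding the insert-select example later in this post could look roughly as follows. The four collections and attributes like IsJoinKey come from the description above; the concrete element and attribute names are hypothetical, chosen to line up with the property references ($col.Target, $src.Alias etc.) used in the template below.

<!-- hypothetical inline mapping; element/attribute names are
     illustrative, not normative XDTL syntax -->
<Mapping name="load_call">
 <Target Name="call"/>
 <Sources>
  <Source Name="cdr" Alias="c"/>
  <Source Name="customer" Alias="a"/>
  <Source Name="customer" Alias="b"/>
 </Sources>
 <Columns>
  <Column Source="a.cust_id" Target="cust_a" IsJoinKey="true"/>
  <Column Source="b.cust_id" Target="cust_b" IsJoinKey="true"/>
  <Column Source="call_type" Target="type" Type="integer"/>
  <Column Source="call_length" Target="length" Type="integer"/>
  <Column Source="call_date" Target="date" Type="date"/>
 </Columns>
 <Conditions>
  <!-- Id ties a join condition to a source's position in Sources -->
  <Condition Id="2" Expr="c.phone_a = a.phone_no"/>
  <Condition Id="3" Expr="c.phone_b = b.phone_no"/>
  <Condition Id="4" Expr="c.phone_a IS NOT NULL"/>
 </Conditions>
</Mapping>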

(3) SQL templates turned into executable SQL statements. XDTL being an ELT language, the SQL statement is its single most important functional part. The big question with SQL automation is: how far do you want to go in substituting SQL code with something more abstract? In theory, you could decompose all your SQL statements into series of binary expressions, store them away, and reassemble them into SQL statements as needed, driven by the syntax of one particular SQL dialect. In practice, this approach usually fails to produce anything useful, mostly because SQL is too loosely defined (considering all those vendors and dialects) and trying to cover everything just results in a system too cumbersome for anyone to use. The result would be a sort of 'metaSQL' describing 'real SQL' that is probably an order of magnitude harder to maintain than hand-coded statements, even with several parallel code bases. And that's exactly what the developers would do: invent a mechanism to bypass the abstract layer and hand-code SQL directly.

(4) Template Engine. Based on our experience (and tons of ETL code written), we have extracted a set of SQL 'patterns' common to ETL (ELT) tasks. The patterns are converted into templates for processing by a template engine (Velocity [2] in particular). The template engine merges patterns and mappings into executable SQL statements. So instead of going as deep as full decomposition, we separate only the mapping (structure) and template (process) parts of SQL. This limits us to a set of predefined templates, but new ones can always be added. The templates are generic and can be used with multiple different mappings/data structures; the mappings are generic as well and can be used in multiple different patterns/templates. The template engine 'instantiates' mappings and templates to create executable SQL statement 'instances'.

As an example, provided with proper mappings, this simple template 

...
## render the column list of the INSERT clause
#macro( insertClause $Cols )
#foreach( $col in $Cols )
 $col.Target#if( $velocityHasNext ),#end
#end
)
#end
 
## render the SELECT expressions with optional ::type casts
#macro( selectClause $Cols )
#foreach( $col in $Cols )
 $col.Source#if( $col.Type )::$col.Type#end AS $col.Target
#if( $velocityHasNext ) ,#end
#end
#end
 
## join each source to the condition whose Id matches its position
#macro( fromClause $Srcs $Conds )
#foreach( $cond in $Conds )
#foreach( $src in $Srcs )
#if( $cond.Id == $velocityCount )
 $src.Name $src.Alias ON $cond.Expr
#if( $velocityHasNext )
JOIN#end
#end
#end
#end
#end
 
## AND together the filter conditions of the WHERE clause
#macro( whereClause $Conds )
#foreach( $cond in $Conds )
#if( $velocityCount > 1 )AND #end
 $cond.Expr
#end
#end
 
## generic insert-select statement
#set( $tgt = $Target )
INSERT INTO $tgt.Name (
#insertClause ( $Columns )
SELECT 
#selectClause ( $Columns )
FROM 
#fromClause ( $Sources $Conditions )
WHERE 
#whereClause ( $Conditions )

...

would produce the following SQL statement:

INSERT INTO call (
 cust_a, cust_b, type, length, date)
SELECT
 a.cust_id AS cust_a
 , b.cust_id AS cust_b
 , call_type::integer AS type
 , call_length::integer AS length
 , call_date::date AS date
FROM
 cdr c
JOIN customer a
 ON c.phone_a = a.phone_no
JOIN customer b
 ON c.phone_b = b.phone_no
WHERE
 c.phone_a IS NOT NULL

This fairly simple template can actually produce a lot of quite different (and much more complex) SQL statements, all following the same basic pattern (insert-select from a multiple-table join), which is probably one of the most frequent ones in ELT processes. Of course, in an ideal world, a runtime engine would also have zero footprint and zero overhead, and support multiple platforms and multiple SQL dialects...

[1] Stöhr, T.; Müller, R.; Rahm, E.: An Integrative and Uniform Model for Metadata Management in Data Warehousing Environments, 1999. 

[2] Apache Velocity, http://velocity.apache.org/