mmx metadata framework
...the DNA of your data
The MMX metadata framework is a lightweight implementation of the OMG Meta Object Facility built on relational database technology. The MMX framework
is based on three general concepts:
Metamodel | The MMX Metamodel provides a storage mechanism for various knowledge models. The data model underlying the metadata framework is more abstract in nature than metadata models in general; it consists of only a few abstract entities.
Access layer | Object-oriented methods and inheritance are exploited to derive the whole data access layer from a small set of primitives written in SQL. The MMX Metadata Framework provides several diverse methods of data access to fulfil different requirements.
Generic transformation | Many of the relationships between objects in a metadata model are too complex to be described by simple static relations. Instead, a universal data transformation concept is used, enabling the definition of transformations, mappings and transitions of any complexity.

Knowledge Management feat. Wiktionary

March 12, 2010 12:55 by marx

Wiktionary is about Knowledge Management.

Although the term itself has been around for ages, it would probably be hard to find two persons who would agree on what it stands for precisely. Knowledge management has come a long way, from huge hierarchical file systems full of text files of the 70's, to dedicated document management systems of the 80's, to enterprise portals, intranets and content management systems of the 90's. However, it's always been a balancing act between strengths and weaknesses in particular areas, to get the mix between collaborative, structural and navigational facets right.

As we see it, the two burning issues in building a knowledge management infrastructure are: how do we define and access the knowledge we want to manage, and how do we store the knowledge we have created and defined?

Regarding the first question, the keywords are collaborative effort in knowledge creation, and intuitive, effortless navigation during knowledge retrieval. In today's internet, one of the most successful technologies of the Web 2.0 era is Wikipedia, or more generally the wiki. It is arguably the easiest to use, most widely recognised and probably cheapest to build way of giving a huge number of very different people, located all over the world, efficient access to manage an unimaginably vast amount of complex and disparate information. So we found it good and put it to use.

One simple way to define knowledge management is this: it is about things (concepts, ideas, facts etc.) and the relationships between them. In today's internet-based world, most (or at least a big share) of the data, facts and figures we will ever need are freely available to us, anytime, anywhere. So it is not about the existence of data or access to it; it is about navigating and finding it. The relationships are as important as, and sometimes even more important than, the related items themselves. More than that, relationships tend to carry information of their own, which might be even more significant than the information carried by the related items.

Which brings us to the semantics (meaning) of the relationships. In Wikipedia (and in the Internet in general) the links carry only one universal meaning: we can navigate from here to there. A human being clicking on a link has to guess the meaning and significance of the link, and he/she does this by using a combination of intuition, experience and creativity. However, this is a pretty limited and inefficient way to associate things to each other. Adding semantics to relationships enables us to understand why and how various ideas, concepts, topics and terms are related. Some very obvious examples: 'synonym', 'antonym', 'part of', 'previous version', 'owner', 'creator'. The mindshift towards technologies with more semantically 'rich' relations is visible in the evolution from classifications to ontologies, from XML to RDF etc.

Finally, simply by enumerating things and the relationships between them we have created a model, which forces us to think 'properly': we only define concepts and ideas that are meaningful in our domain of interest, and we only define relationships that are actually allowed and possible between those concepts and ideas. A model validates all our proceedings and forces us to 'do the right things'. Wiktionary employs this approach as the cornerstone of its technology; in fact, the metamodel at the base of Wiktionary houses a multitude of different models, enabling Wiktionary to support the management of knowledge in disparate subject domains simultaneously and even to have links between concepts belonging to different domains. So, regarding our second issue, the metamodel defines a structured storage mechanism for our knowledge repository.

In the data processing world, there has always been an ancient controversy between structured and unstructured data. Structured data is good for computers and can be managed and processed very efficiently. However, we humans tend to think in an unstructured way, and most of us feel very uncomfortable when forced to squeeze the way we do things into rigid, structured patterns. Wiktionary aims to bridge these two opposites by building on a well-defined underlying structure while providing a comfortable, unstructured user experience. These are two rather conflicting goals, and the approach we have taken - Wiktionary - is arguably the cheapest route to achieving both of them.



MMX Wiktionary: A Wiki With An Attitude

November 17, 2009 14:45 by kalle

MMX Wiktionary is a web-based collaborative application on top of the MMX Metadata Framework that provides a semantic, wiki-like user interface for metadata creation and management. The main creative idea behind MMX Wiktionary is a structured, metamodel-driven, universal metadata repository combined with a wiki user interface. This combination allows users to see and feel complicated metadata structures as conventional pages, without losing the formalization required by the defined metamodel. At the same time, there are no restrictions on using Wiktionary for loosely formalized content creation, such as document management, with a predefined open schema/metamodel approach when needed. While it may seem easier to start without modeling, we do not see that as promising from an organizational metadata perspective. Our moderately modeled approach brings guided metadata creation to every end user in an intuitive and simplified form, and we do not sacrifice the semantics encoded in the metamodel on the journey towards simplification and usability. Content-dependent classification of pages, hierarchy management, and the extraction and linking of named relations and properties while writing text are some examples of this mashup of usability and semantics.

The editor user interface is one of the biggest challenges in our wiki initiative. To avoid the usual wiki markup mess we use a 'wysiwyg' editor for rich content formatting and directed metadata creation. The editor is meant for end users who have grown up with a point-and-click editing style and do not know or remember how text creation was 'programmed' in a WordStar or WordPerfect environment, and who do not have extensive 'writing in Wikipedia' experience. The created text is parsed during saving and stored in the metadata repository in a structured form defined by the model. Rich formatting is stored in the text body using basic HTML markup, which is interpreted during reading and writing by the browser and the editor. Defined properties and created links are extracted from the text and stored in the metadata structures as property values or relations between objects. In addition to the saved text itself, the markup is the connection mechanism between the text and the stored properties and relations, giving a layout and presentation dimension to the captured metadata while preserving its structure and machine processability.
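
As a purely illustrative sketch of what this can boil down to at the repository level, saving a page might produce rows roughly like the ones below. The table and column names (md_object, md_property, md_relation and so on) are hypothetical and are not the actual MMX schema; they merely show the page stored as an object, an extracted property stored as a property value, and an extracted link stored as a typed relation.

-- the saved page as a metadata object (hypothetical schema, for illustration only)
INSERT INTO md_object (object_id, object_type, name, text_body)
VALUES (1001, 'Page', 'Customer', '<p>A <b>Customer</b> is a party that ...</p>');

-- a property value extracted from the page text
INSERT INTO md_property (object_id, property_type, value)
VALUES (1001, 'Status', 'Draft');

-- a named relation extracted from a link in the page text
INSERT INTO md_relation (source_object_id, target_object_id, relation_type)
VALUES (1001, 1002, 'synonym');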

Some of the keywords and topics in our Wiktionary initiative that keep us busy:

  • wiki style ui and wysiwyg editor
  • usability and semantics, integration of user interface and metadata
  • community driven content tagging for business glossary creation
  • page templates and metadata driven layout
  • history and versioning
  • discussion forum and commentaries
  • import and export


XDTL: Template-Based SQL Generation

November 6, 2009 22:21 by marx

SQL is and probably remains the main workhorse behind any ETL (and especially the ELT flavour of ETL) tool. Automating SQL generation has arguably always been the biggest obstacle to building an ideal ETL tool, i.e. one that is completely metadata-driven, has a small footprint, supports multiple platforms on a single code base... and, naturally, is capable of generating complex SQL in an easy and flexible manner, with no rocket scientists required nearby. While SQL stands for Structured Query Language, ironically the language itself is not too well 'structured', and the abundance of vendor dialects and extensions does not help either.

Attempts to build an SQL generator supporting the full feature set of the SQL language have generally fallen into one of two camps: one trying to create a graphical click-and-pick interface that encompasses the syntax of every single SQL construct, the other designing an even higher-level language or model to describe SQL itself, a kind of meta-SQL. The first approach usually ends up limited to simple SQL statements, appropriate mostly for SELECTs while struggling with UPDATEs and INSERTs, and tied to a single vendor dialect.

The second approach would drown in the complexity of SQL itself. In theory, one could decompose all SQL statements into a series of binary expressions, store them away, and (re)assemble them into SQL statements again as needed, driven by the syntax of a particular SQL dialect. However, this approach usually fails to produce something usable, mostly because SQL is too loosely defined (considering all those vendors and dialects), and trying to cover everything just results in a system too cumbersome for anyone to use. The result would probably be an order of magnitude more complex to use than just hand-coding SQL statements, even with several parallel code bases. And that is exactly what the developers would do: invent a method to bypass the abstraction layer and hand-code SQL directly.

Enter Template-Based SQL Generation. Based on our experience (and tons of ETL code written), we have extracted a set of SQL 'patterns' common to ETL (ELT) tasks. The patterns are converted into templates for processing by a template engine (e.g. Apache Velocity), each one realizing a separate SQL fragment, a full SQL statement or a complete sequence of commands implementing a complex process. The template engine merges patterns and mappings into executable SQL statements, so instead of going as deep as full decomposition we only separate out the mapping (structure) and template (process) parts of SQL. This limits us to a set of predefined templates, but anyone can add new templates or customize the existing ones.

The important thing here is that templates are generic and can be used with multiple different mappings/data structures. The mappings are generic as well and can be used in multiple different patterns/templates. The template engine 'instantiates' mappings and templates to create executable SQL statement 'instances', which brings us closer to an OO mindset. The number of tables joined, the number of columns selected, the number of WHERE conditions etc. are arbitrary and driven by the contents of the mappings alone, i.e. well-designed templates are transparent to the level of complexity of the mappings. The same template will produce quite different SQL statements in response to minor changes in the mappings.

As an example, a 'basic' template-driven INSERT..SELECT statement might look like this:

INSERT INTO network (
    caller
    ,receiver
    ,calls_no
)
SELECT
    c.cust_a AS caller
    ,c.cust_b AS receiver
    ,c.calls AS calls_no
FROM
    call c
    LEFT OUTER JOIN network n ON n.network_id = c.network_id
WHERE
    ...
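
For illustration, a trimmed-down Velocity template capable of producing a statement of this shape might look roughly like the sketch below. The $mapping object and its accessors (target, columns, source, alias, joins, filter) are hypothetical placeholders for whatever structure the mappings are loaded into, not the actual XDTL interface.

## hypothetical mapping structure: target table, source table and alias, columns, joins, filter
INSERT INTO $mapping.target (
#foreach( $col in $mapping.columns )
    #if( $foreach.index > 0 ),#end$col.name
#end
)
SELECT
#foreach( $col in $mapping.columns )
    #if( $foreach.index > 0 ),#end$col.expression AS $col.name
#end
FROM
    $mapping.source $mapping.alias
#foreach( $join in $mapping.joins )
    $join.type JOIN $join.table ON $join.condition
#end
#if( $mapping.filter )
WHERE
    $mapping.filter
#end

The same mapping could just as well be fed into a different template (an UPDATE..FROM pattern, say), which is exactly where the separation of mappings and templates pays off.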

Indicating that three consecutive mappings are actually to be treated as one complex statement with subqueries would change the generated SQL to:

INSERT INTO network (
    caller
    ,receiver
    ,calls_no
)
SELECT
    c.cust_a AS caller
    ,c.cust_b AS receiver
    ,c.calls AS calls_no
FROM
    (SELECT DISTINCT
        a.cust_id AS cust_a
        ,b.cust_id AS cust_b
        ,c.call_type::integer AS type
        ,c.call_length::integer AS length
        ,c.call_date::date AS date
    FROM
        (SELECT DISTINCT
            r.call_type::integer AS call_type
            ,r.call_length::integer AS call_length
            ,r.call_date::date AS call_date
        FROM
            raw_cdr r
            ...

On the other hand, we might prefer cascading INSERTs through temporary tables for performance reasons, which would morph the SQL into:

SELECT DISTINCT
    r.call_type::integer AS call_type
    ,r.call_length::integer AS call_length
    ,r.call_date::date AS call_date
INTO TEMP TABLE cdr
FROM
    raw_cdr r
WHERE
    ...
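
The rest of the cascade would then stage the next intermediate result into another temporary table and finish with the plain INSERT..SELECT from the first example. A hedged sketch of that middle step, assuming the intermediate table is the call table referenced earlier and with the joins elided just as above, could be:

SELECT DISTINCT
    a.cust_id AS cust_a
    ,b.cust_id AS cust_b
    ,c.call_type AS type
    ,c.call_length AS length
    ,c.call_date AS date
INTO TEMP TABLE call
FROM
    cdr c
    ...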

Selecting Oracle as the target platform would switch the same template over to Oracle syntax producing:

CREATE TABLE cdr AS
SELECT DISTINCT
    r.call_type AS call_type
    ,r.call_length AS call_length
    ,r.call_date AS call_date
FROM
    raw_cdr r
WHERE
    ...
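
Inside a template, this kind of platform switch can be handled with a simple conditional. A minimal sketch, assuming a hypothetical $target variable carrying the platform name (not necessarily how XDTL actually passes it in):

#if( $target == "oracle" )
CREATE TABLE cdr AS
#end
SELECT DISTINCT
    ...
#if( $target != "oracle" )
INTO TEMP TABLE cdr
#end
FROM
    raw_cdr r
WHERE
    ...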

To accomplish all (or at least a lot) of this we have (so far) assembled two template 'libraries'. The MMX XDTL Basic SQL Library covers a wide range of 'building blocks' for implementing complex data processing command chains: basic INSERT..SELECT, complex INSERT..SELECT with subqueries, cascaded (staged) INSERT..SELECT, UPDATE..FROM etc. The MMX XDTL Basic ELT Library includes more complex multi-step patterns used in typical ELT scenarios focusing on single-table synchronisation: Full Replace, Incremental Load, Upsert, History Load etc. These pattern libraries serve as reference templates and are easily customized to fit the unique characteristics of a specific use case.
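
As a flavour of what such a pattern expands into, a hand-written equivalent of a two-step Upsert might look like the sketch below. The customer and customer_stage tables and their columns are made up for illustration, and the actual library template is of course parameterised by mappings rather than hard-coded.

-- step 1: update target rows that already exist in the staging table
UPDATE customer d
SET
    name = s.name
    ,segment = s.segment
FROM
    customer_stage s
WHERE
    d.customer_id = s.customer_id;

-- step 2: insert staging rows that do not yet exist in the target
INSERT INTO customer (
    customer_id
    ,name
    ,segment
)
SELECT
    s.customer_id
    ,s.name
    ,s.segment
FROM
    customer_stage s
    LEFT OUTER JOIN customer d ON d.customer_id = s.customer_id
WHERE
    d.customer_id IS NULL;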