---
content_type: "doc_shard"
title: "Meta-data processor"
chapter: "meta"
label: "meta"
version: 2.0
date: 27.08.2015
published: true
---

The main and most distinguishing feature of the DataForge framework is the concept of data analysis as a meta-data processor. First of all, one needs to define the main terms:

* **Data**. Any information from an experiment or simulation provided by the user. The data format is not governed by DataForge, so it could be presented in any form. The important point is that data is immutable by default: the program may create modified copies of it (which is not recommended), but it may not modify the initial input in any way.
* **Meta-data**. Meta-data, contrary to data, has a fixed internal representation provided by the Meta object. Meta is a simple [tree-like object](#meta_structure) that conveniently stores values and child Meta nodes. Meta-data is either provided by the user or generated during the analysis process. Within the analysis process Meta is immutable, but for some purposes mutable meta nodes could be used during analysis configuration and result representation (see [Configuration](#configuration)). A minimal sketch of such a tree node is given after the list below.

The fundamental point of the whole DataForge philosophy is that every external input is either data or meta-data. This point could seem dull at first glance, but in fact it has a number of very important consequences:

* *No scripts*. DataForge encourages the user not to manipulate data with imperative code. Any procedure to be performed on data should be presented either as a hard-coded meta-data processor rule (a function that takes data and meta-data and produces some output without any additional external information; see the second sketch below) or as a declarative process definition in the form of meta-data. Since most data analysis nowadays is done with various scripts, the loss of scripting capability could seem a tremendous blow to the framework's functionality, but in fact everything one could do with a script one can also do with a declaration and declaration-processing rules. Moreover, a purely declarative description allows for easy error checking and for scaling the analysis (via parallelism).
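
To make the tree-like, immutable nature of Meta more concrete, here is a minimal sketch in Java. The class, its builder, and all method names are hypothetical illustrations written for this document, not the actual DataForge API.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A minimal illustration of an immutable tree-like meta-data node:
 * each node holds named values and named child nodes.
 * Hypothetical sketch, not the real DataForge Meta class.
 */
public final class Meta {
    private final Map<String, Object> values;
    private final Map<String, Meta> nodes;

    private Meta(Map<String, Object> values, Map<String, Meta> nodes) {
        // Defensive copies make the node immutable after construction.
        this.values = Collections.unmodifiableMap(new LinkedHashMap<>(values));
        this.nodes = Collections.unmodifiableMap(new LinkedHashMap<>(nodes));
    }

    public Object getValue(String name) {
        return values.get(name);
    }

    public Meta getNode(String name) {
        return nodes.get(name);
    }

    /** A mutable builder, used only while the meta is being assembled. */
    public static final class Builder {
        private final Map<String, Object> values = new LinkedHashMap<>();
        private final Map<String, Meta> nodes = new LinkedHashMap<>();

        public Builder putValue(String name, Object value) {
            values.put(name, value);
            return this;
        }

        public Builder putNode(String name, Meta node) {
            nodes.put(name, node);
            return this;
        }

        public Meta build() {
            return new Meta(values, nodes);
        }
    }
}
```

The builder mirrors the rule stated above: meta nodes may be changeable while a configuration is being assembled, but once `build()` is called the resulting tree is frozen and can be shared safely across the analysis.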
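
The *No scripts* point can likewise be illustrated by a sketch of a meta-data processor rule: a pure function that receives data plus a Meta describing what to do, and produces output without consulting any other external state. The `ProcessorRule` interface, the `"action"` and `"factor"` keys, and the scaling example are all assumptions made for illustration, built on the hypothetical `Meta` sketch above.

```java
/**
 * A meta-data processor rule: a pure function of data and meta-data.
 * Hypothetical interface, not part of the actual DataForge API.
 */
interface ProcessorRule<T, R> {
    R apply(T data, Meta meta);
}

class RuleExample {
    public static void main(String[] args) {
        // A declarative description of the procedure: scale the input by a factor.
        Meta meta = new Meta.Builder()
                .putValue("action", "scale")
                .putValue("factor", 2.0)
                .build();

        // The rule uses only its two arguments; there is no hidden external state,
        // which is what makes declarative error checking and parallelism possible.
        ProcessorRule<double[], double[]> scale = (data, m) -> {
            double factor = ((Number) m.getValue("factor")).doubleValue();
            double[] result = new double[data.length];
            for (int i = 0; i < data.length; i++) {
                result[i] = data[i] * factor; // the input array is left untouched
            }
            return result;
        };

        double[] out = scale.apply(new double[]{1.0, 2.0, 3.0}, meta);
        System.out.println(java.util.Arrays.toString(out)); // [2.0, 4.0, 6.0]
    }
}
```

Note that the rule returns a new array rather than modifying its input, in line with the immutability of data described at the top of this section.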