Introduction to ODI 12c

Data Integration earlier referred to the architecture used for loading Enterprise Data Warehouse systems, but it now also covers data movement, data synchronization, data quality, data management, and data services.

Oracle Data Integrator (ODI 12c) is a heterogeneous Data Integration (E-LT) tool that is capable of:

      -High-volume batch processing with optimal performance
      -Event-driven or trickle-feed integration
      -SOA-based integration

The tool supports a variety of technologies, ranging from all major RDBMSs to legacy applications.

It is an Extract, Load, and Transform (ELT) tool (in contrast with the more common ETL approach) produced by Oracle, offering a graphical environment to build, manage, and maintain data integration processes in business intelligence systems.
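For example, in an ELT flow the extract step lands the source rows on the target server, and the transformation is then expressed as set-based SQL that runs inside the target database itself rather than in a separate transformation engine. A minimal sketch of the idea, with purely illustrative table and column names (not taken from any generated ODI code):

```sql
-- ELT style: source rows are first loaded "as is" into a staging table on the
-- target database, then transformed there with a single set-based statement.
INSERT INTO dw_sales (customer_id, order_day, total_amount)
SELECT s.cust_id,
       TRUNC(s.order_ts),           -- transformation runs inside the target DB
       s.qty * s.unit_price
FROM   stg_sales s                   -- staging table filled by the load step
WHERE  s.qty > 0;
```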

Features of ODI 12c (12.1.2):

      -Declarative Flow-Based User Interface
      -Reusable Mappings
      -Multiple Target Support
      -Step-by-Step Debugger
      -Runtime Performance Enhancements
      -GoldenGate Integration Improvements
      -Standalone Agent Management with WebLogic Management Framework
      -Integration with OPSS Enterprise Roles
      -XML Improvements
      -Oracle Warehouse Builder Integration
      -Unique Repository IDs
      -Oracle Warehouse Builder to Oracle Data Integrator Migration Utility

Learn about these features in detail in the Oracle documentation.

Oracle Data Integration Concepts

1. Introduction to Declarative Design:- Declarative Design in Oracle Data Integrator uses a relational approach to declare, in the form of a mapping, the declarative rules for a data integration task, which includes the designation of sources, targets, and transformations. There are four types of declarative rules: mappings, joins, filters, and constraints.

— A Mapping is a business rule implemented as an SQL expression. It is a transformation rule that maps source attributes (or fields) onto one of the target attributes.
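For instance, a mapping expression for a hypothetical target attribute CUSTOMER_NAME could combine two source attributes (all names here are illustrative):

```sql
-- Hypothetical mapping expression typed against a target attribute:
UPPER(SRC_CUSTOMER.FIRST_NAME) || ' ' || UPPER(SRC_CUSTOMER.LAST_NAME)
```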

— A Join operation links records in several data sets, such as tables or files. Joins are used to link multiple sources to the target.
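A join is declared as a condition between attributes of the data sets being linked; a hypothetical example with illustrative names:

```sql
-- Hypothetical join condition between two source data stores:
SRC_ORDERS.CUSTOMER_ID = SRC_CUSTOMER.CUSTOMER_ID
```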

— A Filter is an expression applied to the attributes of a source data set. Only the records matching this filter are processed by the data flow.
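A filter is likewise just a boolean expression on source attributes; a hypothetical example with illustrative names:

```sql
-- Hypothetical filter expression; only rows satisfying it flow onward:
SRC_ORDERS.STATUS = 'SHIPPED' AND SRC_ORDERS.ORDER_DATE >= DATE '2013-01-01'
```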

— A Constraint validates the data in a given data set and preserves the integrity of the data in a model. Constraints on the target are used to check the validity of the data before integration into the target.
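A condition-type constraint can also be written as a boolean check on target attributes; a hypothetical example with illustrative names:

```sql
-- Hypothetical target constraint; rows failing this check can be diverted to
-- an error table instead of being integrated into the target:
TRG_CUSTOMER.EMAIL IS NOT NULL AND TRG_CUSTOMER.CREDIT_LIMIT >= 0
```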

2. Introduction to Knowledge Modules:- Knowledge Modules (KMs) implement "how" the integration processes occur. A KM is a code template, independent of the declarative rules that need to be processed. There are six types of Knowledge Modules, and each refers to a specific integration task.

a). IKM :- Integration Knowledge Module, used to integrate data into the target system using specific strategies (insert/update, slowly changing dimensions). These KMs are used in Mappings.
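As an illustration, an insert/update (incremental) strategy typically boils down to set-based DML such as a MERGE against the target table. This is only a sketch of the idea, with illustrative names, not actual IKM-generated code:

```sql
-- Sketch of the kind of statement an insert/update strategy produces;
-- the I$ (integration/flow) table and all other names are illustrative.
MERGE INTO trg_customer t
USING i$_customer i
ON (t.customer_id = i.customer_id)
WHEN MATCHED THEN
  UPDATE SET t.name = i.name, t.email = i.email
WHEN NOT MATCHED THEN
  INSERT (customer_id, name, email)
  VALUES (i.customer_id, i.name, i.email);
```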

b). LKM :- Loading Knowledge Module, used to load data from one system to another using system-optimized methods. These KMs are also used in Mappings.
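Conceptually, a loading step stages the needed remote source rows close to the target, commonly in a C$ work table, so that later steps can transform them with set-based SQL. A hedged sketch, assuming an Oracle staging area and a database link (all names are illustrative):

```sql
-- Sketch only: create and fill a C$ staging (work) table with the source
-- rows required by the mapping; the table names and DB link are illustrative.
CREATE TABLE c$_src_orders AS
SELECT order_id, customer_id, qty, unit_price, order_date
FROM   src_orders@remote_source_db
WHERE  order_date >= DATE '2013-01-01';
```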

c). CKM :- Check Knowledge Module, used to check data consistency, i.e. that constraints on sources and targets are not violated.
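In practice, a check step usually isolates the offending rows into an E$ error table; a hedged sketch with illustrative names:

```sql
-- Sketch only: copy rows violating a declared constraint into an error table
-- so they can be reviewed or recycled; all names are illustrative.
INSERT INTO e$_customer (err_type, err_mess, customer_id, email)
SELECT 'F', 'EMAIL must not be null', customer_id, email
FROM   i$_customer
WHERE  email IS NULL;
```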

d). RKM :- Reverse-engineering Knowledge Module, used to reverse-engineer metadata from heterogeneous systems into ODI.
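A customized reverse-engineering step generally works by querying the source system's data dictionary and loading the result into the ODI repository. A hedged sketch of the dictionary side, using Oracle's ALL_TAB_COLUMNS view and an illustrative schema name:

```sql
-- Sketch only: the kind of dictionary query used to discover table and
-- column metadata on an Oracle source; the schema name is illustrative.
SELECT table_name, column_name, data_type, data_length, nullable
FROM   all_tab_columns
WHERE  owner = 'SALES_APP'
ORDER  BY table_name, column_id;
```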

e). JKM :- Journalizing Knowledge Module, used to perform change data capture (CDC) on a given system.
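Trigger-based (simple) journalizing usually records each changed row's key in a J$ journal table. A hedged sketch of the idea, with illustrative trigger, table, and column names (real JKMs generate more elaborate objects):

```sql
-- Sketch only: capture changed keys in a journal table via a trigger.
CREATE OR REPLACE TRIGGER t$_src_customer
AFTER INSERT OR UPDATE OR DELETE ON src_customer
FOR EACH ROW
DECLARE
  v_flag CHAR(1);
BEGIN
  IF DELETING THEN
    v_flag := 'D';   -- deletion
  ELSE
    v_flag := 'I';   -- insert or update
  END IF;
  INSERT INTO j$_src_customer (jrn_flag, jrn_date, customer_id)
  VALUES (v_flag, SYSDATE, NVL(:NEW.customer_id, :OLD.customer_id));
END;
/
```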

f). SKM :- Service Knowledge Module, used to expose data in the form of web services.
