Thursday, November 27, 2014

Linux Tutorial 1

Introduction

This tutorial is designed for beginners who wish to learn the basics of shell scripting/programming, plus an introduction to power tools such as awk, sed, etc. It is not a help file or manual for the shell; while reading this tutorial you may find the manual quite useful (type man bash at the $ prompt to see the manual pages). The manual contains all the information you need, but it does not have many examples, and examples are what make an idea clear. For this reason, this tutorial concentrates on examples rather than on covering every feature of the shell.

Audience for this tutorial

I assume you have at least a working knowledge of Linux, i.e. basic commands such as how to create, copy, and remove files/directories, how to use an editor like vi or mcedit, and how to log in to your system. No programming language experience is expected. If you have access to Linux, this tutorial will provide you with an easy-to-follow introduction to shell scripting.

What's different about this tutorial

Many other tutorials and books on Linux shell scripting are either too basic or skip important intermediate steps. This tutorial tries to maintain the balance between the two. It covers many real-life, modern examples of shell scripting that are missed by most other tutorials/documents/books. I have used a hands-on approach in this tutorial. The idea is simple: "do it yourself" or "learn by doing", i.e. trying things yourself is the best way to learn, so examples are presented as complete working shell scripts that can be typed in and executed.

What is Linux?

  • Free:
    Linux is free. First, it's available free of cost (you don't have to pay to use this OS; other OSes like MS-Windows or commercial versions of Unix may cost you money).
    Second, free means the freedom to use Linux: when you get Linux you also get its source code, so you can modify the OS (yes, the OS! the Linux OS!!) according to your taste.
    It also offers many free software applications, programming languages, and development tools. Most of these programs/software, and the OS itself, are released under the GNU General Public License (GPL).
  • Unix Like:
    Unix is an operating system that is now more than four decades old.
    In 1964 an OS called MULTICS (Multiplexed Information and Computing Service) was developed by Bell Labs, MIT & General Electric, but it was not a success.

    Then Ken Thompson (a systems programmer at Bell Labs) thought he could do better (in 1991, Linus Torvalds felt he could do better than Minix; history repeats itself). So Ken Thompson wrote an OS, an assembler and a few utilities on a PDP-7 computer; this is known as Unix (1969). But that version of Unix was not portable, so Unix was rewritten in C. Because Unix is written in C it is portable, which means Unix can run on a variety of hardware platforms (1970-71).
    At the same time Unix started to be distributed to universities, where students and professors carried out more experiments on it. Because of this Unix gained more popularity, and several new features were added to it. Then the US government & military used Unix for their inter-network (now known as the INTERNET).
    So Unix is a multi-user, multitasking, Internet-aware network OS. Linux has almost the same Unix-like features, for e.g.:
    • Like Unix, Linux is also written in C.
    • Like Unix, Linux is also a multi-user/multitasking/32- or 64-bit network OS.
    • Like Unix, Linux is rich in development/programming environments.
    • Like Unix, Linux runs on different hardware platforms, for e.g.:
      • Intel x86 processor (Celeron/PII/PIII/PIV/Old-Pentiums/80386/80486)
      • Macintosh PCs 
      • Cyrix processor 
      • AMD processor 
      • Sun Microsystems Sparc processor
      • Alpha Processor (Compaq)
  • Open Source:  Linux is developed under the GNU General Public License. This is sometimes referred to as "copyleft", to distinguish it from a copyright.
    Under the GPL the source code is available to anyone who wants it, and it can be freely modified, developed, and so forth. There are only a few restrictions on the use of the code. If you make changes to the programs, you have to make those changes available to everyone. This basically means you cannot take the Linux source code, make a few changes, and then sell your modified version without making the source code available. 
  • Network operating system:
    Linux comes with built-in networking support (TCP/IP and related protocols), so it can be used as a network client as well as a network server.
Common vi editor command list
For this purpose : use this vi command syntax
To insert new text : esc + i (you have to press the 'Escape' key, then 'i')
To save the file : esc + : + w (press the 'Escape' key, then 'colon', and finally 'w')
To save the file with a file name (save as) : esc + : + w "filename"
To quit the vi editor : esc + : + q
To quit without saving : esc + : + q!
To save and quit the vi editor : esc + : + wq
To search for a specified word in the forward direction : esc + /word (press the 'Escape' key, then type /word-to-find; for e.g. to find the word 'shri', type /shri)
To continue with the search : n
To search for a specified word in the backward direction : esc + ?word (press the 'Escape' key, then type ?word-to-find)
To copy the line where the cursor is located : esc + yy
To paste the text just deleted or copied at the cursor : esc + p
To delete the entire line where the cursor is located : esc + dd
To delete a word from the cursor position : esc + dw
To find all occurrences of a given word and replace them globally without confirmation : esc + :%s/word-to-find/word-to-replace/g (for e.g. :%s/mumbai/pune/g replaces the word "mumbai" with "pune")
To find all occurrences of a given word and replace them globally with confirmation : esc + :%s/word-to-find/word-to-replace/gc
To run a shell command like ls, cp or date etc. within vi : esc + :!shell-command (for e.g. :!pwd)
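
As a quick hands-on exercise, here is a minimal sketch of creating a small script named 'first' with vi, using only the commands from the table above (the file name and its contents are just examples, and 'first' is used again in the next section):
$ vi first
(press i to enter insert mode, then type the two lines below)
#!/bin/sh
echo "Hello, this is my first shell script"
(press the 'Escape' key, then type :wq to save and quit)
$ /bin/sh first
Hello, this is my first shell script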

How the Shell Locates a File
To run a script, you need to be in the same directory where you created it; if you are in a different directory, the script will not run (because of the PATH settings). For example, suppose your home directory is /home/vivek (use $ pwd to see your current working directory) and you created a script called 'first' there. After creating this script you moved to some other directory, let's say /home/vivek/Letters/Personal. Now if you try to execute your script by name it will not run, since the script 'first' is in the /home/vivek directory. To overcome this problem there are two ways. First, specify the complete path of your script whenever you want to run it from another directory, by giving a command like the following:
$ /bin/sh   /home/vivek/first 

But now every time you work in another directory you have to give all these details, which takes time, and you have to remember the complete path. 
There is another way. Notice that all normal programs (in the form of executable files) are marked as executable and can be run directly from the prompt, from any directory (to see these executables, give the command $ ls -l /bin), simply by typing commands like
$ bc
$ cc myprg.c
$ cal
etc. How is this possible? All these executable files are installed in a directory called /bin, and /bin is listed in your PATH setting. When you type the name of a command at the $ prompt, the shell first looks for that command among its internal commands (built into the shell itself and always available to execute); if it is found as an internal command, the shell executes it. If not, the shell looks at the PATH setting and tries to find the requested command's executable file in each of the directories mentioned in PATH (note that the current directory is searched only if it appears in PATH, which is why a script in the current directory is usually run as ./scriptname); if found, the shell executes it, otherwise it gives the message "bash: xxxx: command not found". Still one question remains: can I run my shell script in the same way as these executables? Yes, you can. For this purpose, create a bin directory in your home directory and then copy your tested version of the shell script to this bin directory. After this you can run your script as an executable file, without using a command like
$ /bin/sh   /home/vivek/first 
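
Before doing that, as a quick check of the lookup order described above, you can ask the shell how it resolves a command name; a minimal sketch (the exact output depends on your system):
$ type cd
cd is a shell builtin
$ type ls
ls is /bin/ls
$ echo $PATH
/usr/local/bin:/usr/bin:/bin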

Commands to create your own bin directory:
$ cd
$ mkdir bin
$ cp first ~/bin
$ first
Each of the above commands can be explained as follows:
$ cd : Go to your home directory.
$ mkdir bin : Create the bin directory; your own shell scripts are installed here so that a script can be run as an independent program or accessed from any directory.
$ cp first ~/bin : Copy your script 'first' to your bin directory.
$ first : Test whether the script runs or not (it will run).
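Note: running the script by plain name works only if ~/bin is listed in your PATH. On many distributions it already is; if it is not, here is a minimal sketch of adding it for bash (the startup file name ~/.bash_profile is an assumption; your system may use ~/.profile or ~/.bashrc instead):
$ echo 'PATH=$PATH:$HOME/bin' >> ~/.bash_profile     # make the change permanent for future logins
$ export PATH=$PATH:$HOME/bin                        # apply it to the current shell as well
$ echo $PATH                                         # verify that $HOME/bin now appears at the end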
Answers to the Variables section exercises
Q.1. Define variable x with value 10 and print it on screen.
$ x=10
$ echo $x

Q.2. Define variable xn with value Rani and print it on screen.
$ xn=Rani
$ echo $xn

Q.3. Print the sum of two numbers, let's say 6 and 3.
$ echo 6 + 3
This will print "6 + 3", not the sum 9. To do sums or math operations in the shell, use expr. The syntax is as follows:
Syntax:
 expr   op1   operator   op2
Where op1 and op2 are any integer numbers (numbers without a decimal point) and operator can be one of:
+ Addition
- Subtraction
/ Division
% Modulus, to find the remainder. For e.g. 20 / 3 = 6; to find the remainder, 20 % 3 = 2 (remember, this is integer calculation).
\* Multiplication
$ expr 6 + 3 
Now it will print the sum 9. But
$ expr 6+3
will not work, because a space is required between the numbers and the operator (see Shell Arithmetic).
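Also note why multiplication is written as \* in the table above: a bare * would be expanded by the shell into the names of the files in the current directory before expr ever sees it, and expr would then usually complain (the exact error depends on what files are present). The escaped form always works:
$ expr 6 \* 3
18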

Q.4. Define two variables x=20 and y=5, then print the division of x by y (i.e. x/y).
$ x=20
$ y=5
$ expr $x / $y


Q.5. Modify the above and store the division of x by y in a variable called z.
$ x=20
$ y=5
$ z=`expr $x / $y`
$ echo $z 
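
As a side note (this is bash-specific and not used further in this tutorial), the same calculation can be done without expr by using the shell's built-in arithmetic expansion $(( )):
$ x=20
$ y=5
$ z=$(( x / y ))
$ echo $z
4
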
Q.6. Point out the errors, if any, in the following script.
$ vi   variscript
#
#
# Script to test MY knowledge about variables!
#
myname=Vivek
myos   =  TroubleOS    -----> ERROR 1
myno=5
echo "My name is $myname"
echo "My os is $myos"

echo "My number is   myno,   can you see this number" 
 ----> ERROR 2
Read the following to understand ERROR 1 and ERROR 2:
ERROR 1: when you define a variable there must be no spaces on either side of the equals sign, so the line must be written as myos=TroubleOS.
ERROR 2: to print or access a user defined variable (UDV) you must use the following syntax
Syntax: 
$variablename
Define variables vech and n as follows:
$ vech=Bus
$ n=10

To print the contents of variable 'vech', type
$ echo $vech
It will print 'Bus'. To print the contents of variable 'n', type the command as follows:
$ echo $n
Caution: Do not try $ echo vech, as it will print 'vech' instead of its value 'Bus'; similarly, $ echo n will print 'n' instead of its value '10'. You must use $ followed by the variable name.

The following script should work now, after the bug fixes!

$ vi   variscript
#
#
# Script to test MY knowledge about variables!
#
myname=Vivek
myos=TroubleOS
myno=5
echo "My name is $myname"
echo "My os is $myos"
echo "My number is   $myno,   can you see this number"






Parameter substitution.
Now consider the following command:
$ echo `expr 6 + 3`
The part in backquotes, `expr 6 + 3`, is known as parameter substitution (also called command substitution). When a command is enclosed in backquotes, the command gets executed and its output replaces the backquoted text. Mostly this is used in conjunction with other commands. For e.g.
$ pwd
$ cp /mnt/cdrom/lsoft/samba*.rpm `pwd`
Now suppose we are working in a directory called "/home/vivek/soft/artical/linux/lsst" and I want to copy some Samba files from "/mnt/cdrom/lsoft" to my current working directory; then my command will be something like
$ cp /mnt/cdrom/lsoft/samba*.rpm /home/vivek/soft/artical/linux/lsst
Instead of giving the above command, I can give the command as follows:
$ cp /mnt/cdrom/lsoft/samba*.rpm `pwd`
Here the files are copied to the current working directory. Notice how the last argument, the parameter substitution `pwd`, expands itself to /home/vivek/soft/artical/linux/lsst. This saves my time.

Future Point: 
What is the difference between the following two commands?
$ cp /mnt/cdrom/lsoft/samba*.rpm `pwd`

AND

$ cp /mnt/cdrom/lsoft/samba*.rpm .

Try to note down the output of the following parameter substitutions:
$ echo "Today's date is `date`"
$ cal > menuchoice.temp.$$
$ dialog --backtitle "Linux Shell Tutorial" --title "Calendar" --infobox "`cat menuchoice.temp.$$`" 9 25 ; read
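
The same backquote mechanism can also be used to store a command's output in a shell variable for later use; a minimal sketch (the variable name 'now' is just an example):
$ now=`date`
$ echo "Today's date is $now"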








Thursday, November 20, 2014

ETL Informatica


Why do We need ETL Tools?

Think of GE: the company has over 100 years of history and a presence in almost all industries. Over these years the company's management systems have changed from bookkeeping to SAP. This was not a single-day transition. In the transition from bookkeeping to SAP, they used a wide array of technologies, ranging from mainframes to PCs, data storage ranging from flat files to relational databases, and programming languages ranging from COBOL to Java. This transformation resulted in different businesses, or to be precise different sub-businesses within a business, running different applications, on different hardware and different architectures. Technologies were introduced as and when they were invented, and as and when they were required.

This directly results in a scenario where the HR department of the company runs on Oracle Applications, Finance runs SAP, some part of the process chain is supported by mainframes, some data is stored in Oracle, some data on mainframes, some data in VSAM files, and the list goes on. If one day the company requires a consolidated report of assets, there are two ways.

First, completely manual: generate different reports from the different systems and integrate them by hand.
Second, fetch all the data from the different systems/applications, build a Data Warehouse, and generate reports as per the requirements.
Obviously the second approach is going to be the better one.

Now, fetching the data from the different systems, making it coherent, and loading it into a Data Warehouse requires some kind of extraction, cleansing, integration, and load. ETL stands for Extraction, Transformation & Load.

ETL tools provide the facility to extract data from different non-coherent systems, cleanse it, merge it, and load it into target systems.
Informatica Power Center Components
Informatica Power Center is not just a tool but an end-to-end data processing and data integration environment. It enables organizations to collect, centrally process, and redistribute data. It can be used just to integrate two different systems, like SAP and MQ Series, or to load data warehouses or Operational Data Stores (ODS). Informatica Power Center now also includes many add-on tools to report on the data being processed, the business rules applied, and the quality of data before and after processing.
To facilitate this Power Center is divided into different components:
Power Center Domain: As Informatica says, "The Power Center domain is the primary unit for management and administration within PowerCenter". Doesn't make much sense? Right... So here is a simpler version: the Power Center domain is the collection of all the servers required to support Power Center functionality. Each domain has gateway hosts (called domain servers). Whenever you want to use Power Center services, you send a request to a domain server; based on the request type, it redirects your request to one of the Power Center services.
Power Center Repository: Repository is nothing but a relational database which stores all the metadata created in Power Center. Whenever you develop mapping, session, workflow, execute them or do anything meaningful (literally), entries are made in the repository.
Integration Service: Integration Service does all the real job. It extracts data from sources, processes it as per the business logic and loads data to targets.
Repository Service: Repository Service is the one that understands content of the repository, fetches data from the repository and sends it back to the requesting components (mostly client tools and integration service)
Power Center Client Tools: The Power Center Client consists of multiple tools. They are used to manage users, define sources and targets, build mappings and mapplets with the transformation logic, and create workflows to run the mapping logic. The Power Center Client connects to the repository through the Repository Service to fetch details. It connects to the Integration Service to start workflows. So essentially client tools are used to code and give instructions to Power Center servers.
Power Center Administration Console: This is simply a web-based administration tool you can use to administer the PowerCenter installation.
There are some more not-so-essential-to-know components discussed below:
Web Services Hub: Web Services Hub exposes Power Center functionality to external clients through web services.
SAP BW Service: The SAP BW Service extracts data from and loads data to SAP BW.
Data Analyzer: Data Analyzer is like a reporting layer to perform analytics on data warehouse or ODS data.
Metadata Manager: Metadata Manager is a metadata management tool that you can use to browse and analyze metadata from disparate metadata repositories. It shows how the data is acquired, what business rules are applied and where data is populated in readable reports.
Power Center Repository Reports: Power Center Repository Reports are a set of prepackaged Data Analyzer reports and dashboards to help you analyze and manage Power Center metadata.



Informatica System Architecture


Informatica's ETL product, known as Informatica PowerCenter, consists of 3 main components.

1. Informatica PowerCenter Client Tools: These are the development tools installed at the developer's end. These tools enable a developer to

·         Define transformation process, known as mapping. (Designer)

·         Define run-time properties for a mapping, known as sessions (Workflow Manager)

·         Monitor execution of sessions (Workflow Monitor)
·         Manage repository, useful for administrators (Repository Manager)
·         Report Metadata (Metadata Reporter)
2. Informatica PowerCenter Repository: The repository is the heart of the Informatica tools. The repository is a kind of data inventory where all the data related to mappings, sources, targets, etc. is kept. This is the place where all the metadata for your application is stored. All the client tools and the Informatica Server fetch data from the repository. An Informatica client and server without a repository is the same as a PC without memory/hard disk, which has the ability to process data but no data to process. The repository can be treated as the backend of Informatica.
3. Informatica PowerCenter Server:
The server is where all the execution takes place. The server makes physical connections to sources/targets, fetches data, applies the transformations mentioned in the mapping, and loads the data into the target system.
This architecture is visually explained in diagram below:
 


Informatica Product Line

Informatica is a powerful ETL tool from Informatica Corporation, a leading provider of enterprise data integration and ETL software.
The important products provided by Informatica Corporation are listed below:
·         Power Center
·         Power Mart
·         Power Exchange
·         Power Center Connect
·         Power Channel
·         Metadata Exchange
·         Power Analyzer
·         Super Glue
Power Center & Power Mart: Power Mart is a departmental version of Informatica for building, deploying, and managing data warehouses and data marts. Power Center is used for corporate enterprise data warehouses, while Power Mart is used for departmental data warehouses such as data marts. Power Center supports global and networked repositories and can be connected to several sources. Power Mart supports a single repository and can be connected to fewer sources when compared to Power Center. Power Mart can grow into an enterprise implementation, and its code-less environment aids developer productivity.
Power Exchange: Informatica Power Exchange, as a standalone service or along with Power Center, helps organizations leverage data by avoiding manual coding of data extraction programs. Power Exchange supports batch, real-time, and changed data capture options for mainframe sources (DB2, VSAM, IMS, etc.), midrange sources (AS/400 DB2, etc.), relational databases (Oracle, SQL Server, DB2, etc.), and flat files on Unix, Linux, and Windows systems.
Power Center Connect: This is an add-on to Informatica Power Center. It helps to extract data and metadata from ERP systems like PeopleSoft, SAP, and Siebel, from messaging systems like IBM's MQSeries, and from other third-party applications.
Power Channel: This helps to transfer large amounts of encrypted and compressed data over LAN or WAN, through firewalls, transfer files over FTP, etc.
Meta Data Exchange: Metadata Exchange enables organizations to take advantage of the time and effort already invested in defining data structures within their IT environment when used with Power Center. For example, an organization may be using data modeling tools, such as Erwin, Embarcadero, Oracle Designer, or Sybase PowerDesigner, for developing data models. The functional and technical teams have typically spent much time and effort creating the data model's data structures (tables, columns, data types, procedures, functions, triggers, etc.). By using Metadata Exchange, these data structures can be imported into Power Center to identify source and target mappings, which leverages that time and effort. There is no need for the Informatica developer to create these data structures once again.
Power Analyzer: Power Analyzer provides organizations with reporting facilities. Power Analyzer makes accessing, analyzing, and sharing enterprise data simple and easily available to decision makers. Power Analyzer enables users to gain insight into business processes and develop business intelligence. With Power Analyzer, an organization can extract, filter, format, and analyze corporate information from data stored in a data warehouse, data mart, operational data store, or other data storage models. Power Analyzer works best with a dimensional data warehouse in a relational database. It can also run reports on data in any table in a relational database that does not conform to the dimensional model.
Super Glue: Super Glue is used for loading metadata into a centralized place from several sources. Reports can be run against Super Glue to analyze the metadata.
Note: This is not a complete tutorial on Informatica. We will add more tips and guidelines on Informatica in the near future, so please check back soon. To know more about Informatica, visit its official website, www.informatica.com.



Informatica Transformation Types


A transformation is a repository object that generates, modifies, or passes data. The Designer provides a set of transformations that perform specific functions. For example, an Aggregator transformation performs calculations on groups of data.

Transformations can be of two types:

Active Transformation: An active transformation can change the number of rows that pass through the transformation, change the transaction boundary, or change the row type. For example, Filter, Transaction Control, and Update Strategy are active transformations.
The key point is to note that Designer does not allow you to connect multiple active transformations or an active and a passive transformation to the same downstream transformation or transformation input group because the Integration Service may not be able to concatenate the rows passed by active transformations. However, Sequence Generator transformation(SGT) is an exception to this rule. A SGT does not receive data. It generates unique numeric values. As a result, the Integration Service does not encounter problems concatenating rows passed by a SGT and an active transformation.
Passive Transformation: A passive transformation does not change the number of rows that pass through it, maintains the transaction boundary, and maintains the row type.
The key point is to note that Designer allows you to connect multiple transformations to the same downstream transformation or transformation input group only if all transformations in the upstream branches are passive. The transformation that originates the branch can be active or passive.
Transformations can be connected or unconnected to the data flow.
Connected Transformation: Connected transformation is connected to other transformations or directly to target table in the mapping.
Un Connected Transformation: An unconnected transformation is not connected to other transformations in the mapping. It is called within another transformation, and returns a value to that transformation.



Informatica Transformations – List


Following are the list of Transformations available in Informatica:

·         Aggregator Transformation
·         Application Source Qualifier Transformation
·         Custom Transformation
·         Data Masking Transformation
·         Expression Transformation
·         External Procedure Transformation
·         Filter Transformation
·         HTTP Transformation
·         Input Transformation
·         Java Transformation
·         Joiner Transformation
·         Lookup Transformation
·         Normalizer Transformation
·         Output Transformation
·         Rank Transformation
·         Reusable Transformation
·         Router Transformation
·         Sequence Generator Transformation
·         Sorter Transformation
·         Source Qualifier Transformation
·         SQL Transformation
·         Stored Procedure Transformation
·         Transaction Control Transformation
·         Union Transformation
·         Unstructured Data Transformation
·         Update Strategy Transformation
·         XML Generator Transformation
·         XML Parser Transformation
·         XML Source Qualifier Transformation
·         Advanced External Procedure Transformation
·         External Transformation
In the following pages, we will explain all the above Informatica Transformations and their significances in the ETL process in detail.


Informatica Transformations


Aggregator Transformation
Aggregator transformation performs aggregate functions like average, sum, count, etc. on multiple rows or groups. The Integration Service performs these calculations as it reads, and stores the necessary group and row data in an aggregate cache. It is an Active & Connected transformation.
Difference between Aggregator and Expression transformation? The Expression transformation permits you to perform calculations on a row-by-row basis only, whereas in the Aggregator you can perform calculations on groups.
For example, an Aggregator transformation might have ports such as State, State_Count, Previous_State, and State_Counter.
Components: Aggregate Cache, Aggregate Expression, Group by port, Sorted input.
Aggregate Expressions: These are allowed only in Aggregator transformations. They can include conditional clauses and non-aggregate functions, and can also include one aggregate function nested inside another aggregate function.
Aggregate Functions: AVG, COUNT, FIRST, LAST, MAX, MEDIAN, MIN, PERCENTILE, STDDEV, SUM, VARIANCE
Application Source Qualifier Transformation
Represents the rows that the Integration Service reads from an application, such as an ERP source, when it runs a session. It is an Active & Connected transformation.
Custom Transformation
It works with procedures you create outside the Designer interface to extend PowerCenter functionality; it calls a procedure from a shared library or DLL. It is an active/passive & connected transformation.
You can use a Custom transformation to create transformations that require multiple input groups and multiple output groups.
Custom transformation allows you to develop the transformation logic in a procedure. Some of the PowerCenter transformations are built using the Custom transformation. Rules that apply to Custom transformations, such as blocking rules, also apply to transformations built using Custom transformations. PowerCenter provides two sets of functions called generated and API functions. The Integration Service uses generated functions to interface with the procedure. When you create a Custom transformation and generate the source code files, the Designer includes the generated functions in the files. Use the API functions in the procedure code to develop the transformation logic.
Difference between Custom and External Procedure Transformation? In a Custom transformation, input and output functions occur separately. The Integration Service passes the input data to the procedure using an input function. The output function is a separate function that you must enter in the procedure code to pass output data to the Integration Service. In contrast, in the External Procedure transformation, an external procedure function does both input and output, and its parameters consist of all the ports of the transformation.
Data Masking Transformation
Passive & Connected. It is used to change sensitive production data to realistic test data for non production environments. It creates masked data for development, testing, training and data mining. Data relationship and referential integrity are maintained in the masked data.
Example: It returns a masked value that has a realistic format for an SSN, credit card number, birth date, phone number, etc., but it is not a valid value.
Masking types: Key Masking, Random Masking, Expression Masking, Special Mask format. Default is no masking.
Expression Transformation
Passive & Connected. It is used to perform non-aggregate calculations, i.e. to calculate values in a single row. Example: to calculate the discount for each product, to concatenate first and last names, or to convert a date to a string field.
You can create an Expression transformation in the Transformation Developer or the Mapping Designer.
Components: Transformation, Ports, Properties, Metadata Extensions.
External Procedure
Passive & Connected or Unconnected. It works with procedures you create outside of the Designer interface to extend PowerCenter functionality. You can create complex functions within a DLL or in the COM layer of windows and bind it to external procedure transformation. To get this kind of extensibility, use the Transformation Exchange (TX) dynamic invocation interface built into PowerCenter. You must be an experienced programmer to use TX and use multi-threaded code in external procedures.
Filter Transformation
Active & Connected. It allows rows that meet the specified filter condition and removes the rows that do not meet the condition. For example, to find all the employees who are working in NewYork or to find out all the faculty member teaching Chemistry in a state. The input ports for the filter must come from a single transformation. You cannot concatenate ports from more than one transformation into the Filter transformation.
Components: Transformation, Ports, Properties, Metadata Extensions.
HTTP Transformation
Passive & Connected. It allows you to connect to an HTTP server to use its services and applications. With an HTTP transformation, the Integration Service connects to the HTTP server and issues a request to retrieve data from, or post data to, the target or downstream transformation in the mapping.
Authentication types: Basic, Digest and NTLM.
Methods: GET, POST, and SIMPLE POST.
Java Transformation
Active or Passive & Connected. It provides a simple native programming interface to define transformation functionality with the Java programming language. You can use the Java transformation to quickly define simple or moderately complex transformation functionality without advanced knowledge of the Java programming language or an external Java development environment.
Joiner Transformation
Active & Connected. It is used to join data from two related heterogeneous sources residing in different locations, or to join data from the same source. In order to join two sources, there must be at least one pair of matching columns between the sources, and you must specify one source as the master and the other as the detail. For example: to join a flat file and a relational source, to join two flat files, or to join a relational source and an XML source.
The Joiner transformation supports the following types of joins:
·         Normal: Normal join discards all the rows of data from the master and detail source that do not match, based on the condition.
·         Master Outer: Master outer join discards all the unmatched rows from the master source and keeps all the rows from the detail source and the matching rows from the master source.
·         Detail Outer: Detail outer join keeps all rows of data from the master source and the matching rows from the detail source. It discards the unmatched rows from the detail source.
·         Full Outer: Full outer join keeps all rows of data from both the master and detail sources.
Limitations on the pipelines you connect to the Joiner transformation:
*You cannot use a Joiner transformation when either input pipeline contains an Update Strategy transformation.
*You cannot use a Joiner transformation if you connect a Sequence Generator transformation directly before the Joiner transformation.
Lookup Transformation
Passive & Connected or UnConnected. It is used to look up data in a flat file, relational table, view, or synonym. It compares lookup transformation ports (input ports) to the source column values based on the lookup condition. Later returned values can be passed to other transformations. You can create a lookup definition from a source qualifier and can also use multiple Lookup transformations in a mapping.
You can perform the following tasks with a Lookup transformation:
*Get a related value. Retrieve a value from the lookup table based on a value in the source. For example, the source has an employee ID. Retrieve the employee name from the lookup table.
*Perform a calculation. Retrieve a value from a lookup table and use it in a calculation. For example, retrieve a sales tax percentage, calculate a tax, and return the tax to a target.
*Update slowly changing dimension tables. Determine whether rows exist in a target.
Lookup Components: Lookup source, Ports, Properties, Condition.
Types of Lookup:
1.     Relational or flat file lookup.
2.     Pipeline lookup.
3.     Cached or uncached lookup.
4.     connected or unconnected lookup.


Normalizer Transformation
Active & Connected. The Normalizer transformation processes multiple-occurring columns or multiple-occurring groups of columns in each source row and returns a row for each instance of the multiple-occurring data. It is used mainly with COBOL sources where most of the time data is stored in de-normalized format.
You can create following Normalizer transformation:
*VSAM Normalizer transformation. A non-reusable transformation that is a Source Qualifier transformation for a COBOL source. VSAM stands for Virtual Storage Access Method, a file access method for IBM mainframe.
*Pipeline Normalizer transformation. A transformation that processes multiple-occurring data from relational tables or flat files. This is the default when you create a Normalizer transformation.
Components: Transformation, Ports, Properties, Normalizer, Metadata Extensions.
Rank Transformation
Active & Connected. It is used to select the top or bottom rank of data. You can use it to return the largest or smallest numeric value in a port or group or to return the strings at the top or the bottom of a session sort order. For example, to select top 10 Regions where the sales volume was very high or to select 10 lowest priced products.
As an active transformation, it might change the number of rows passed through it. For example, if you pass 100 rows to the Rank transformation but select to rank only the top 10, only those 10 rows pass from the Rank transformation to the next transformation.
You can connect ports from only one transformation to the Rank transformation. You can also create local variables and write non-aggregate expressions.
Router Transformation
Active & Connected. It is similar to filter transformation because both allow you to apply a condition to test data. The only difference is, filter transformation drops the data that do not meet the condition whereas router has an option to capture the data that do not meet the condition and route it to a default output group.
If you need to test the same input data based on multiple conditions, use a Router transformation in a mapping instead of creating multiple Filter transformations to perform the same task. The Router transformation is more efficient.
Sequence Generator Transformation
Passive & Connected transformation. It is used to create unique primary key values or cycle through a sequential range of numbers or to replace missing primary keys.
It has two output ports: NEXTVAL and CURRVAL. You cannot edit or delete these ports. Likewise, you cannot add ports to the transformation. The NEXTVAL port generates a sequence of numbers when you connect it to a transformation or target. CURRVAL is NEXTVAL plus the Increment By value (with the default increment of 1, it is NEXTVAL plus one).
You can make a Sequence Generator reusable, and use it in multiple mappings. You might reuse a Sequence Generator when you perform multiple loads to a single target.
For non-reusable Sequence Generator transformations, Number of Cached Values is set to zero by default, and the Integration Service does not cache values during the session. Setting Number of Cached Values greater than zero can increase the number of times the Integration Service accesses the repository during the session. It also causes sections of skipped values, since unused cached values are discarded at the end of each session.
For reusable Sequence Generator transformations, you can reduce Number of Cached Values to minimize discarded values, however it must be greater than one. When you reduce the Number of Cached Values, you might increase the number of times the Integration Service accesses the repository to cache values during the session.
Sorter Transformation
Active & Connected transformation. It is used to sort data in either ascending or descending order according to a specified sort key. You can also configure the Sorter transformation for case-sensitive sorting and specify whether the output rows should be distinct. When you create a Sorter transformation in a mapping, you specify one or more ports as the sort key and configure each sort key port to sort in ascending or descending order.
Source Qualifier Transformation
Active & Connected transformation. When adding a relational or flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier is used to join data originating from the same source database, to filter rows when the Integration Service reads source data, to specify an outer join rather than the default inner join, and to specify sorted ports.
It is also used to select only distinct values from the source and to create a custom query to issue a special SELECT statement for the Integration Service to read source data.
SQL Transformation
Active/Passive & Connected transformation. The SQL transformation processes SQL queries midstream in a pipeline. You can insert, delete, update, and retrieve rows from a database. You can pass the database connection information to the SQL transformation as input data at run time. The transformation processes external SQL scripts or SQL queries that you create in an SQL editor. The SQL transformation processes the query and returns rows and database errors.
Stored Procedure Transformation
Passive & Connected or Unconnected transformation. It is useful for automating time-consuming tasks, and it is also used for error handling, to drop and recreate indexes, to determine free space in a database, for specialized calculations, etc. The stored procedure must exist in the database before you create a Stored Procedure transformation, and the stored procedure can exist in a source, target, or any database with a valid connection to the Informatica Server. A stored procedure is an executable script with SQL statements, control statements, user-defined variables, and conditional statements.
Transaction Control Transformation
Active & Connected. You can control commit and roll back of transactions based on a set of rows that pass through a Transaction Control transformation. Transaction control can be defined within a mapping or within a session.
Components: Transformation, Ports, Properties, Metadata Extensions.
Union Transformation
Active & Connected. The Union transformation is a multiple input group transformation that you use to merge data from multiple pipelines or pipeline branches into one pipeline branch. It merges data from multiple sources similar to the UNION ALL SQL statement to combine the results from two or more SQL statements. Similar to the UNION ALL statement, the Union transformation does not remove duplicate rows.
Rules
1.     You can create multiple input groups, but only one output group.
2.     All input groups and the output group must have matching ports. The precision, datatype, and scale must be identical across all groups.
3.     The Union transformation does not remove duplicate rows. To remove duplicate rows, you must add another transformation such as a Router or Filter transformation.
4.     You cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation.
5.     The Union transformation does not generate transactions.
Components: Transformation tab, Properties tab, Groups tab, Group Ports tab.
Unstructured Data Transformation
Active/Passive and connected. The Unstructured Data transformation is a transformation that processes unstructured and semi-structured file formats, such as messaging formats, HTML pages and PDF documents. It also transforms structured formats such as ACORD, HIPAA, HL7, EDI-X12, EDIFACT, AFP, and SWIFT.
Components: Transformation, Properties, UDT Settings, UDT Ports, Relational Hierarchy.
Update Strategy Transformation
Active & Connected transformation. It is used to control how data is written to the target table, either to maintain a history of the data or only the most recent changes. It flags rows for insert, update, delete, or reject within a mapping.
XML Generator Transformation
Active & Connected transformation. It lets you create XML inside a pipeline. The XML Generator transformation accepts data from multiple ports and writes XML through a single output port.
XML Parser Transformation
Active & Connected transformation. The XML Parser transformation lets you extract XML data from messaging systems, such as TIBCO or MQ Series, and from other sources, such as files or databases. The XML Parser transformation functionality is similar to the XML source functionality, except it parses the XML in the pipeline.
XML Source Qualifier Transformation
Active & Connected transformation. The XML Source Qualifier is used only with an XML source definition. It represents the data elements that the Informatica Server reads when it executes a session with XML sources. It has one input or output port for every column in the XML source.
External Procedure Transformation
Passive & Connected or Unconnected transformation. Sometimes the standard transformations, such as the Expression transformation, may not provide the functionality that you want. In such cases an External Procedure transformation is useful to develop complex functions within a dynamic link library (DLL) or a UNIX shared library, instead of creating the necessary Expression transformations in a mapping.
Advanced External Procedure Transformation
Active & Connected transformation. It operates in conjunction with procedures, which are created outside of the Designer interface to extend PowerCenter/PowerMart functionality. It is useful in creating external transformation applications, such as sorting and aggregation, which require all input rows to be processed before emitting any output rows.