ETL Testing Interview Questions & Material

1) What is ETL?

Ans : In data warehousing architecture, ETL is an important component that manages the data for any business process. ETL stands for Extract, Transform and Load. Extract reads data from a source database; Transform converts the data into a format appropriate for reporting and analysis; and Load writes the data into the target database.
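
To make the three stages concrete, here is a minimal sketch in Python; the file name, table name, and column layout are illustrative assumptions, not a standard.

```python
import csv
import sqlite3

def extract(csv_path):
    # Extract: read raw rows from a source system (here, a CSV file)
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: clean and reshape the data for reporting and analysis
    return [
        (row["order_id"], row["customer"].strip().upper(), float(row["amount"]))
        for row in rows
        if row["amount"]  # drop rows with a missing amount
    ]

def load(records, db_path="warehouse.db"):
    # Load: write the transformed records into the target database
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS fact_orders "
                 "(order_id TEXT, customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", records)
    conn.commit()
    conn.close()

load(transform(extract("orders.csv")))
```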

2) Explain what operations ETL testing includes?

Ans : ETL testing includes

  •  Verify whether the data is transformed correctly according to business requirements
  •  Verify that the projected data is loaded into the data warehouse without any truncation or data loss
  •  Make sure that the ETL application reports invalid data and replaces it with default values
  •  Make sure that data loads within the expected time frame to ensure scalability and performance
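
A hedged sketch of how two of these checks (row-count reconciliation and truncation detection) might be automated; the table names, column name, and length limit are assumptions for illustration.

```python
import sqlite3

def check_row_counts(conn, source_table, target_table):
    # No rows should be lost or duplicated between source and target
    src = conn.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt = conn.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    assert src == tgt, f"Row count mismatch: {src} in source vs {tgt} in target"

def check_no_truncation(conn, table, column, declared_width):
    # Values that exactly fill the declared column width may have been
    # silently truncated during the load, so flag them for review
    hits = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE LENGTH({column}) >= ?",
        (declared_width,),
    ).fetchone()[0]
    assert hits == 0, f"{hits} value(s) in {table}.{column} fill the {declared_width}-char limit"
```
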
3) Mention what are the types of data warehouse applications and what is the difference between data mining and data warehousing?

Ans : The types of data warehouse applications are

  • Info Processing
  • Analytical Processing
  • Data Mining
Data mining can be defined as the process of extracting hidden predictive information from large databases and interpreting the data, while data warehousing is the process of aggregating data from multiple sources into one common repository. A data warehouse may make use of data mining for faster analytical processing of the data.

4) What are the various tools used in ETL?

Ans :
  • Cognos Decision Stream
  • Oracle Warehouse Builder
  • Business Objects XI
  • SAS Business Warehouse
  • SAS Enterprise ETL Server
5) What is a fact? What are the types of facts?

Ans : It is a central component of a multi-dimensional model which contains the measures to be analysed. Facts are related to dimensions.
Types of facts are:

  • Additive Facts
  • Semi-additive Facts
  • Non-additive Facts
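
A small worked example of the distinction, with invented figures: sales amounts are additive (they can be summed across every dimension), account balances are semi-additive (they can be summed across accounts but not across time), and ratios such as margins are non-additive (they must be recomputed from their components, never summed).

```python
# Additive: sales amounts may be summed across any dimension
sales = {"Mon": 10, "Tue": 15, "Wed": 5}
weekly_sales = sum(sales.values())   # 30, a meaningful total

# Semi-additive: balances may be summed across accounts but not time
balances = {"Mon": 100, "Tue": 120, "Wed": 90}
wrong = sum(balances.values())       # 310 is not a real balance
closing = balances["Wed"]            # aggregate over time with last/avg instead

# Non-additive: a margin percentage cannot be summed across days;
# recompute it from its additive components
profit = {"Mon": 2, "Tue": 3, "Wed": 1}
revenue = {"Mon": 10, "Tue": 15, "Wed": 5}
weekly_margin = sum(profit.values()) / sum(revenue.values())  # 0.2
```
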
6) Explain what are Cubes and OLAP Cubes?

Ans : Cubes are data processing units composed of fact tables and dimensions from the data warehouse; they provide multi-dimensional analysis. OLAP stands for Online Analytical Processing, and an OLAP cube stores large data in multi-dimensional form for reporting purposes. It consists of facts, called measures, categorized by dimensions.
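
As a rough analogy (a pandas pivot table, not an OLAP server), the following shows the same idea of a measure categorized by dimensions; the data is invented.

```python
import pandas as pd

# Fact rows: a "sales" measure categorized by two dimensions
facts = pd.DataFrame({
    "region": ["East", "East", "West", "West"],
    "year":   [2023, 2024, 2023, 2024],
    "sales":  [100, 120, 80, 95],
})

# Slice the measure by both dimensions, as an OLAP cube would
cube = facts.pivot_table(values="sales", index="region",
                         columns="year", aggfunc="sum")
print(cube)
```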

7) Explain what is tracing level and what are the types?

Ans : Tracing level is the amount of data stored in the log files. Tracing levels can be classified as Normal and Verbose: Normal logs summarized information about the session, while Verbose logs information down to each and every row.

8) Explain what is Grain of Fact?

Ans : Grain of fact can be defined as the level of detail at which the fact information is stored. It is also known as fact granularity.

9) Explain what a factless fact schema is and what measures are?

Ans : A fact table without measures is known as a factless fact table. It can be used to view the number of occurring events; for example, it is used to record events such as employee headcount in a company. The numeric data based on columns in a fact table is known as measures.
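
A minimal sketch of the idea in Python; the dimension keys and the event being recorded are hypothetical.

```python
from collections import Counter

# A factless fact table holds only dimension keys -- no numeric measure.
# Each row simply records that an event occurred.
attendance = [
    # (employee_id, date_id)
    (1, 20240101),
    (2, 20240101),
    (1, 20240102),
]

# Measures are derived by counting rows, e.g. headcount per day
headcount = Counter(date_id for _, date_id in attendance)
print(headcount)  # Counter({20240101: 2, 20240102: 1})
```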

10) Explain what is transformation?

Ans : A transformation is a repository object which generates, modifies or passes data. Transformations are of two types, active and passive: an active transformation can change the number of rows that pass through it, while a passive transformation cannot.
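
A loose illustration of the distinction in plain Python (these functions are stand-ins, not Informatica objects):

```python
rows = [{"qty": 5}, {"qty": 0}, {"qty": 3}]

# Passive: one row out per row in, like an Expression transformation
def add_flag(rows):
    return [{**r, "in_stock": r["qty"] > 0} for r in rows]

# Active: may change the row count, like a Filter transformation
def drop_empty(rows):
    return [r for r in rows if r["qty"] > 0]

assert len(add_flag(rows)) == len(rows)   # row count preserved
assert len(drop_empty(rows)) < len(rows)  # row count changed
```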

11) Explain the use of Lookup Transformation?

Ans : The Lookup Transformation is useful for

  • Getting a related value from a table using a column value
  • Updating a slowly changing dimension table
  • Verifying whether records already exist in the table
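
A minimal sketch of the first and third uses, with a dictionary standing in for the lookup table; all names and data are illustrative.

```python
# Lookup table keyed on customer_id
customer_lookup = {101: "ACME Corp", 102: "Globex"}

facts = [{"order_id": 1, "customer_id": 101},
         {"order_id": 2, "customer_id": 103}]

for row in facts:
    # Fetch the related value via the column value; fall back to a
    # default when no matching record exists in the lookup table
    row["customer_name"] = customer_lookup.get(row["customer_id"], "UNKNOWN")

print(facts)
```
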
12) Explain what is partitioning, hash partitioning and round robin partitioning?

Ans :
To improve performance, transactions are subdivided; this is called partitioning. Partitioning enables the Informatica Server to create multiple connections to various sources. The types of partitions are:
Round-Robin Partitioning:

  •  Informatica distributes data evenly among all partitions
  •  It is applicable when the number of rows to process in each partition is approximately the same

Hash Partitioning:

  • The Informatica Server applies a hash function to the partitioning keys to group data among partitions
  • It is used when you need to ensure that groups of rows with the same partitioning key are processed in the same partition
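
A sketch of both strategies in plain Python (not Informatica internals); the partition count and key choice are assumptions.

```python
from zlib import crc32

def round_robin_partition(rows, num_partitions):
    # Cycle through partitions in order, so rows spread evenly
    parts = [[] for _ in range(num_partitions)]
    for i, row in enumerate(rows):
        parts[i % num_partitions].append(row)
    return parts

def hash_partition(rows, num_partitions, key):
    # Rows sharing the same key value always land in the same partition
    parts = [[] for _ in range(num_partitions)]
    for row in rows:
        parts[crc32(str(row[key]).encode()) % num_partitions].append(row)
    return parts

rows = [{"cust": c} for c in ["A", "B", "A", "C", "B", "A"]]
print(round_robin_partition(rows, 2))   # even spread, keys scattered
print(hash_partition(rows, 2, "cust"))  # all "A" rows grouped together
```
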
13) Mention what is the advantage of using DataReader Destination Adapter?

Ans : The advantage of using the DataReader Destination Adapter is that it populates an ADO recordset (consisting of records and columns) in memory and exposes the data from the Data Flow task by implementing the DataReader interface, so that other applications can consume the data.

14) Using SSIS (SQL Server Integration Services), what are the possible ways to update a table?

Ans : To update a table using SSIS, the possible ways are:

  • Use a SQL command
  • Use a staging table (see the sketch after this list)
  • Use a cache
  • Use the Script Task
  • Use the full database name for updating if MSSQL is used
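
A hedged sketch of the staging-table approach, using sqlite3 as a stand-in target (in SSIS this would typically be an OLE DB destination plus an Execute SQL task; all table and column names are illustrative).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE stg_customer (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO dim_customer VALUES (1, 'ACME'), (2, 'Globex');
""")

# Step 1: bulk-insert the incoming changes into the staging table
conn.executemany("INSERT INTO stg_customer VALUES (?, ?)",
                 [(1, "ACME Corp"), (3, "Initech")])

# Step 2: one set-based UPDATE from staging to target, instead of
# slow row-by-row updates inside the data flow
conn.execute("""
    UPDATE dim_customer
    SET name = (SELECT name FROM stg_customer s WHERE s.id = dim_customer.id)
    WHERE id IN (SELECT id FROM stg_customer)
""")
conn.commit()
print(conn.execute("SELECT * FROM dim_customer").fetchall())
```
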
15) In case you have a non-OLEDB (Object Linking and Embedding Database) source for the lookup, what would you do?

Ans : If you have a non-OLEDB source for the lookup, then you have to use a cache to load the data and use it as the source.

16) In what case do you use dynamic cache and static cache in connected and unconnected transformations?

Ans :
  •  Dynamic cache is used when you have to update a master table and for slowly changing dimensions (SCD) Type 1 (see the sketch below)
  •  Static cache is used for flat files
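
A rough illustration of why a dynamic cache suits SCD Type 1 loads (plain Python, not Informatica; the data is invented): the cache is updated as rows flow through, so later rows see earlier changes within the same run.

```python
cache = {101: "ACME"}   # primed from the target table at session start
incoming = [(101, "ACME Corp"), (102, "Globex"), (101, "ACME Corp")]

for key, name in incoming:
    if key not in cache:
        cache[key] = name   # new key -> insert into target and cache
        print("INSERT", key, name)
    elif cache[key] != name:
        cache[key] = name   # changed value -> overwrite (Type 1) in both
        print("UPDATE", key, name)
    else:
        print("SKIP", key)  # unchanged row, no write needed

# With a read-only static cache, the repeated key 101 would still
# compare against the stale value "ACME" and emit a redundant UPDATE.
```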