Sr. ETL Lead/Informatica Developer Resume
Chicago, IL
SUMMARY
- Over 9 years of experience in Information Technology, including Data Warehouse/Data Mart development using ETL/Informatica PowerCenter and SQL Server Integration Services (SSIS), across industries such as Healthcare, Banking, Insurance, Pharmaceutical and Finance.
- Experienced in leading the support and maintenance of ETL applications in production environments.
- Excellent experience with databases such as Oracle, Teradata and Netezza, developing SQL and PL/SQL packages per business needs.
- Experienced in Hadoop big data integration with ETL, performing data extraction, loading and transformation processes for ERP data; experienced in big data technologies - Pig, Hive, Sqoop, Flume, Oozie and NoSQL databases (Cassandra & HBase).
- Excellent knowledge of and work experience on Amazon Cloud, Amazon EMR, DynamoDB and Redshift.
- Experienced in dimensional data modeling using ERwin, physical & logical data modeling, the Ralph Kimball approach, star schema modeling, data marts, OLAP, and fact & dimension tables.
- Experienced in code review of ETL applications, SQL queries, UNIX shell scripts and TWS commands built by fellow colleagues.
- Extensive experience in Netezza database design and workload management.
- Excellent experience working with indexes, complex queries, stored procedures, views, triggers, user-defined functions, complex joins and loops in T-SQL, and DTS/SSIS using MS SQL Server.
- Experience with the Oozie scheduler, setting up complex workflow jobs with Spark, Hive, Shell and Pig actions.
- Strong working knowledge of RDBMS, ER diagrams, and normalization and denormalization concepts.
- Experienced in configuring SSIS packages using package logging, breakpoints, checkpoints and event handlers to fix errors.
- Experienced in OLTP/OLAP system study, analysis and E-R modeling, developing database schemas such as star schema and snowflake schema used in relational, dimensional and multidimensional data modeling.
- Very good experience and understanding of data warehousing, data modeling and Business Intelligence concepts, with emphasis on ETL and lifecycle development using Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, Metadata Manager and Workflow Monitor).
- Excellent knowledge of data warehouse concepts using the Ralph Kimball and Bill Inmon methodologies.
- Good knowledge of Hadoop (MapR) architecture and components such as HDFS, Job Tracker, Task Tracker, NameNode, DataNode and the MapReduce programming paradigm.
- Excellent experience using Teradata SQL Assistant and data load/export utilities such as BTEQ, FastLoad, MultiLoad and FastExport on Mainframes and UNIX.
- Expertise in extracting, transforming and loading (ETL) data using SSIS, creating mappings/workflows to extract data from SQL Server, Excel files, other databases and flat-file sources and load it into various business entities (data warehouse, data marts).
- Experienced in coding SQL and PL/SQL procedures/functions, triggers and exceptions, with excellent exposure to relational database concepts and entity-relationship diagrams.
- Experience creating various reports such as drill-down, sub-reports, parameterized, multi-valued and ad hoc reports through report model creation using SSRS, and proficiency in developing SSAS cubes, aggregations, KPIs, measures, cube partitioning and data mining models, and in deploying and processing SSAS objects.
- Experienced in UNIX working environments, writing UNIX shell scripts for Informatica pre- and post-session operations.
- Expertise in generating reports using SQL Server Reporting Services (SSRS), Microsoft Access and Excel spreadsheets.
- Very good knowledge and experience of the complete SDLC, including requirement analysis, requirement gathering, project management, design, development, implementation and testing.
- Experienced in dealing with various data sources such as Oracle, SQL Server, Teradata, MS Access, MS Excel and flat files.
- Experienced in using the Teradata BTEQ, FastExport, FastLoad, MultiLoad and TPump utilities.
- Excellent experience in designing and developing complex mappings using transformations such as Unconnected and Connected Lookup, Source Qualifier, Expression, Router, Filter, Aggregator, Joiner, Update Strategy, Union, Sequence Generator, Rank, Sorter, Normalizer, Stored Procedure, Transaction Control and External Procedure.
- Expertise in SQL and performance tuning on large-scale Teradata systems.
- Extensive experience with Informatica PowerCenter 10.x, 9.x and 8.x hosted on UNIX, Linux and Windows platforms.
- Experienced in extracting data from multiple sources such as Teradata, Oracle, SAP BW, Mainframes and flat files, and performing the required transformations on the data using ETL tools - Informatica or Teradata utilities.
- Excellent knowledge of data warehouse methodologies: dimensional modeling, fact tables, ODS and EDW.
TECHNICAL SKILLS
ETL Tools: Informatica PowerCenter 10.x/9.x (Source Analyzer, Mapping Designer, Workflow Monitor, Workflow Manager, PowerConnect for ERP and Mainframes, PowerPlugs), PowerExchange, PowerConnect, Data Junction (Map Designer, Process Designer, Meta Data Query), DataStage, SQL Server Integration Services (SSIS).
OLAP/DSS Tools: Business Objects XI, Hyperion, Crystal Reports XI
Databases: Oracle 9i/10g/11g/12c, Sybase, DB2, MS SQL Server …, Teradata V2R6/V2R5, Netezza, HBase, MongoDB, Cassandra
Others: AWS Cloud, AWS Redshift, TOAD, PL/SQL Developer, Tivoli, Cognos, Visual Basic, Perl, SQL Navigator, Test Director, WinRunner
Database Skills: Stored Procedures, Database Triggers and Packages
Data Modeling Tools: Physical and Logical Data Modeling using ERWIN
Languages: C, C++, Java, UNIX shell scripts, XML, HTML
Operating Systems: Windows NT, AIX, LINUX, UNIX
PROFESSIONAL EXPERIENCE
Confidential, Chicago IL
Sr. ETL Lead/Informatica Developer
Responsibilities:
- Gathered requirements and analyzed, designed, coded and tested highly efficient and scalable integration solutions using Informatica, Oracle, SQL and various source systems.
- Involved in the technical analysis of data profiling, mappings, formats, data types, and development of data movement programs using PowerExchange and Informatica.
- Applied knowledge of Amazon Redshift columnar storage, data compression and zone maps, and automated Redshift administrative tasks such as provisioning and monitoring.
- Developed the ETL mapping document, covering implementation of the data model, the incremental/full load logic and the ETL methodology.
- Wrote ETL transformation jobs for Redshift based on the given requirements and scheduled them in a Windows environment.
- Worked on Informatica BDE for retrieving data from the Hadoop HDFS file system.
- Worked on a MapR Hadoop platform to implement big data solutions using Hive, MapReduce, shell scripting and Java technologies.
- Performed data profiling and analysis using Informatica Data Explorer (IDE) and Informatica Data Quality (IDQ), and handled design, development, testing and implementation of ETL processes using Informatica Cloud.
- Wrote ETL jobs to read from web APIs using REST and HTTP calls and load the data into HDFS using Java.
- Worked with existing Python scripts and extended them to load data from CMS files into the staging database and the ODS.
- Used Spark Streaming to stream data from external sources using the Kafka service.
- Responsible for migrating the code base from the Cloudera platform to Amazon EMR, and evaluated Amazon ecosystem components such as Redshift and DynamoDB.
- Wrote Teradata SQL bulk programs and performed tuning of Teradata SQL statements, using Teradata EXPLAIN and PMON to analyze and improve query performance.
- Designed and developed mappings, transformations, sessions, workflows and ETL batch jobs to load data from sources to stage using Informatica, T-SQL, UNIX shell scripts and Control-M scheduling.
- Developed various mappings for extracting the data from different source systems using Informatica, PL/SQL stored procedures.
- Developed jobs to send and read data from AWS S3 buckets using components such as tS3Connection, tS3BucketExist, tS3Get and tS3Put, and created the SFDC, flat file and Oracle connections for AWS Cloud services.
- Imported relational database data into Hive dynamic-partition tables using Sqoop and staging tables, and imported data from Teradata using the Sqoop Teradata connector.
- Installed and configured the Amazon Redshift cloud data integration application for faster data queries.
- Developed mappings for extracting data from different types of source systems (flat files, XML files, relational sources, etc.) into the data warehouse using PowerCenter.
- Converted specifications into programs and data mappings in an Informatica Cloud ETL environment.
- Developed multiple POCs using Scala/PySpark, deployed them on the YARN cluster, and compared the performance of Spark with Hive and SQL/Teradata.
- Used Informatica PowerCenter 10.1.0 to extract, transform and load data into the Netezza data warehouse from sources such as Oracle and flat files, and was responsible for creating shell scripts to invoke the Informatica workflows through the command line.
- Used AWS (Amazon Web Services) components, downloading and uploading data files (with ETL) to the AWS system using S3 components.
- Migrated ETL jobs to Pig scripts to perform transformations, joins and some pre-aggregations before storing the data to HDFS, and was involved in creating and running sessions & workflows using Informatica Workflow Manager and monitoring them using Workflow Monitor.
- Developed UNIX shell scripts for data extraction, running the pre/post processes and PL/SQL procedures.
- Used Teradata SQL Assistant, Teradata Administrator and PMON, and data load/export utilities such as BTEQ, FastLoad, MultiLoad, FastExport, TPump and TPT on UNIX/Windows environments, running the batch processes for Teradata.
- Used Python for data visualization with libraries such as NumPy, pandas and matplotlib (an illustrative sketch follows this list), and converted interfaces running on UNIX to Python scripts to boost performance.
- Developed standard mappings using various transformations like expression, aggregator, joiner, source qualifier, router, lookup, and filter.
- Involved in business analysis and technical design sessions with business and technical staff to develop entity-relationship/data models, requirements documents and ETL specifications, and dug deep into complex T-SQL queries and stored procedures to identify items that could be converted to Informatica Cloud ISD.
- Involved in gathering and documenting business requirements from end clients and translating them into report specifications for the MicroStrategy platform.
- Delivered code from dev to stage and from stage to production using deployment tools like excel deploy; as this was a single-resource project, took complete ownership of the code and the requirements, from preparing the TDD to code migration.
- Involved in designing, documenting and configuring Informatica Data Director to support management of MDM data.
- Developed automated data pipelines in Python from various external data sources (web pages, APIs, etc.) to the internal data warehouse (SQL Server), then exported the data to reporting tools.
- Created Informatica mappings with PL/SQL stored procedures/functions to incorporate critical business functionality into data loads.
- Created data load processes to load data from OLTP sources into Netezza and created external tables for the NZLOAD process in Netezza.
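A minimal, illustrative Python sketch of the pandas/NumPy/matplotlib profiling and plotting work described above; the file name and column names are hypothetical placeholders rather than actual project artifacts.

```python
# Hypothetical profiling/visualization of a daily ETL extract.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Load an extract produced by an upstream ETL job (placeholder path).
df = pd.read_csv("daily_claims_extract.csv", parse_dates=["load_date"])

# Basic profiling: shape, null counts, numeric summaries.
print(df.shape)
print(df.isna().sum())
print(df.describe(include=[np.number]))

# Trend of daily record volumes for load monitoring.
daily_counts = df.groupby(df["load_date"].dt.date).size()
daily_counts.plot(kind="line", title="Records loaded per day")
plt.xlabel("Load date")
plt.ylabel("Row count")
plt.tight_layout()
plt.savefig("daily_load_volume.png")
```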
Environment: Informatica PowerCenter 10.1.0, AWS Cloud, AWS Redshift, Spark, Python, PowerExchange, Oracle 12c, Netezza, Netezza Aginity, Teradata, SQL & PL/SQL, Informatica Cloud, SQL Server, Korn shell scripting, XML, T-SQL, TOAD, UNIX, Windows, Linux, Excel, MDM, MicroStrategy, flat files, Tivoli, Databricks, MongoDB, Big Data, Hadoop, Cassandra, HBase, HDFS.
Confidential, Charlotte NC
Sr. ETL/ Informatica Developer
Responsibilities:
- Responsible for requirement definition and analysis in support of data warehousing efforts, and worked on the ETL tool Informatica to load data from flat files into landing tables in SQL Server.
- Used Redshift to allow tables to be read while they are being incrementally loaded or modified; sourced data from RDS and an AWS S3 bucket, populated the Teradata target, and mounted the S3 bucket in the local UNIX environment for data analysis.
- Imported data from the RDBMS environment into HDFS using Sqoop for report generation and visualization purposes using Tableau.
- Converted Hive-based applications to the Spark framework using Spark RDDs/DataFrames/Datasets with Scala/Python (see the PySpark sketch after this list).
- Extensively used the Informatica client tools Source Analyzer, Warehouse Designer, Mapping Designer, Transformation Developer, Repository Manager and Workflow Manager, and developed and tested all the Informatica mappings and update processes.
- Accounted for the fact that Redshift omits PostgreSQL features suited to smaller-scale OLTP processing, such as secondary indexes and efficient single-row data manipulation operations, in order to improve performance.
- Developed Pig UDFs to pre-process the data for analysis, and migrated ETL operations into the Hadoop system using Pig Latin scripts and Python scripts.
- Extensively involved in writing UDFs in Hive, and developed Sqoop scripts to migrate data from Oracle to the big data environment.
- Worked with mappings using varied transformation logic such as Unconnected and Connected Lookup, Router, Aggregator, Filter, Joiner and Update Strategy.
- Integrated Information Steward with BusinessObjects, BusinessObjects Data Services (BODS), BW and ECC systems.
- Created data partitions on large data sets in S3 and DDL on partitioned data and converted all Hadoop jobs to run in EMR by configuring the cluster according to the data size.
- Stored data in AWS S3 similarly to HDFS, ran EMR programs on the data stored in S3, and created pre/post session SQL commands in sessions and mappings on the target instance.
- Coded using Teradata analytical functions and Teradata BTEQ SQL, and wrote UNIX scripts to validate, format and execute the SQL in the UNIX environment.
- Involved in loading data from the UNIX file system to HDFS, configuring Hive and writing Hive UDFs, and loaded log data into HDFS using Flume and Kafka while performing ETL integrations.
- Worked on predictive and what-if analysis using Python against HDFS, and successfully loaded files to HDFS from Teradata and from HDFS to Hive.
- Fixed invalid mappings, tested stored procedures and functions, and performed unit and integration testing of Informatica sessions, batches and target data.
- Used Pig as an ETL tool to perform transformations, joins and some pre-aggregations before storing data into HDFS.
- Developed analytical components using Scala, Spark and Spark Streaming.
- Extensively used transformations such as Router, Aggregator, Source Qualifier, Joiner, Expression and Sequence Generator, and scheduled sessions and batches on the Informatica server using Informatica Server Manager/Workflow Manager.
- Designed and implemented ETL for data loads from heterogeneous sources to SQL Server and Oracle as target databases, including fact tables and Slowly Changing Dimensions (SCD Type 1 and SCD Type 2).
- Worked on coding Teradata SQL, Teradata stored procedures, macros and triggers.
- Used NZSQL/NZLOAD utilities and developed Linux shell scripts to load data from flat files into the Netezza database.
- Developed data mapping, data governance, transformation and cleansing rules for the Master Data Management architecture involving OLTP and ODS.
- Participated in reconciling data drawn from multiple systems across the company, such as Oracle 12c and flat files, into the Oracle data warehouse.
- Extensively used the Aginity Netezza Workbench to perform various DML, DDL and other operations on the Netezza database, performed data cleaning and data manipulation activities using the NZSQL utility, and worked on the Netezza database to implement data cleanup and performance-tuning techniques.
- Scheduled the Informatica workflows using the Control-M and Tivoli scheduling tools and troubleshot the Informatica workflows.
- Did extensive ETL testing, including data completeness, data transformation and data quality checks for various data feeds coming from source systems, and developed online view queries and complex SQL queries, improving their performance.
- Worked extensively on importing metadata into Hive, migrated existing tables and applications to work on Hive and AWS Cloud, and made the data available in Athena and Snowflake.
- Designed and developed data movements using SQL Server Integration Services, T-SQL and stored procedures in SQL.
- Performed match/merge and ran match rules to check the effectiveness of the MDM process on data.
- Worked on Teradata table creation and index selection criteria, Teradata macros and procedures, and the Basic Teradata Query (BTEQ) language.
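A minimal PySpark sketch of the Hive-to-Spark DataFrame conversion pattern described above; the database, table and column names are hypothetical placeholders, not the actual project objects.

```python
# Hypothetical conversion of a HiveQL aggregation to the Spark DataFrame API.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("hive_to_spark_conversion")
         .enableHiveSupport()
         .getOrCreate())

# Read the Hive table the original HiveQL job scanned (placeholder name).
claims = spark.table("staging_db.claims")

# Re-express the HiveQL GROUP BY as a DataFrame aggregation.
summary = (claims
           .filter(F.col("claim_status") == "PAID")
           .groupBy("member_id")
           .agg(F.sum("claim_amount").alias("total_paid"),
                F.count("*").alias("claim_count")))

# Write the results back to a Hive-managed summary table.
summary.write.mode("overwrite").saveAsTable("edw_db.member_claim_summary")

spark.stop()
```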
Environment: Informatica PowerCenter 9.6 (Designer, Workflow Manager, Workflow Monitor), Spark, Python, AWS Cloud, AWS Redshift, Hadoop, HDFS, HBase, Tivoli, MDM, Hive, MongoDB, NoSQL, Databricks Cloud, Oracle 12c, flat files, SQL, XML, PL/SQL, Business Objects, Teradata, Aginity, Teradata SQL Assistant, Windows NT, UNIX, shell scripts, Toad 7.5, Netezza.
Confidential, Boston, MA
Sr. ETL/SSIS Developer
Responsibilities:
- Created technical design specification documents for extraction, transformation and loading based on the business requirements, and worked on development, enhancement and support of the Enterprise Data Warehouse (EDW).
- Analyzed business requirements, technical specifications, source repositories and physical data models for ETL mapping and process flow.
- Used SSIS as the ETL tool, along with stored procedures, to pull data from source systems/files, cleanse, transform and load it into databases, and extensively used pre-SQL and post-SQL scripts to load data into the targets according to requirements.
- Developed MLOAD scripts to load data from load-ready files into the Teradata Enterprise Data Warehouse (EDW).
- Stored data from the SQL Server database into Hadoop clusters set up in AWS EMR.
- Developed SQL Server Integration Services (SSIS) packages to transform data from SQL Server 2008 to MS SQL Server 2014, and created interface stored procedures used in SSIS to load/transform data into the database.
- Involved in the full HIPAA compliance lifecycle, from GAP analysis, mapping, implementation and testing for processing of Medicaid claims; worked with the data mapping team on ICD-9 to ICD-10 forward mapping of diagnosis and procedure codes.
- Extracted data from various heterogeneous databases such as Oracle, Access, DB2 and flat files into SQL Server 2014 using SSIS.
- Worked on AWS Data Pipeline to configure data loads from S3 into Redshift, and used AWS (Amazon Web Services) components, downloading and uploading data files (with ETL) to the AWS system using S3 components.
- Developed database objects such as SSIS Packages, Tables, Triggers, and Indexes using T-SQL, SQL Analyzer and Enterprise Manager
- Worked with Teradata SQL Assistant and Teradata Studio and responsible for design and developing Teradata BTEQ scripts, MLOAD based on the given business rules and design documents.
- Worked with SSIS packages involving FTP tasks and Fuzzy Grouping, Merge, Merge Join, Pivot and Unpivot transformations.
- Worked with Metadata Manager, which uses SSIS workflows to extract metadata from metadata sources and load it into a centralized metadata warehouse.
- Developed mappings to load Fact and Dimension tables, SCD Type 1 and SCD Type 2 dimensions and Incremental loading and unit tested the mappings.
- Designed and developed UNIX shell scripts and FTP processes, sending files to the source directory and managing session files.
- Created databases, tables, clustered/non-clustered indexes, unique/check constraints, views, stored procedures and triggers; optimized stored procedures and long-running SQL queries using indexing strategies and query optimization techniques; and migrated data from an Excel source to CRM using SSIS.
- Integrate complex Medicaid principles and policies into the Medicaid Management Information System (MMIS), requiring knowledge in areas of health systems and Medicaid information processing.
- Loaded data from various data sources and legacy systems into Teradata production and development warehouse using BTEQ, FASTEXPORT, MULTI LOAD, and FASTLOAD.
- Created Business Requirements Document (BRD) containing the glossary, Low Level Design Document, Technical design document, Tivoli scheduling flow document & Migration manual.
- Worked extensively with different caches such as index cache, data cache and lookup cache (static, dynamic and persistent) while developing mappings, and used static, dynamic and persistent caches in Lookup transformations for better session throughput.
- Worked on planning and designing the database changes necessary for data conversion, tool configuration and data refresh on Netezza.
- Used Cognos Data Manager, which supports high-performance analysis of relational data by creating aggregate tables at multiple levels within and across hierarchies in the dimension tables.
- Gathered business requirements, analyzed data scenarios, and built, unit tested and migrated self-service Cognos reports from DEV to QA.
- Created watches, probes and alerts to monitor the performance and availability of services within the Business Objects platform.
- Designed Cubes with Star Schema using SQL Server Analysis Services 2012 (SSAS), Created several Dashboards and Scorecards with Key Performance Indicators (KPI) in SQL Server 2012 Analysis Services (SSAS).
- Involved in creating a data warehouse based on a star schema and worked with SSIS packages to load the data into the database.
- Developed complex stored procedure using T-SQL to generate Ad hoc reports using SSRS and developed various reports using SSRS, which included writing complex stored procedures for datasets.
- Explored data in a variety of ways and across multiple visualizations using Power BI, applying expertise in design of experiments, data collection, analysis and visualization.
- Wrote complex T-SQL queries, subqueries, correlated subqueries and dynamic SQL queries (an illustrative query sketch follows this list), and created on-demand (pull) and event-based (push) delivery of reports according to requirements.
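A minimal Python/pyodbc sketch of running the kind of parameterized T-SQL report query described above; the connection string, table and column names are hypothetical placeholders, not the actual MMIS/EDW objects.

```python
# Hypothetical parameterized T-SQL report query executed via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=edw-sqlserver;DATABASE=ClaimsDW;Trusted_Connection=yes;"
)

sql = """
SELECT c.provider_id,
       COUNT(*)           AS claim_count,
       SUM(c.paid_amount) AS total_paid
FROM dbo.FactClaims AS c
WHERE c.service_date >= ?
  AND c.service_date <  ?
GROUP BY c.provider_id
ORDER BY total_paid DESC;
"""

cur = conn.cursor()
cur.execute(sql, ("2015-01-01", "2015-02-01"))  # reporting window parameters
for provider_id, claim_count, total_paid in cur.fetchall():
    print(provider_id, claim_count, total_paid)

cur.close()
conn.close()
```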
Environment: SSIS 2014, Oracle 11g, SQL Server 2014, SQL, Tivoli, Teradata, Business Objects, DB2, Netezza, flat files, UNIX, Windows, Teradata SQL Assistant, Cognos, shell scripting, SSAS, SSRS, T-SQL, XML, Excel, PL/SQL, MS SQL Server, Autosys.
Confidential, Chicago, IL
Sr. ETL/SSIS Developer
Responsibilities:
- Performed a detailed study and data profiling of all the underlying information security application systems, understood the information security data models, and identified and captured the right metadata from source systems.
- Designed and developed the ETL solution using SSIS packages to implement Type 2 slowly changing dimensions, populating current and historical data into the dimensions; the ETL solution validated incoming data and sent notifications when jobs were done (a sketch of the Type 2 logic follows this list).
- Developed shell scripts and PL/SQL procedures for creating/dropping tables and indexes for performance, and for pre- and post-session management.
- Designed and developed dashboards using MicroStrategy Report Services documents for various products and services in the Insurance Service Group.
- Designed and developed ETL packages using SQL Server Integration Services (SSIS) to load data from SQL Server and XML files into the SQL Server database through custom C# script tasks.
- Designed and developed ETL packages using SQL Server Integration Services (SSIS) 2008 to load data from different source systems (SQL Server, Oracle, flat files, CSV files).
- Designed and documented the error-handling strategy in the ETL load process, and prepared the complete ETL specification document for all the ETL flows.
- Developed Drill-through, Drill-down, sub Reports, Charts, Matrix reports, Linked reports using SQL Server Reporting Services (SSRS).
- Worked on Model Analysis, Model Testing and documenting of Cognos Framework Models and suggested maintaining versions using VSS Tool.
- Designed and developed dynamic advanced T-SQL, PL/SQL, MDX queries, stored procedures, XML, user-defined functions, parameters, views, tables, triggers, indexes, constraints, SSIS packages, SSAS cubes, SQL Server Agent jobs, deployment scripts and installation instructions for the enterprise data warehouse and applications.
- Created T-SQL stored procedures, functions, triggers, cursors and tables.
- Used existing UNIX shell scripts and modified them as needed to process SAS jobs, search strings, execute permissions over directories etc.
- Worked with the PowerCenter team to load data from external source systems into the MDM hub.
- Migrated data from SQL Server to Netezza using the NZ Migrate utility, and constructed Korn shell driver routines (wrote, tested and implemented UNIX scripts).
- Filtered data from Transient Stage to EDW by using complex T-SQL statements in Execute SQL Query Task and in Transformations and implemented various Constraints and Triggers for data consistency and to preserve data integrity.
- Embedded hand-crafted SQL statements directly into the MicroStrategy BI platform using the Freeform SQL editor.
- Supported the ETL inbound for the legacy MDM solution, and built and debugged existing MDM outbound views, changing them according to the requirements.
- Used the customer data loaded into SQL Server to generate reports and charts with the SQL Server Reporting Services (SSRS) 2008 reporting functionality.
- Created SSIS packages to extract, transform and load data using transformations such as Lookup, Derived Column, Conditional Split, Aggregate, Pivot, Slowly Changing Dimension, Merge Join and Union All.
- Generated ETL Scripts leveraging parallel load and unload utilities from Teradata.
- Complete Software Development Lifecycle (SDLC) experience, from business analysis to development, testing, deployment and documentation.
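A minimal pandas sketch of the Type 2 slowly-changing-dimension logic referenced above (expire changed rows, insert new current versions); the column names and the single tracked attribute are simplified, hypothetical examples rather than the actual SSIS package design.

```python
# Hypothetical Type 2 SCD merge: expire changed rows, insert new versions.
import pandas as pd

HIGH_DATE = pd.Timestamp("9999-12-31")

def apply_scd2(dim, incoming, load_date):
    """Apply one load's Type 2 changes; mutates and returns the dimension."""
    current = dim[dim["is_current"]]

    # Compare incoming rows against current dimension rows on the business key.
    merged = incoming.merge(current[["customer_id", "address"]],
                            on="customer_id", how="left",
                            suffixes=("", "_dim"), indicator=True)

    # New customers, or existing ones whose tracked attribute changed.
    changed = merged[(merged["_merge"] == "left_only") |
                     (merged["address"] != merged["address_dim"])]

    # Expire the current rows for customers that changed.
    expire_ids = changed.loc[changed["_merge"] == "both", "customer_id"]
    expire_mask = dim["customer_id"].isin(expire_ids) & dim["is_current"]
    dim.loc[expire_mask, "is_current"] = False
    dim.loc[expire_mask, "effective_end"] = load_date

    # Insert new current versions for all new/changed customers.
    new_rows = changed[["customer_id", "address"]].copy()
    new_rows["effective_start"] = load_date
    new_rows["effective_end"] = HIGH_DATE
    new_rows["is_current"] = True
    return pd.concat([dim, new_rows], ignore_index=True)
```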
Environment: SSIS 2008/2012, SSRS, T-SQL, Oracle 10g, MDM, TWS (Tivoli), PL/SQL, MicroStrategy, Teradata, Toad, UNIX, shell scripting, flat files, XML files, Cognos, Netezza, SQL, SQL*Plus, Windows XP, Aginity.
Confidential
ETL Developer
Responsibilities:
- Designed and developed ETL processes using Informatica PowerCenter, and was involved in enhancing the existing Informatica and PL/SQL code and bug fixing.
- Involved in implementing the change management process while fixing or enhancing existing code.
- Collected data source information from all the legacy systems and existing data stores, and was involved in data extraction from Oracle and flat files using Informatica.
- Developed mappings using multiple sources, transformations and targets from different databases and flat files, and worked extensively with Oracle, SQL Server and flat files as data sources.
- Involved in scheduling Informatica jobs using file dependencies in the Tivoli enterprise scheduler tool.
- Wrote UNIX shell scripts to work with flat files, define parameter files and create pre- and post-session commands (a Python equivalent of the parameter-file step is sketched after this list).
- Designed and developed complex mappings involving Slowly Changing Dimensions, error handling and business logic implementation.
- Used Type 1 and Type 2 SCD mappings to update Slowly Changing Dimension tables.
- Extensively used SQL Override, Sorter, and Filter in the Source Qualifier Transformation.
- Extensively used Normal Join, Full Outer Join, Detail Outer Join and Master Outer Join in the Joiner transformation.
- Applied caching optimization techniques in the Aggregator, Lookup and Joiner transformations.
- Installed and configured Informatica on UNIX/Sun Solaris for the development, test and production environments, keeping them compatible for smooth migrations and maintenance.
- Developed shell scripts that automate the processing of delimited flat files.
- Involved in the design, development and unit testing of the ETL code as per the requirements.
- Created technical specifications to create the mappings according to the business specifications given.
- Involved in Unit, Integration, system, and performance testing levels.
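A minimal Python sketch of generating an Informatica parameter file before a workflow run, equivalent to the UNIX shell scripting described above; the folder, workflow, session and parameter names are hypothetical placeholders.

```python
# Hypothetical generation of a PowerCenter parameter file for one workflow run.
import datetime
import pathlib

PARAM_DIR = pathlib.Path("/opt/infa/param_files")   # assumed location

def write_param_file(folder, workflow, session, src_file):
    """Write a parameter file for one workflow run and return its path."""
    run_date = datetime.date.today().strftime("%Y%m%d")
    content = (
        f"[{folder}.WF:{workflow}.ST:{session}]\n"
        f"$$RUN_DATE={run_date}\n"         # mapping parameter (placeholder)
        f"$InputFile_SRC={src_file}\n"     # session parameter (placeholder)
    )
    PARAM_DIR.mkdir(parents=True, exist_ok=True)
    path = PARAM_DIR / f"{workflow}_{run_date}.par"
    path.write_text(content)
    return path

if __name__ == "__main__":
    print(write_param_file("FIN_DW", "wf_load_customers", "s_m_load_customers",
                           "/data/inbound/customers_20200101.dat"))
```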
Environment: Informatica PowerCenter, Oracle 9i/10g, SQL Server 2008, Tivoli, Toad 7.0, flat files, SQL*Loader, PL/SQL, XML, UNIX shell scripts, DB2, Business Objects XI, UNIX.