
ETL Developer Resume Profile


PROFILE:

  • Over 10 years of IT experience in designing and leading Enterprise Data Warehouse, data mart, and BI solutions across various integration areas such as data integration, application integration, business process integration, and UI integration.
  • Around 8 years of rich experience in IBM WebSphere DataStage 7.0/7.5.1/8.0/8.1/8.5/9.1.

SUMMARY:

  • Expertise in Oracle, SQL Server, Teradata, DB2 UDB and DB2 Mainframe.
  • Expertise in identifying performance-related bottlenecks in the architecture.
  • Implement validation rules and on-screen errors within the user interface. Retrieve exceptions (i.e., unmapped items) from ETL processes (e.g., trial balance loads). Integrate with TDM for seamless committing of approved changes to data.
  • Plan data analytics rules per OCC guidelines and validate reports for daily data movements and loads.
  • Ability to work with all levels within the IT and business organization, influencing business and IT to align to an Enterprise Data Roadmap.
  • Experience implementing CTR filing using NORKOM.
  • Optimization of NORKOM flows for CTR, TMO, CPR, Compliance Framework, Compliance MIS, Compliance Transformation Service, SCDM, and Transaction Monitoring Optimization.
  • As an ETL Lead/Architect/Administrator/Developer, expertise in architecting and designing with IBM WebSphere DataStage 7.0/7.5/8.1/8.5/9.1, as well as performance tuning of mappings and identifying and resolving bottlenecks at various levels such as sources, targets, mappings, and sessions.
  • Extensively worked on data extraction, transformation, and loading from various sources such as Oracle, SQL Server, DB2, MySQL, Teradata, Netezza, and flat files using DataStage, Informatica, and UNIX shell scripting.
  • As a Senior Consultant responsible for setting direction across the enterprise, and ensuring that project and program architecture aligns to the organization vision.
  • Possess strong hands-on experience and functional knowledge in Information management stream that includes Data warehousing/Business intelligence. Knowledge of best practices on Process Sequence, Dictionaries, Data Quality Lifecycles, Naming Convention, Version Control.
  • Lead discussions with client leadership explaining architecture options and recommendations.
  • Hands-on, with the ability to define and explain complex concepts, solutions, and designs to technical and business audiences.
  • As a data architect, possess knowledge of data analysis, data integration, data modeling, data warehousing, and database design.
  • Excellent Experience in Hadoop architecture and various components such as HDFS, Job Tracker, Task Tracker, NameNode, Data Node and MapReduce programming paradigm.
  • Hands on experience in installing, configuring, and using Hadoop ecosystem components like Hadoop MapReduce, HDFS, HBase, Hive, Sqoop, Pig, Zookeeper and Flume.
  • Good exposure to Apache Hadoop MapReduce programming, Pig scripting, distributed applications, and HDFS.
  • Good Knowledge on Hadoop Cluster architecture and monitoring the cluster.
  • In-depth understanding of Data Structure and Algorithms.
  • Experience in managing and reviewing Hadoop log files.
  • Excellent understanding and knowledge of NOSQL databases like MongoDB, HBase, and Cassandra.
  • Involved in setting up standards and processes for Hadoop-based application design and implementation.
  • Experience in importing and exporting data between HDFS and relational database systems using Sqoop (see the sketch after this list).
  • Experience in managing Hadoop clusters using Cloudera Manager tool.
  • Very good experience in the complete project life cycle design, development, testing, and implementation of client/server and web applications.
  • Experience in administering, installation, configuration, troubleshooting, security, backup, performance monitoring, and fine-tuning of Red Hat Linux.
  • Extensive experience working with Oracle, DB2, SQL Server, and MySQL databases.
  • Hands on experience in application development using Java, RDBMS, and Linux shell scripting.
  • Ability to adapt to evolving technology, strong sense of responsibility and accomplishment.
  • Experience dealing with unstructured data (XML files).
  • Complete domain and development knowledge of the data warehousing life cycle. Experience in design, development, and maintenance/support of enterprise applications using DWH technologies across the software development life cycle.
  • Experience in Data Manager, Scenario Manager, OFAC, ACH, and AML screening.
  • Design sustainable and optimal database models to support the client's business requirements.
  • Well versed in the following data domains: Master Data, Operational Data, Analytical Data, Unstructured Data, and Metadata.
  • Advise clients on database management, integration, and BI tools. Assist in performance tuning of databases and code.
  • Experience in the Agile methodology project execution model and expertise in coordinating teams across multiple locations.
  • Sound knowledge of installing/upgrading the Information Server product suite on different tiers (server and client), including DataStage, Information Analyzer, and Business Glossary.
  • Responsible for setting direction, and development support for the ETL development team as well as support to the developers.
  • Ability to troubleshoot and resolve infrastructure issues with Information Analyzer, Metadata Workbench, and DataStage job execution.
  • Expertise in data warehouse technologies such as IBM WebSphere DataStage and Informatica.
  • Sound in ETL design of server jobs, parallel jobs, job sequences, and shared containers using DataStage Designer.
  • Expertise in parsing XML files, Java API connectors from DataStage, and using custom-built stages.
  • Extensive knowledge of normalized and dimensional modeling techniques: dimensional data modeling, star schema modeling, snowflake modeling, fact and dimension tables, and slowly changing dimensions (SCDs).
  • Mentor development team members in design and development of complex ETL.
  • Experience in Teradata upgrade planning, analysis, and pre- and post-validation.
  • Expertise in change management tools such as CVS, VSS, and StarTeam, and experience with scheduling tools Autosys, Control-M, and UC4.
  • Experience in the MDM (Master Data Management) tool Siperian XU Hub.
  • Experience preparing Business Requirement Documents (BRD), Functional Specification Documents (FSD), and Design Specification Documents.
  • Willingness and ability to learn and adapt quickly to the emerging new technologies and paradigms.
  • Have strong analytical skills with proficiency in debugging, problem solving.
  • Strong skills in writing UNIX shell scripts and Perl scripts.
  • Sound knowledge of RDBMS concepts and extensive work with Oracle 8i/9i/10g, MySQL, Teradata, MS Access, Netezza, DB2, Microsoft SQL Server 2005, and Derby.
  • Well versed in the stored procedures, functions, triggers, etc. using SQL and PL/SQL.
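
As an illustration of the Sqoop-based HDFS import/export noted above, here is a minimal sketch of the typical commands; the JDBC URL, credentials file, table names, and HDFS paths are hypothetical placeholders rather than details from any engagement listed here.

    #!/bin/sh
    # Hypothetical Sqoop round trip: relational table -> HDFS, results -> relational table.
    # All connection details, tables, and paths below are placeholders.

    # Import a source table into HDFS as pipe-delimited text using 4 parallel mappers
    sqoop import \
        --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
        --username etl_user --password-file /user/etl/.db_pwd \
        --table CUSTOMER_TXN \
        --target-dir /data/staging/customer_txn \
        --fields-terminated-by '|' \
        --num-mappers 4

    # Export a processed HDFS directory back into a relational summary table
    sqoop export \
        --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
        --username etl_user --password-file /user/etl/.db_pwd \
        --table CUSTOMER_TXN_SUMMARY \
        --export-dir /data/marts/customer_txn_summary \
        --input-fields-terminated-by '|'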

DOMAINS WORKED ON:

  • Investment Banking
  • Life Insurance
  • Health Insurance
  • Brokerage
  • Mutual funds
  • Fraud / AML (Anti-Money Laundering)
  • Financial Crimes Risk Management
  • Auto Insurance
  • Asset Liability Management
  • Retail & Manufacturing
  • Oil & Gas

TECHNICAL SKILLS:

  • DWH Technologies: IBM WebSphere DataStage 7.0/7.5/8.0/8.1/8.5/8.7/9.1, Informatica
  • Data Modeling: Erwin 4.1
  • Databases: Oracle 10g/9i/8i, DB2, MySQL, Teradata, Netezza, SQL Server, NoSQL
  • Programming Languages: C, C++, SQL, PL/SQL, AQL, T-SQL
  • Operating Systems: Windows 95/98/NT/2000/XP, UNIX
  • Front End: HTML, XML, JavaScript, VB 6.0
  • OLAP Tools: Business Objects XI R2, Crystal Reports
  • Other Tools: Squirrel Client, TOAD, ClearCase, ClearQuest, Inspector
  • MDM Tools: Siperian XU Hub
  • Scheduling: Autosys, Control-M, UC4
  • Scripting: UNIX, Perl, HTML, Python
  • Data Analytics Tools: IBM System T, SAS, MS Composite Studio
  • Big Data Tools: Hive, Pig, Hadoop

PROFESSIONAL EXPERIENCE:

Confidential

Role: ETL Architect.

  • Goal Statement: The FDS target architecture is intended to describe the common data architectural design principles required for the Managed Integration component of the Upstream Data Foundation that supports MCBU applications. The FDS target architecture for the Managed Integration component of the MCBU Upstream Data Foundation will describe the preferred model to use to integrate data from systems of record, perform change data capture, and ensure transaction integrity. In addition, it will perform attribute besting and rationalization of master data, provide data quality filtration and business transformation, shape data based on conformed entity models, and finally ensure data is formatted for consumption based on use cases.
  • Ultimately, with a strong data quality program, utilization of standards, a fit-for-purpose governance model, and the required organizational capability, the goals of the Managed Integration common design can realize a standardized architecture based on MCBU scope and priorities. However, it must be extensible to a broader function-governed vision currently being defined within the Upstream Data Foundation objectives. In achieving this objective, BU-led innovation and global common solutions will be easier to proliferate, as the target architecture demand layer provides a consistent means to access and source data.
  • Other important benefits provided by the MCBU (Midcontinent Business Unit) data foundation:
  • Improved access to high quality data which will enhance our ability to actively monitor every barrel from our assets.
  • Improved data quality for better decision making, decreased risk, incidents and lost opportunities
  • Make high-quality data available to support growth rates in Functions and BUs
  • Key Objectives & Benefits: The MCBU Foundation Data Store (FDS) project has the following key objectives:
  • Minimize impact to existing MCBU data integration platform and try to go in for a hybrid approach for the near term.
  • FDS will increase decision quality and decrease time to market for both IT application development and operational processes, and ensure a solid foundation for MCBU data infrastructure in support of overall Upstream Foundation Data Program.
  • The FDS project will leverage the existing MCBU Data Foundation MIRA model architecture for the target architecture, thereby leveraging common design components and strategies across all CNAEP BUs.

Responsibilities:

  • Designed ETL Architectural approach.
  • Analyze the existing applications and minimize the impact on the existing applications
  • Standardize data acquisition processes
  • Comply with data governance
  • Identify data quality measures and define key DQ metrics
  • Simplify overall data integration processes by creating a central integrated physical data source from which all consuming applications will access data.
  • Build the ETL process using IBM WebSphere DataStage for data cleansing, transformation, and load.
  • Responsible for setting standards and providing governance through a process or framework, such as an Architecture Review Board.
  • Migrate existing ETLs from Microsoft Composite Studio to IBM WebSphere DataStage.
  • Created and scheduled the sequencers.
  • Involved in creating the Control-M tables to schedule the jobs; each scheduled job invokes a UNIX wrapper along the lines of the sketch after this list.
  • Implement the Data Quality process and comply with governance.
  • Mentor development team members in design and development of complex ETL and BI implementations.
  • Perform architectural assessments of the client's Enterprise Data Warehouse (EDW).
  • Responsible for building and driving alignment to an Enterprise Reference Architecture.
  • Lead discussions with client leadership explaining architecture options and recommendations.
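
The Control-M jobs referenced above typically call a thin UNIX wrapper around the DataStage dsjob client. A minimal sketch follows; the project, sequence, and parameter names are hypothetical, and the return-code handling reflects common dsjob conventions that should be verified against the installed release.

    #!/bin/sh
    # Hypothetical wrapper a Control-M job might invoke to run a DataStage sequence.
    # Project, job, and parameter names are placeholders.
    DSHOME=/opt/IBM/InformationServer/Server/DSEngine
    . $DSHOME/dsenv                      # load the DataStage engine environment

    PROJECT=FDS_INTEGRATION
    JOB=Seq_Load_Foundation
    RUN_DATE=${1:-$(date +%Y-%m-%d)}

    # -run starts the job; -jobstatus waits and returns the finishing status as the exit code
    $DSHOME/bin/dsjob -run -jobstatus \
        -param pRunDate="$RUN_DATE" \
        -param pSourceSystem=MCBU \
        $PROJECT $JOB
    RC=$?

    # Commonly 1 = finished OK and 2 = finished with warnings; treat both as success here
    if [ $RC -eq 1 ] || [ $RC -eq 2 ]; then
        echo "$JOB completed (status $RC)"
        exit 0
    fi
    echo "$JOB failed (status $RC)" >&2
    exit 1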

Environment: Oracle 10g, Microsoft SQL Server, UNIX shell scripting, PL/SQL, UNIX, IBM WebSphere DataStage 8.7, DB2, MySQL, Teradata, Mainframe, Microsoft Composite Studio, Netezza, XML, WinSCP, SQL Developer, Squirrel, Hummingbird, PuTTY, TOAD, SAP BW.

Confidential

Role: Lead ETL DataStage Consultant.

  • Problem Statement: The EDR project addresses fundamental business problems with processing Personal Income Tax (PIT) and Business Entity (BE) tax returns, including the underutilization of data and business rules to further streamline and automate return processing and generate more revenue for all business areas. Improvements include:
  • Reengineering and utilization of more data, data matching, and business rules management in return processing to reduce fallouts, make return processing faster, and improve and expand return validation.
  • North America member address standardization, and verification of member names and information using public data.
  • Data Conversion for Tax year 2013.
  • Utilization of more data, data matching, and business rules management in return and case modeling, and collections scoring to improve Filing Enforcement (FE), Audit, and Collections case and account selection.
  • Expanded data capture and imaging and elimination of paper returns and paper dependent workflow to reduce operational costs and make data available to the business areas.
  • Centralization of all FTB internal and external data into an Enterprise Data Warehouse to improve data access and availability to all business areas.
  • Creation of single view into taxpayer data via a Taxpayer Folder that integrates existing with new self-services to improve efficiency and effectiveness of business operations.
  • Implementation of reusable processes and enterprise services, including the Underpayment Modeling Process, Contact Service, and Notification Service, to reduce system redundancy and improve efficiency and effectiveness.
  • Retirement of existing systems to reduce maintenance costs.

Responsibilities:

  • Analyzed business and systems requirements and played a key role in implementing the system using DataStage as the ETL tool.
  • Designed ETL Architectural approach.
  • Responsible for the data conversion from various sources and load into the EOD (Enterprise Ongoing Database).
  • Implement data cleansing, address standardization, and name verification.
  • Implement rule sets and the data quality process using IBM QualityStage.
  • Build the MDM process to update the latest address information.
  • Gather information from IBM, USPS (United States Postal Service) files, the DMV (Department of Motor Vehicles), and the Department of Social Security (SSN) for updated addresses and member information.
  • Build the ETL process using IBM WebSphere DataStage for data cleansing, transformation, and load.
  • Responsible for setting standards and providing governance through a process or framework, such as an Architecture Review Board
  • Design sustainable and optimal database models to support the client's business requirements.
  • Parsed XML files and extensively used Java API connectors to call web services for MDM.
  • Adhering to cleanup standards using Data Clean.
  • Mentor development team members in design and development of complex ETL and BI implementations.
  • Perform architectural assessments of the client's Enterprise Data Warehouse (EDW).
  • Work with the sales team to create and deliver client proposals and demonstrations.
  • Understand the existing risk-analysis rules and develop an ETL strategy to reduce false positives.
  • Implemented project in Rapid BI and Lean Agile/Sprint methodology.

Environment: Oracle 10g, Perl, UNIX shell scripting, PL/SQL, UNIX, IBM WebSphere DataStage 8.1/9.1, DB2, MySQL, Teradata, Mainframe, BTEQ, SQL Server, Netezza, XML, WinSCP, Command Centre, StarTeam, SQL Developer, Squirrel, Hummingbird, PuTTY, Java JDK 1.6, Eclipse, ClearCase, ClearQuest, CDC Tool (Change Capture), Inspector.

Confidential

Role: Technical Lead / ETL Developer

Problem Statement: EFCM (Enterprise Financial Crimes Management) requires the ability to monitor data volume, data integrity, and data synchronization per the OCC (Office of the Comptroller of the Currency) 2011-12 guidelines to ensure proper implementation of models, effective systems integration, and appropriate use. EFCM's current data quality processes are not sufficient for regulatory reporting.

Goal statement: Improved data accuracy and data quality across EFCM data environments, and an improved ability to create and refine detection scenarios to identify suspicious financial activity.

Scope:

  • Implement Discovery Zone data sources into the analytical repository (EFAM).
  • Implement data quality processes and reports to reconcile operational environment (ECAMS) data back to legacy source systems.
  • Implement Discovery Zone data sources into the operational environment (ECAMS).
  • Acquire additional data sources into the analytical repository (EFAM).
  • Acquire additional data sources into the operational environment (ECAMS).
  • Create Data Reconciliation Data Quality reports for newly acquired data sources.

Responsibilities:

  • Responsible for building and driving alignment to an Enterprise Reference Architecture.
  • Lead discussions with client leadership explaining architecture options and recommendations.
  • Coordinating with UNIX admins and DBAs and communicating with IBM support.
  • Responsible for managing risk within an organization through a reusable framework and process
  • Responsible for creating design patterns and best practices for data and the integration of data through the enterprise.
  • Responsible for setting standards and providing governance through a process or framework, such as an Architecture Review Board
  • Design sustainable and optimal database models to support the client's business requirements.
  • Advise clients on database management, integration, and BI tools. Assist in performance tuning of databases and code.
  • Mentor development team members in design and development of complex ETL and BI implementations.
  • Perform architectural assessments of the client's Enterprise Data Warehouse (EDW).
  • Work with the sales team to create and deliver client proposals and demonstrations.
  • Understand the existing risk-analysis rules and develop an ETL strategy to reduce false positives.
  • Coordination between onsite, offshore, and nearshore teams.
  • Preparation of estimates, timelines of the deliverables, and the project execution plan.
  • Analysis of the data sources.
  • Determine the ETL job design flow for the extract, transform, load, and reconciliation processes, as well as the data quality reports.
  • Parsed XML files and extensively used Java API connectors.
  • Adhering to cleanup standards using Data Clean.
  • Worked on analyzing the Hadoop cluster and different big data analytic tools, including Pig, HBase, and Sqoop.
  • Responsible for building scalable distributed data solutions using Hadoop
  • Installed and configured Flume, Hive, Pig, Sqoop, HBase on the Hadoop cluster.
  • Managing and scheduling Jobs on a Hadoop cluster.
  • Implemented a nine-node CDH3 Hadoop cluster on Red Hat Linux.
  • Worked on installing the cluster, commissioning/decommissioning of data nodes, NameNode recovery, capacity planning, and slots configuration.
  • Set up a Hadoop cluster on Amazon EC2 using Whirr for a POC.
  • Resource management of HADOOP Cluster including adding/removing cluster nodes for maintenance and capacity needs
  • Created HBase tables to store variable data formats of PII data coming from different portfolios (see the table-creation sketch after this list).
  • Implemented best income logic using Pig scripts.
  • Implemented test scripts to support test driven development and continuous integration.
  • Responsible to manage data coming from different sources.
  • Installed and configured Hive and also written Hive UDFs.
  • Experienced in loading and transforming large sets of structured, semi-structured, and unstructured data.
  • Experience in managing and reviewing Hadoop log files.
  • Analyzed large amounts of data sets to determine the optimal way to aggregate and report on them.
  • Used agile methodologies (Scrum) for software development. Involved in daily status meetings and team code reviews.
  • Involved in story writing for the scope items.
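
For the HBase tables mentioned above, the sketch below shows one way a table for PII-style records might be created from the hbase shell on UNIX; the table name, column families, and version settings are illustrative assumptions only.

    #!/bin/sh
    # Hypothetical HBase DDL: one column family per data category, few versions kept.
    # Table and column-family names are placeholders.
    hbase shell <<'EOF'
    create 'customer_pii', {NAME => 'id', VERSIONS => 1}, {NAME => 'addr', VERSIONS => 3}, {NAME => 'txn', VERSIONS => 1}
    describe 'customer_pii'
    exit
    EOF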

Environment: Oracle 10g, Perl, UNIX shell scripting, Python, PL/SQL, Hadoop, HDFS, Hive, Pig, Flume, HBase, Sqoop, UNIX, IBM WebSphere DataStage 8.1/9.1, DB2, MySQL, Teradata, Mainframe, BTEQ, SQL Server, Netezza, XML, WinSCP, Command Centre, StarTeam, SQL Developer, Squirrel, RDC, Bridger XG, ChoicePoint, Hummingbird, PuTTY, IPMS, IQMS, NORKOM, Java JDK 1.6, Eclipse.

Confidential

Role: Technical Lead / ETL Developer.

Scope:

  1. Replicate the external brokered data (ELAH) into the new operational data source (ODS) through ETL.
  2. Provide the ODS extract as CSV files to the MPMS system for other reporting purposes.
  3. Provide required health data for MMO outlook model.
  4. Provide operational reports for Health Products Rollup (actuals only).
  5. Build reporting database in FASG warehouse.
  6. Do an ETL Load from ODS, SOGP, MAAC, HAL, EUL and MMO to FASG warehouse.
  7. Build universe for FASG warehouse.
  8. Provide operational reports for all health products (product-wise rollup).
  9. Provide required health data for MMO outlook model.

Responsibilities:

  • Gathered System requirements from the Chief Risk Officer Team.
  • Conducted Road shows and interviews with LoB SMEs.
  • Project implementation in offshore-onshore model.
  • Analyzed and identified primary, secondary, and tertiary sources.
  • Finalized preferred sources.
  • Created metadata documentation and system context diagrams.
  • Conducted design and coding peer reviews.
  • Analyzed business and systems requirements and played a key role in implementing the system using DataStage as the ETL tool.
  • Designed ETL Architectural approach
  • Implemented project in Rapid BI and Lean Agile/Sprint methodology.
  • Created FACT and Dimension load ETL via internal ETL Reusable flow framework.
  • Applied performance tuning techniques for optimizing the performance of the Data Stage jobs.
  • Extensively used DataStage parameter sets (PARAMSETs) to parameterize the ETLs.
  • Designed Data Stage Job Sequencer to automate the whole process of data loading.
  • Involved in ETL Unit Testing and also in integration testing.
  • Created job execution flows via Control-M and scheduled ETLs.
  • Created UNIX scripts in support of ETLs.
  • Supported Data modeling/dimensional modeling.
  • Provided architectural support.
  • Worked on analyzing the Hadoop cluster and different big data analytic tools, including Pig, HBase, and Sqoop.
  • Responsible for building scalable distributed data solutions using Hadoop
  • Installed and configured Flume, Hive, Pig, Sqoop, HBase on the Hadoop cluster.
  • Managing and scheduling Jobs on a Hadoop cluster.
  • Implemented a nine-node CDH3 Hadoop cluster on Red Hat Linux.
  • Worked on installing the cluster, commissioning/decommissioning of data nodes, NameNode recovery, capacity planning, and slots configuration.
  • Set up a Hadoop cluster on Amazon EC2 using Whirr for a POC.
  • Resource management of HADOOP Cluster including adding/removing cluster nodes for maintenance and capacity needs
  • Created HBase tables to store variable data formats of PII data coming from different portfolios.
  • Implemented best income logic using Pig scripts.
  • Implemented test scripts to support test driven development and continuous integration.
  • Responsible to manage data coming from different sources.
  • Installed and configured Hive and wrote Hive UDFs; a sketch of exposing staged files through Hive follows this list.
  • Experienced in loading and transforming large sets of structured, semi-structured, and unstructured data.
  • Trained business users in understanding the reports and their functionalities. Used agile methodologies (Scrum) for software development. Involved in daily status meetings and team code reviews.
  • Involved in story writing for the scope items.
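
As a companion to the Hive work noted above, here is a minimal sketch of exposing staged HDFS files as a Hive external table and running a product-level rollup from a UNIX script; the database, schema, column names, and paths are hypothetical.

    #!/bin/sh
    # Hypothetical Hive usage: external table over staged files, then a rollup query.
    # Database, table, column, and path names are placeholders.
    hive -e "
      CREATE DATABASE IF NOT EXISTS health_ods;

      CREATE EXTERNAL TABLE IF NOT EXISTS health_ods.claims_stg (
        member_id  STRING,
        product_cd STRING,
        claim_amt  DOUBLE,
        claim_dt   STRING
      )
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
      STORED AS TEXTFILE
      LOCATION '/data/staging/claims';

      -- product-wise rollup of actuals
      SELECT product_cd, SUM(claim_amt) AS total_claims
      FROM health_ods.claims_stg
      GROUP BY product_cd;
    "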

Environment: IBM WebSphere DataStage 8.1, SAP Business Objects XI, Hadoop, HDFS, Hive, Pig, Flume, HBase, Sqoop, UNIX, Windows 2000/NT, IBM UDB-DB2, Squirrel Client, Teradata, Mainframe, BTEQ, Erwin, StarTeam, Mercury Quality Center, Control-M, Version One, Shell and PERL scripting, Java JDK 1.6, Eclipse, MySQL.

Confidential

Role: Technical Lead / ETL Developer

Responsibilities:

  • Analyze the Business Requirements and Involved in Analysis, Design, Development, UAT and Production phases for new modules and enhancements of the application.
  • Understand the existing risk-analysis rules and develop an ETL strategy to reduce false positives.
  • Coordination between offshore and onsite teams.
  • Preparation of the estimates, time lines of the deliverables and project execution plan.
  • Analysis of the data sources.
  • Preparation of S2T, A&D, and design documents.
  • Created parallel jobs using data Stage Designer.
  • Extensively used complex stages like Aggregator, Lookup, Change Capture, Join, Funnel, etc.
  • Created sequential file sets and data sets as per the requirements.
  • Specified the partitioning methods per stage wherever required.
  • Debugged the jobs and resolved formatting and other data quality issues (command-line debugging along the lines of the sketch after this list).
  • Created and scheduled the sequencers.
  • Involved in creating the Control-M tables to schedule the jobs
  • Adhering to cleanup standards using Data Clean.
  • Quality Process Management using IQMS.
  • Knowledge transition about the existing systems to the new joiners.
  • Prepare and validate product architecture and design model.
  • Configuration and defect management using star team.
  • Guiding project team to prepare UTP and UTRs for the functionality testing
  • Integration of all the modules.
  • Co-ordination with third party vendor teams.
  • Involved in the project and views creation in StarTeam while moving the code to UAT and Runways
  • Involved in all the configuration management activities till code moved to production.
  • Involved in Release Management of the application and Production Support for fixing the issues.
  • Coordination with offshore teams (development and testing teams).
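
The command-line debugging mentioned above usually relies on the dsjob client to pull run status and log entries. A minimal sketch follows, with placeholder project and job names; option behaviour should be confirmed against the installed DataStage client.

    #!/bin/sh
    # Hypothetical debugging helper: summarise the last run of a DataStage job.
    # Project and job names are placeholders.
    DSHOME=/opt/IBM/InformationServer/Server/DSEngine
    . $DSHOME/dsenv

    PROJECT=AML_ETL
    JOB=Px_Load_Alerts

    # Basic run report: status, start/end times, and per-link row counts
    $DSHOME/bin/dsjob -report $PROJECT $JOB BASIC

    # Recent log entries, filtered to warnings when chasing data quality issues
    $DSHOME/bin/dsjob -logsum -type WARNING -max 50 $PROJECT $JOB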

Environment: Oracle 10g, Perl, UNIX shell scripting, Python, PL/SQL, UNIX, IBM WebSphere DataStage 8.1, DB2, MySQL, WinSCP, Command Centre, StarTeam, SQL Developer, Squirrel, Teradata, Mainframe, BTEQ, RDC, Bridger XG, ChoicePoint, Hummingbird, PuTTY, IPMS, IQMS, NORKOM.

Confidential

Role: ETL Developer

Goal statement:

  • Automate the negative news search for high-risk members by implementing the RDC solution and integrating with ECAMS to store and disposition the negative news alerts through ECAMS.
  • The "bad guys" list will be available in Bridger XG to be automatically screened against the USAA member base. Bridger XG currently hosts OFAC, FBI, and other hot lists.

Scope:

  • Automate the negative news search for high-risk members by implementing the Regulatory Data Corp (RDC) solution and integrating with ECAMS, including an interactive negative news search with ad-hoc searches within RDC.
  • High Risk Members will include:
  • High Net worth members
  • Business Entities
  • Members in the Enhanced Due Diligence (EDD) program (one-time submission)
  • An interactive negative news search to include ad-hoc searches within RDC.
  • Create a new hot list in Bridger XG to include negative news entities received from outside sources and screen active members against this list (CR).

Responsibilities:

  • Analyze and understand the existing risk-analysis rules and develop an ETL strategy to reduce false positives.
  • Coordination between onsite, offshore, and nearshore teams.
  • Automated the jobs through scheduling in TIDAL, running every day while maintaining the data validations.
  • Preparation of the estimates, time lines of the deliverables and project execution plan.
  • Analysis of the data sources.
  • Preparation of S2T, A&D, and design documents.
  • Determine the ETL job design flow for the extract, transform, load, and reconciliation processes, as well as the data quality reports.
  • Involved in creating the parallel jobs and sequencers using DataStage.
  • Adhering to cleanup standards using Data Clean.
  • Quality Process Management using IQMS.
  • Knowledge transition about the existing systems to the new joiners.
  • Prepare and validate product architecture and design model.
  • Configuration and defect management using star team.
  • Guiding project team to prepare UTP and UTRs for the functionality testing
  • Integration of all the modules.
  • Co-ordination with third party vendor teams.
  • Involved in the project and views creation in StarTeam while moving the code to UAT and Runways
  • Involved in all the configuration management activities till code moved to production.
  • Involved in Release Management of the application and Production Support for fixing the issues.

Environment: Oracle 10g, Perl, UNIX shell scripting, Python, PL/SQL, UNIX, IBM WebSphere DataStage 8.1, DB2, MySQL, WinSCP, Command Centre, StarTeam, SQL Developer, Squirrel, Teradata, Mainframe, BTEQ, RDC, Bridger XG, ChoicePoint, Hummingbird, PuTTY, IPMS, IQMS, NORKOM.

Confidential

Role: ETL Developer

Responsibilities:

  • Analysis of Business Requirements and System Requirements.
  • Providing the feasible and efficient design approach of the application to fit according to the requirements.
  • Study the application management process, procedures, identifying areas for improvements and implementing the same.
  • Work assignment from the functional and technical perspectives.
  • Determination of the resource requirements for the project.
  • Debug the existing ETL code
  • Define Future task and prioritize task according to client priority.
  • Managing delivery expectations from the client and managing concerns and escalation points from the client.
  • Define responsibilities and review deliverables.
  • Tracking project schedule and change requests.
  • Track project risk and prepare detailed mitigation plan.
  • Knowledge, process, and configuration management.
  • Effort and delivery estimations.
  • Team management and address personal and team related issues or concerns.
  • Production support request prioritization.
  • Monitoring ETL and Siperian jobs.
  • Constructing and changing the DataStage jobs as per requirements provided by the clients.

Environment: Siperian XU, DataStage 7.5x2, Autosys, TOAD, UNIX, Oracle 10g, Perl, UNIX shell scripting, PL/SQL, Hummingbird, PuTTY, DB2, MySQL, Teradata, Oracle, WinSCP, Command Centre, StarTeam, SQL Developer, Squirrel.

Confidential

Role: ETL Developer

  • SAP Retail - supports all of retail business
  • RACE supports the contract business
  • Reliable MX supports the private label Reliable catalog
  • Currently at OfficeMax, the business is divided into Retail, Contract Commercial, Enterprise, Direct, Reliable and .com and the items within these channels have individual hierarchy systems.
  • The overall objective of OfficeMax DataStage from the offshore perspective is to develop interfaces in DataStage Server followed by unit testing. The design specifications are primarily the responsibility of the onsite team, while coding and various forms of testing are performed by the offshore team. The deliverables generated by the offshore team will undergo a user acceptance test before being deployed in a live environment. The scope at offshore is to understand the design documents, develop interfaces using DataStage parallel jobs and sequence jobs, unit test the developed code, and support onsite integration testing and go-live by providing bug fixes and changes to the code.

Responsibilities:

  • Primary responsibilities include assisting project teams on ETL Architecture /Design / Development / Deployment.
  • Built interfaces and automated them with the Informatica ETL tool, PL/SQL, and UNIX shell scripting (see the wrapper sketch after this list).
  • Designed ETL processes and develop source-to-target data mappings, integration workflows, and load processes.
  • Designed, developed, and implemented enterprise-class data warehousing solutions.
  • Review of source systems and proposed data acquisition strategy.
  • Designed Sequencers to automate the whole process of data loading.
  • Provided expert solutions to users for problems in DataStage.
  • Translated the business processes into DataStage mappings.
  • Collect, analyze, and process data for business clients and senior management using multiple databases or systems, spreadsheets, and other available tools for reporting and/or decision making that have a major impact on various clients / organizations.
  • Facilitating problem solving and decision making in business groups by building business cases and providing insights through complex analysis using multiple databases or systems.
  • Collecting and analyzing process, performance, and operational data for generating reports and graphs.
  • Interpreting user requests to determine appropriate methods and data files for extracting data.
  • Designing and/or developing specific databases for collecting, Tracking, and reporting data.
  • Analyzing and interpreting the data to produce clear, concise information that aids decision making.
  • Prioritizing and coordinating requests with various stakeholders to meet deadlines.
  • Developed and maintained documentation to client standards / provide feedback on standards where they can be improved.
  • Helped in establishing ETL procedures and standards for the objects improving performance enhancements and migrated the objects from Development, QA, and Stage to Prod environments.
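
The Informatica automation mentioned above is typically wrapped in a shell script around the pmcmd client. A minimal sketch follows; the domain, service, folder, and workflow names are hypothetical, and the option syntax shown is that of later PowerCenter releases, so older clients may differ.

    #!/bin/sh
    # Hypothetical wrapper: start an Informatica workflow and wait for it to finish.
    # Domain, service, folder, and workflow names are placeholders; credentials come
    # from environment variables set by the scheduler.
    DOMAIN=Dom_ETL
    INT_SVC=IS_ETL
    FOLDER=OMX_INTERFACES
    WORKFLOW=wf_load_item_hierarchy

    pmcmd startworkflow \
        -sv $INT_SVC -d $DOMAIN \
        -u "$INFA_USER" -p "$INFA_PWD" \
        -f $FOLDER -wait $WORKFLOW
    RC=$?

    # pmcmd returns 0 when the workflow completes successfully
    if [ $RC -ne 0 ]; then
        echo "$WORKFLOW failed with pmcmd return code $RC" >&2
        exit $RC
    fi
    echo "$WORKFLOW completed successfully"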

Environment: Oracle 9i, RACE, SAS, MS-DOS scripting, IBM WebSphere DataStage 7.5, DB2, WinSCP, Command Centre, StarTeam, SQL Developer, Squirrel, Hummingbird, PuTTY.

Confidential

Role: ETL Developer

Responsibilities:

  • Translated the business processes into DataStage mappings.
  • Collect, analyze, and process data for business clients and senior management using multiple databases or systems, spreadsheets, and other available tools for reporting and/or decision making that have a major impact on various clients / organizations.
  • Facilitating problem solving and decision making in business groups by building business cases and providing insights through complex analysis using multiple databases or systems.
  • Collecting and analyzing process, performance, and operational data for generating reports and graphs.
  • Interpreting user requests to determine appropriate methods and data files for extracting data.
  • Designing and/or developing specific databases for collecting, Tracking, and reporting data.
  • Generating reports on a periodic/ad-hoc basis in accordance with business requirements.
  • Performing required due diligence before presenting reports/ outputs ensuring the integrity of data is maintained.
  • Analyzing and interpreting the data to produce clear, concise information that aids decision making.
  • Prioritizing and coordinating requests with various stakeholders to meet deadlines.
  • Developed and maintained documentation to client standards / provide feedback on standards where they can be improved.
  • Helped in establishing ETL procedures and standards for the objects improving performance enhancements and migrated the objects from Development, QA, and Stage to Prod environments.
  • Primary responsibilities include assisting project teams on ETL Architecture /Design / Development / Deployment.
  • Built Interfaces and automated with Informatica ETL tool, PL/SQL and Unix Shell Scripting.
  • Designed ETL processes and develop source-to-target data mappings, integration workflows, and load processes.
  • Designed, developed, and implemented enterprise-class data warehousing solutions.
  • Review of source systems and proposed data acquisition strategy.
  • Designed Sequencers to automate the whole process of data loading.
  • Provided expert solutions to users for problems in DataStage.

Environment: DataStage 7.5, UNIX, Windows 2000/NT, IBM UDB-DB2, DBArtisan, StarTeam, Mercury Quality Center, Autosys, Shell and PERL scripting.

Confidential

Role: ETL Developer

Responsibilities:

  • Translated the business processes into DataStage mappings.
  • Developed and implemented technical best practices for data movement, data quality, data cleansing, and other ETL related activities. Developed and ensured adherence to globally defined standards for all developed components.
  • Provide weekend production support in rotation.
  • Database design of relational and dimensional models using Erwin.
  • Support the database architecture team to accomplish enterprise objectives.
  • Design and implement data models according to the project requirement.
  • Participate in data architecture strategy and policy making initiatives.
  • Archive, recover and delete data as directed by the seniors.
  • Maintain and prepare documentation of the data architectural process and progress.
  • Report to the senior architect with regular job progress.
  • Drilled into the data / profile the data / sort out what data will work.
  • Assessed scope and assist in determinations of what can be done in the time and budget available / Determine max value in time and budget available.
  • Designed a solid foundation that can scale and grow.
  • Developed and maintained documentation to client standards / provide feedback on standards where they can be improved.

Environment: DataStage 7.5, UNIX, Windows 2000/NT, IBM UDB-DB2, DBArtisan, StarTeam, Mercury Quality Center, Control-M, Shell and PERL scripting.

Confidential

Role: ETL Developer

Responsibilities:

  • Prepared documentation including requirement specifications.
  • Installed and configured PowerCenter client and server tools.
  • Creating dimensions and facts in the physical data model.
  • Documented the metadata to be captured from the sources.
  • Created sessions and batches to extract the data.
  • Involved in the creation of jobs using Workflow Manager to validate, schedule, run, and monitor the jobs.
  • Involved in designing the procedures for getting the data from all systems to Data Warehousing system.
  • Performed production support activities.
  • Handled CRs (Change Requests).
  • Performed impact analysis during the system enhancements.
  • Requirements Gathering from End Users.
  • Prepared Design Document for DQD Implementation.
  • Developed ETL mappings, Transformations and Loading using Informatica Power Center 6.1.
  • Involved in scheduling using the Autosys scheduling tool.
  • Developed JIL (Job Information Language) scripts in the UNIX environment; a sketch of a JIL definition follows this list.
  • Extensively used ETL to load data from Relational Databases.
  • Involved in Developing and testing all the Informatica mappings using Lookups and Update Strategy and other transformations.
  • Developed mappings using different types of transformations like source qualifier, Stored procedure, expression, filter, update strategy, lookup, and sequence generator.
  • Involved in Unit Testing.
  • Used the Power center Designer to develop processes for extracting, cleansing, transforming, integrating, and loading data into data warehouse database.
  • Assisted warehouse designer while designing the models.
  • Created Unix Shell Scripts to automate the process.
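
For the JIL scripting referenced above, here is a minimal sketch of loading an Autosys command-job definition from UNIX via the jil utility; the job name, machine, owner, schedule, and script paths are hypothetical placeholders.

    #!/bin/sh
    # Hypothetical example: define (or update) an Autosys command job through JIL.
    # Job name, machine, owner, schedule, and paths are placeholders.
    jil <<'EOF'
    insert_job: dqd_daily_load
    job_type: c
    command: /apps/etl/scripts/run_dqd_load.sh
    machine: etlhost01
    owner: etlbatch
    start_times: "02:30"
    days_of_week: mo,tu,we,th,fr
    std_out_file: /apps/etl/logs/dqd_daily_load.out
    std_err_file: /apps/etl/logs/dqd_daily_load.err
    EOF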

Environment: Informatica PowerCenter 6.1/7.0, Windows NT, Sybase, UNIX and PERL scripting, Autosys, flat files, DBArtisan, JIL scripts.

ADDITIONAL INFORMATION

Technical Skills:

  • ETL: IBM InfoSphere DataStage 7.0/7.5.1/8.0/8.1/8.5/8.7/9.1, Informatica PowerCenter 6.1
  • OLAP: Microsoft SQL Server Analysis Services (SSAS) 2005, SAS Enterprise Guide 4.2/4.3

Business Intelligence:

Microsoft SQL Server Reporting Services (SSRS) 2005, SAP Business Objects XI 3.1/XI R2/XI/6.5, SAP BusinessObjects Dashboards (formerly Xcelsius).

Business Analysis:

  • Functional and system requirements gathering, user road shows and interviews, business requirements gathering, process flow diagrams, data flow diagrams, system context diagrams, systems analysis, data analysis, system requirements analysis, design, data modeling, data architecture, data flow, data cleansing.

Data Warehousing & Data Modeling:

  • Dimensional data modeling using the Ralph Kimball methodology, star schema modeling, snowflake modeling, fact and dimension tables, physical and logical data modeling, entity-relationship modeling, ERWIN 7.x/8.x, Visio, materialized views, partitioning, advanced SQL tuning, parallel execution, table and index compression, change data capture, and slowly changing dimensions (SCDs).
  • Databases: Oracle 9i/8i, IBM UDB DB2 V8.0/V9.0, MS SQL Server 2005, MS SQL Server 2008 R2, 12.x/11.x, MS Access, MySQL, Teradata, Netezza.
  • Programming / UNIX Shell Scripting: Perl programming, SQL, PL/SQL, Transact-SQL, VB, C++, ASP.NET.
  • Vendor Tools: IBM Cognos Incentive Compensation Management V7.x and V8.0 (formerly Varicent SPM) and SunGard Protegent Compliance, Business Objects XI.
  • Other Tools & Utilities: Borland StarTeam, Control-M 6.3/7.0, PVCS, Visual SourceSafe, Mercury Quality Center, Autosys, Version One, Embarcadero DBArtisan 7.1/7.2, Squirrel SQL Client 3.4, TOAD, SQL Developer.
