Data Engineer Resume
Cincinnati, Ohio
SUMMARY
- 10+ years of IT experience in the Software Development Life Cycle, involving Requirements Gathering, Design Analysis, Development, Maintenance, Implementation and Testing.
- 9+ years of hands-on experience with ETL tools: Informatica 8.x/9.x/10.x, Informatica Cloud (IICS) and IBM DataStage 7.x/8.x/9.1/11.3, with special emphasis on system analysis, design, development and implementation of ETL methodologies.
- 8+ years of experience writing complex SQL queries, including stored procedures, functions and triggers, to implement business rules and validations in various environments.
- Set up Virtual Machines on Google Cloud through Compute Engine (GCE) and Kubernetes Engine (GKE) instance templates to deploy applications for testing.
- Enabled networking in a Virtual Private Cloud (VPC) to establish connectivity using firewall rules for proper and secure data exchange.
- Scheduled deployment of multiple machines with startup scripts on Google Cloud Platform (GCP) using Deployment Manager templates written in Jinja and Python (see the sketch after this list).
- Strong experience in Dimensional Modeling using Star and Snowflake schemas to build Fact and Dimension tables. Implemented SCDs (Slowly Changing Dimensions) and CDC (Change Data Capture).
- Expertise in Database Design and building Enterprise Data Warehouse/Data Marts, and Multiple Dimensional Modeling using Erwin and MS-Visio.
- Responsible for performance tuning at the SQL query level to satisfy business needs and meet SLAs.
- Expertise in working with various operational sources like Microsoft Azure SQL, DB2, SQL Server, Oracle, Teradata, Flat Files into a staging area.
- Expertise in creating Informatica Mapplets/Worklets/Mappings/Workflows for complex data transformations.
- Worked with the IBM InfoSphere DataStage Administrator to understand environment issues and differences in tool versions between lower regions.
- Scripting in multiple languages and environments: UNIX/Linux shell, Perl, AIX, DOS and VBScript.
- Assisted in deploying Informatica applications on Windows servers, creating project folders and authorizing user access.
- Experience in data transformation and cleansing using IDQ (Informatica Data Quality), capturing rejects and exceptions, and error reporting.
- Hands-on experience with several BI, reporting and analytics tools such as OBIEE, SSAS, SSRS, Business Objects and Cognos.
- Completed the IBM Blockchain course; hands-on experience with Hyperledger Composer and Fabric to create a shared ledger across multiple accounts.
- Working experience in interacting with business analysts and developers to analyze the user requirements, functional specifications and system specifications.
- Excellent communication and organizational skills; outgoing, self-motivated, takes initiative, eager to learn, hardworking; able to work independently or cooperatively in a team.
- Experienced with Teradata utilities such as FastLoad, MultiLoad, TPump, FastExport and BTEQ.
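The GCP bullets above mention Deployment Manager templates written in Jinja and Python for scheduled multi-machine deployments. A minimal, hedged Python template sketch of that idea follows; the VM names, zone, machine type and startup script are hypothetical placeholders, not the project's actual templates.

```python
"""
Hedged sketch of a Google Cloud Deployment Manager Python template that creates
several test VMs with a startup script. All names and values are placeholders,
not the actual project templates.
"""

COMPUTE_URL_BASE = "https://www.googleapis.com/compute/v1/"
STARTUP_SCRIPT = "#!/bin/bash\napt-get update\n"   # placeholder startup script


def generate_config(context):
    """Return the resource list that Deployment Manager should create."""
    project = context.env["project"]
    zone = context.properties.get("zone", "us-central1-a")
    count = context.properties.get("count", 2)

    resources = []
    for i in range(count):
        resources.append({
            "name": "test-vm-" + str(i),               # hypothetical VM name
            "type": "compute.v1.instance",
            "properties": {
                "zone": zone,
                "machineType": COMPUTE_URL_BASE + "projects/" + project
                               + "/zones/" + zone + "/machineTypes/n1-standard-1",
                "disks": [{
                    "boot": True,
                    "autoDelete": True,
                    "initializeParams": {
                        "sourceImage": COMPUTE_URL_BASE
                                       + "projects/debian-cloud/global/images/family/debian-11",
                    },
                }],
                "networkInterfaces": [{
                    "network": COMPUTE_URL_BASE + "projects/" + project
                               + "/global/networks/default",
                    "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
                }],
                "metadata": {
                    "items": [{"key": "startup-script", "value": STARTUP_SCRIPT}],
                },
            },
        })
    return {"resources": resources}
```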
TECHNICAL SKILLS
Data Warehouse: Informatica Powercenter 10.1/10.2/10.4, Informatica Power Exchange, Informatica Administration Console, IDQ 10.x, Informatica Developer 10.x, SnowFlake Enterprise Edition, Informatica Intelligent Cloud Services (IICS)
CRM: Salesforce, Visual Force
Dimensional Modeling: ERwin R8.1, MS-Visio.
Reporting Tools: Cognos 8.x/7.x, MicroStrategy, Tableau 10.3
Databases: Teradata R12 & R13, Oracle 12c/11g/10g/9i, DB2, MS SQL 2005/2000/7.0, Informix, MS Access 97/2000/2007, Sqoop, Hive, MySQL 5.6, Toad 12.8, Microsoft Azure SQL
Languages: SQL, PL/SQL, C, C++, Perl 5.10.1, Java 1.6/1.7/1.8, .Net, VB
Scripting languages: Shell scripts (C Shell, K Shell, and Bourne Shell), AWK, VI, and SED, Python 2.7, Python 3.4, Pycharm, Spyder, iPython Jupyter Notebook (Anaconda)
Operating Systems: HP-UX, IBM-AIX, Sun Solaris, Red Hat Enterprise Linux 6.10 (Santiago), Linux, Windows 2000/XP/2003/Vista, Windows 10, Mac Mojave 10.14.4
Other Tools and Applications: SSRS, SQL Plus, SQL*Loader, SQL Developer, Minitab, HTML, Putty, WinSCP, IP Switch, OIPA Rules Palette, MS Visio, IBM Developer for Z systems (IDZ) v14.0.0.6, GIT Version Controller 8.6
Scheduling Tools: Autosys, Tivoli, CA WA Workstation (ESP installed on mainframe), Informatica Scheduler, Run Deck v2, Crontab, Control-M
Cloud apps: Google Cloud - Compute Engine (GCE), App Engine (GAE), Kubernetes Engine (GKE), BigQuery, Cloud SQL, Cloud Spanner, Cloud Bigtable, Cloud Storage; AWS - S3, EC2, EBS, Athena, Lambda; Matillion
PROFESSIONAL EXPERIENCE
Confidential - Cincinnati, Ohio
Data Engineer
Responsibilities:
- Design and develop ETL jobs using Informatica PowerCenter Designer tools such as Source Analyzer, Workflow Designer, Mapping Designer and Mapplet Designer, and transformations such as Normalizer, Stored Procedure, Aggregator, Joiner, Lookup (Connected & Unconnected), Source Qualifier, Filter, Update Strategy, Router and Expression. Analyze existing ETL jobs and update them to incorporate recent regulatory rules on current hardware standards while preserving adequate performance.
- Collaborate with the team to understand the business value of the data and suggest better ways to store it in the data warehouse as a Star or Snowflake schema with normalization forms (1NF, 2NF, 3NF), so that it is available for statistical analysis and reporting to stakeholders.
- Create and deploy ETL jobs using Informatica 10.x to transform source-system data, handling challenges such as datatype mismatches, incorrect data and missing values. Remove junk such as Ctrl+M and other unreadable/ASCII characters by converting files to UTF-8 using Unix (sh/ksh/csh/bash) or Python scripts (this cleanup step also appears in the sketch after this list).
- Create SQL and Informatica jobs with complex joins while ensuring high throughput/performance. Tune performance by creating database indexes when the bottleneck is at the source, or by dropping/recreating indexes or using a bulk loader when the bottleneck is at the target database. Improve mapping performance by splitting a mapping into multiple mappings and using staging tables where necessary.
- Analyze and implement SCD-II logic through Informatica 10.x in the target warehouse to maintain history. Implement the required staging, data mart and data warehouse tables with the proper SQL statements, such as CREATE/ALTER TABLE, procedures, functions, indexes, CREATE/REPLACE VIEW and materialized views, and identify columns and constraints.
- Enhance and implement the best possible batch-scheduling solution using Control-M, with no user impact or minimal outage.
- Write email notifications using Informatica 10.x and mail utilities in Unix/Linux, and incorporate ticketing logic to raise prioritized tickets in ServiceNow to inform stakeholders of batch-processing failures. Troubleshoot and resolve application system errors with the application support team using Informatica Monitor, session logs, workflow logs, the Informatica 10.x Debugger, echo/print statements in Unix/Python, and the PL/SQL procedure debugger with DBMS_OUTPUT.
- Work with Business System Analysts to define specifications, create tasks/story cards in Agile tools such as Jira, and update progress at each stage: identification, analysis, development, testing, business sign-off, implementation and post-production.
- Create Change Requests/Tickets (CRs) in ServiceNow and deployment plans for go-live; provide detailed deployment instructions to the IT implementation team. Prepare checkpoints/milestones for validating the implementation at different intervals.
- Created external tables in AWS Athena over files recently loaded into S3 so that users could consume them.
- File loading was performed with Python scripts using the Boto3 module to connect to AWS (see the sketch after this list).
- Create and schedule Sqoop jobs in the Big Data ecosystem to ingest source data from an Oracle 12c database into the AWS Hadoop cluster, using SQL queries with filters to select only the required columns and records from the source database.
- Designed and developed minor data transformations in Python and deployed them as event-driven functions through AWS Lambda for a serverless implementation.
- Convert XML and JSON semi-structured sources using the Snowflake FLATTEN and LATERAL FLATTEN functions on VARIANT fields. The flattened source is transformed into a DataFrame using utilities such as Pandas and NumPy and loaded into the warehouse using loader modules such as SQLAlchemy, PyMySQL, MySQL and PyDB2.
- Matillion jobs were used to load multiple exports from Snowflake into S3; ELT was then performed on the S3 files and the results were loaded into the target Oracle database.
- Extensively worked on Spark jobs that use RDDs to transform the input by identifying the delta of changed records and loading it into a DataFrame; the DataFrame result was then scheduled to load the Hive warehouse.
- Collaborated in developing Spark jobs that identify and extract delta records from NoSQL databases such as MongoDB and FTP them to the ETL landing zone for downstream processing.
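Two of the bullets above describe cleaning junk characters out of source extracts and loading the files into S3 with Boto3 so that Athena external tables can query them. A minimal, hedged Python sketch of those two steps follows; the bucket, prefix and file names are hypothetical placeholders, and AWS credentials are assumed to come from the standard environment or profile.

```python
"""
Hedged sketch of the file-cleanup and S3 load steps described above.
Bucket, prefix and file names are placeholders, not project values.
"""
import boto3

BUCKET = "example-landing-bucket"   # hypothetical bucket name
PREFIX = "athena/claims/"           # hypothetical S3 prefix queried by Athena


def clean_to_utf8(in_path, out_path):
    """Strip Ctrl+M (carriage returns) and re-encode the file as UTF-8."""
    with open(in_path, "rb") as src:
        raw = src.read()
    text = raw.decode("utf-8", errors="replace")  # replace unreadable bytes
    text = text.replace("\r", "")                 # drop Ctrl+M characters
    with open(out_path, "w", encoding="utf-8", newline="\n") as dst:
        dst.write(text)


def upload_to_s3(local_path, key):
    """Upload the cleaned file to S3 so the Athena external table can query it."""
    s3 = boto3.client("s3")
    s3.upload_file(local_path, BUCKET, PREFIX + key)


if __name__ == "__main__":
    clean_to_utf8("extract_raw.dat", "extract_clean.dat")   # placeholder file names
    upload_to_s3("extract_clean.dat", "extract_clean.dat")
```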
Confidential - Cincinnati, OH
Data Engineer
Responsibilities:
- Create Microsoft Azure SQL Server Procedures to load the data from Staging to target warehouse
- Efficiently created Informatica workflows using transformations such as Expression, Router, Sorter, Aggregator, Lookup (Unconnected & Dynamic), Normalizer and Web Service to extract data from the Azure SQL source to Mainframes.
- Imported Mainframe VSAM data maps into PowerExchange for existing sources to enable connectivity to Informatica.
- Designed Informatica Mappings, Tasks and Task Flow using Informatica Cloud Data Integration.
- Created Autosys JILs to enable scheduling of jobs based on calendar events, including dependencies and job status.
- Worked on multiple areas throughout the lifecycle of projects including Requirements Gathering, Mappings, Designing and Testing coordination.
- Performed walk through of implementation plan to provide sequence of events and milestone details for end clients to make them aware about recent changes and features.
- Conducted design review meetings to maintain organization standards for Informatica transformation features, naming standards, SQL Server procedure efficiency and Unix scripting programs.
- Identify CDC using PowerExchange and feed the changes into the target warehouse built on Microsoft Azure.
- Created charts and graphs in Jupyter Notebook using the Python Seaborn module to help the team visualize the variety and range of data expected.
- Identify parameters in existing code to externalize into param files, enabling deployment of the Informatica ETL code to the various testing regions without issues.
- Created an audit Informatica worklet to update batch status, including start and end times; the worklet dynamically captures error codes and success status for each job.
- Working with multiple teams in capturing the field level logic for Target layout.
- Created Informatica PowerExchange data maps to transform data into Mainframe VSAM sequential files.
- Set up Virtual Machines on Google Cloud through Compute Engine and GKE instance templates to deploy applications for testing.
- Enabled networking using VPC to establish connectivity using firewall rules for proper and secured data exchange.
- Created Informatica mappings through cloud technology (IICS) with advanced connections to the Microsoft Azure platform.
- Applied standardization rules to files loaded in Hadoop using Informatica Hadoop connections, and created Spark jobs to perform rule-based changes such as date formatting, decimal formatting, and removal of unwanted fields and records.
- Created Hive tables from Spark jobs to load transformed/standardized data into the higher-level region and verified that it matched target system requirements.
- Created IPython scripts in Jupyter notebooks to find the most recently modified Informatica param files/text files in a given region (see the sketch after this list).
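A small, hedged sketch of the "most recently modified param files" idea from the last bullet; the directory and file pattern are hypothetical placeholders.

```python
"""
Sketch of finding the most recently modified Informatica param files in a region.
The directory path and file pattern are placeholders, not project values.
"""
from pathlib import Path
from datetime import datetime

PARAM_DIR = Path("/apps/informatica/params")   # hypothetical region directory
PATTERN = "*.parm"                             # hypothetical param-file pattern


def latest_param_files(directory, pattern, count=10):
    """Return the `count` most recently modified files matching `pattern`."""
    files = sorted(directory.glob(pattern),
                   key=lambda p: p.stat().st_mtime,
                   reverse=True)
    return files[:count]


if __name__ == "__main__":
    for f in latest_param_files(PARAM_DIR, PATTERN):
        modified = datetime.fromtimestamp(f.stat().st_mtime)
        print("{:%Y-%m-%d %H:%M:%S}  {}".format(modified, f.name))
```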
Confidential - Cincinnati, OH
Advanced Software Developer
Responsibilities:
- Involved in the Agile development process with the team, integrating Scrum practices from project definition through post-release; facilitated planning, daily scrum and sprint review meetings, proactively removed impediments, and reported status in the Scrum of Scrums.
- Analyzing user requirements and defining functional specifications using Agile Methodologies
- Developing and implementing complex data warehouse and data mart solutions using ETL tool Informatica BDM 10.1.1
- Built a file watcher script to hold the batch load until file arrival; based on the SLA, an incident was cut to inform the business team about the delay.
- Create ETL jobs to load flat-file data into the target region as dimension and fact tables; legacy PowerCenter jobs are rewritten using the latest version of Informatica (Informatica Developer/BDM).
- Extracting and loading data between legacy systems and Informatica PowerCenter.
- Expertise in converting PowerCenter jobs to Python scripts using modules such as SQLAlchemy, Paramiko, MySQL and PyMySQL.
- Expertise in converting data into DataFrames using Python Pandas.
- Studying, analyzing and developing database schemas such as Star Schema and Snowflake Schema.
- Leading multiple modeling, simulations and analysis efforts to uncover the best data warehouse solutions.
- Developing Unix shell scripts for analyzing and directing data feeds
- Identify the relationships between multiple dimensions and the fact table. Perform load balancing between jobs to make efficient use of infrastructure resources.
- Identified CDC through PowerExchange and applied SCD-II using transformations; SCD-II is maintained to record the history of changes to the dimensions.
- Identified CDC using an MD5 hash comparison between source and target records and created a mapping to load only new or updated records (see the sketch after this list).
- Creating data pipelines using third-party tools such as Talend, Databricks, Azure Functions and AWS functions.
- Leveraging the data harvested to correlate, quantify and guide remediation activities through models to measure data quality without rules
- Profile data to identify the volume, variety and special characters in the source. Create the necessary transformations using Informatica PowerCenter & BDM to load the data into the target data warehouse.
- Creating database procedures, packages, triggers and views.
- Developing and implementing test validations of the data warehouse and data marts.
- Analyzing test results and recommending modifications to meet project specifications.
- Dimensional data modeling using Star/Snowflake schemas, fact and dimension tables, and physical and logical data modeling.
- Worked on creation and deployment of ETL jobs to the Hadoop environment. Strong understanding of the Hadoop ecosystem with hands-on Spark, MapReduce, Hive, Pig, MySQL, YARN and Zookeeper.
- Deploying applications in Informatica and migrating the applications to different environments.
- Serving as a technical resource for direct communication with team members during project development, testing and implementation.
- Documenting modifications and enhancements made to the Data Warehouses and data marts as required by the project.
- Skilled in SQL with ability to create complex joins and ensure high throughput/performance
- Highly skilled in Python, working with APIs and JSON data sets.
- Creating reports using Tableau based on Business requirements.
- Modify and rewrite existing Informatica ETL jobs in Snowflake SQL. Implement SCD-II logic in the target warehouse to maintain history.
- Utilized the REST API and Web Service Consumer to call APIs and direct the outputs as JSON and XML.
- Parsers (Any-to-XML) and Serializers (XML-to-Any) were used to transform JSON and XML inputs into the outputs required by the business.
- Scheduling of Informatica Power Center Jobs, Developer Jobs and Python Scripts through Run Deck based on crontab arguments.
- Set up Virtual Machines on Google Cloud through Compute Engine (GCE) and Kubernetes Engine (GKE) instance templates to deploy applications for testing.
- Enabled networking in a Virtual Private Cloud (VPC) to establish connectivity using firewall rules for proper and secure data exchange.
- Scheduled deployment of multiple machines with startup scripts on Google Cloud Platform (GCP) using Deployment Manager templates written in Jinja and Python.
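The MD5-based CDC bullet above compares a hash of the non-key columns against the hash stored with the target record to decide whether a row is new, changed or unchanged. The real logic was built in Informatica mappings; the Python sketch below only illustrates the idea, with hypothetical column names.

```python
"""
Hedged Python illustration of the MD5-based CDC idea; the production logic was
implemented in Informatica mappings, and these column names are hypothetical.
"""
import hashlib

DELIM = "|"  # delimiter keeps adjacent column values from colliding


def row_hash(row, columns):
    """MD5 hash over the concatenated, delimited non-key columns of a record."""
    payload = DELIM.join(str(row.get(col, "")) for col in columns)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()


def classify(source_row, target_hashes, key, columns):
    """Return 'insert', 'update', or 'unchanged' for one source record."""
    new_hash = row_hash(source_row, columns)
    old_hash = target_hashes.get(source_row[key])
    if old_hash is None:
        return "insert"
    return "unchanged" if old_hash == new_hash else "update"


if __name__ == "__main__":
    cols = ["name", "city", "plan_code"]   # hypothetical non-key attributes
    existing = {"C001": row_hash({"name": "A", "city": "X", "plan_code": "P1"}, cols)}
    incoming = {"cust_id": "C001", "name": "A", "city": "Y", "plan_code": "P1"}
    print(classify(incoming, existing, "cust_id", cols))   # -> update
```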
Confidential - Columbus, OH
Application Developer
Responsibilities:
- Developed source-to-target mapping documentation based on business requirement specifications; stayed up to date with ETL development standards and best practices and applied the concepts in job design and construction; self-documented ETL jobs to comply with metadata standards.
- Prepared Detailed Technical, Low Level design documents.
- Wrote SQL scripts to create and drop the indexes that are passed as parameters in the pre- and post-sessions.
- Involved in creating Cursor Scripts for loading bulk volume data into Target DB.
- Worked on changes in CA WA Workstation (ESP), using IBM Developer for z/OS (IDZ), to schedule Informatica jobs along with other job dependencies.
- Worked with Perl DBI modules to load multiple tables under different schemas; achieved this through table_info() and appending the schema name to the table name.
- Created a shell script to generate INSERT statements and load them into the Oracle database; also used the sqlldr utility to load bulk volumes of data.
- Ability to design and support the development of a data platform for data processing (data ingestion and data transformation) and data repository using Big Data Technologies like Hadoop stack including HDFS cluster, PIG, Spark, Hive and Impala.
- Created Informatica mappings that are invoked through web services whenever the application requires values from the database.
- Created various reports (summary reports, matrix reports, pie charts, dashboards) and setup report folders to authenticate users based on their profiles (permissions).
- Used an XML parser to read inputs from web service requests and generated outputs from endpoints.
- Handled defect fixes on Functional Language Code which has all mathematical calculations related to premium payments.
- Involved in quoting system (VBI, Hueler and PAQS) defect fixes; worked on multiple platforms: Oracle Forms, VPMS and XML.
- Worked on an IAM compliance project to remediate all IDs used in the application.
- Identify and replace IDs that are out of compliance by correcting roles, permission groups and naming standards, enforcing password expirations, and allocating ownership.
- Experienced in implementing IAM rules to bring the application into compliance.
- Worked on defects in the Oracle Insurance Policy Administration (OIPA) application for products related to Deferred Annuities and Immediate Annuities.
- Worked on an OIPA enhancement to implement premium tax computation logic for each US state.
- Supported P0/P1 incidents for the OIPA application during batch failures; provided implementation support, including on weekends.
- Created a procedure to mask data before loading it into lower regions, which helps iterate test scenarios while avoiding PII violations (see the sketch after this list).
- Debugging and issue fixing in VPMS; actuarial product models were developed using VPMS, with functional scripting through Scala.
- Creating tickets with criticality in ServiceNow (SNOW) and allocating them to assignment groups.
- Resolve issues based on ticket SLAs and report those requiring code changes as defects through ALM/QC.
- Created projects in SoapUI to test web service calls and validate the outputs; also invoked Informatica web services.
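The data-masking bullet above was implemented as a database procedure; purely as an illustration of the idea, the hypothetical Python sketch below masks a few PII-style fields before rows reach a lower region. Column names and masking rules are placeholders.

```python
"""
Hedged Python illustration of masking PII before loading to a lower region.
The actual work was done in a database procedure; names and rules are hypothetical.
"""
import hashlib
import random


def mask_ssn(ssn):
    """Keep the last four digits, mask the rest."""
    digits = ssn.replace("-", "")
    return "***-**-" + digits[-4:]


def mask_email(email):
    """Replace the local part with a deterministic hash so joins still line up."""
    local, _, domain = email.partition("@")
    hashed = hashlib.sha256(local.encode("utf-8")).hexdigest()[:8]
    return hashed + "@" + domain


def mask_row(row):
    """Return a copy of the row with sensitive fields masked."""
    masked = dict(row)
    if "ssn" in masked:
        masked["ssn"] = mask_ssn(masked["ssn"])
    if "email" in masked:
        masked["email"] = mask_email(masked["email"])
    if "dob" in masked:
        # randomize the day within the month to keep distributions realistic
        year, month, _ = masked["dob"].split("-")
        masked["dob"] = "{}-{}-{:02d}".format(year, month, random.randint(1, 28))
    return masked


if __name__ == "__main__":
    sample = {"ssn": "123-45-6789", "email": "jane.doe@example.com", "dob": "1980-07-15"}
    print(mask_row(sample))
```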
Confidential - Cranston, RI
ETL Sr Lead/Consultant - Onsite
Responsibilities:
- Created, Modified, Tested, implemented and supported various Data Warehouse applications (Datastage 8.7, UNIX, Teradata R13) to feed company Data Warehouse from legacy source systems (SQL Server, VSAM, Oracle, Flat files) and maintain data integrity throughout different platforms.
- Responsibilities included analysis of user requirements; creating specifications, modification of existing programs and creating new ones on both Datastage and UNIX sites; coding, testing and implementation.
- Assisted Datastage Admin to create environment variables and projects using Datastage Administrator.
- Extensively used Parallel Stages like Transformer, Complex flat file stage, Join, Lookup, Filter, Aggregator, Modify, Copy, Sort, Funnel, Row Generator, Teradata connector stage and Peek for development and debugging purposes.
- Created support documents for the offshore team. Conducted Knowledge Transfer sessions to help the offshore team understand the process.
- Deployed projects from development environment to Pre-prod and from Pre-prod to Production.
- Prepared mapping documents and high-level documents.
- Designed DataStage Parallel jobs involving complex business logic, update strategies, transformations, filters, lookups and necessary source-to-target data mappings to load the target.
- Migrated previously developed projects from Datastage 7.5 to 8.5 and later to 11.3.
- Involved in the Agile development process with the team, integrating Scrum practices from project definition through post-release; facilitated planning, daily scrum and sprint review meetings, proactively removed impediments, and reported status in the Scrum of Scrums.
- Strong Knowledge of Data Profiling and Data Quality.
- Extensively used DataStage Director to import/export metadata and DataStage components between projects.
- Extensively wrote user-defined SQL to override the auto-generated SQL queries in DataStage.
- Worked on the iWay Connector Extract stage for multiple DataStage jobs.
- Created complex Teradata R13 SQL queries, stored procedures and triggers with multiple join conditions, keeping performance as a primary goal.
- Developed UNIX Shell scripting as part of file manipulation, Scheduling and text processing.
- Optimized SQL queries for maximum performance.
- Implemented SCD-II logic mappings and also used various stages like Transformations, Change Capture, Filter, Sequence Generator, Aggregator, Merge, etc.
- Creating SharePoint form pages to track requests till completion.
- Developed user defined Routines and Transforms by using Datastage Basic language.
- Maintaining Mainframe backups and handling JCL execution.
- Ability to design and support the development of a data platform for data processing (data ingestion and data transformation) and data repository using Big Data Technologies like Hadoop stack including HDFS cluster, PIG, Spark, Hive and Impala.
- Worked on the Autosys scheduling tool for executing and monitoring Informatica jobs in the Dev and Test environments.
- Configured Informatica Cloud Real Time (ICRT) to expose processes as APIs that third-party applications can consume and receive responses from (see the sketch after this list). Used Teradata utilities such as MLOAD, FLOAD and TPUMP to perform bulk loads.
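The ICRT bullet above exposes a process as an API that third-party applications call; the minimal Python client sketch below is illustrative only, with a placeholder endpoint URL, payload and credential.

```python
"""
Hypothetical client sketch for calling a REST API exposed through ICRT.
URL, payload and credentials below are placeholders, not project values.
"""
import requests

ENDPOINT = "https://example.invalid/icrt/process/accountSync"  # placeholder URL
API_KEY = "replace-with-real-key"                              # placeholder credential


def call_process(payload):
    """POST the request payload and return the JSON response from the exposed process."""
    response = requests.post(
        ENDPOINT,
        json=payload,
        headers={"Authorization": "Bearer " + API_KEY},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(call_process({"accountId": "A-1001", "action": "refresh"}))
```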
Confidential - Cranston, RI
ETL Sr Lead/ Consultant
Responsibilities:
- Developed source-to-target mapping documentation based on business requirement specifications; stayed up to date with ETL development standards and best practices and applied the concepts in job design and construction; self-documented ETL jobs to comply with metadata standards.
- Used Informatica as an ETL tool to extract data from source systems Mainframes, SQL, Oracle and DB2.
- Extensively used complex transformations like Aggregator, Joiner, Lookup (Connected & Unconnected), Source Qualifier, Filter, Update Strategy, Stored Procedure, Router, and Expression.
- Prepared Detailed Technical, Low Level design documents & Unit Test case documents for review and records.
- Worked on the Autosys tool for scheduling and monitoring the Informatica job in Dev and test environments.
- Designed mappings involving complex business logic, update strategies, transformations, filters, lookups and necessary source-to-target data mappings to load the target.
- Worked with a data verification tool to analyze the data during testing.
- Extensively used Informatica Designer, Workflow Manager & Monitor between projects.
- Extensively wrote user-defined SQL to override the auto-generated SQL query in the Source Qualifier.
- Extensively used Email Notification, Nested Condition, Sessions and Workflow activities in informatica.
- Wrote SQL scripts to create and drop the indexes that are passed as parameters in the pre- and post-sessions.
- Responsibilities included analysis of user requirements; creating specifications, modification of existing programs and creating new ones on both Informatica and UNIX sites; coding, testing and implementation.
- Prepared scripts and Informatica Jobs to automate manual tasks to reduce manual efforts.
- Involved in creating Cursor Scripts for loading bulk volume data into Target Oracle DB.
- Created Sessions, reusable Worklets and Batches in Workflow Manager and Scheduled the batches and sessions at specified frequency.
- Built Informatica jobs to extract data from Salesforce and load it into the ODS database based on the IDL & delta process, and to purge and archive historical data from Salesforce (see the sketch after this list).
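The Salesforce extraction above was built as Informatica jobs; as an illustration of the delta (incremental) pull idea only, the hypothetical Python sketch below uses the simple_salesforce library with placeholder credentials, object and field names.

```python
"""
Hypothetical sketch of a Salesforce delta pull; the real extraction ran in Informatica.
Credentials, object and field names are placeholders, not project values.
"""
from simple_salesforce import Salesforce


def pull_delta(last_run_iso):
    """Query Account records modified since the previous run timestamp."""
    sf = Salesforce(
        username="user@example.com",   # placeholder
        password="password",           # placeholder
        security_token="token",        # placeholder
    )
    soql = (
        "SELECT Id, Name, LastModifiedDate "
        "FROM Account "
        "WHERE SystemModstamp > " + last_run_iso
    )
    result = sf.query_all(soql)
    return result["records"]


if __name__ == "__main__":
    rows = pull_delta("2023-01-01T00:00:00Z")
    print("Fetched {} changed accounts".format(len(rows)))
```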
Confidential
Informatica Sr ETL Developer
Responsibilities:
- Analyzing possible Data Issues and Creating Informatica Jobs for loading data from multiple source systems into the Target Teradata database.
- Used Informatica Powercenter Data Masking option to protect the confidential customer account information.
- Modified the shell/Perl scripts as per the business requirements.
- Responsible for Performance Tuning at the Mapping Level, Session Level, Source Level and the Target Level for Slowly Changing Dimensions Type1, Type2, and Type3 for Data Loads.
- Designed, developed and managed PowerCenter upgrades from v7.x to v8.6 and migrated ETL code from Informatica v7.x to v8.6; integrated and managed the PowerExchange CDC workload.
- Extensively worked with SCD Type-I, Type-II and Type-III dimensions and data warehousing Change Data Capture (CDC).
- Worked on Power Center Designer tools like Source Analyzer, Warehouse Designer, Mapping Designer, Mapplet Designer and Transformation Developer.
- Effectively managed the migration of the transformations/mappings from development to Production.
Confidential
Informatica Sr ETL Developer
Responsibilities:
- Analyzing the whole system and identifying the changes that need to be made.
- Taking care of production incidents and providing fixes in an optimized way.
- Handling CRs requested as part of the implementation.
- Generating reports and sharing across clients for the review.
- Involved in project management tasks Estimation, Project Planning, Status and transitions.
- Involved in Deployment and Administration of SSIS packages with Business Intelligence development studio.
- Involved in performing incremental loads while transferring data from OLTP to data warehouse using different data flow and control flow tasks in SSIS.
- Interacted with the reports developers to validate presented data.
- Responsible for Unit, System and Integration testing. Developed Test scripts, Test plan and Test Data.
- Generated Surrogate Keys for composite attributes while loading the data into Data Warehouse using Key Management functions.
- Responsible for Unit, System and Integration testing. Developed Test scripts, Test plan.
- Creation of Informatica mappings to cover the business need.
- Automated status report generation and shared the reports with end users.
- Supported business users during User Acceptance Test (UAT) and Post Implementation Phase.
Confidential
Informatica ETL Developer
Responsibilities:
- Designed and maintained Hash Key mapping, a testing tool, using Informatica.
- Writing test cases and test scripts that validate the business logic in the target table.
- Preparation of Technical documents which includes Job Design and transformation details
- Preparation of the Data modification scripts to test for the scenarios during Unit Testing.
- Involved in Performance Testing and Operational Acceptance Testing in the Pre-Prod region to verify PROD readiness.
- Involved in the development of Common Error Management to capture and handle the error records.
- Worked on Defects, managed through the tool HP-ALM and tracked them to closure
- Performed the Integration and System testing on the ETL jobs.
- Responsible for preparing ad hoc jobs which are used for One-time data fix.
- Parameterized hard-coded values for reusability (see the sketch after this list).
- Attending status calls with clients and providing regular updates on PROD status.
- Effectively worked in an onsite/offshore work model.
- Efficiently created Teradata SQLs to satisfy any business needs.
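Parameterizing hard-coded values, as mentioned a few bullets above, typically means moving them into an external parameter file that each region can override. A small, hypothetical Python sketch of that idea follows; the file name, section and keys are placeholders.

```python
"""
Hypothetical sketch of replacing hard-coded values with a parameter file.
Section/key names and the file path are placeholders, not project values.
"""
import configparser

PARAM_FILE = "etl_params.ini"   # placeholder parameter file


def load_params(region):
    """Read region-specific parameters instead of hard-coding them in the job."""
    config = configparser.ConfigParser()
    config.read(PARAM_FILE)
    return dict(config[region])


if __name__ == "__main__":
    # Example etl_params.ini contents:
    # [DEV]
    # src_schema = STG_DEV
    # tgt_table  = SALES_FACT
    params = load_params("DEV")
    print(params["src_schema"], params["tgt_table"])
```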
Confidential, Richmond, VA
Informatica ETL Developer
Responsibilities:
- Involved in creating Mappings, sessions & workflows using Informatica.
- Optimization of the different SQL queries to ensure faster response time.
- Extensively involved in developing of BTEQ scripts to load records from Staging to EDW.
- Involved in fixing defects of the loading scripts.
- Involved in Tuning of SQL queries to enhance Performance.
- Involved in the development of Common Error Management to capture and handle the error records.
- Created Informatica Jobs to handle Ad-hoc requests and Scheduled jobs using Autosys, Tivoli and CA7.
- Handled multiple defects related to Mapplet to handle enhancements.
- Developed and tested all the backend programs, Informatica mappings and update processes.
- Developed and documented data Mappings/Transformations, Audit procedures and Informatica sessions.
- Analyzed, conceptualized and designed the database that serves the purpose of providing critical business metrics.
- Responsible for UNIT Testing and System Testing. Documented Test results
- Used Teradata utilities such as MLOAD, FLOAD and TPUMP to perform bulk data loads.