Application Developer Resume
SUMMARY
- 7+ years of experience in data warehousing with Informatica, Teradata, Oracle, SQL, PL/SQL and DB2. Designed, deployed and maintained enterprise-class security, network and systems management applications in AWS.
- Expertise in Data Warehousing, Data Migration, Data Integration and Reporting using Informatica, Informatica Data Quality, Teradata and Tableau.
- Implemented and supported various data warehouse projects using Ralph Kimball and Bill Inmon methodologies.
- Designed and implemented secure Cloud solutions using AWS services like EC2, S3, IAM roles and policies, RDS, Auto Scaling and Elastic Load Balancing in VPC.
- Created ETL jobs for data in Amazon S3 by invoking AWS Glue ETL jobs from AWS Lambda functions (a minimal sketch follows this summary).
- Experience in Extract, Transform, Load (ETL) of data into Data Warehouse using Informatica 9.x, 10.x.
- Experience in designing Workflows, Mappings, Sessions and Scheduling them using Informatica 10.1, 9.6 and 9.5.
- Performed data profiling and analysis using Informatica Data Explorer (IDE) and Informatica Data Quality (IDQ).
- Performed SQL query tuning and Informatica performance tuning, resolving source, target and mapping-level bottlenecks.
- Experience driving in-person discussions with senior personnel regarding best practices, project management and risk mitigation for data warehousing and cloud services.
- Extracted and loaded data from various sources (relational databases, flat files, XML, Excel) into the data warehouse.
- Implemented data warehousing techniques for data cleansing, Slowly Changing Dimensions (SCD) and Change Data Capture (CDC).
- Hands-on experience in logical and physical data modelling, performance tuning, code review processes, data normalization, data integration methods, Agile SDLC and production support.
- Experience coordinating with Data Integration, Network, DBA, Server Systems and Storage Systems teams on cross-team projects, resource procurement, system maintenance, server migrations and upgrades, and change management.
- Worked with terabyte-scale source and target databases such as Netezza and Teradata. Configured PowerExchange DB2 and Oracle CDC sources.
- Experience working in all phases of the Software Development Life Cycle (SDLC). Worked in Kanban and Agile with frequent releases, providing immediate solutions for business needs.
- Knowledge on Microsoft SQL Business Intelligence Tools.
- Familiarity working with relational and NoSQL databases, JavaScript APIs, REST APIs and the Data Extract API.
- Knowledge of business intelligence tools such as BusinessObjects, Crystal Reports, OBIEE and Tableau.
- Documented all aspects of ETL processes, definitions and mappings to support knowledge transfer.
- Excellent oral/written/interpersonal communication skills, quick learner, willing to adapt to the team/organizational environment.
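A minimal sketch of the Lambda-to-Glue invocation pattern mentioned above, in Python with boto3, assuming the function is wired to an S3 ObjectCreated event; the Glue job name, bucket names and argument keys are hypothetical placeholders rather than values from an actual engagement:

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Start a Glue ETL job when a new object lands in S3 (S3 event trigger)."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # "s3-curation-job" and the argument names below are placeholders for the real Glue job.
    response = glue.start_job_run(
        JobName="s3-curation-job",
        Arguments={
            "--source_path": f"s3://{bucket}/{key}",
            "--target_path": "s3://example-curated-bucket/output/",
        },
    )
    return {"JobRunId": response["JobRunId"]}
```

In practice the Lambda would be subscribed to the landing bucket's event notifications so new files kick off the Glue job automatically.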
TECHNICAL SKILLS
Data Warehousing/ETL Tools: Informatica PowerCenter 10.2/9.5/9.1/8.6/8.1 (Source Analyzer, Data Warehouse Designer, Mapping Designer, Mapplet, Transformations, Workflow Manager, Workflow Monitor, Worklets), AWS, Data cleansing, Autosys, Star Schema, Snowflake Schema, OLTP, SQL*Plus, SQL*Loader
Databases: Oracle 11g/10g/9i, SQL Server 2014/2012/2008, DB2, Cassandra, Netezza, HP Vertica, Teradata, Amazon RDS (PostgreSQL and Oracle)
Programming Languages: SQL, T-SQL, PL/SQL, Java
Scripting Languages: Shell, Korn, Perl
Big Data Ecosystems: Hadoop, Hive, AWS, CloudWatch, S3, Redshift Spectrum, Athena, Glue, AWS Redshift, Scala, Spark SQL
PROFESSIONAL EXPERIENCE
Confidential
Application Developer
Responsibilities:
- Interacted with end users and functional analysts to identify and develop Business Specification Documents (BSD) and transform them into technical requirements.
- Developed applications in a Linux environment, working extensively with its command line; used the Jenkins continuous integration tool for project deployments, with Git as the version control system.
- Triggered Spark jobs from an AWS Lambda script using JSON parameters; the jobs run on EMR and store their output in S3 (see the sketch after this section).
- Created automation scripts for migrating data from on premises to AWS using DMS, SCT and Flyway.
- Created CI/CD pipelines enabling end users to run the automation.
- Converted database links (dblinks) to Informatica workflows.
- Hands-on experience implementing cloud solutions using various AWS services including EC2, VPC, S3, Glacier, EFS, Kinesis, Lambda, Directory Services, CloudFormation, OpsWorks, CodePipeline, CodeBuild, CodeDeploy, Elastic Beanstalk, RDS, Data Pipeline, DynamoDB and Redshift.
- Hands-on experience architecting and securing infrastructure on AWS using IAM, KMS, Cognito, API Gateway, CloudTrail, CloudWatch, Config, Trusted Advisor, Security Groups and NACLs.
- Strong experience in major AWS services such as CloudFormation, CloudFront, CloudWatch, CloudTrail, VPC, RDS, DynamoDB, SQS and SNS.
- Migrated Informatica workflows to AWS EC2, connecting to sources and targets both on premises and in data lakes.
Environment: Informatica 10.2.0, Shell, Python, Perl, Autosys, MS Visio, Toad, SQL Developer, Bitbucket, Urban Code Deploy, Jira, Jenkins, Splunk, AWS services
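A hedged sketch of the Lambda-to-EMR trigger described in this role: the Lambda receives JSON parameters in its event, submits a Spark step to a running EMR cluster, and the job writes its output to S3. The cluster id, bucket names and script path are hypothetical, not values from the actual project:

```python
import boto3

emr = boto3.client("emr")

def lambda_handler(event, context):
    """Submit a Spark step to a running EMR cluster; parameters arrive as JSON in the event."""
    cluster_id = event["cluster_id"]      # e.g. "j-XXXXXXXXXXXX" (placeholder)
    input_path = event["input_path"]      # s3:// path of source data
    output_path = event["output_path"]    # s3:// path for results

    response = emr.add_job_flow_steps(
        JobFlowId=cluster_id,
        Steps=[{
            "Name": "nightly-spark-etl",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit", "--deploy-mode", "cluster",
                    "s3://example-code-bucket/jobs/etl_job.py",
                    "--input", input_path, "--output", output_path,
                ],
            },
        }],
    )
    return {"StepIds": response["StepIds"]}
```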
Confidential
Application Developer
Responsibilities:
- Involved in requirement gathering and analysis of data from different source systems. Created high-level solution design documents and engineered logical and physical models for the new platform.
- Implemented cloud-based technologies, data cleansing and conversions, the systems development life cycle, and interface design/development.
- Developed solutions adhering to and complying with all applicable federal and state laws, regulations and guidance.
- Developed mappings and workflows in Informatica, processing various sources such as Oracle tables, flat files (delimited and fixed width), XML files and MQ sources.
- Experience in Tier 1 production support, implementing change requests, work orders, problem tickets and work requests. Handled the recovery process for workflow failures.
- Led production deployments of highly available and fault-tolerant enterprise data warehouse applications.
- Created mappings for reconciliation of asset management data using Normalizer, Transaction Control and Java transformations.
- Improved the performance of batch jobs by analysing explain plans, implementing indexes, database partitioning, parallel hints, Informatica partitioning, pushdown optimization and caching
- Developed and executed a migration strategy using AWS Database Migration Service (DMS) and AWS Schema Conversion Tool (SCT) to move the on-premises data warehouse to AWS Redshift, and used AWS CloudWatch to collect and monitor AWS RDS instances in lower environments.
- Created an AWS Glue Data Catalog and used Athena and AWS Redshift Spectrum to query data directly from Amazon S3 (see the Athena sketch after this section).
- Configured secure file transfer with encryption through AWS KMS; created shell scripts to push and pull files (see the transfer sketch after this section).
- Created an error framework using Informatica PM tables and PL/SQL blocks/procedures to generate reports detailing business and technical errors in each batch run.
- Worked on ESP for job scheduling; proactively monitored daily batch jobs and implemented fixes to prevent them from abending.
- Wrote Perl scripts for data acquisition, for standardizing data format and structure, and for distinct phases of the Diff Framework; wrote Python scripts to automate processes across the environment.
- Created unit test cases in Ruby and tested individual code blocks as well as batches of workflows.
- Followed the Agile SDLC for everyday tasks: planned and estimated stories and analysed the burndown chart to track remaining work in the sprint.
- Provided UAT/EBF (Emergency bug fix) and Production Support for several incremental releases.
Environment: Informatica 10.2.0/9.6.1, AWS, Oracle 12c/11g, Teradata, Netezza, PL/SQL, Shell, Python, Perl, CA Workstation, Ruby, Data Modelling, MS Visio, Toad, SQL Developer, DB Visualizer, Git, Urban Code Deploy, ServiceNow
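A minimal sketch of querying S3 data through the Glue Data Catalog with Athena, as referenced in this role; the database, results bucket and table name are hypothetical:

```python
import time
import boto3

athena = boto3.client("athena")

def run_athena_query(sql, database="example_catalog_db",
                     output="s3://example-athena-results/queries/"):
    """Run a SQL query against a Glue Data Catalog table backed by S3 and wait for completion."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]

    # Poll until the query succeeds, fails, or is cancelled.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    return athena.get_query_results(QueryExecutionId=qid) if state == "SUCCEEDED" else None

# Example: row counts per load date from a catalogued S3 table (table name is a placeholder).
results = run_athena_query("SELECT load_dt, COUNT(*) FROM sales_raw GROUP BY load_dt")
```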
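The KMS-encrypted push/pull transfer mentioned above, sketched here in Python with boto3 (the original scripts were shell-based); the bucket, key alias and file paths are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# The bucket name and KMS key alias below are placeholders, not real resources.
BUCKET = "example-secure-transfer-bucket"
KMS_KEY_ID = "alias/example-transfer-key"

def push_file(local_path, key):
    """Upload a file with server-side encryption using a customer-managed KMS key."""
    s3.upload_file(
        local_path, BUCKET, key,
        ExtraArgs={"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": KMS_KEY_ID},
    )

def pull_file(key, local_path):
    """Download a file; S3 decrypts transparently if the caller is allowed to use the KMS key."""
    s3.download_file(BUCKET, key, local_path)

push_file("/data/out/positions_20200131.csv", "inbound/positions_20200131.csv")
pull_file("outbound/ack_20200131.csv", "/data/in/ack_20200131.csv")
```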
Confidential
Informatica Developer
Responsibilities:
- Responsible for building various subject areas containing information from multiple upstream systems, manipulating the data per business needs and documenting all aspects of ETL processes, definitions and mappings.
- Developed well-tuned mappings using the Informatica Designer tool, identifying and rectifying bottlenecks at the source, target, mapping, session and system levels; developed exception handling and error handling processes.
- Tuned Informatica mapping and session performance using source-target optimization. Created dimensional-model star schemas using the Kimball methodology.
- Created attributes, metrics, security filters and other report objects, and defined drilling and hierarchies using MicroStrategy.
- Worked on SQL joins, Stored Procedures, Packages, Triggers, Cardinalities, Loops, Aliases, Views, Aggregate conditions, parsing of objects and hierarchies.
- Developed reusable Mapplets and Transformations for standard business units, for use in multiple mappings.
- Created and monitored Sessions using Workflow Manager. Extracted data from Oracle, DB2 and Flat files. Defined Target Load Order Plan for loading data correctly into different Target Tables.
- Defined mapping variables and parameters to capture load history, row counts and to log errors of a session run.
Environment: Informatica, IBM Maximo, Korn, Shell, DB2, Git, Oracle 10g/11g, MS SQL Server 2008, DB Visualizer, Autosys, MicroStrategy