
Data Engineer Consultant Resume


Sunnyvale, CA

SUMMARY:

  • 8+ years of experience in designing, developing, implementing and supporting Data Warehousing/ETL/BI/Business Analytics solutions in the Talent Acquisition, Healthcare Retail, Financial Services and Manufacturing domains.
  • Proficient as a Teradata developer with strong experience handling large-scale data on MPP architecture (70-75 TB). Also worked on other databases - Oracle, MySQL, PostgreSQL.
  • Extensive experience in data modeling with Primary Indexes, Secondary Indexes, Join Indexes, PPI, Global Temporary Tables, Volatile Tables and Derived Tables, and in writing and tuning complex SQL queries/scripts using multi-table joins and window functions.
  • Proficient in performance analysis, monitoring and tuning of SQL queries using EXPLAIN plans and COLLECT STATISTICS in Teradata (see the sketch after this list).
  • Strong knowledge of Teradata architectural concepts such as Fallback protection, Cliques, Journals, key demographics for PI identification, data purging on date ranges, performance tuning techniques for bad queries and workarounds for skewed data.
  • Extensive experience in creating ad-hoc reports; daily, monthly, quarterly and yearly aggregates; and Macros, Views, Stored Procedures, Packages and Triggers.
  • Experienced in designing, developing, tuning and managing ETL pipelines using SQL, Hive SQL and ETL toolsets - Informatica PowerCenter 9.x, Informatica IDQ, Pentaho Data Integration (Kettle) - with source systems such as APIs, JSON files, HDFS, relational tables, SAP, text/Excel files and XML files.
  • Expertise in the concepts of Data Warehousing, Dimensional Modeling, logical and physical Data Modeling, Fact and Dimension Tables, error handling and restartability of batch jobs.
  • Extensively worked with Teradata utilities such as BTEQ, FastExport, FastLoad, MultiLoad and TPT to export and load data to/from different source systems, including flat files.
  • Hands-on scripting experience in Unix shell scripting (Bash and Korn) and Python 2.7 for automation, DML/DDL parsing, data retrieval, FTP of files from remote servers, file-watcher mechanisms, repository and folder backups, merging of many files, and cleansing of data for downstream ETL processes.
  • Experienced in the Tableau 9.x tool set (Desktop, Server, Reader). Skilled in Tableau Desktop for rich data visualization, dashboards, reporting and analysis, and in creating calculations, metrics, scorecards, attributes, filters, prompts, drills, search, interactive dashboards, data blending and formatting.
  • Exposure to all SDLC phases and experience working in an agile environment using the Scrum framework with cross-functional teams in an onshore-offshore model.
  • Exposure to various scheduling and pipeline management tools - Cron, Control-M, Autosys, Tidal Enterprise Scheduler, Job Scheduler.
  • Exposure to the Hadoop ecosystem and technologies such as HDFS, Hive, Hive SQL and Sqoop.
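
A minimal Teradata SQL sketch of the query-writing and tuning work described above - a windowed monthly aggregate followed by statistics collection and an EXPLAIN check. The table and column names (sales_fact, store_id, sale_date, sale_amount) are hypothetical.

    -- Hypothetical fact table: monthly sales per store with a ranking window function.
    SELECT
        store_id,
        TRUNC(sale_date, 'MON')                       AS sale_month,
        SUM(sale_amount)                              AS monthly_sales,
        RANK() OVER (PARTITION BY TRUNC(sale_date, 'MON')
                     ORDER BY SUM(sale_amount) DESC)  AS sales_rank
    FROM sales_fact
    WHERE sale_date BETWEEN DATE '2016-01-01' AND DATE '2016-12-31'
    GROUP BY store_id, TRUNC(sale_date, 'MON');

    -- Refresh optimizer statistics on the grouping/filter columns,
    -- then review the plan before promoting the query.
    COLLECT STATISTICS ON sales_fact COLUMN (store_id);
    COLLECT STATISTICS ON sales_fact COLUMN (sale_date);
    EXPLAIN SELECT store_id, SUM(sale_amount) FROM sales_fact GROUP BY store_id;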

TECHNICAL SKILLS:

  • SQL
  • Teradata R13/14/15/16 | BTEQ
  • FastLoad
  • MultiLoad | TPT
  • FastExport
  • Oracle 10g/11g/12c | PostgreSQL 9.x
  • MySQL | Informatica PowerCenter 9.x
  • Informatica IDQ 9.x
  • Unix Shell Scripting
  • Hive
  • Hadoop | Sqoop | HDFS
  • Hive SQL
  • Tableau 9.x
  • Python 2.7
  • PL/SQL
  • Pentaho Data Integration (Kettle) 6.x
  • Pentaho Business Analytics 6.x
  • ERwin | Microsoft Visio
  • Crontab
  • Tidal Enterprise Scheduler
  • Airflow | Control-M | Autosys
  • Job Scheduler
  • JIRA
  • GIT
  • Scrum
  • SharePoint

CORE COMPETENCIES:

  • Data Warehousing
  • Dimensional Modeling
  • Data Modeling | ETL Pipeline Design and Development
  • Data Analysis | Data Management
  • Data Pipeline Management
  • Report Creation | Data Visualization
  • Data Cleansing | Data Profiling
  • Data Migration | Requirement Gathering
  • Quality Assurance
  • Agile Methodologies
  • Documentation
  • Production Issue Analysis

PROFESSIONAL EXPERIENCE:

Confidential, Sunnyvale, CA

Data Engineer Consultant

Responsibilities:

  • Performed analysis of business requirements and KPIs and provided solutions to certify data for business use
  • Analyzed the current data movement process and procedures and provided inputs for improvements.
  • Interacted with technical and business analysts, data scientists and business managers to build test datasets and resolve data issues
  • Created logical and physical data models based on business requirements.
  • Designed and implemented a data model for Data Quality Audits; profiled data from source systems across different countries and performed Data Quality Audits
  • Analyzed massive and highly complex data sets, performed ad-hoc analysis and data manipulation, and created data models and daily, monthly, quarterly and yearly aggregates
  • Defined and maintained metadata and data sources and set up Validation Rules
  • Created data models and daily, monthly, quarterly and yearly aggregates in PostgreSQL and Teradata to build reports and visualizations (a sketch of such an aggregate follows this list).
  • Created worksheets, dashboards, interactive dashboards, reports and visualizations using Tableau 9 to analyze KPIs
  • Developed ETL pipelines to populate the EDW and Data Marts from Hadoop, relational tables (PostgreSQL, Teradata, MySQL), APIs, JSON files, XML files and flat files.
  • Used Informatica PWX to connect to Hadoop and developed complex mapping using various transformations.
  • Extensively used the Informatica PowerCenter tool set (Designer, Repository Manager, Workflow Manager, and Workflow Monitor), SQL, HiveQL, Sqoop scripts, Unix shell scripting, Python and the Pentaho PDI (Kettle) ETL solution.
  • Evaluated datasets for accuracy and quality. Identified and solved issues concerning data management to improve data quality.
  • Used Unix shell scripts and Python 2.7 for scripting and automation.
  • Provided resolution of production bugs and performance related issues.
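
As a rough illustration of the aggregate tables mentioned above, a PostgreSQL-style monthly KPI rollup (assuming PostgreSQL 9.5+ for ON CONFLICT; the table and column names such as kpi_monthly_agg and raw_events are hypothetical):

    -- Hypothetical monthly KPI aggregate fed from a raw event table.
    CREATE TABLE IF NOT EXISTS kpi_monthly_agg (
        kpi_month     date        NOT NULL,
        country_code  varchar(2)  NOT NULL,
        source_system varchar(32) NOT NULL,
        record_count  bigint      NOT NULL,
        failed_checks bigint      NOT NULL,
        PRIMARY KEY (kpi_month, country_code, source_system)
    );

    -- Rebuild the previous month's slice of the aggregate idempotently.
    INSERT INTO kpi_monthly_agg
    SELECT date_trunc('month', event_ts)::date        AS kpi_month,
           country_code,
           source_system,
           COUNT(*)                                    AS record_count,
           COUNT(*) FILTER (WHERE dq_status = 'FAIL')  AS failed_checks
    FROM   raw_events
    WHERE  event_ts >= date_trunc('month', now()) - interval '1 month'
    AND    event_ts <  date_trunc('month', now())
    GROUP  BY 1, 2, 3
    ON CONFLICT (kpi_month, country_code, source_system)
    DO UPDATE SET record_count  = EXCLUDED.record_count,
                  failed_checks = EXCLUDED.failed_checks;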

Confidential, Woonsocket, RI

Teradata Developer Consultant

Responsibilities:

  • Worked with cross-functional teams, the Architect and the Client during various stages of the project and was involved in the complete Software Development Life Cycle from design and development through testing, deployment and documentation.
  • Created new dimension data models in Teradata and wrote complex Teradata SQL Queries using joins, subqueries, indexes, views, join indexes, PPI, Secondary indexes for data access used in ETL jobs and regression test scripts.
  • Designed and customized data models for the Data Warehouse per requirements. Wrote complex SQL queries using analytical/window functions, regexp, multi-table joins and group by.
  • Built a new reusable ETL framework used across multiple jobs and mappings to load data into the Data Mart.
  • Tuned existing ETL workflows and SQL source queries and implemented pushdown optimization strategies.
  • Designed and developed an audit and control framework using control tables for Informatica jobs to keep track of all job executions (a sketch of the control table follows this list).
  • Responsible for development, coding and testing.
  • Fine-tuned Teradata queries and automated the process to meet the 30-minute SLA across all systems.
  • Migrated the data using NPARC from one data center to another data center.
  • Extensively used Teradata utilities such as BTEQ, FastLoad, MultiLoad, TPT and FastExport for batch processing to load/unload data into/from Teradata tables.
  • Created new ETL jobs and refactored/tuned existing ones using Informatica PowerCenter, SQL, Unix shell scripting and Informatica IDQ to extract data from flat files, relational tables and XML files and to transform/load the data into target tables accordingly.
  • Created shell routines for process automation such as starting/scheduling ETL jobs, FTPing files from remote servers, implementing a file-watcher mechanism, and backing up repositories and folders.
  • Involved in deployment rollouts for data validation purposes.
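
A minimal sketch of the audit/control idea referenced above, in Teradata-style SQL. The table, columns and workflow name (etl_job_audit, wkf_load_sales_fact) are hypothetical, and in practice the insert/update would be issued from pre- and post-session tasks in Informatica.

    -- Hypothetical control table: one row per job execution.
    CREATE MULTISET TABLE etl_job_audit (
        job_run_id     INTEGER GENERATED ALWAYS AS IDENTITY
                               (START WITH 1 INCREMENT BY 1),
        job_name       VARCHAR(128) NOT NULL,
        batch_date     DATE         NOT NULL,
        start_ts       TIMESTAMP(0) NOT NULL,
        end_ts         TIMESTAMP(0),
        src_row_count  BIGINT,
        tgt_row_count  BIGINT,
        job_status     CHAR(1) DEFAULT 'R'   -- R=running, S=success, F=failed
    ) PRIMARY INDEX (job_name, batch_date);

    -- Pre-session: open a "running" row for the workflow.
    INSERT INTO etl_job_audit (job_name, batch_date, start_ts)
    VALUES ('wkf_load_sales_fact', DATE '2017-06-30', CURRENT_TIMESTAMP(0));

    -- Post-session: close the row with counts (example values) and final status.
    UPDATE etl_job_audit
    SET    end_ts        = CURRENT_TIMESTAMP(0),
           src_row_count = 125000,
           tgt_row_count = 125000,
           job_status    = 'S'
    WHERE  job_name   = 'wkf_load_sales_fact'
    AND    batch_date = DATE '2017-06-30'
    AND    job_status = 'R';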

Confidential, Indianapolis, IN

Teradata Developer Consultant

Responsibilities:

  • Involved in the complete Software Development Life Cycle (SDLC) from business analysis to development, testing, deployment and documentation.
  • Analyzed the region's existing OLTP and BI systems and identified the dimensions and facts required for the EDW. Participated in designing the Dimensional Model for the EDW.
  • Drafted the EDW design in the SDD based on the BRD and architectural documents.
  • Created complex ETL flows using Informatica PowerCenter to extract and transform data from flat-files, SAP R/3 tables, relational tables (Oracle and Teradata).
  • Designed and implemented a framework for an error logging mechanism using tables.
  • Wrote complex SQL queries using analytical/window functions, regexp, multi-table joins, group by.
  • Wrote macros, packages, functions, triggers and SQL queries to support ETL mappings. Wrote, tested and implemented Teradata BTEQ, FastLoad and MultiLoad scripts, along with DML and DDL, to load data.
  • Designed and implemented a CDC (Change Data Capture) mechanism using mapping variables. Also implemented SCD1 and SCD2 (Slowly Changing Dimensions Type 1 and Type 2); an SQL sketch of the SCD2 pattern follows this list.
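
The project drove CDC and SCD2 through Informatica mapping variables and mappings; purely as an illustration of the underlying pattern, here is a Teradata-style expire-and-insert sketch with hypothetical table and column names (dim_customer, stg_customer, customer_nk, cust_segment, region):

    -- 1. Expire current dimension rows whose tracked attributes changed.
    UPDATE dim_customer
    FROM stg_customer s
    SET eff_end_dt   = CURRENT_DATE - 1,
        current_flag = 'N'
    WHERE dim_customer.customer_nk  = s.customer_nk
      AND dim_customer.current_flag = 'Y'
      AND (dim_customer.cust_segment <> s.cust_segment
           OR dim_customer.region    <> s.region);

    -- 2. Insert a fresh current version for changed and brand-new keys
    --    (changed keys no longer have a 'Y' row after step 1).
    INSERT INTO dim_customer
        (customer_nk, cust_segment, region, eff_start_dt, eff_end_dt, current_flag)
    SELECT s.customer_nk, s.cust_segment, s.region,
           CURRENT_DATE, DATE '9999-12-31', 'Y'
    FROM stg_customer s
    LEFT JOIN dim_customer d
           ON d.customer_nk  = s.customer_nk
          AND d.current_flag = 'Y'
    WHERE d.customer_nk IS NULL;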

Confidential, Akron, OH

Data warehouse Developer consultant

Responsibilities:

  • Parsed high-level design specifications and created low-level design documents. Designed ETL jobs to load data into staging tables and then into Dimensions and Facts.
  • Developed and implemented ETL workflows using Informatica PowerCenter to extract data from flat files, relational databases (Teradata, Oracle) and SAP R/3 tables. Developed ABAP programs for the same and performed data cleansing.
  • Worked on BTEQ, FastLoad and MultiLoad to load data into Teradata target tables.
  • Involved in Performance tuning at source, target, mappings, sessions, and system levels.
  • Wrote complex Teradata SQL queries using joins, subqueries, indexes, views, join indexes, PPI and Secondary Indexes for data access and log manipulation (see the table sketch after this list).
  • Analyzed production issues to implement as enhancements to the system.
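
For illustration of the PPI/secondary-index work referenced above, a hypothetical Teradata fact table partitioned by month, with a secondary index for access paths that do not supply the primary index column. All object names are invented for the example.

    -- Hypothetical fact table with a range-partitioned primary index (PPI).
    CREATE MULTISET TABLE sales_fact (
        sale_id      BIGINT        NOT NULL,
        store_id     INTEGER       NOT NULL,
        sale_date    DATE          NOT NULL,
        sale_amount  DECIMAL(12,2)
    )
    PRIMARY INDEX (sale_id)
    PARTITION BY RANGE_N (
        sale_date BETWEEN DATE '2012-01-01' AND DATE '2014-12-31'
                  EACH INTERVAL '1' MONTH,
        NO RANGE, UNKNOWN);

    -- Secondary index to support store-level lookups.
    CREATE INDEX idx_sales_store (store_id) ON sales_fact;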

Confidential

Oracle Developer

Responsibilities:

  • Coordinated with the Business Analyst and Architect for requirement analysis and implemented the same in a functional database design.
  • Developed packages, stored procedures, functions and triggers to perform calculations and implement business logic.
  • Implemented Triggers, Views, Synonyms, Hints, Table Partitioning, Global Temporary Tables, Materialized Views, Collections and Ref Cursors (a sketch follows this list).
  • Performed database validations during production deployments and warranty support post deployment.
  • Created shell scripts to automate various processes such as FTPing files to and from third-party servers, file watching, checking memory availability on servers, and preparing/cleansing flat files for database loads.
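
A small Oracle SQL illustration of two of the object types listed above - a global temporary table and a materialized view. The table, view and column names (gtt_order_stage, mv_monthly_orders, orders) are hypothetical.

    -- Hypothetical session-scoped staging table, cleared at commit.
    CREATE GLOBAL TEMPORARY TABLE gtt_order_stage (
        order_id    NUMBER,
        order_date  DATE,
        amount      NUMBER(12,2)
    ) ON COMMIT DELETE ROWS;

    -- Hypothetical monthly rollup refreshed on demand for reporting.
    CREATE MATERIALIZED VIEW mv_monthly_orders
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT TRUNC(order_date, 'MM') AS order_month,
           COUNT(*)                AS order_count,
           SUM(amount)             AS total_amount
    FROM   orders
    GROUP  BY TRUNC(order_date, 'MM');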
