
Snowflake Cloud Data Engineer Resume


Charlotte, NC

SUMMARY

  • An enthusiastic consulting and technical expert with around 8 years of Information Technology experience, with extensive work on projects involving data warehousing, data integration, business intelligence, reporting, data migration, business analysis, design, development, and documentation.
  • Expertise in the software design, development, and deployment of large and complex software applications. Has led and participated in teams performing the analysis, design, implementation, enhancement, and testing of software applications.
  • Experience in Data Analysis, Data Migration, Data Validation, Data Cleansing, Data Verification, identifying data mismatches, Data Import, and Data Export using multiple ETL tools such as DataStage and Informatica.
  • Experience in the design, development, and implementation of enterprise-wide architecture for structured and unstructured data, providing architecture and consulting services to Business Intelligence initiatives and data-driven business applications.
  • Experience in dimensional modeling using snowflake schema methodologies for data warehouse and integration projects.
  • Experience with business process modeling, process flow modeling, and data flow modeling.
  • Experience in creating ETL specification documents, flowcharts, process workflows, and data flow diagrams.
  • Worked on processes to transfer/migrate data from AWS S3, relational databases, and flat files into common staging tables in various formats, and on into meaningful data in Snowflake (a loading sketch appears at the end of this summary).
  • Strong Experience in Data Migration from RDBMS to Snowflake cloud data warehouse
  • Experience in various phases of the data warehouse development lifecycle, from requirements gathering through design, development, and testing.
  • Strong development skills with Azure Data Lake, Azure Data Factory, Azure SQL Data Warehouse, Azure Blob Storage, and Azure Storage Explorer.
  • Implemented Copy Activity and custom Azure Data Factory pipeline activities for on-cloud ETL processing.
  • Experience in Monitoring and Tuning SQL Server Performance
  • Experience in Data Requirement Analysis, Design, Development of ETL process using IBM DataStage 11.X.
  • Experience in Designing and Implementing Data Warehouse applications, mainly Transformation processes using ETL tool DataStage.
  • Experience in Data Warehousing applications, responsible for the Extraction, Transformation and Loading (ETL) of data from multiple sources into Data Warehouse.
  • Developed efficient jobs for data extraction/transformation/loading (ETL) from different sources to a target data warehouse.
  • Excellent Experience in Designing, Developing, Documenting, Testing of ETL jobs and mappings in Server and Parallel jobs using Data Stage to populate tables in Data Warehouse and Data marts.
  • Experience with newer IBM WebSphere DataStage enhancements: Multiple Job Compile, the Surrogate Key Generator stage, Job Report, and message handler options.
  • Worked on the DataStage production job scheduling process using scheduling tools and the DataStage scheduler.
  • Experience in developing very complex mappings, reusable transformations, sessions, and workflows using ETL tool to extract data from various sources and load into targets.
  • Good hands-on experience in RDBMS like Oracle, SQL Server, Teradata, and DB2.
  • Excellent knowledge in Extraction, Cleansing and Modification of data from/to various Data Sources like Flat Files, Sequential files, Comma separated files (.csv), XML and Databases like Oracle, DB2 etc.
  • Extensive experience in writing UNIX shell scripts for data validation, manipulation, and transformation.
  • Extensively worked with Teradata utilities like BTEQ, Fast Export, Fast Load, Multi Load to export and load data to/from different source systems including flat files.
  • Hands on experience using query tools like TOAD, SQL Developer, PLSQL developer and Teradata SQL Assistant.
  • Experience in Extract, Transform, Load data into Data Warehouse using DataStage, Informatica, SQL scripts with Netezza database.
  • Part of a Waterfall-to-Agile transformation and a member of a Scrum team following Agile methodologies.
  • Participated in CAB meetings and followed defined change management process as per company standards for production deployments.
  • Wrote, tested, and implemented Teradata FastLoad, MultiLoad, and BTEQ scripts, including DML and DDL.
  • Acted as a liaison between business and IT teams, translating data/reporting requirements into software solutions.
  • Experience in Agile projects and very familiar with Agile ceremonies.
  • Developed multiple reusable code components for deployment across ETL projects.
  • Experience in coding optimized SQL queries on databases like MySQL, SQL server.
  • Expertise in SQL and PL/SQL programming to write Views, Stored Procedures, Functions and Triggers.
  • Experience in using GIT/SVN for software development version control.
  • Experience in unit testing the applications using JavaScript testing frameworks Jest, Jasmine.
  • Hands-on experience working with Continuous Integration (CI) build-automation tools such as Maven, Jenkins.
  • Experience using IDEs and tools such as Eclipse, WebStorm, and Microsoft Visual Studio.
  • Good experience in creating Workflows, Lightning Process Builder, Reports and Force.com sites.
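
The sketch below illustrates the kind of S3-to-Snowflake load referenced in this summary. It is a minimal example only: the bucket, credentials, file format, and staging table (raw_s3_stage, staging.customer_raw) are hypothetical placeholders, not actual project objects.

  -- Minimal sketch of loading flat files from S3 into a Snowflake staging table.
  -- Bucket, credentials, and object names are illustrative placeholders.
  CREATE OR REPLACE FILE FORMAT csv_fmt
    TYPE = 'CSV' FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1;

  CREATE OR REPLACE STAGE raw_s3_stage
    URL = 's3://example-bucket/landing/'
    CREDENTIALS = (AWS_KEY_ID = '<key>' AWS_SECRET_KEY = '<secret>')
    FILE_FORMAT = csv_fmt;

  -- Bulk-copy the staged files into a common staging table.
  COPY INTO staging.customer_raw
    FROM @raw_s3_stage/customers/
    ON_ERROR = 'CONTINUE';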

TECHNICAL SKILLS

Languages: Java, SQL, PL/SQL, Python.

Databases: Oracle Database 10g/11g, Teradata 15.0, Netezza, DB2, SQL Server.

Web Technologies: HTML 5, CSS3, JavaScript, jQuery, AJAX, JSON, XML.

ETL Tools: IBM DataStage 11.x, DBT (data build tool), Informatica PowerCenter 10.x/9.x.

Data Warehouse: Snowflake Cloud.

BI and Reporting: Tableau, Power BI.

Web/Application Servers: Apache Tomcat.

Version Control Tools: GIT, SVN

Operating Systems: Windows 2000/XP/Vista/7/8/10, Mac, Unix, Linux

Development Tools (IDEs): Visual Studio, NetBeans, Eclipse

Debugging tools: Developer tools

Cloud Technologies: AWS EC2, S3, IAM, Snowflake.

Testing tools & other Technologies: Selenium, Postman, SOAP UI, Big Data.

Methodologies: Agile, Waterfall.

PROFESSIONAL EXPERIENCE

Confidential, Charlotte, NC

Snowflake Cloud Data Engineer

Responsibilities:

  • Responsible for requirement gathering, user meetings, discussing the issues to be resolved and translated the user inputs into ETL design documents.
  • Responsible for documenting the requirements, translating the requirements into system solutions, and developing the implementation plan as per the schedule.
  • Created ETL mapping document and ETL design templates for the development team.
  • Led the team to successful project releases and acted as the lead between the onshore and offshore teams to ensure a proper handoff.
  • Created external tables on top of S3 data which can be queried using AWS services like Athena.
  • Architected and implemented very large-scale data intelligence solutions around Snowflake Data Warehouse.
  • Applied a solid understanding of architecting, designing, and operationalizing large-scale data and analytics solutions on Snowflake Cloud Data Warehouse.
  • Applied professional knowledge of AWS Redshift; developed ETL pipelines in and out of the data warehouse using a combination of Python and Snowflake's SnowSQL, writing SQL queries against Snowflake.
  • Involved in migrating on-premises systems to the AWS cloud.
  • Processed Location and Segments data from S3 into Snowflake using tasks, streams, pipes, and stored procedures (a stream/task sketch appears at the end of this section).
  • Led a migration project from Teradata to the Snowflake warehouse to meet customer SLAs.
  • Used analytical functions in Hive to extract the required data from complex datasets.
  • Handled Equipment data and Vision Link data in XML, XSD, and JSON formats and loaded it into the Teradata database.
  • Built distributed, reliable, and scalable data pipelines to ingest and process data in real time.
  • Created ETL pipelines using Stream Analytics and Data Factory to ingest data from Event Hubs and Topics into SQL Data Warehouse.
  • Responsible for migrating key systems from on-premises hosting to Azure cloud services; wrote SnowSQL and SQL queries against Snowflake.
  • Designed and implemented a fully operational, production-grade, large-scale data solution on Snowflake Data Warehouse.
  • Migrated legacy data warehouses and other databases (SQL Server, Oracle Database 10g/11g, Teradata 15.0, DB2) to Snowflake.
  • Data Analysis and Profiling to discover key join relationships, data types, and assess data quality utilizing SQL queries. Data cleansing, Data manipulation and exploratory analysis to identify, analyze and interpret trends and patterns in large data sets.
  • Data Modeling/Data Architecture: Organization and arrangement of data in Staging /Intermediate/Final Targets using tables, views, etc.
  • Used DBT to test the data (schema tests, referential integrity tests, custom tests) and ensure data quality.
  • Used DBT to debug complex chains of queries by splitting them into multiple models and macros that can be tested separately.
  • Data movement: ETL process design, access, manipulation, analysis, interpretation, and presentation of data per business logic.
  • Worked on Migrating objects from DB2 to Snowflake.

Environment: Snowflake cloud Datawarehouse, AWS S3, AWS IAM, DBT (data build tool), Oracle Database 10g/11g, Teradata 15.0, Python, Tableau, Netezza, control-M, SQL Server, DB2.
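
Below is a minimal sketch of the stream-and-task pattern referenced above for processing S3-staged Location/Segments data. The table, stream, task, and warehouse names (staging.location_raw, curated.location_segments, etl_wh) and the schedule are hypothetical, illustrative choices.

  -- Minimal sketch: capture changes on a staging table and merge them on a schedule.
  -- All object names and the schedule are illustrative placeholders.
  CREATE OR REPLACE STREAM location_raw_stream ON TABLE staging.location_raw;

  CREATE OR REPLACE TASK merge_location_task
    WAREHOUSE = etl_wh
    SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('LOCATION_RAW_STREAM')
  AS
    MERGE INTO curated.location_segments t
    USING location_raw_stream s
      ON t.location_id = s.location_id
    WHEN MATCHED THEN
      UPDATE SET t.segment = s.segment, t.updated_at = s.load_ts
    WHEN NOT MATCHED THEN
      INSERT (location_id, segment, updated_at)
      VALUES (s.location_id, s.segment, s.load_ts);

  ALTER TASK merge_location_task RESUME;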

Confidential, VA

Data Engineer

Responsibilities:

  • Gained experience architecting data intelligence solutions around Snowflake Data Warehouse and building Snowflake solutions as a developer.
  • Built and configured enterprise-level Snowflake environments; implemented, maintained, and monitored them.
  • Hands-on experience with Snowflake utilities, Snowflake SQL, Snowpipe, etc.
  • Worked with advanced Snowflake concepts such as resource monitors, role-based access control, data sharing, virtual warehouse sizing, query performance tuning, Snowpipe, tasks, streams, and zero-copy cloning.
  • Created Snowpipe for continuous data loading and used COPY for bulk loads (a Snowpipe sketch appears at the end of this section).
  • Created internal and external stages and transformed data during load.
  • Assisted to map existing data ingestion platform to AWS cloud stack for the enterprise cloud initiative.
  • Used tools like IBM UCD, GIT/GitHub for version control.
  • Provided support for customer data warehouse data serving analytical reporting for marketing, life sciences, and other user communities, delivering return-to-service fixes and process enhancements for consistent and timely data delivery.
  • Develop processes for extracting, cleansing, transforming, integrating, and loading data into data warehouse database.
  • ETL Development for Control Architecture, Common Modules, Sequence Controls, and major critical interfaces.

Environment: Snowflake, Python, Linux, AWS services, SQL
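
The sketch below shows the continuous-load versus bulk-load pattern referenced above: a Snowpipe with auto-ingest plus a one-off COPY. The stage, pipe, and table names are hypothetical placeholders.

  -- Minimal sketch: Snowpipe for continuous loading, plus a one-off bulk COPY.
  -- Stage, pipe, and table names are illustrative placeholders.
  CREATE OR REPLACE PIPE ingest.orders_pipe
    AUTO_INGEST = TRUE
  AS
    COPY INTO staging.orders_raw
    FROM @ingest.orders_s3_stage
    FILE_FORMAT = (TYPE = 'JSON');

  -- Bulk load of historical files through the same COPY path.
  COPY INTO staging.orders_raw
    FROM @ingest.orders_s3_stage/history/
    FILE_FORMAT = (TYPE = 'JSON');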

Confidential - Texas

Data Analyst

Responsibilities:

  • Created automation scripts to streamline ETL processes, data imports/exports, and API pulls using Python and shell scripting.
  • Created reports and data visualization dashboards using complex SQL logic and the BI tool Tableau (a representative query sketch appears at the end of this section).
  • Provided data warehousing solutions and designed data models to efficiently store and process data.
  • Scheduled ETL jobs on Linux servers and Amazon cloud (AWS) using Crontab and CloudWatch.
  • Daily activities included extracting, transforming, integrating, and loading client data; developing batch and streaming data pipelines; performing data analysis; supporting production activities; and resolving production issues.
  • Worked collaboratively with technical architects, data analysts, and project managers to gather requirements and technical specifications and performed development activities in an effective and timely manner.
  • Evaluated new database technologies and data integration mechanisms to stay current with industry trends

Environment: AWS S3, GitHub, Service Now, SQL, Apache Spark, Tableau
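
A minimal sketch of the kind of SQL used to feed the Tableau dashboards mentioned above. The table, view, and column names are hypothetical, and the logic (a monthly roll-up with a running total) is purely illustrative.

  -- Minimal sketch of a reporting view consumed by a BI dashboard.
  -- Table, view, and column names are illustrative placeholders.
  CREATE OR REPLACE VIEW reporting.monthly_sales_summary AS
  SELECT
      DATE_TRUNC('month', order_date)        AS order_month,
      region,
      SUM(order_amount)                      AS total_sales,
      SUM(SUM(order_amount)) OVER (
          PARTITION BY region
          ORDER BY DATE_TRUNC('month', order_date)
      )                                      AS running_total
  FROM staging.orders
  GROUP BY DATE_TRUNC('month', order_date), region;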

Confidential

Data Analyst

Responsibilities:

  • Participated in all phases including Requirement Analysis, Client Interaction, Design, Coding, Testing, Support and Documentation.
  • Created Dimensions and Facts in the physical data model.
  • Installed and configured DataStage client and server tools. Used the DataStage Designer to develop processes for extracting, cleansing, transforming, integrating, and loading data from sources such as flat files, Oracle, and DB2 into the database.
  • Extensively used SQL and PL/SQL coding in DataStage jobs for processing data (a representative SQL sketch appears at the end of this section).
  • Developed batches and sequencers in designer to run and control set of jobs.
  • Used the DataStage Director and its run-time engine to schedule and run jobs, test and debug their components, and monitor the resulting executable versions.
  • Used DataStage Manager for importing metadata from repository, new job categories and creating new data elements.
  • Used Parallel Extender to distribute load among different processors by implementing pipeline and data partitioning.
  • Used Quality Stage to access the Metadata Server to obtain live access to current metadata for enterprise data.
  • Used Quality Stage to access Data generated by WebSphere Information Analyzer.
  • Used Partition methods and collecting methods for implementing parallel processing.
  • Used the DataStage Director to schedule and run the solution, test and debug its components, and monitor the resulting executable versions on both an ad hoc and a scheduled basis.
  • Worked on programs for scheduling Data loading and transformations using Data Stage from UDB/DB2 to Oracle 12x using PL/SQL.
  • Used DS Administrator to configure, create, and maintain DataStage jobs.
  • Wrote Unix Shell Scripts to automate the process.

Environment: IBM Data Stage 11.3, DataStage Director, PL/SQL, Oracle 12x, UDB/DB2, UNIX, Windows.
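
A minimal sketch of the kind of SQL embedded in the DataStage jobs described above, using a MERGE to apply staged rows to a target table. The staging and dimension tables (stg.customer_stg, dw.customer_dim) are hypothetical placeholders.

  -- Minimal sketch: apply staged rows to a target dimension with a MERGE.
  -- Staging and dimension table names are illustrative placeholders.
  MERGE INTO dw.customer_dim d
  USING stg.customer_stg s
     ON (d.customer_id = s.customer_id)
  WHEN MATCHED THEN
    UPDATE SET d.customer_name = s.customer_name,
               d.city          = s.city,
               d.last_updated  = SYSDATE
  WHEN NOT MATCHED THEN
    INSERT (customer_id, customer_name, city, last_updated)
    VALUES (s.customer_id, s.customer_name, s.city, SYSDATE);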
