Sr. Snowflake Developer Resume

NJ

SUMMARY

  • Around 8 years of IT experience in Data Architecture, Analysis, Design, Development, Implementation, Testing and Support of Data Warehousing and Data Integration solutions using Snowflake, Teradata, Matillion, Ab Initio and AWS S3.
  • Good knowledge of Snowflake multi-cluster architecture and its components.
  • Extensively worked on data migration from on-premises systems to the cloud using Snowflake and AWS S3.
  • Good knowledge of and hands-on experience with the Matillion ETL tool.
  • Developed around 50 Matillion jobs to load data from S3 into Snowflake tables.
  • Good knowledge of core Python scripting.
  • Extensively worked on writing JSON scripts and have working knowledge of using APIs.
  • Good understanding of entities, relationships and the different table types in the Snowflake database.
  • In-depth knowledge of SnowSQL queries and experience working with Teradata SQL, Oracle and PL/SQL.
  • Strong working exposure to, and detailed expertise in, project execution methodology.
  • Extensive work experience in bulk loading using the COPY command (see the COPY sketch following this summary).
  • Worked on Tasks, Streams and stored procedures in Snowflake.
  • Created different types of tables in Snowflake, such as transient, permanent and temporary tables.
  • Extensive experience in creating complex views to pull data from multiple tables.
  • Good knowledge of Snowpipe and SnowSQL.
  • Experience with the Snowflake cloud data warehouse and AWS S3 buckets for continuous data loading using Snowpipe.
  • Good exposure to cloud storage accounts such as AWS S3 buckets, creating separate folders for each environment in S3 and placing data files there for external teams.
  • Split larger files based on record count using the split function with AWS S3.
  • Experience with the SnowSQL command-line tool to PUT files into different staging areas and run SQL commands (see the PUT sketch following this summary).
  • Experience in querying external stage (S3) data and loading it into Snowflake tables.
  • Experience using Snowflake zero-copy cloning, SWAP, Time Travel and the different table types.
  • In-depth knowledge of Data Sharing in Snowflake and row-level and column-level security.
  • Clear understanding of Snowflake's advanced concepts, such as virtual warehouses, query performance with micro-partitions, and tuning.
  • Defined virtual warehouse sizing in Snowflake for different types of workloads.
  • Developed transformation logic using Snowpipe for continuous data loads.
  • Good knowledge of Python and UNIX shell scripting.
  • Extensive experience in creating BTEQ, FLOAD, MLOAD and FAST EXPORT scripts, with good knowledge of TPUMP and TPT.
  • Reviewed the usage of SI, JI, HI, PI, PPI, MPPI and compression on various tables.
  • Good understanding of Teradata SQL, the EXPLAIN command, statistics, locks and creation of views.
  • Reported errors captured in error tables to the client, rectified known errors and re-ran scripts.
  • For long-running scripts and queries, identified join strategies, issues and bottlenecks and implemented appropriate performance-tuning methods.
  • Wrote unit test cases and submitted unit test results per the quality process for Snowflake, Ab Initio and Teradata changes.
  • Experience working with HP Quality Center (HP QC) for logging defects and tracking fixes.
  • Quick to adapt to new technologies, applying analytical, logical and innovative thinking to deliver quality software solutions; productive, dedicated and capable of working independently.
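
To illustrate the S3-to-Snowflake bulk loading referenced above, a minimal sketch of an external stage and a COPY load; the stage, file format and table names (my_s3_stage, CUSTOMER_STG) are placeholders, not actual project objects:

    -- Hypothetical external stage pointing at an S3 folder (credentials omitted)
    CREATE OR REPLACE STAGE my_s3_stage
      URL = 's3://my-bucket/dev/customer/'
      FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1);

    -- The staged files can be queried directly before loading
    SELECT $1, $2, $3 FROM @my_s3_stage LIMIT 10;

    -- Bulk load into a permanent table with COPY
    COPY INTO CUSTOMER_STG
      FROM @my_s3_stage
      ON_ERROR = 'CONTINUE';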
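
The SnowSQL PUT workflow mentioned above could look roughly like the sketch below, run from the SnowSQL command line; the local path and table name are hypothetical:

    -- Stage a local file into the table stage of ORDERS_STG, compressing it on upload
    PUT file:///data/exports/orders_20230101.csv @%ORDERS_STG AUTO_COMPRESS = TRUE;

    -- Load the staged file and remove it from the stage once loaded
    COPY INTO ORDERS_STG
      FROM @%ORDERS_STG
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
      PURGE = TRUE;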

TECHNICAL SKILLS

Operating systems: Linux, Windows

Databases: Snowflake, Teradata, Oracle

Cloud technologies: Snowflake, AWS S3

ETL Tools: Matillion, Ab Initio, Teradata

Languages: Python, SQL, PL/SQL

Tools and Utilities: SnowSQL, Snowpipe, Teradata load utilities

PROFESSIONAL EXPERIENCE

Confidential, NJ

Sr. Snowflake Developer

Technology Used: Snowflake, Matillion, Oracle, AWS and Pantomath

Responsibilities:

  • Performed impact analysis for business enhancements and prepared detailed design documents.
  • Participated in sprint calls and worked closely with the manager on gathering requirements.
  • Managed cloud and on-premises solutions for data transfer and storage, developed data marts using Snowflake and AWS, evaluated Snowflake design strategies with S3 (AWS), and conducted internal meetings with various teams to review business requirements.
  • Created and maintained different types of Snowflake tables, such as transient, temporary and permanent.
  • Worked on SnowSQL and Snowpipe; converted Oracle jobs into JSON scripts to support Snowflake functionality.
  • Created Snowpipe for continuous data loading and used COPY to bulk load data (see the Snowpipe sketch after this list).
  • Involved in the end-to-end migration of 80+ objects totaling 2 TB from Oracle to Snowflake; moved data from Oracle to the Snowflake internal stage on AWS using copy options, created roles and access-level privileges, and handled Snowflake admin activities end to end.
  • Converted around 100 view queries from Oracle to Snowflake-compatible SQL and created several secure views for downstream applications.
  • Constructed enhancements in Matillion, Snowflake, JSON scripts and Pantomath.
  • Implemented business transformations, Type 1 and CDC logic using Matillion.
  • Used COPY to bulk load data from S3 into tables and created data sharing between two Snowflake accounts, PROD to DEV (see the data sharing sketch after this list).
  • Monitored project processes, making periodic changes and guaranteeing on-time delivery.
  • Defined virtual warehouse sizing in Snowflake for different types of workloads.
  • Designed and coded the required database structures and components; created new stored procedures and optimized existing queries and stored procedures.
  • Created complex views for Power BI reports.
  • Designed and implemented efficient data pipelines (ETL) to integrate data from a variety of sources into the data warehouse.
  • Performed data quality analysis using SnowSQL by building analytical warehouses on Snowflake.
  • Validated data between Oracle and Snowflake to ensure an apples-to-apples match.
  • Tested code changes with all possible negative scenarios and documented test results.
  • Worked with System Integrator consultants at a deep technical level to successfully position and deploy Snowflake in customer environments.
  • Responsible for implementing the coding standards defined for Snowflake.
  • Performed post-production validations: code validation and data validation after completion of the first cycle run.
  • Worked on the performance tuning/improvement process and the QC process.
  • Supported downstream applications with their production data load issues.
  • Created multiple ETL design docs, mapping docs, ER model docs and unit test case docs.
  • Took care of production runs and production data issues.
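
For the continuous-load work above, a minimal Snowpipe sketch; the pipe, stage and table names (SALES_PIPE, my_s3_stage, SALES_RAW) are hypothetical placeholders:

    -- Hypothetical pipe that auto-ingests new files landing in an S3 stage
    CREATE OR REPLACE PIPE SALES_PIPE
      AUTO_INGEST = TRUE
    AS
      COPY INTO SALES_RAW
      FROM @my_s3_stage/sales/
      FILE_FORMAT = (TYPE = JSON);

    -- Check the pipe's current load status
    SELECT SYSTEM$PIPE_STATUS('SALES_PIPE');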
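
The PROD-to-DEV data sharing could be set up along the lines of the sketch below; the share, database, secure view and account names are placeholders:

    -- On the PROD account: create an outbound share over a secure view
    CREATE OR REPLACE SHARE SALES_SHARE;
    GRANT USAGE ON DATABASE SALES_DB TO SHARE SALES_SHARE;
    GRANT USAGE ON SCHEMA SALES_DB.PUBLIC TO SHARE SALES_SHARE;
    GRANT SELECT ON VIEW SALES_DB.PUBLIC.V_DAILY_SALES TO SHARE SALES_SHARE;  -- must be a secure view
    ALTER SHARE SALES_SHARE ADD ACCOUNTS = DEVACCT123;                        -- placeholder DEV account

    -- On the DEV account: create a read-only database from the share
    CREATE DATABASE SALES_SHARED FROM SHARE PRODACCT123.SALES_SHARE;          -- placeholder PROD account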

Confidential, NYC

Sr. Data Engineer

Technology Used: Snowflake, Teradata, Ab Initio, AWS and Autosys

Responsibilities:

  • Performed impact analysis for business enhancements and modifications.
  • Involved in the end-to-end migration of 40+ objects totaling 1 TB from on-premises Oracle to Snowflake.
  • Moved data from Oracle to AWS S3, to the Snowflake internal stage, and into Snowflake tables using copy options.
  • Involved in data migration from Teradata to Snowflake.
  • Reviewed code to ensure the coding standards defined for Teradata were followed.
  • Responsible for various DBA activities such as setting up access rights and space rights for the Teradata environment.
  • Published reports and dashboards using Power BI.
  • Involved in creating new stored procedures and optimizing existing queries and stored procedures.
  • Created internal and external stages and transformed data during load.
  • Used table CLONE, SWAP and the ROW_NUMBER analytical function to remove duplicate records (see the de-duplication sketch after this list).
  • Worked with both Maximized and Auto-scale multi-cluster warehouse functionality.
  • Used temporary and transient tables on different datasets.
  • Cloned production data for code modifications and testing.
  • Used Time Travel of up to 56 days to recover missed data (see the Time Travel and Task sketch after this list).
  • Created Tasks to run SQL queries and stored procedures.
  • Validated data between Oracle and Snowflake to ensure an apples-to-apples match.
  • Tested code changes with all possible negative scenarios and documented test results.
  • Deployed code up to UAT by creating tags and build lives.
  • Performed post-production validations such as code checks and verifying data loaded into tables after completion of the first cycle run.
  • Worked on the performance tuning/improvement process and the QC process.
  • Supported downstream applications with their production data load issues.
  • Created multiple ETL design docs, mapping docs, ER model docs and unit test case docs.
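
A minimal sketch of the ROW_NUMBER-based de-duplication with a table SWAP; the table and column names (CUSTOMER, CUSTOMER_ID, LOAD_TS) are hypothetical:

    -- Keep only the latest record per key in a transient working table
    CREATE OR REPLACE TRANSIENT TABLE CUSTOMER_DEDUP AS
    SELECT * FROM (
        SELECT t.*,
               ROW_NUMBER() OVER (PARTITION BY CUSTOMER_ID ORDER BY LOAD_TS DESC) AS RN
        FROM CUSTOMER t
    )
    WHERE RN = 1;

    -- Drop the helper column and swap the de-duplicated table into place
    ALTER TABLE CUSTOMER_DEDUP DROP COLUMN RN;
    ALTER TABLE CUSTOMER SWAP WITH CUSTOMER_DEDUP;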
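
The Time Travel recovery and scheduled Tasks above might look roughly like this; the offset, warehouse, task and procedure names are placeholders:

    -- Recover yesterday's state of a table into a new table using Time Travel
    CREATE OR REPLACE TABLE ORDERS_RESTORED
      CLONE ORDERS AT (OFFSET => -86400);   -- table state 24 hours ago

    -- Schedule a stored procedure through a Task and enable it
    CREATE OR REPLACE TASK LOAD_ORDERS_TASK
      WAREHOUSE = ETL_WH
      SCHEDULE = 'USING CRON 0 2 * * * UTC'
    AS
      CALL SP_LOAD_ORDERS();

    ALTER TASK LOAD_ORDERS_TASK RESUME;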

Confidential, NJ

ETL Developer

Technology Used: Ab Initio, Informix, Oracle, UNIX, Crontab

Responsibilities:

  • Extensively worked on data extraction, transformation and loading from source to target systems using BTEQ, FASTLOAD and MULTILOAD.
  • Wrote ad-hoc queries and shared results with the business team.
  • Worked on performance tuning using the EXPLAIN and COLLECT STATISTICS commands (see the tuning sketch after this list).
  • Performed error handling and performance tuning for long-running queries and utilities.
  • Constructed enhancements in Ab Initio, UNIX and Informix.
  • Used Toad to verify counts and results of the graphs and tuned Ab Initio graphs for better performance.
  • Involved in the performance improvement and quality review processes, and supported existing downstream applications with their production load issues.
  • Used sandbox parameters to check graphs in and out of the repository systems.
  • Created ETL design docs and unit, integration and system test cases; involved in production moves.
  • Migrated code into production and validated data loaded into tables after cycle completion.
  • Created FORMATS, MAPS and stored procedures in the Informix database.
  • Created and modified shell scripts to execute graphs and load data into tables using IPLOADER.
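
The EXPLAIN / COLLECT STATISTICS tuning mentioned above, sketched in Teradata SQL; the table, column and index names are hypothetical:

    -- Refresh optimizer statistics on the join/filter columns
    COLLECT STATISTICS ON SALES_FACT COLUMN (SALE_DATE);
    COLLECT STATISTICS ON SALES_FACT COLUMN (CUSTOMER_ID);

    -- Review the optimizer's join strategy and spool estimates before tuning
    EXPLAIN
    SELECT c.CUSTOMER_ID, SUM(f.SALE_AMT)
    FROM   SALES_FACT f
    JOIN   CUSTOMER_DIM c ON c.CUSTOMER_ID = f.CUSTOMER_ID
    WHERE  f.SALE_DATE BETWEEN DATE '2020-01-01' AND DATE '2020-01-31'
    GROUP  BY c.CUSTOMER_ID;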
