
Sr Snowflake Developer Resume


Mount Laurel, NJ

SUMMARY

  • Qualified professional with 10 years of extensive IT experience, especially in Data Warehousing and Business Intelligence applications in the Financial, Retail, Telecom, Insurance, Healthcare, and Technology Solutions industries.
  • Strong experience in migrating other databases to Snowflake.
  • Participate in design meetings for creation of the Data Model and provide guidance on best data architecture practices.
  • Participate in the development, improvement, and maintenance of Snowflake database applications.
  • Evaluate Snowflake design considerations for any change in the application.
  • Build the logical and physical data model for Snowflake as per the changes required.
  • Define roles and privileges required to access different database objects.
  • Define virtual warehouse sizing for Snowflake for different types of workloads.
  • Design and code required Database structures and components.
  • Worked with a cloud architect to set up the environment.
  • Ensure incorporation of best practices and lessons learned from prior projects.
  • Code stored procedures and triggers.
  • Implement performance tuning where applicable.
  • Design batch cycle procedures on major projects using scripting and Control.
  • Develop SQL queries using SnowSQL.
  • Develop transformation logic using Snowpipe.
  • Optimize and fine tune queries.
  • Performance tuning of Big Data workloads.
  • Have strong knowledge of and hands-on experience in ETL.
  • Write highly tuned, performant SQL on various DB platforms, including MPPs.
  • Develop highly scalable, fault-tolerant, maintainable ETL data pipelines to handle vast amounts of data.
  • Build high quality, unit testable code.
  • Operationalize data ingestion, data transformation and data visualization for enterprise use.
  • Define architecture, best practices, and coding standards for the development team.
  • Provide expertise in all phases of the development lifecycle, from concept and design to testing and operation.
  • Work with domain experts, engineers, and other data scientists to develop, implement, and improve upon existing systems.
  • Deep dive into data quality issues and provide corrective solutions.
  • Interface with business customers, gathering requirements and deliver complete Data Engineering solution.
  • Ensure proper data governance policies are followed by implementing or validating Data Lineage, Quality checks, classification, etc.
  • Deliver quality and timely results.
  • Mentor and train junior team members and ensure coding standards are followed across the project.
  • Help talent acquisition team in hiring quality engineers.
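The warehouse-sizing responsibility above can be sketched as a simple policy table. This is a minimal illustration in Python; the workload categories, size choices, and suspend settings are assumptions for demonstration, not prescriptions:

```python
# Illustrative mapping of workload type to Snowflake warehouse size.
# Categories and sizes are assumptions; real sizing depends on query
# profiles and concurrency, not a fixed lookup.
WAREHOUSE_SIZING = {
    "ad_hoc_query": "XSMALL",
    "bi_dashboard": "MEDIUM",
    "etl_batch": "LARGE",
    "data_science": "XLARGE",
}

def create_warehouse_ddl(name: str, workload: str) -> str:
    """Build a CREATE WAREHOUSE statement for a given workload type."""
    size = WAREHOUSE_SIZING.get(workload, "XSMALL")  # default to smallest
    return (
        f"CREATE WAREHOUSE IF NOT EXISTS {name} "
        f"WAREHOUSE_SIZE = '{size}' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;"
    )
```

AUTO_SUSPEND and AUTO_RESUME keep idle warehouses from accruing credits, which is usually the first cost lever in sizing discussions.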

TECHNICAL SKILLS

Cloud Technologies: Snowflake, SnowSQL, Snowpipe, AWS

Big Data: Spark, Hive (LLAP), Beeline, HDFS, MapReduce, Pig, Sqoop, HBase, Oozie, Flume

Reporting Systems: Splunk

Hadoop Distributions: Cloudera, Hortonworks

Programming Languages: Scala, Python, Perl, Shell scripting.

Data Warehousing: Snowflake, Redshift, Teradata

DBMS: Oracle, SQL Server, MySQL, Db2

Operating Systems: Windows, Linux, Solaris, CentOS, OS X

IDEs: Eclipse, NetBeans.

Servers: Apache Tomcat

PROFESSIONAL EXPERIENCE

Confidential

Sr Snowflake Developer

Responsibilities:

  • Created tables and views on Snowflake as per business needs.
  • Used TabJolt to run load tests against the views on Tableau.
  • Created reports in Metabase to track Tableau's cost impact on Snowflake.
  • Participated in sprint planning meetings and worked closely with the manager on gathering requirements.
  • Designed and implemented efficient data pipelines (ETLs) to integrate data from a variety of sources into the data warehouse.
  • Developed ETL pipelines in and out of the data warehouse using Snowflake's SnowSQL; wrote SQL queries against Snowflake.
  • Performed data quality issue analysis using SnowSQL by building analytical warehouses on Snowflake.
  • Implemented data intelligence solutions around Snowflake Data Warehouse.
  • Perform various QA tasks if necessary.
  • Designed and created Hive external tables using a shared metastore instead of Derby, with partitioning, dynamic partitioning, and buckets.
  • Implemented Apache Pig scripts to load data into Hive.
  • Developed Python scripts to take backups of EBS volumes using AWS Lambda and CloudWatch.
  • Created scripts for system administration and AWS using languages such as Bash and Python.
  • Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins along with Python and shell scripts to automate routine jobs.
  • Created pre-commit hooks in Python/Shell/Bash for authentication with a JIRA pattern ID while committing code in SVN, limiting committed file size and type and restricting development team check-ins during a code commit.
  • Integration of Puppet with Apache and developed load testing and monitoring suites in Python.
  • Developed microservice onboarding tools leveraging Python and Jenkins, allowing for easy creation and maintenance of build jobs and Kubernetes deployments and services.
  • Created roles and access-level privileges and took care of Snowflake admin activity end to end.
  • Converted 230 view queries from SQL Server to Snowflake compatibility.
  • Published customized interactive reports and dashboards, with report scheduling, using Tableau Server.
  • Administered users, user groups, and scheduled instances for reports in Tableau.
  • Involved in installation of Tableau Desktop 8.1 and Tableau Server application software.
  • Worked on the Snowflake connector for developing Python applications.
  • Installed Python connectors and the Python connector API.
  • Knowledge of Azure Site Recovery and Azure Backup; installed and configured the Azure Backup agent and virtual machine backup, enabled Azure virtual machine backup from the vault, and configured Azure Site Recovery (ASR).
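The pre-commit hook work above (JIRA pattern ID check plus file-size and file-type limits) can be sketched roughly as follows. The issue-key pattern and the 1 MB cap are illustrative assumptions, not the actual project policy:

```python
import re

# Sketch of a pre-commit style gate: require a JIRA-like issue key
# (e.g. "PROJ-123") in the commit message and cap committed file size.
# The key regex and 1 MB limit are assumptions for demonstration.
JIRA_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")
MAX_FILE_BYTES = 1_000_000

def commit_allowed(message: str, file_sizes: dict) -> bool:
    """Reject commits lacking a JIRA key or containing oversized files.

    `file_sizes` maps file path -> size in bytes for the changeset.
    """
    if not JIRA_PATTERN.search(message):
        return False
    return all(size <= MAX_FILE_BYTES for size in file_sizes.values())
```

In SVN this logic would typically live in a `pre-commit` hook script that parses `svnlook` output; the function above only captures the decision rule.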

Confidential, Mount Laurel, NJ

Sr. Snowflake Developer

Responsibilities:

  • Involved in end-to-end migration of 800+ objects (4 TB) from SQL Server to Snowflake.
  • Moved data from SQL Server → Azure → Snowflake internal stage → Snowflake, with COPY options.
  • Created roles and access-level privileges and took care of Snowflake admin activity end to end.
  • Converted 230 view queries from SQL Server to Snowflake compatibility.
  • Retrofitted 500 Talend jobs from SQL Server to Snowflake.
  • Worked on SnowSQL and Snowpipe.
  • Converted Talend Joblets to support Snowflake functionality.
  • Created data sharing between two Snowflake accounts (Prod → Dev).
  • Migrated 500+ tables and views from Redshift to Snowflake.
  • Redesigned views in Snowflake to increase performance.
  • Unit tested the data between Redshift and Snowflake.
  • Created reports in Looker based on Snowflake connections.
  • Validated Looker reports against the Redshift database.
  • Created data shares out of Snowflake with consumers.
  • Validated data from SQL Server to Snowflake to ensure an apples-to-apples match.
  • Consulted on Snowflake data platform solution architecture, design, development, and deployment, focused on bringing a data-driven culture across enterprises.
  • Drove replacement of other data platform technologies with Snowflake at the lowest TCO, with no compromise on performance, quality, or scalability.
  • Built solutions once for all, with no band-aid approach.
  • Played a key role in testing Hive LLAP and ACID properties to leverage row-level transactions in Hive.
  • Volunteered in designing an architecture for a dataset in Hadoop with an estimated data size of 2 PB/day.
  • Integrated Splunk reporting services with the Hadoop ecosystem to monitor different datasets.
  • Used Avro, Parquet, and ORC data formats to store data in HDFS.
  • Developed workflows in SSIS to automate the tasks of loading data into HDFS and processing it using Hive.
  • Developed alerts and timed reports; developed and managed Splunk applications.
  • Provide leadership and key stakeholders with the information and venues to make effective, timely decisions.
  • Establish and ensure adoption of best practices and development standards.
  • Communicate with peers and supervisors routinely, document work, meetings, and decisions.
  • Work with multiple data sources.

Environment: Snowflake, Redshift, SQL Server, AWS, Azure, Talend, Jenkins, and SQL
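View conversions from SQL Server to Snowflake of the kind described above are largely mechanical syntax rewrites. A minimal sketch with a deliberately small, illustrative rule set (a real conversion effort needs a SQL parser, not regexes):

```python
import re

# A few common SQL Server -> Snowflake rewrites, applied in order.
# This rule list is an illustrative subset, not a complete converter.
REWRITES = [
    (re.compile(r"\bGETDATE\(\)", re.IGNORECASE), "CURRENT_TIMESTAMP()"),
    (re.compile(r"\bISNULL\(", re.IGNORECASE), "IFNULL("),
    (re.compile(r"\[([^\]]+)\]"), r'"\1"'),  # [ident] -> "ident"
]

def convert_view_sql(sql: str) -> str:
    """Apply simple SQL Server to Snowflake syntax rewrites."""
    for pattern, replacement in REWRITES:
        sql = pattern.sub(replacement, sql)
    return sql
```

Running each converted view side by side against the original (row counts, checksums) is what makes a rewrite like this safe to trust at the scale of 230 views.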

Confidential, Aloha, OR

Snowflake/NiFi Developer

Responsibilities:

  • Evaluate Snowflake design considerations for any change in the application.
  • Build the logical and physical data model for Snowflake as per the changes required.
  • Define roles and privileges required to access different database objects.
  • Define virtual warehouse sizing for Snowflake for different types of workloads.
  • Design and code required database structures and components.
  • Worked with a cloud architect to set up the environment.
  • Involved in Migrating Objects from Teradata to Snowflake.
  • Created Snowpipe for continuous data load.
  • Used COPY to bulk load the data.
  • Created internal and external stage and transformed data during load.
  • Used the FLATTEN table function to produce a lateral view of VARIANT, OBJECT, and ARRAY columns.
  • Worked with both Maximized and Auto-scale functionality.
  • Used temporary and transient tables on different datasets.
  • Cloned production data for code modifications and testing.
  • Shared sample data using grant access to customers for UAT.
  • Used Time Travel of up to 56 days to recover missed data.
  • Developed a data warehouse model in Snowflake for over 100 datasets using WhereScape.
  • Heavily involved in testing Snowflake to understand best possible way to use the cloud resources.
  • Developed ELT workflows using NiFi to load data into Hive and Teradata.
  • Worked on migrating jobs from NiFi development to pre-prod and production clusters.
  • Scheduled different Snowflake jobs using NiFi.
  • Used NiFi to ping Snowflake to keep the client session alive.
  • Worked on Oracle databases, Redshift, and Snowflake.
  • Major challenges of the system were integrating and accessing many systems spread across South America; creating a process to involve third-party vendors and suppliers; and creating authorization for various department users with different roles.

Environment: Snowflake, SQL Server, AWS, and SQL
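Snowflake's FLATTEN table function, used above on VARIANT/OBJECT/ARRAY columns, emits one row per array element in a lateral view. A stdlib-only Python simulation of that behavior, using a hypothetical record shape purely for illustration:

```python
import json

def flatten(record: str, path: str):
    """Simulate Snowflake's FLATTEN: yield one {index, value} row per
    element of the array found at dotted `path` in a JSON/VARIANT record."""
    doc = json.loads(record)
    for key in path.split("."):
        doc = doc[key]          # descend into the nested OBJECT
    for index, value in enumerate(doc):
        yield {"index": index, "value": value}

# Hypothetical VARIANT payload: an order with a nested items array.
raw = '{"order": {"items": [{"sku": "A1"}, {"sku": "B2"}]}}'
rows = list(flatten(raw, "order.items"))
```

In Snowflake itself the equivalent would be `SELECT f.index, f.value FROM t, LATERAL FLATTEN(input => t.v:order.items) f`, which joins each source row to its exploded elements.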

Confidential, New York City, NY

Big Data Engineer

Responsibilities:

  • Primarily worked on a project to develop an internal ETL product to handle complex, large-volume healthcare claims data. Designed the ETL framework and developed a number of packages.
  • Performed data source investigation, developed source-to-destination mappings, and performed data cleansing while loading data into staging/ODS regions.
  • Involved in various transformation and data cleansing activities using various control flow and data flow tasks in SSIS packages during data migration.
  • Applied various data transformations such as Lookup, Aggregate, Sort, Multicast, Conditional Split, Derived Column, etc.
  • Played a key role in testing Hive LLAP and ACID properties to leverage row-level transactions in Hive.
  • Volunteered in designing an architecture for a dataset in Hadoop with an estimated data size of 2 PB/day.
  • Integrated Splunk reporting services with the Hadoop ecosystem to monitor different datasets.
  • Used Avro, Parquet, and ORC data formats to store data in HDFS.
  • Developed workflows in SSIS to automate the tasks of loading data into HDFS and processing it using Hive.
  • Developed alerts and timed reports; developed and managed Splunk applications.
  • Provide leadership and key stakeholders with the information and venues to make effective, timely decisions.
  • Establish and ensure adoption of best practices and development standards.
  • Communicate with peers and supervisors routinely, document work, meetings, and decisions.
  • Work with multiple data sources.
  • Designed and created Hive external tables using a shared metastore instead of Derby, with partitioning, dynamic partitioning, and buckets.
  • Implemented Apache Pig scripts to load data into Hive.
  • Developed mappings, sessions, and workflows to extract, validate, and transform data according to business rules using Informatica.
  • Queried, created stored procedures, and wrote complex queries and T-SQL joins to address various reporting operations and random data requests.
  • Performance monitoring and Optimizing Indexes tasks by using Performance Monitor, SQL Profiler, Database Tuning Advisor, and Index tuning wizard.
  • Acted as point of contact to resolve locking/blocking and performance issues.
  • Wrote scripts and an indexing strategy for a migration to Confidential Redshift from SQL Server and MySQL databases.
  • Worked on AWS Data Pipeline to configure data loads from S3 into Redshift.
  • Used JSON schema to define table and column mapping from S3 data to Redshift.
  • Wrote indexing and data distribution strategies optimized for sub-second query response.
  • Developed microservice onboarding tools leveraging Python and Jenkins, allowing for easy creation and maintenance of build jobs and Kubernetes deployments and services.
  • Automated release notes using Python and shell scripts.
  • Used Python scripts to configure the WebLogic application server in all environments.
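The S3-to-Redshift load described above pairs a JSONPaths mapping file with a COPY statement that references it. A sketch that generates both; the table, bucket, and IAM role names are placeholders, not values from the project:

```python
import json

def jsonpaths_for(columns):
    """Build a Redshift JSONPaths document: one JSONPath per target
    column, in column order, matching the table's column order."""
    return json.dumps({"jsonpaths": [f"$.{col}" for col in columns]})

def copy_statement(table, s3_uri, jsonpaths_uri, iam_role):
    """Build the COPY statement that loads S3 JSON via the mapping file.
    All four arguments are illustrative placeholders."""
    return (
        f"COPY {table} FROM '{s3_uri}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS JSON '{jsonpaths_uri}';"
    )
```

Because Redshift matches JSONPaths entries to columns positionally, keeping the mapping list in the same order as the DDL is what makes this approach reliable.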

Confidential

ETL Developer

Responsibilities:

  • Developed Logical and Physical data models that capture current state/future state data elements and data flows using Erwin 4.5.
  • Responsible for designing and building the data mart as per requirements.
  • Extracted Data from various sources like Data Files, different customized tools like Meridian and Oracle.
  • Extensively worked on Views, Stored Procedures, Triggers and SQL queries and for loading the data (staging) to enhance and maintain the existing functionality.
  • Performed analysis of sources, requirements, and the existing OLTP system, and identified required dimensions and facts from the database.
  • Created Data acquisition and Interface System Design Document.
  • Designed the dimensional model of the data warehouse and confirmed source data layouts and needs.
  • Extensively used Oracle ETL process for address data cleansing.
  • Developed and tuned all the Affiliations received from data sources using Oracle and Informatica and tested with high volume of data.
  • Responsible for developing, support and maintenance for the ETL (Extract, Transform and Load) processes using Oracle and Informatica PowerCenter.
  • Created common reusable objects for the ETL team and oversaw coding standards.
  • Reviewed high-level design specification, ETL coding and mapping standards.
  • Designed new database tables to meet business information needs. Designed Mapping document, which is a guideline to ETL Coding.
  • Used ETL to extract files for the external vendors and coordinated that effort.
  • Migrated mappings from Development to Testing and from Testing to Production.
  • Performed Unit Testing and tuned for better performance.
  • Created various Documents such as Source-to-Target Data mapping Document, and Unit Test Cases Document.
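A source-to-target mapping document like the one above can be expressed directly as executable rules, which is also how it gets unit tested. A minimal sketch; the column names and transformations are illustrative assumptions:

```python
# Apply a source-to-target mapping of the kind captured in a mapping
# document: rename source columns and apply simple derivations.
# Column names and transforms here are assumptions for demonstration.
MAPPING = {
    "CUST_NM": ("customer_name", str.strip),   # trim padded names
    "ORD_AMT": ("order_amount", float),        # cast text to numeric
}

def apply_mapping(source_row: dict) -> dict:
    """Produce the target row described by MAPPING from a source row."""
    target = {}
    for src_col, (tgt_col, transform) in MAPPING.items():
        target[tgt_col] = transform(source_row[src_col])
    return target

row = apply_mapping({"CUST_NM": "  Acme Corp ", "ORD_AMT": "125.50"})
```

Keeping the mapping as data rather than inline code mirrors how ETL tools like Informatica store mappings, and makes the unit test cases in the test document mechanical to write.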
