
Sr. Snowflake Developer Resume


Plano, TX

SUMMARY

  • 9+ years of hands-on experience building productionized data ingestion and processing pipelines using Java, Spark, and Scala, as well as experience designing and implementing production-grade data warehousing solutions on large-scale data technologies.
  • Strong experience in migrating other databases to Snowflake.
  • Experience creating 2D drawings using AutoCAD.
  • Work with domain experts, engineers, and other data scientists to develop, implement, and improve upon existing systems.
  • Worked on various Talend components such as tMap, tFilterRow, tAggregateRow, tFileExist, tFileCopy, tFileList, and tDie.
  • Experience in analyzing data using HiveQL.
  • Participate in design meetings for creation of the Data Model and provide guidance on best data architecture practices.
  • Experience with Snowflake Multi-Cluster Warehouses.
  • Experience in the Splunk reporting system.
  • Experience in building Snowpipe.
  • Experience in using Snowflake Clone and Time Travel (see the SnowSQL sketch following this summary).
  • Experience with various data ingestion patterns into Hadoop.
  • Worked on Cloudera and Hortonworks distributions.
  • In-depth understanding of Snowflake cloud technology.
  • In-depth understanding of Snowflake multi-cluster size and credit usage.
  • In-depth understanding of NiFi.
  • Experience in building ETL pipelines using NiFi.
  • Experience with Snowflake Virtual Warehouses.
  • Responsible for identifying bottlenecks and fixing them through performance tuning on the Netezza database.
  • Participated in the development, improvement, and maintenance of Snowflake database applications.
  • Experience in various methodologies such as Waterfall and Agile.
  • Hands-on experience with HBase and Pig.
  • Extensive experience in developing complex stored Procedures/BTEQ Queries.
  • In-depth understanding of Data Warehouse/ODS, ETL concept and modeling structure principles.
  • Build the logical and physical data models for Snowflake as per the required changes.
  • Define roles and privileges required to access different database objects.
  • In-depth knowledge of Snowflake database, schema, and table structures.
  • Define virtual warehouse sizing for Snowflake for different types of workloads.
  • Worked with cloud architect to set up the environment.
  • Coding of stored procedures and triggers.
  • Designs batch cycle procedures on major projects using scripting and Control.
  • Develop SQL queries in SnowSQL.
  • Develop transformation logic using Snowpipe.
  • Optimize and fine-tune queries.
  • Performance tuning of Big Data workloads.
  • Experience in Object-Oriented Analysis and Design (OOAD) and software development using UML methodology; good knowledge of J2EE design patterns and Core Java design patterns.
  • Solid experience and understanding of architecting, designing, and operationalizing large-scale data and analytics solutions on the Snowflake Cloud Data Warehouse.
  • Good knowledge of ETL concepts and hands-on ETL experience.
  • Operationalize data ingestion, data transformation and data visualization for enterprise use.
  • Mentor and train junior team members and ensure coding standards are followed across the project.
  • Help the talent acquisition team hire quality engineers.
  • Experience in real-time streaming frameworks such as Apache Storm.
  • Progressive experience in Big Data technologies and software development, including design, integration, and maintenance.
  • Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and Big Data modeling techniques using Python/Java.
  • Built ETL pipelines in and out of the data warehouse using a combination of Python and Snowflake's SnowSQL; wrote SQL queries against Snowflake.
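
A minimal SnowSQL sketch of the Clone and Time Travel usage mentioned above; the database, schema, and table names (SALES_DB, PUBLIC, ORDERS) are illustrative placeholders, not objects from an actual engagement:

    -- Zero-copy clone of a table, useful for a test copy without duplicating storage
    CREATE OR REPLACE TABLE SALES_DB.PUBLIC.ORDERS_CLONE
      CLONE SALES_DB.PUBLIC.ORDERS;

    -- Time Travel: query the table as it existed 30 minutes ago (offset is in seconds)
    SELECT *
    FROM SALES_DB.PUBLIC.ORDERS AT(OFFSET => -1800);

    -- Time Travel: restore a table dropped by mistake, within the retention window
    UNDROP TABLE SALES_DB.PUBLIC.ORDERS;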

TECHNICAL SKILLS

Cloud Technologies: Snowflake, SnowSQL, Snowpipe.

Big Data Technologies: Spark, Hive, LLAP, Beeline, HDFS, MapReduce, Pig, Sqoop, HBase, Oozie, Flume.

Reporting Systems: Splunk

Hadoop Distributions: Cloudera, Hortonworks

Programming Languages: Scala, Python, Perl, Shell scripting.

Data Warehousing: Snowflake, Redshift, Teradata

DBMS: Oracle, SQL Server, MySQL, DB2

Operating Systems: Windows, Linux, Solaris, CentOS, OS X

IDEs: Eclipse, NetBeans.

Servers: Apache Tomcat

PROFESSIONAL EXPERIENCE

Confidential, Plano TX

Sr. Snowflake Developer

Responsibilities:

  • Involved in end-to-end migration of 800+ objects (4 TB) from SQL Server to Snowflake.
  • Moved data from SQL Server to Azure, staged it in a Snowflake internal stage, and loaded it into Snowflake with COPY options.
  • Created roles and access-level privileges and handled Snowflake admin activities end to end.
  • Converted 230 view queries from SQL Server for Snowflake compatibility.
  • Retrofitted 500 Talend jobs from SQL Server to Snowflake.
  • Worked on SnowSQL and Snowpipe.
  • Converted Talend Joblets to support Snowflake functionality.
  • Created data sharing between two Snowflake accounts (Prod and Dev).
  • Created Snowpipe for continuous data load.
  • Used COPY to bulk load the data (a staging, COPY, and Snowpipe sketch follows this list).
  • Built ETL pipelines in and out of the data warehouse using a combination of Python and Snowflake's SnowSQL; wrote SQL queries against Snowflake.
  • Used UNIX for automatic scheduling of jobs. Involved in unit testing of newly created PL/SQL blocks of code.
  • Worked on EDW modules by retrieving data from different source system databases (DB2, Oracle, Netezza, MS SQL Server, and flat files) and loading it into target Netezza/MS SQL Server databases using IBM DataStage parallel jobs.
  • Created parallel jobs to extract the data from flat files, MS-SQL Server, Netezza and DB2 databases.
  • Created internal and external stages and transformed data during load.
  • Redesigned views in Snowflake to increase performance.
  • Unit tested the data between Redshift and Snowflake.
  • Developed a data warehouse model in Snowflake for over 100 datasets using WhereScape.
  • Created reports in Looker based on Snowflake connections.
  • Wrote SQL queries against Snowflake and developed Unix and Python scripts to extract, load, and transform data; 3+ years of experience in data management (e.g., DW/BI) solutions.
  • Validated Looker reports against the Redshift database.
  • Good working knowledge of ETL tools (Informatica, SSIS).
  • Created Talend Mappings to populate the data into dimensions and fact tables.
  • Wrote ETL jobs to read from web APIs using REST and HTTP calls and load the data into HDFS using Java and Talend.
  • Validated the data from SQL Server to Snowflake to ensure an apples-to-apples match.
  • Consulted on Snowflake data platform solution architecture, design, development, and deployment, focused on bringing a data-driven culture across the enterprise.
  • Built solutions once and for all, with no band-aid approach.
  • Implemented Change Data Capture technology in Talend to load deltas to the data warehouse.
  • Worked on the Snowflake cloud data warehouse and integrated an automated, generic Python framework to process XML, CSV, JSON, TSV, and TXT files.
  • Contributed to roadmaps for enterprise cloud data lake architecture; worked with data architects of various departments to finalize the data lake roadmap.
  • Developed stored procedures and views in Snowflake and used them in Talend for loading dimensions and facts.
  • Designed, developed, tested, implemented, and supported data warehousing ETL using Talend.
  • Very good knowledge of RDBMS topics, ability to write complex SQL, PL/SQL.
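
A hedged sketch of the staging, bulk COPY, and Snowpipe pattern described above; the stage URL, credentials, notification integration, and table names are placeholders for illustration only:

    -- External stage over the Azure container that receives the SQL Server extracts
    CREATE OR REPLACE STAGE stg_sqlserver_extracts
      URL = 'azure://<storage_account>.blob.core.windows.net/extracts'
      CREDENTIALS = (AZURE_SAS_TOKEN = '<sas_token>')
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"');

    -- One-time bulk load with COPY options
    COPY INTO analytics.public.customer
      FROM @stg_sqlserver_extracts/customer/
      ON_ERROR = 'CONTINUE'
      PURGE = FALSE;

    -- Snowpipe for continuous loading as new files land in the stage; on Azure,
    -- AUTO_INGEST relies on a notification integration configured separately
    CREATE OR REPLACE PIPE pipe_customer
      AUTO_INGEST = TRUE
      INTEGRATION = '<notification_integration>'
      AS
      COPY INTO analytics.public.customer
      FROM @stg_sqlserver_extracts/customer/;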

Environment: Snowflake, Redshift, Hadoop, SQL Server, Azure, Talend, Jenkins, and SQL

Confidential, Austin, TX

Snowflake Developer

Responsibilities:

  • Evaluated Snowflake design considerations for any change in the application.
  • Built the logical and physical data models for Snowflake as per the required changes.
  • Defined roles and privileges required to access different database objects.
  • Built ETL pipelines in and out of the data warehouse using a combination of Python and Snowflake's SnowSQL; wrote SQL queries against Snowflake.
  • Responsible for identifying bottlenecks and fixing them through performance tuning on the Netezza database.
  • Defined virtual warehouse sizing for Snowflake for different types of workloads (a warehouse-sizing and role-grant sketch follows this list).
  • Designed and coded the required database structures and components.
  • Experience working with various Hadoop distributions such as Cloudera, Hortonworks, and MapR.
  • Worked with the cloud architect to set up the environment.
  • Worked on Oracle databases, Redshift, and Snowflake.
  • Major challenges of the system were integrating and accessing many systems spread across South America, creating a process to involve third-party vendors and suppliers, and creating authorization for users in various departments with different roles.
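
A minimal sketch of the virtual warehouse sizing and role/privilege setup referenced in this list; the warehouse, role, database, schema, and user names are illustrative assumptions:

    -- Size a warehouse for a specific workload, suspending it when idle to control credit usage
    CREATE WAREHOUSE IF NOT EXISTS WH_ETL_MEDIUM
      WAREHOUSE_SIZE = 'MEDIUM'
      AUTO_SUSPEND = 300
      AUTO_RESUME = TRUE
      INITIALLY_SUSPENDED = TRUE;

    -- Read-only role with the privileges needed to query one schema through that warehouse
    CREATE ROLE IF NOT EXISTS ANALYST_RO;
    GRANT USAGE ON WAREHOUSE WH_ETL_MEDIUM TO ROLE ANALYST_RO;
    GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYST_RO;
    GRANT USAGE ON SCHEMA ANALYTICS.PUBLIC TO ROLE ANALYST_RO;
    GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.PUBLIC TO ROLE ANALYST_RO;
    GRANT ROLE ANALYST_RO TO USER REPORTING_USER;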

Environment: Snowflake, SQL Server, and SQL.

Confidential, NYC, NY

Data Engineer

Responsibilities:

  • Perform unit and integration testing and document test strategy and results.
  • Developed workflows in SSIS to automate the tasks of loading data into HDFS and processing it using Hive.
  • Developed alerts and timed reports, and developed and managed Splunk applications.
  • Involved in various transformation and data-cleansing activities using control flow and data flow tasks in SSIS packages during data migration.
  • Applied various data transformations such as Lookup, Aggregate, Sort, Multicast, Conditional Split, and Derived Column.
  • Worked with multiple data sources.
  • Developed Mappings, Sessions, and Workflows to extract, validate, and transform data according to the business rules using Informatica.
  • Worked with various HDFS file formats such as Avro and SequenceFile, and compression formats such as Snappy and Gzip.
  • Worked on data ingestion from Oracle to Hive.
  • Involved in fixing various issues related to data quality, data availability and data stability.
  • Worked in determining various strategies related to data security.
  • Performance monitoring and Optimizing Indexes tasks by using Performance Monitor, SQL Profiler, Database Tuning Advisor and Index tuning wizard.
  • Worked on the Hue interface for loading data into HDFS and querying it.
  • Designed and created Hive external tables using a shared metastore (instead of Derby) with partitioning, dynamic partitioning, and buckets (see the HiveQL sketch following this list).
  • Wrote scripts and indexing strategy for a migration to Confidential Redshift from SQL Server and MySQL databases.
  • Used Spark SQL to create SchemaRDDs, loaded them into Hive tables, and handled structured data with Spark SQL.
  • Used JSON schema to define table and column mapping from S3 data to Redshift.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs.
  • Used Avro, Parquet, and ORC data formats to store data in HDFS.
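
An illustrative HiveQL sketch of the external, partitioned, bucketed table design referenced above; the table, columns, and HDFS location are hypothetical, not taken from the project:

    -- External table over raw files in HDFS, partitioned by date and bucketed by user
    CREATE EXTERNAL TABLE IF NOT EXISTS web_logs (
      user_id STRING,
      url     STRING,
      ts      TIMESTAMP
    )
    PARTITIONED BY (event_date STRING)
    CLUSTERED BY (user_id) INTO 32 BUCKETS
    STORED AS ORC
    LOCATION '/data/raw/web_logs';

    -- Dynamic partitioning so inserts create date partitions automatically
    SET hive.exec.dynamic.partition = true;
    SET hive.exec.dynamic.partition.mode = nonstrict;

    INSERT OVERWRITE TABLE web_logs PARTITION (event_date)
    SELECT user_id, url, ts, CAST(to_date(ts) AS STRING) AS event_date
    FROM staging_web_logs;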

Confidential

ETL Developer

Responsibilities:

  • Developed Logical and Physical data models that capture current state/future state data elements and data flows using Erwin 4.5.
  • Read data from flat files and loaded it into the database using SQL*Loader.
  • Performed Unit Testing and tuned for better performance.
  • Responsible for designing and building the data mart as per the requirements.
  • Extensively worked on views, stored procedures, triggers, and SQL queries for loading data (staging) and to enhance and maintain existing functionality.
  • Analyzed the source, the requirements, and the existing OLTP system, and identified the required dimensions and facts from the database.
  • Created Data acquisition and Interface System Design Document.
  • Designed the dimensional model of the data warehouse; confirmed source data layouts and needs.
  • Deployed various reports on SQL Server 2005 Reporting Server.
  • Installed and configured SQL Server 2005 on virtual machines.
  • Migrated hundreds of physical machines to virtual machines.
  • Conducted system and functionality testing after virtualization.
  • Extensively involved in new systems development with Oracle 6i.
  • Used SQLCODE, which returns the current error code from the error stack, and SQLERRM, which returns the error message for the current error code (see the PL/SQL sketch following this list).
  • Used Oracle Import/Export utilities.
  • Created external tables to load data from flat files, and wrote PL/SQL scripts for monitoring.
  • Wrote tuned SQL queries for data retrieval involving complex join conditions.
  • Extensively used Oracle ETL process for address data cleansing.
  • Developed and tuned all the Affiliations data received from data sources using Oracle and Informatica, and tested it with high volumes of data.
  • Responsible for the development, support, and maintenance of the ETL (Extract, Transform, and Load) processes using Oracle and Informatica PowerCenter.
  • Created common reusable objects for the ETL team and oversaw coding standards.
  • Reviewed high-level design specification, ETL coding and mapping standards.
  • Designed new database tables to meet business information needs and designed the mapping document, which serves as a guideline for ETL coding.
  • Used ETL to extract files for the external vendors and coordinated that effort.
  • Migrated mappings from Development to Testing and from Testing to Production.
  • Created various Documents such as Source-to-Target Data mapping Document, and Unit Test Cases Document.
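
A hedged PL/SQL sketch of the SQLCODE/SQLERRM error handling mentioned above; the staging, external, and error-log table names are illustrative only:

    -- Load from an external table into staging, logging any failure with SQLCODE and SQLERRM
    DECLARE
      v_code NUMBER;
      v_errm VARCHAR2(512);
    BEGIN
      INSERT INTO stg_customer (customer_id, customer_name)
      SELECT customer_id, customer_name
      FROM ext_customer_file;               -- external table over a flat file
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        v_code := SQLCODE;                  -- current error code from the error stack
        v_errm := SUBSTR(SQLERRM, 1, 512);  -- error message for the current error code
        INSERT INTO etl_error_log (error_code, error_message, logged_at)
        VALUES (v_code, v_errm, SYSDATE);
        COMMIT;
    END;
    /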
