
Big Data Engineer Resume


NY

SUMMARY:

  • Experienced Data Engineer with extensive hands-on experience designing, deploying, and scaling Big Data systems for large corporations. Recognized team player, cross-group collaborator, and fast learner with a mindset for solving problems and innovating.
  • Experience with complete Software Design Lifecycle including design, development, testing and implementation of moderate to advanced complex systems using Agile Methodologies.
  • Around 3 years of experience with the Big Data Hadoop ecosystem, including HDFS, Spark, Hive, and Kafka.
  • Designed, planned, and developed programs to automate the extract, transform, and load (ETL) of large data sets between external data sources and HDFS using Sqoop.
  • Capable of processing large sets of structured, semi-structured, and unstructured data into Hadoop.
  • Experience programming in a Linux environment, including Python and shell scripting.
  • Hands-on knowledge of building single-page applications (SPAs) with Angular.
  • Experience in the development of data-driven, keyword-driven, and hybrid automation frameworks.
  • Hands-on experience writing automation scripts in the QuickTest Professional (QTP) tool using VBScript.
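The Sqoop-based ETL described above can be sketched as a small helper that assembles a `sqoop import` command; the JDBC URL, table name, credentials path, and HDFS target directory below are hypothetical placeholders, not values from an actual engagement:

```python
# Minimal sketch: build a `sqoop import` command for an RDBMS-to-HDFS load.
# The JDBC URL, table, password file, and HDFS path are hypothetical placeholders.

def build_sqoop_import(jdbc_url, table, username, password_file, target_dir, mappers=4):
    """Assemble the argv list for a Sqoop import of one table into HDFS."""
    return [
        "sqoop", "import",
        "--connect", jdbc_url,             # JDBC connection string to the source RDBMS
        "--table", table,                  # source table to extract
        "--username", username,
        "--password-file", password_file,  # HDFS file holding the DB password
        "--target-dir", target_dir,        # HDFS directory for the imported data
        "--num-mappers", str(mappers),     # parallelism of the import
    ]

cmd = build_sqoop_import(
    "jdbc:oracle:thin:@//db-host:1521/ORCL",  # hypothetical Oracle source
    "CUSTOMERS", "etl_user", "/user/etl/.pwd", "/data/raw/customers",
)
# In a scheduled job this list would be passed to subprocess.run(cmd, check=True).
```

Keeping the command as a list (rather than a shell string) avoids quoting bugs when table or path names contain special characters.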

TECHNICAL SKILLS:

Big Data: Hadoop Ecosystem (HDFS, MapReduce, Hive, Sqoop), Spark, Kafka

Databases: Oracle, MS SQL Server

Web Technologies: Angular, TypeScript, Bootstrap

Programming Languages: Python, Shell Script, VB Script, SQL

Operating Systems: Unix, Windows

Automation Tools: HP QTP, UFT, Selenium WebDriver, QC, ALM

DevOps Tools: Jenkins, Jira, Confluence, Splunk, Autosys, Control-M Scheduler

Other Tools: SOAP UI, SVN, PuTTY, Git

PROFESSIONAL EXPERIENCE:

Confidential, NY

Big Data Engineer

  • Defined the workflow and implemented the ingestion of data from source systems (RDBMS, flat files) into the HDFS Hadoop cluster.
  • Created Spark jobs in PySpark to load data and apply various transformations and actions on datasets such as Business Banking, Basel, and Allowance.
  • Created Hive tables to store the processed results in a tabular format.
  • Developed an API that enabled pulling data from Hive to calculate risk across the wholesale credit business.
  • Implemented Kafka producers to publish data from RDBMS sources to Kafka topics for downstream consumers.
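A minimal PySpark sketch of the load-transform-store flow in the bullets above; the input path, column names, threshold, and Hive table name are assumptions. A pure-Python predicate mirroring the Spark filter is kept separate so the core logic can be unit-tested without a cluster:

```python
# Sketch of a PySpark job: read raw data, transform it, persist it to a Hive table.
# Paths, columns, the threshold, and the Hive table name are hypothetical placeholders.

def is_high_exposure(amount, threshold=1_000_000.0):
    """Pure-Python mirror of the Spark filter below (testable without Spark)."""
    return amount is not None and amount >= threshold

def run_job(input_path="/data/raw/basel", hive_table="risk.basel_exposures"):
    # PySpark imports are kept inside the function so the module loads without Spark.
    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("basel-etl")
             .enableHiveSupport()          # required to write managed Hive tables
             .getOrCreate())

    df = spark.read.csv(input_path, header=True, inferSchema=True)

    # Transformations: keep high-exposure rows and stamp each with a load date.
    result = (df.filter(F.col("exposure_amount") >= 1_000_000.0)
                .withColumn("load_date", F.current_date()))

    # Action: store the processed result in tabular form as a Hive table.
    result.write.mode("overwrite").saveAsTable(hive_table)
    spark.stop()
```

Factoring the predicate out of the Spark pipeline is a common pattern for keeping ETL logic testable in plain pytest.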

Environment: Spark (2.1), Sqoop, Python (2.7), Hive, Kafka, Impala, IntelliJ, JIRA (6.4), Control-M (9.0), Linux, Git, HUE
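The Kafka ingestion step above (RDBMS rows published to topics for downstream consumers) might be sketched with the `kafka-python` client; the topic name, broker address, and row shape are assumptions, and the JSON serializer is factored out so it can be checked in isolation:

```python
# Sketch of a Kafka producer publishing RDBMS rows to a topic.
# The topic, broker address, and row schema are hypothetical placeholders.
import json

def serialize_row(row):
    """Encode one RDBMS row (a dict) as UTF-8 JSON bytes for Kafka."""
    return json.dumps(row, sort_keys=True).encode("utf-8")

def publish_rows(rows, topic="rdbms.accounts", brokers="localhost:9092"):
    # kafka-python import kept local so the module loads without the client installed.
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers=brokers,
                             value_serializer=serialize_row)
    for row in rows:
        producer.send(topic, row)   # value_serializer turns each dict into bytes
    producer.flush()                # block until all buffered messages are delivered
    producer.close()
```

Sorting keys in the serializer keeps messages byte-stable for the same row, which simplifies downstream deduplication.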

Confidential

Software Engineer

  • Defined test scenarios and created test cases to increase automation coverage up to 70% in SAP for the Material Management module.
  • Developed automation test scripts to verify that transaction codes conform to business requirements for the Financial Accounting module.
  • Performed root cause analysis on defects, logged them in Quality Center, and provided insights to minimize defects on the SAP platform.

Environment: QTP (10.0), UFT (11.5), QC (10.0), ALM (11.0), SAP ECC (6.0)

Confidential

Software Engineer

  • Developed automation scripts for the regression testing suite using VBScript in the QuickTest Professional tool, and Python with Selenium WebDriver, for the web-based Springboard portal.
  • Involved in developing SQL queries for the MS SQL Server database in SQL Developer.
  • Used SOAP UI to test web services running on WebLogic Server.
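A minimal Python + Selenium WebDriver sketch of a regression check like those described above; the portal URL, element IDs, and expected title are hypothetical, and the pass/fail comparison is a plain function so it can be exercised without a browser:

```python
# Sketch of one Selenium regression test for a web portal.
# The URL, element IDs, and expected title fragment are hypothetical placeholders.

def title_matches(actual_title, expected_fragment):
    """Pass/fail check used by the browser test (pure Python, testable offline)."""
    return expected_fragment.lower() in (actual_title or "").lower()

def run_login_regression(base_url="https://springboard.example.com"):
    # Selenium imports kept local so the module loads without a browser driver.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(base_url)
        driver.find_element(By.ID, "username").send_keys("qa_user")  # hypothetical IDs
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login").click()
        assert title_matches(driver.title, "Dashboard"), "login landing page not reached"
    finally:
        driver.quit()   # always release the browser, even on failure
```

The try/finally ensures the browser is closed even when an assertion fails mid-test, which keeps long regression runs from leaking driver processes.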

Environment: QTP (10.0), QC (10.0), Python, SQL Developer, MS SQL Server (2008), Selenium Webdriver, SOAP UI, Putty, SVN

Confidential

Software Engineer

  • Involved in designing a hybrid framework for the automation suite using QuickTest Professional.
  • Developed automation test scripts, functions, and object repositories for reusable components of the Teamcenter application under test using VBScript.
  • Built UI functions to synchronize the QuickTest Pro tool with the application under test.

Environment: QTP (10.0), (11.0), QART, SVN
