
SDET (ETL Tester for Big Data) Resume


SUMMARY

  • An engineering graduate with around 9 years of experience in the Telecom, Banking, and Food sectors of the IT industry. Creative problem solver with experience in all phases of the software development lifecycle. Proficient in translating business requirements into cost-effective and time-effective technical solutions.

PROFESSIONAL EXPERIENCE

Confidential

SDET (ETL Tester for Big Data)

Responsibilities:

  • This project follows an agile approach so that business stakeholders can continuously provide feedback to the project team; work is delivered in two-week deployment sprints.
  • Verify that transformation rules have been applied correctly and maintain data integrity by checking the data loaded into the target tables.
  • Verify data completeness, accuracy, and absence of duplicates using primary key constraints (a PySpark sketch follows this list).
  • Validate ingestion (raw) and transformation (ETL-transformed) data in the Azure data lake.
  • Create the test strategy and document test cases per the business requirements in the Azure DevOps Test Plans module.
  • Review test cases with the product owner and developers to ensure no scenario is missed.
  • Create and test ADF jobs and PySpark scripts to ensure data availability in the respective environments.
  • Ensure the ADF jobs that ingest data into the landing zone and transform it into the normalized folder, per the business rules, are working correctly.
  • Raise defects in Jira and assign them to the responsible developer, following up until they are fixed.
  • Deploy Hadoop jobs and PySpark packages to the QA environment using the CI/CD tool Jenkins.
  • Create data extract scripts and define naming standards for schemas and tables in the Hadoop data lake.
  • Work with the API systems and validate JSON messages to test real-time data from various source systems (a validation sketch also follows this list).
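
A minimal PySpark sketch of the completeness and duplicate checks described in the bullets above. The table names and key columns are illustrative assumptions, not the actual project objects.

```python
# Minimal PySpark sketch of source-vs-target completeness, duplicate, and
# missing-key checks. Table names and key columns are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-validation").getOrCreate()

source = spark.read.table("raw_zone.customer")          # hypothetical raw/landing table
target = spark.read.table("normalized_zone.customer")   # hypothetical transformed table
keys = ["customer_id"]                                   # assumed primary-key column(s)

# Completeness: row counts should match between source and target.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Count mismatch: source={src_count}, target={tgt_count}"

# Duplication: no primary key should appear more than once in the target.
dupes = target.groupBy(*keys).count().filter(F.col("count") > 1)
assert dupes.count() == 0, "Duplicate primary keys found in target"

# Accuracy spot check: keys present in source but missing from target.
missing = source.select(*keys).subtract(target.select(*keys))
assert missing.count() == 0, "Keys present in source are missing from target"
```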

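A hedged Python sketch of the JSON message validation performed against the real-time APIs; the endpoint and expected fields are assumptions for illustration only, not the project's actual message contract.

```python
# Hedged sketch: pull a real-time JSON message from an API and check its shape.
# The endpoint and expected fields are illustrative assumptions.
import requests

EXPECTED_FIELDS = {"event_id", "event_time", "payload"}   # assumed message contract

def validate_message(url: str) -> dict:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()              # fail fast on HTTP errors
    message = resp.json()                # raises ValueError if the body is not JSON
    missing = EXPECTED_FIELDS - message.keys()
    assert not missing, f"Missing fields in JSON message: {missing}"
    return message

# Example usage (hypothetical endpoint):
# msg = validate_message("https://example.internal/api/events/latest")
```
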
Environment: Azure DevOps, PySpark, ADF pipelines, Microsoft Azure Storage Explorer, Databricks, Jira, SQL Developer, Azure Visual Studio, Postman, Swagger.

Confidential, Atlanta, GA.

SDET (ETL Tester for Teradata & Big Data)

Responsibilities:

  • Create and test Hadoop jobs and Teradata BTEQ scripts to ensure data availability in the respective environments.
  • Ingest real-time data into Postgres using NiFi; test NiFi flows and Postgres reports to ensure real-time data is available for business reporting with minimal delay.
  • Perform source system analysis to provide inputs to the Data Architect for designing and developing DDLs and data movement jobs.
  • Test and fix the business reports that are key to decision making for Confidential inventories.
  • Ensure the jobs that ingest source-system data into the Hadoop data lake and export it to the Teradata environment fulfil the transformations in the source-to-target mapping document.
  • Make the most of grooming sessions by understanding the requirements carefully and identifying and clearing any blockers.
  • Prepare test cases based on the ETL specification document, use cases, and low-level design document.
  • Perform extensive ETL testing, including data completeness, data transformation, and data quality checks for the various data feeds coming from source systems.
  • Create requirements and test plans in HP ALM to cover the user stories created in Rally.
  • Coordinate with the product owner and data architects on requirements.
  • Deploy Hadoop jobs and Teradata packages to the QA environment using the CI/CD tool Jenkins.
  • End-to-end testing of table structures, views, and data: count, data type, length, schema, and constraint validations (a validation sketch follows this list).
  • Analyze production data discrepancy issues raised by the business.
  • Schedule all jobs (file watcher, ingestion, export, and core) with the Control-M tool.
  • Raise and track defects identified during testing, following up through fix and closure.
  • Consume JSON messages and process them into the Hadoop, Teradata, and OHL systems (a streaming sketch also follows this list).
  • Support the migration of code to production and monitor the jobs to ensure they complete successfully.
  • Support the team and provide code fixes when production jobs fail.
  • Execute ETL jobs and perform test analysis, source file extraction, FTP processes, smoke tests, and source file validation for the relevant tables during initial and incremental loads.
  • Make maximum use of the Data Validation Tool (DVT) to keep testing quick and error-free and to reduce overall testing time.
  • Prepare the test exit report as sign-off for the tested code and promote the code and jobs to production.
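
A minimal PySpark sketch of the structural and count validations listed above (column names, data types, and row counts checked against the source-to-target mapping). The table name, expected schema, and load-date column are illustrative assumptions.

```python
# Minimal PySpark sketch of schema and count validation against a
# source-to-target mapping. Table name, expected schema, and load-date
# column are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("structure-validation").getOrCreate()

# Expected columns and types as captured from the mapping document (assumed values).
expected_schema = {
    "order_id": "bigint",
    "order_date": "date",
    "store_nbr": "int",
    "qty_on_hand": "decimal(18,2)",
}

df = spark.read.table("core.inventory_daily")   # hypothetical target table
actual_schema = {f.name: f.dataType.simpleString() for f in df.schema.fields}

# Structural validation: every mapped column must exist with the mapped type.
drift = {col: typ for col, typ in expected_schema.items() if actual_schema.get(col) != typ}
assert not drift, f"Schema drift detected: {drift}"

# Count validation for an incremental load window (assumed partition column).
load_date = "2020-01-15"
incr_count = df.filter(df.load_dt == load_date).count()
print(f"Rows loaded for {load_date}: {incr_count}")
```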

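The JSON consumption above was done with the project's own tooling (NiFi, Kafka, RabbitMQ); below is a hedged PySpark Structured Streaming sketch of the general pattern of reading JSON from Kafka and landing it in the Hadoop data lake. Broker, topic, message schema, and paths are illustrative assumptions.

```python
# Hedged PySpark Structured Streaming sketch: consume JSON messages from Kafka
# and land them in the Hadoop data lake. Broker, topic, message schema, and
# paths are illustrative assumptions, not the actual project configuration.
# (Requires the spark-sql-kafka package on the Spark classpath.)
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("json-ingest").getOrCreate()

# Assumed schema of the incoming JSON payload.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("store_nbr", StringType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
       .option("subscribe", "inventory-events")            # hypothetical topic
       .load())

parsed = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("msg"))
             .select("msg.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/lake/inventory_events")        # hypothetical HDFS path
         .option("checkpointLocation", "/data/chk/inventory")  # hypothetical checkpoint
         .start())
```
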
Environment: Hadoop, Teradata, SQL, PostgreSQL, OHL, NiFi, Kafka, AWS S3, RabbitMQ, Postman, Swagger, Hive, Oozie, Sqoop, MySQL, SQL Developer, SQL Assistant, Rally, QC, Jira and qTest.

Oracle Developer

Confidential

Responsibilities:

  • Managed the database for availability, fine-tuned SQL and PL/SQL code, and developed application modules.
  • Identified and fixed application bugs in the production system, developed and delivered PL/SQL code-fix patches, worked on a new implementation of the system for Bank Permata, and provided guidance on PL/SQL standards and best practices.
  • Involved throughout the SDLC, gathering requirements from end users.
  • Managed UAT (User Acceptance Testing) phases effectively and fixed issues raised in the backend.
  • Effectively managed international stakeholders.
  • Proposed and sustained initiatives such as PL/SQL coding standards, PL/SQL version control, database health checks, and code review policies.
  • Focused on performance tuning of SQL and PL/SQL code, achieving a reduction of more than 50% in execution time; T+2 processes were promoted to T+1 as a result of the tuning.
  • Provided production support for the Amadeus application, ensuring zero open incidents on a monthly basis.
  • Automated the data loading process through Oracle Scheduler, UNIX shell scripts, and Control-M.
  • Reconciled closing balance data for all customers.

Environment: Oracle 10g/11g, SQL, PL/SQL, Subversion, WinSCP, PuTTY, SQL Developer, PL/SQL Developer, Toad, Remedy, SQL Developer Data Modeler.
