Tech Lead Resume

SUMMARY:

  • 15+ years of IT experience in customer systems and ERP.
  • Good experience with Big Data technologies.
  • Designed and built complex data processing pipelines, including ETL and data ingestion
  • Involved in the design and development of Big Data and distributed systems, including building real-time and interactive Big Data analytics frameworks using Hive, Apache Spark, Hadoop, Spark Streaming, Kafka, Spark SQL and MongoDB
  • Excellent understanding of Hadoop, HDFS and Spark architectures (master, slaves/workers, executors, Job Tracker, Task Tracker, NameNode, DataNode, Edge Node), the Spark stack (HDFS, Hive and Spark), and tools such as Hue, Zeppelin and Ambari Views
  • Completed multiple POCs using Big Data ecosystem tools to validate the feasibility of building data pipelines for various use cases
  • Built POC data pipelines with Sqoop, Kafka, Flume, Spark, Spark Streaming (Twitter stream), and the Spark connectors for Cassandra, Elasticsearch and MongoDB
  • Experience with the Cloudera VM and Hortonworks Sandbox for monitoring and testing various Big Data tools
  • Good understanding of the Cassandra and MongoDB databases and the Elasticsearch search server
  • Led and implemented complex projects (implementations, support and enhancements) and managed global teams

TECHNICAL SKILLS:

Big Data Technologies: Apache Spark, Spark Streaming, Spark SQL, Kafka, Hive, Hadoop (HDFS), Sqoop, Flume, SBT, spark-submit; Spark connectors: Cassandra, Elasticsearch, MongoDB, Twitter stream

Programming Skills: Scala, Java (Core Java, Collections, JDBC), PeopleCode (PeopleSoft)

ERP: PeopleSoft CRM, Vantive

Development tools: Eclipse, Maven, SVN, Git, SBT

PROFESSIONAL EXPERIENCE:

Confidential

Tech Lead

Responsibilities:

  • Involved in the design and development of the data pipeline for the JIBE tool
  • Developed a Spark Streaming application in Scala to read data from Oracle tables and publish it to a Kafka topic
  • Developed a Spark Streaming application in Scala to read messages from a Kafka topic, transform them and write the results to MongoDB (an illustrative sketch follows this list)
  • Developed a Spark program in Scala to write documents to MongoDB using the Mongo Casbah connector
  • Developed a Spark program for the SDP tool to publish Repairs and Sales data to Kafka
  • Involved in the design and development of the data pipeline for the Streaming Data Platform (SDP)
  • Developed Pull Adapter and Publisher programs as Spark Streaming applications in Scala to read/write messages from/to Kafka topics
  • Wrote functions implementing transformation logic on Kafka messages in the Spark applications
  • Created Kafka topics with appropriate replication factors and retention periods for the JIBE and SDP applications
  • Created a simple Kafka consumer and a simple Kafka producer to test the programs in test environments
  • Created SBT builds and SBT assembly fat JARs for applications executed on the Spark cluster (see the build.sbt sketch after this list)
  • Created a Spark cluster on Amazon EC2 to test Spark applications during the POC and unit-test phases, with test data on Amazon S3
  • Replaced existing Hive queries with Spark SQL batch jobs
  • Generated new HDFS files so that GBI can pull data from pre-computed views
  • Converted Hive data to Parquet/JSON files with Spark SQL and wrote them to HDFS (see the Spark SQL sketch after this list)
  • Exported Repairs and Contracts data to HDFS using Sqoop
  • Developed Sqoop scripts that run as daily batch jobs to export data from the Repairs and Contracts tables
  • Loaded the exported data into partitioned Hive tables
  • Created Hive tables partitioned by country, product and date
  • Provided data to GBI for business reports and analytics
  • Wrote Hive queries to provide data to business users
  • Ran POCs with other Big Data tools such as the Spark-Cassandra connector, the Spark-Elasticsearch connector and Spark Twitter-stream analysis
  • Developed Spark programs using the Spark-CSV parser to parse CSV files from various sources and write them to HDFS in a common Parquet/JSON format
  • Developed Spark programs using the Spray JSON framework for object serialization with the Kafka and MongoDB connectors
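
Below are brief, illustrative code sketches for some of the bullets above; they are minimal examples rather than the production code, and the topic, table, host, database and collection names in them are hypothetical placeholders.

A minimal sketch of the Kafka-to-MongoDB streaming leg (read messages from a Kafka topic, apply a transformation, write documents to MongoDB with the Casbah client), assuming a topic "jibe-repairs", a database "jibe" and a collection "repairs":

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.kafka.common.serialization.StringDeserializer
import com.mongodb.casbah.Imports._

object KafkaToMongoSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaToMongoSketch")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Hypothetical broker and consumer settings.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker1:9092",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "jibe-consumer"
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("jibe-repairs"), kafkaParams)
    )

    // Transform each Kafka message and write it as a document to MongoDB,
    // opening one Casbah client per partition.
    stream.map(_.value).foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        val client = MongoClient("mongo-host", 27017)
        val coll   = client("jibe")("repairs")
        records.foreach(msg => coll.insert(MongoDBObject("payload" -> msg)))
        client.close()
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```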
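
A minimal sketch of the Hive-to-Parquet/JSON conversion with Spark SQL, as referenced above. The table name, columns and HDFS paths are hypothetical; partitioning by country and product mirrors the partition layout described in the bullets:

```scala
import org.apache.spark.sql.SparkSession

object HiveToParquetSketch {
  def main(args: Array[String]): Unit = {
    // Hive support lets Spark SQL read the existing Hive tables directly.
    val spark = SparkSession.builder()
      .appName("HiveToParquetSketch")
      .enableHiveSupport()
      .getOrCreate()

    // Pre-compute the view once in Spark SQL instead of re-running Hive queries.
    val repairs = spark.sql(
      "SELECT country, product, repair_date, repair_amount FROM repairs")

    // Write the same data in Parquet and JSON for downstream (GBI) consumption.
    repairs.write.mode("overwrite")
      .partitionBy("country", "product")
      .parquet("hdfs:///data/gbi/repairs_parquet")

    repairs.write.mode("overwrite")
      .json("hdfs:///data/gbi/repairs_json")

    spark.stop()
  }
}
```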
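
A minimal build.sbt sketch for producing the SBT assembly (fat) JAR submitted to the Spark cluster with spark-submit. The versions and merge strategy are illustrative assumptions, and the sbt-assembly plugin would need to be enabled in project/plugins.sbt:

```scala
name := "sdp-streaming"
version := "1.0"
scalaVersion := "2.11.12"

// Spark itself is provided by the cluster, so it is excluded from the fat JAR.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"                 % "2.2.0" % "provided",
  "org.apache.spark" %% "spark-sql"                  % "2.2.0" % "provided",
  "org.apache.spark" %% "spark-streaming"            % "2.2.0" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.2.0"
)

// Resolve duplicate META-INF entries when sbt-assembly merges dependency JARs.
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}
```

The resulting assembly JAR would then be passed to spark-submit along with the application's main class when running on the cluster.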

Confidential

Tech Lead and Project Manager

Responsibilities:

  • Replaced existing interfaces and batch jobs in the PeopleSoft environment with the latest open-source tools for faster processing and quicker availability of data
  • Evaluated Big Data tools and their fit for various use cases in Apple business processes

Confidential

Tech Lead and Project Manager

Responsibilities:

  • Configured new Policies, Trigger Points, Rules and Actions in the Active Analytics Framework (AAF) for new business processes that require new notifications.
  • Analyzed and provided solutions for production tickets on the Google Search Appliance (GSA) based search in PeopleSoft Enterprise Portal, which customers use to search Knowledge Documents.

Confidential

Tech Lead and Project Manager

Responsibilities:

  • Analyzed and designed CRM customizations in the areas of HR Help Desk Cases and Worklists
  • Worked with the functional team to analyze the HR Help Desk business process and requirements

Confidential

Tech Lead and Project Manager

Responsibilities:

  • Implementation Manager for the multi-country rollout; managed overall project delivery from the functional, technical, change and testing perspectives
  • Coordinated with business teams, solution architects, functional consultants and development teams for faster closure of reported IT and UAT defects
  • Conducted triage calls with business teams and functional POCs

Confidential

Tech Lead and Project Manager

Responsibilities:

  • Designed customizations to the Sales module of the PeopleSoft CRM application using Application Designer
  • Created Application Engine programs for bulk loading of data using File Layouts and Component Interfaces

Confidential

Tech Lead and Project Manager

Responsibilities:

  • Designed and developed customizations to setup-related pages in the PeopleSoft FSCM product using Application Designer.
  • Created Application Engine programs to process legacy GL data into Tier 2 staging tables as part of the ETL migration.

Confidential

Tech Lead

Responsibilities:

  • Designed and developed the conversion of post-pay service to pre-pay service using Application Package APIs for tariff-plan conversion and integration with the billing system via EAI middleware
  • Developed Application Package APIs for price-plan changes (upgrades and downgrades)
  • Designed and developed contract and price-plan upgrades from Point of Sale, with integration to PeopleSoft, billing and other downstream systems

Confidential

Tech Lead

Responsibilities:

  • Designed and configured the implementation of the PeopleSoft CRM Marketing and Support modules
  • Designed and developed customizations of the Marketing module to enable PeopleSoft Campaigns functionality and custom metrics to calculate ROI

Confidential

Developer

Responsibilities:

  • Configured Order Capture and Customer Care modules according to the business requirements of O2.
  • Developed several custom pages, components and integration messages using Application Designer, PeopleCode, Component Interfaces and Application Engine programs.
