
Big Data Java Developer Resume


NY

PROFESSIONAL SUMMARY:

  • IT professional with 8 years of experience in Big Data technologies and in the analysis, design, testing, and development of object-oriented Java/J2EE enterprise applications.
  • Experience in the financial services domain, including investment banking credit risk reporting.
  • Experience working with multiple Hadoop distributions, e.g., Hortonworks, MapR, and Cloudera.
  • Experience working in Agile methodologies.
  • Experience with the Hadoop stack, including components such as MapReduce, HDFS, Oozie, Apache Falcon, Hive, Pig, Sqoop, NameNode, DataNode, JobTracker, TaskTracker, and YARN.
  • Implemented proofs of concept on the Hadoop stack and different big data analytics tools.
  • Experience with Atlassian Confluence for documentation, Git (Stash), SVN, Jira, Jenkins, Crucible, and TeamCity.
  • Performed data analysis using Hive and Pig.
  • Experience in writing MapReduce programs and using the Apache Hadoop API to analyze historical data (a minimal MapReduce sketch follows this list).
  • Hands-on experience with big data ingestion tools like Flume and Sqoop.
  • Experience in tuning and troubleshooting performance issues in Hadoop cluster.
  • Experience in importing and exporting data using Sqoop from HDFS to relational database systems and vice versa.
  • Experience in using XML-related technologies.
  • Hands-on experience in Linux.
  • Programming experience in UNIX shell scripting.
  • Good understanding of cluster architecture, disaster recovery with Falcon, NoSQL databases, Spark, and Storm.
  • Worked on analyzing and writing Hadoop MapReduce jobs using the Java API, Pig, and Hive.
  • Experience in developing applications using Java/J2EE, EJB, JSP, JSF, Servlets, JMS, JavaMail, JDBC, Struts, JavaScript, HTML, DHTML, XML, Tiles, Spring, TopLink, Hibernate, JSP custom tags, and UNIX shell scripting.
  • Implemented standards and processes for Hadoop based application design and implementation.
  • Evaluated and proposed new tools and technologies to meet the needs of the organization.
  • Documented and explained implemented processes and configurations during upgrades.
  • Supported development, testing, and operations teams during new system deployments.
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
  • Extensive experience in documenting requirements, functional specifications, and technical specifications.
  • Highly motivated, adaptive and quick learner.
  • Exhibited excellent communication and leadership capabilities.
  • Excellent Analytical, Problem solving and technical skills.
  • Strong ability to handle multiple priorities and workloads, and to understand and adapt to new technologies and environments quickly.
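
As a minimal illustration of the MapReduce work referenced above, the sketch below shows a classic word-count job written against the Hadoop Java API. The class name and the input/output paths are illustrative only, not taken from any project described in this resume.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Minimal word-count job: the canonical starting point for the Hadoop Java API.
    public class WordCount {

        public static class TokenMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Emit (token, 1) for every whitespace-separated token in the line.
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class); // safe as a combiner: addition is associative
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /data/input
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /data/output
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }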

TECHNICAL SKILLS:

Programming Languages: Java, C++, SQL, Pig Latin, PL/SQL, Python, UNIX shell scripting.

Java Technologies: JDBC, Java/J2EE, Spring, and Spring REST

Databases: Oracle 8i/9i, NoSQL (HBase), MySQL, MS SQL Server.

IDEs & Utilities: Eclipse and IntelliJ

Web Dev. Technologies: HTML, XML, JavaScript, jQuery, and AngularJS.

Protocols: TCP/IP, HTTP and HTTPS.

Operating Systems: Linux, macOS, Windows 98/2000/NT/XP.

Hadoop Ecosystem: Hadoop and MapReduce, Oozie, Spark, Sqoop, Hive, Pig, HBase, HDFS, Falcon, ZooKeeper, Cassandra (NoSQL).

PROFESSIONAL EXPERIENCE:

Confidential, NY

Big Data Java Developer

Responsibilities:

  • Hadoop development and implementation.
  • Develop market value and asset/liability calculations for the metrics IA (Inventory Aging), ITR (Inventory Turnover Ratio), CFTR (Customer Facing Trade Ratio), and Covered Funds.
  • Participate in review of business requirements and contribute to the development of functional design.
  • Provide end-to-end execution support and manage changes and releases in the production environment.
  • Write efficient MapReduce programs based on the requirements.
  • Pre-process data using Hive, Pig, and shell scripts.
  • Pass output dynamically between Oozie actions and schedule all Hadoop jobs using the Oozie scheduler (see the Oozie client sketch after this list).
  • Import and export data using Sqoop from RDBMS to HDFS and vice versa.
  • Manage issues using JIRA and perform code reviews using Crucible.
  • Translate complex functional and technical requirements into detailed design.
  • Use the Eclipse and IntelliJ IDEs to develop code modules in the development environment.
  • Coordinate with offshore team members and validate critical bug fixes.
  • Provide project status updates, test metrics and reports.
  • Review timelines and scope for project releases.
  • Evaluate effort estimates for project phases.
  • Design training documents for end users of applications.
  • Part of a POC effort to help build new Hadoop clusters.
  • Propose best practices/standards.
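
Below is a minimal sketch of submitting and monitoring one of these workflows through the Oozie Java client API; the Oozie server URL, NameNode address, and HDFS application path are hypothetical placeholders, not values from this engagement.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.WorkflowJob;

    public class SubmitMetricsWorkflow {
        public static void main(String[] args) throws Exception {
            // Hypothetical Oozie server URL.
            OozieClient client = new OozieClient("http://oozie-host:11000/oozie");

            // Workflow properties; the HDFS app path points at the workflow.xml directory.
            Properties conf = client.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/apps/metrics-wf");
            conf.setProperty("nameNode", "hdfs://namenode:8020");
            conf.setProperty("queueName", "default");

            String jobId = client.run(conf); // submit and start the workflow
            while (client.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
                Thread.sleep(10000); // poll until the workflow leaves RUNNING
            }
            System.out.println("Workflow " + jobId + " finished with status "
                    + client.getJobInfo(jobId).getStatus());
        }
    }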

Environment: Hadoop, MapReduce, Hive, Sqoop, HDFS, Java, Python, Oozie, shell scripting, Oracle, Crucible, JIRA, Confluence, Eclipse, WinMerge

Confidential, Vesey St, NY

Hadoop Developer

Responsibilities:

  • Create test data for the Cleaning and Standardization module.
  • Validate and execute test cases on various builds.
  • Test and fix raised defects according to business requirements.
  • Responsible for ingesting data into HDFS using shell scripts for different projects.
  • Wrote MapReduce jobs to ingest data into Hive or HBase tables (a minimal HBase ingest sketch follows this list).
  • Developed HQL scripts to extract data from Hive and Sqoop it out to an RDBMS.
  • Dumped the data using Sqoop into HDFS/Hive for analyzing.
  • Involved in parsing XML/JSON for data received.
  • Created services to work with the various entities provided and exposed them as REST APIs.
  • Designed and Developed Servlets for authentication.
  • Performed root-cause analysis of issues, made code changes, and tested the changes and processes; also involved in bug fixing.
  • Involved in developing Pig scripts.
  • Prepared documentation on the Cleansing and Standardization module, including the project workflow and the Hive and Pig scripts.
  • Involved in daily meetings to analyze and execute test cases.
  • Involved in migrating applications and supporting applications in the production environment.
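
A minimal sketch of a map-only ingest job of the kind described above, writing each input record into an HBase table through TableOutputFormat (HBase 1.x client API); the table name, column family, and CSV layout are hypothetical.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    // Map-only job: each CSV line becomes one Put against the target table.
    public class HBaseIngest {

        public static class IngestMapper
                extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split(",");
                Put put = new Put(Bytes.toBytes(fields[0])); // first field as row key
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("val"),
                        Bytes.toBytes(fields[1]));
                context.write(new ImmutableBytesWritable(put.getRow()), put);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "hbase ingest");
            job.setJarByClass(HBaseIngest.class);
            job.setMapperClass(IngestMapper.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            // Wires TableOutputFormat to the named table; a null reducer keeps it map-only.
            TableMapReduceUtil.initTableReducerJob("ingest_table", null, job);
            job.setNumReduceTasks(0);
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }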

Environment: MapReduce, HDFS, Hive, Tez, Pig, Sqoop, Hue, HBase, Solr/Lucene, Oozie, ZooKeeper, YARN, shell scripting, Eclipse, SVN, Java (JDK 1.6), SQL, Log4j, RAD, WebSphere, AJAX, JavaScript, jQuery, CSS3, WinSCP, PuTTY, FTP, Linux, cron, SQL Developer

Confidential, NY

Hortonworks Consultant

Responsibilities:

  • Involved in analyzing functional requirements and designing big data implementations.
  • Installed and configured Hadoop ecosystem components such as HBase, Flume, Pig, and Sqoop.
  • Managed and reviewed Hadoop log files.
  • Load log data into HDFS using Flume. Worked extensively in creating MapReduce jobs to power data for search and aggregation.
  • Scheduled and monitored Oozie coordinator and workflow tasks.
  • Wrote shell scripts to move external log data into HDFS for processing.
  • Developed Pig Latin scripts to extract data and load it into HDFS/Hive/HBase.
  • Worked extensively with Sqoop for importing metadata from Oracle.
  • Continuous integration with Jenkins.
  • Issue management using JIRA.
  • Designed a data warehouse using Hive. Created partitioned tables in Hive.
  • Mentored the analyst and test teams in writing Hive queries.
  • Extensively used Pig for data cleansing.
  • Developed RESTful services for UI applications.
  • Developed Pig Latin scripts to extract data from the web server output files and load it into HDFS.
  • Developed Pig UDFs to pre-process the data for analysis (a minimal UDF sketch follows this list).
  • Developed workflows in Oozie to automate loading data into HDFS and pre-processing it with Pig.
  • Provided cluster coordination services through ZooKeeper.
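
A minimal sketch of a Java Pig UDF of the kind mentioned above, normalizing a chararray field before analysis; the class name is a hypothetical example. In a Pig Latin script it would be registered with REGISTER and then called like a built-in function.

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Pig EvalFunc: trim and upper-case a chararray field; null-safe.
    public class NormalizeField extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null; // Pig treats null as missing data
            }
            return input.get(0).toString().trim().toUpperCase();
        }
    }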

Environment: Hadoop, MapReduce, HDFS, Pig, Hive, HBase, Falcon, ZooKeeper, Kafka, Storm, Oozie, Java (JDK 1.6), JavaScript, Spring, HCatalog, Tableau, JSP, Oracle 11g/10g, Teradata, REST, PL/SQL, SQL*Plus, Windows NT, UNIX shell scripting, Jenkins, Confluence, Git, Jira, IntelliJ.

Confidential, Naperville, IL

Java Hadoop Developer

Responsibilities:

  • Involved in review of functional and non-functional requirements.
  • Facilitated knowledge transfer sessions.
  • Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
  • Involved in installation, configuration, supporting and managing Hadoop clusters.
  • Importing and exporting data into HDFS and Hive using Sqoop.
  • Involved in defining job flows.
  • Involved in managing and reviewing Hadoop log files.
  • Involved in running Hadoop streaming jobs to process terabytes of XML-format data.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Responsible for managing data coming from different sources.
  • Good understanding of NoSQL databases.
  • Supported MapReduce programs running on the cluster.
  • Involved in loading data from the UNIX file system to HDFS.
  • Installed and configured Hive and wrote Hive UDFs (a minimal UDF sketch follows this list).
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Gained very good business knowledge of health insurance, claim processing, fraud suspect identification, the appeals process, etc.
  • Developed a custom file system plugin for Hadoop so it can access files on the data platform.
  • This plugin allows Hadoop MapReduce programs, HBase, Pig, and Hive to work unmodified and access files directly.
  • Wrote custom SQL to pull data from the IBM Netezza data warehouse to perform analytics.
  • Designed and implemented a MapReduce-based large-scale parallel relation-learning system.
  • Extracted feeds from social media sites such as Facebook and Twitter using Python scripts.
  • Set up and benchmarked Hadoop/HBase clusters for internal use.
  • Set up a Hadoop cluster on Amazon EC2 using Apache Whirr for a POC.
  • Wrote a recommendation engine using Apache Mahout.
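
A minimal sketch of a Hive UDF like those mentioned above, written against the classic org.apache.hadoop.hive.ql.exec.UDF API; the class name and the masking rule are hypothetical examples, not the UDFs actually built on this project. It would be registered in a Hive session with ADD JAR followed by CREATE TEMPORARY FUNCTION before use.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Simple Hive UDF: mask all but the last four characters of an identifier.
    public class MaskId extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;
            }
            String s = input.toString();
            int keep = Math.min(4, s.length());
            StringBuilder masked = new StringBuilder();
            for (int i = 0; i < s.length() - keep; i++) {
                masked.append('*');
            }
            masked.append(s.substring(s.length() - keep));
            return new Text(masked.toString());
        }
    }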

Environment: Java (JDK 1.6), Eclipse, Subversion, Hadoop (Hortonworks and Cloudera distributions), MapReduce, HDFS, Hive, HBase, AWS, DataStax, IBM DataStage 8.1, Tableau, Puppet, Linux, Oracle 11g/10g, PL/SQL, SQL*Plus, Toad 9.6, Windows NT, UNIX shell scripting.

Confidential

Java/Oracle Developer

Responsibilities:
  • Prepared program specifications for the development of PL/SQL procedures and functions.
  • Created Custom Staging Tables to handle import data.
  • Ran batch files for loading database tables from flat files using SQL*Loader.
  • Developed PL/SQL code for updating payment terms (see the JDBC sketch after this list).
  • Created indexes on tables and optimized stored procedure queries.
  • Designed, developed, and tested reports using SQL*Plus.
  • Created indexes and partitioned tables to improve query performance.
  • Involved in preparing documentation and user support documents.
  • Modified existing code and developed PL/SQL packages to perform specialized functions/enhancements in the Oracle application.
  • Developed benchmarking routines using TeraGen and TeraSort.
  • Worked extensively in creating MapReduce jobs to power data for search and aggregation using the Java API, Pig, and Hive.
  • Developed applications using Java/J2EE, EJB, JSP, JSF, Servlets, JMS, JavaMail, JDBC, Struts, JavaScript, HTML, DHTML, XML, Tiles, and Spring.
  • Developed MapReduce code for data manipulation.
  • Wrote Apache Pig scripts to process HDFS data.
  • Designed a data warehouse using Hive. Created partitioned tables in Hive.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Involved in preparing test plans, unit testing, System integration testing, implementation and maintenance.
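
A minimal sketch of invoking a payment-terms procedure like the one above from Java via JDBC; the connection string, credentials, and procedure signature are hypothetical placeholders.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class UpdatePaymentTerms {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:orcl", "app_user", "secret")) {
                conn.setAutoCommit(false);
                // Hypothetical stored procedure: update_payment_terms(vendor_id, term_code)
                try (CallableStatement call =
                        conn.prepareCall("{call update_payment_terms(?, ?)}")) {
                    call.setLong(1, 1001L);     // vendor id
                    call.setString(2, "NET30"); // new payment term
                    call.execute();
                }
                conn.commit();
            }
        }
    }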

Environment: Oracle 9i/10g, PL/SQL, MapReduce, Java, JDBC, HTML, VMware, Hive, Pig, Sqoop, Flume, Eclipse, Linux, UNIX.
