
Java Developer Resume


SUMMARY

  • Software professional with around 8 years of industry experience as a Big Data/Java technical consultant.
  • Hands-on experience installing, configuring, and using Hadoop ecosystem components such as MapReduce, HDFS, HBase, Hive, Sqoop, Pig, and Flume.
  • Around 3 years of extensive experience with Hadoop ecosystem components such as HDFS, MapReduce, Pig, HBase, Sqoop, and Hive for scalable, distributed, high-performance computing.
  • In-depth understanding of Hadoop architecture and its components, such as HDFS, YARN, ResourceManager, NodeManager, Job History Server, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce.
  • Expertise in writing Hadoop jobs for analyzing data using MapReduce, Hive, and Pig running on YARN.
  • Experience with Spark 2.0.0, writing Spark Streaming jobs and creating DataFrames and Datasets from existing datasets to perform actions on different types of data (see the sketch after this list).
  • Experience importing and exporting data with Sqoop between HDFS and relational database systems.
  • Hands-on experience with the full software development life cycle, including business interaction, requirements analysis, design, development, testing, and documentation phases.
  • Capable of processing large sets of structured, semi-structured, and unstructured data and supporting systems application architecture.
  • Experience in Java, JSP, Servlets, Hibernate, Spring, JBoss, JDBC, JavaScript, XML, and HTML.
  • Ability to adapt to evolving technology, with a strong sense of responsibility and accomplishment.
  • Hands-on experience handling multi-terabyte datasets.
  • Self-starter; proactive, with good communication skills and an understanding of business workflow.
  • Excellent analytical, problem-solving, communication, and interpersonal skills, with the ability to interact with individuals at all levels and to work independently as well as part of a team.
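
A minimal sketch of the Spark 2.0 DataFrame/Dataset usage noted above, using the Spark Java API; the application name, input path, and column names are hypothetical, not taken from any project described here:

    import static org.apache.spark.sql.functions.col;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class OrdersByStatus {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("OrdersByStatus").getOrCreate();
            // Read a CSV file from HDFS into a DataFrame (a Dataset<Row> in Spark 2.x).
            Dataset<Row> orders = spark.read()
                    .option("header", "true")
                    .csv("hdfs:///data/orders.csv");
            // Filter, aggregate, and run an action over the data.
            orders.filter(col("status").equalTo("SHIPPED"))
                  .groupBy(col("region"))
                  .count()
                  .show();
            spark.stop();
        }
    }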

TECHNICAL SKILLS

Big Data Ecosystem: HDFS, MapReduce, Pig, Hive, Impala, YARN, Hue, Oozie, ZooKeeper, Apache Spark

Operating Systems: Windows, Ubuntu, Unix

Programming Languages: C, C++, Java, Scala

Scripting Languages: JavaScript

Databases: Oracle 11g/10g/9i, MySQL, DB2, MS SQL Server, SQL, PL/SQL

NoSQL Databases: HBase, Cassandra

Build Tools: Ant, Maven, sbt

Development IDEs: NetBeans, Eclipse IDE

Web Servers: Apache Tomcat 6

Version Control Tools: SVN, Git

Packages: Microsoft Office, PuTTY

PROFESSIONAL EXPERIENCE

Confidential

Java Developer

Environment: Java 1.6, JSP, Struts 1.x, Spring 3.2, Hibernate 4.6, Eclipse, and Oracle 10g

Responsibilities:

  • Designed JSPs per the requirements.
  • Implemented Struts for the controller logic (see the sketch after this list).
  • Responsible for high-level design, low-level design preparation, and development activities.
  • Followed Agile software development practices, including pair programming and Scrum status meetings.
  • Updated the Struts configuration file and the validation and Tiles XML documents.
  • Implemented client-side validation using JavaScript.
  • Implemented features based on the MVC architecture.
  • Involved in database development.
  • Involved in unit testing.
  • Developed POJO classes for Hibernate.
  • Documented the technical details.
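
A minimal sketch of the Struts 1.x controller work above; the action class, form bean, and forward names are hypothetical, not taken from the project:

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.struts.action.Action;
    import org.apache.struts.action.ActionForm;
    import org.apache.struts.action.ActionForward;
    import org.apache.struts.action.ActionMapping;

    // Hypothetical Struts 1.x Action: routes to a logical forward declared
    // in struts-config.xml based on a placeholder check of the form bean.
    public class LoginAction extends Action {
        @Override
        public ActionForward execute(ActionMapping mapping, ActionForm form,
                                     HttpServletRequest request,
                                     HttpServletResponse response) throws Exception {
            LoginForm loginForm = (LoginForm) form; // form bean is hypothetical
            boolean authenticated = loginForm.getUsername() != null
                    && !loginForm.getUsername().isEmpty(); // placeholder check
            return mapping.findForward(authenticated ? "success" : "failure");
        }
    }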

Confidential

Hadoop Developer

Environment: Spark, Sqoop, Scala, Hadoop YARN, Hive, Pig, Flume, Kafka, Oozie, Cassandra

Responsibilities:

  • Implemented data ingestion using Sqoop and Spark, loading data from various RDBMS sources and from CSV and XML files.
  • Handled data cleansing and transformation tasks using Spark (Scala) and Hive.
  • Implemented data consolidation using Spark and Hive to generate data in the required formats, applying ETL tasks for data repair, massaging data to identify its source for audit purposes, and filtering data before storing it back to HDFS.
  • Developed scripts to load log data using Flume and store it in HDFS on a daily basis.
  • Worked on real-time data processing using Spark Streaming and Kafka in Scala.
  • Used Spark RDDs in Scala to transform and filter log lines containing "ERROR", "FAILURE", or "WARNING", then stored the results in HDFS (see the sketch after this list).
  • Uploaded data to Hive and combined new tables with existing databases.
  • Worked on writing Scala programs using Spark on YARN for analyzing data.
  • Created HBase tables to load large sets of structured data.
  • Created Pig scripts with attention to query optimization.
  • Responsible for writing Hive queries for analyzing data in the Hive warehouse using Hive Query Language (HQL).
  • Maintained project documentation for the module.
  • Exported the analyzed data from HDFS to relational databases (MySQL, DB2) using Sqoop, for visualization and for generating reports for the BI team.
  • Monitored system metrics and logs for problems.
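
A minimal sketch of the log-filtering step above. The project used Scala; this sketch uses the Spark Java API for consistency with the rest of this document, and the HDFS paths are hypothetical:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class LogLineFilter {
        public static void main(String[] args) {
            JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("LogLineFilter"));
            // Load raw log lines from HDFS (hypothetical path).
            JavaRDD<String> logs = sc.textFile("hdfs:///data/logs/app/");
            // Keep only the problem lines, mirroring the ERROR/FAILURE/WARNING filter above.
            JavaRDD<String> problems = logs.filter(line ->
                    line.contains("ERROR") || line.contains("FAILURE") || line.contains("WARNING"));
            problems.saveAsTextFile("hdfs:///data/logs/problem-lines");
            sc.stop();
        }
    }
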
Confidential

Hadoop Developer

Environment: Hadoop, HDFS, MapReduce, Hive, Pig, Flume, Sqoop, Oozie, ZooKeeper

Responsibilities:

  • Implemented Sqoop jobs for importing and exporting data between databases and HDFS.
  • Wrote MapReduce jobs for daily and weekly data loads.
  • Wrote Hive and Pig scripts to analyze customer satisfaction index, sales patterns, etc.
  • Extended Hive and Pig core functionality by writing custom UDFs in Java (see the sketch after this list).
  • Installed and configured Hadoop.
  • Wrote Hive queries for data analysis to meet the business requirements.
  • Created Hive tables and worked on them using HiveQL.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
  • Installed and configured Pig and wrote Pig Latin scripts.
  • Used Sqoop to load data from Teradata into HDFS/Hive on a regular basis.
  • Implemented a generic export framework for moving data from HDFS to RDBMS and vice versa.
  • Worked on loading data from the Linux file system to HDFS.
  • Responsible for processing unstructured data using Pig and Hive.
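
One hedged example of the custom-UDF work above: a simple Hive UDF using the classic UDF API. The class name and normalization rule are hypothetical:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical Hive UDF that trims and lower-cases a string column.
    // After packaging: ADD JAR ...; CREATE TEMPORARY FUNCTION normalize_text AS '...';
    public final class NormalizeText extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;
            }
            return new Text(input.toString().trim().toLowerCase());
        }
    }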

Confidential

Java Developer

Environment: Java 1.6, JSP, Struts 1.x, Spring 3.2, Hibernate 4.6, Eclipse, and Oracle 10g

Responsibilities:

  • Implemented Struts for the controller logic.
  • Extensively used SQL queries and PL/SQL stored procedures and triggers to retrieve and update information in the Oracle database using JDBC.
  • Wrote, configured, and maintained the Hibernate configuration files, and wrote and updated the Hibernate mapping files for each Java object to be persisted.
  • Wrote Hibernate Query Language (HQL) queries and tuned them for better performance (see the sketch after this list).
  • Updated the Struts configuration file and the validation and Tiles XML documents.
  • Implemented client-side validation using JavaScript.
  • Implemented features based on the MVC architecture.
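
A minimal sketch of the HQL work above, against the Hibernate 4 Session API; the Customer entity and its properties are hypothetical:

    import java.util.List;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    public class CustomerQueries {
        private final SessionFactory sessionFactory;

        public CustomerQueries(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        // Hypothetical HQL query; named parameters avoid string concatenation
        // and keep the query easy to tune.
        @SuppressWarnings("unchecked")
        public List<Customer> findActiveCustomers() {
            Session session = sessionFactory.openSession();
            try {
                return session
                        .createQuery("from Customer c where c.status = :status order by c.name")
                        .setParameter("status", "ACTIVE")
                        .list();
            } finally {
                session.close();
            }
        }
    }
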
Confidential

Java Developer

Environment: Oracle, JDK, Struts, Hibernate, Tomcat, Windows 2000

Responsibilities:

  • Designed and developed user interfaces using HTML, JSP, and Struts tags.
  • Developed applications using Java/J2EE technologies such as Servlets, JSP, EJB, and JDBC.
  • Validated the views using the Validator plug-in in the Struts framework.
  • Wrote test cases using JUnit, following test-first development (see the sketch after this list).
  • Wrote build files using Ant; used Maven in conjunction with Ant to manage build files.
  • Used Hibernate for data persistence and interaction with the database.
  • Involved in developing the Struts Action classes.
  • Developed test cases for unit testing and sanity testing.
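
A hedged sketch of the JUnit test-first work above; the PriceCalculator class and its discount rule are hypothetical, with the test written before the implementation:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceCalculatorTest {

        // Written first to drive the implementation: a 10% discount
        // on 100.00 should yield 90.00 (hypothetical business rule).
        @Test
        public void appliesPercentageDiscount() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(90.0, calculator.applyDiscount(100.0, 0.10), 0.0001);
        }
    }
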
Confidential

Java Developer

Environment: Oracle, JDK, Struts, Hibernate, Tomcat

Responsibilities:

  • Designed and developed user interfaces using HTML, JSP, and Struts tags.
  • Involved in application performance tuning (code refactoring).
  • Wrote test cases using JUnit, following test-first development.
  • Wrote build files using Ant; used Maven in conjunction with Ant to manage build files.
  • Involved in implementing Data Access Object (DAO) classes (see the sketch after this list).
  • Involved in developing the business logic per the functional specification using Core Java and J2EE.
  • Used Hibernate persistence logic to interact with the database.
  • Involved in writing Hibernate mapping files to provide communication between Java objects and database tables.
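
A minimal sketch of a Hibernate-backed DAO class as described above; the Order entity is hypothetical:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    // Hypothetical DAO: isolates Hibernate session handling behind a small API
    // so business logic never touches the Session directly.
    public class OrderDao {
        private final SessionFactory sessionFactory;

        public OrderDao(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public Order findById(long id) {
            Session session = sessionFactory.openSession();
            try {
                return (Order) session.get(Order.class, id);
            } finally {
                session.close();
            }
        }

        public void save(Order order) {
            Session session = sessionFactory.openSession();
            try {
                session.beginTransaction();
                session.save(order);
                session.getTransaction().commit();
            } finally {
                session.close();
            }
        }
    }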
