
Full Stack Java Developer Resume


Charlotte, NC

SUMMARY:

  • Successful history of effectively implementing systems and directing key initiatives. Deep-rooted interest in designing and crafting efficient modern software.
  • Skilled in troubleshooting with the proven ability to provide creative and effective solutions through the application of highly developed problem-solving skills.
  • Hadoop Developer: Experience in installing, configuring, maintaining, and monitoring Hadoop clusters (Apache, Cloudera, and Sandbox).
  • Hadoop Distributions: Hortonworks, Cloudera CDH4, CDH5, and Apache Hadoop.
  • Hadoop Ecosystem: Hands-on experience with the Hadoop ecosystem, including HDFS, Sqoop, MapReduce, YARN, Pig, Hive, Impala, ZooKeeper, and Oozie.
  • Cassandra Developer and Modeling: Configured and set up a Cassandra cluster. Expertise in Cassandra data modeling and analysis and in Cassandra Query Language (CQL).
  • Data Ingestion: Designed Flume flows and configured the individual components. Efficiently transferred bulk data to and from traditional databases with Sqoop.
  • Data Storage: Experience in maintaining distributed storage (HDFS) and columnar storage (HBase).
  • Data Processing: Processed data using MapReduce and YARN. Worked on Kafka as a proof of concept for log processing.
  • Data Analysis: Expertise in analyzing data using Pig scripts, Hive queries, Spark (Python), and Impala.
  • Management and Monitoring: Maintained the ZooKeeper coordination service and designed and monitored Oozie workflows. Used the Azkaban batch scheduler to control job workflows.
  • Messaging System: Used Kafka in a proof of concept to achieve faster message transfer across systems (see the producer sketch after this list).
  • Scripting: Expertise in Hive, Pig, Impala, shell scripting, Perl, and Python.
  • Cloud Platforms: Configured Hadoop clusters in OpenStack and Amazon Web Services (AWS).
  • Visualization Integration: Integrated Tableau, Google Charts, D3.js, and R with the Hadoop cluster, and connected MS Excel to Hive using the ODBC connector.
  • Java/J2EE: Expertise in Spring Web MVC and Hibernate. Proficient in HQL (Hibernate Query Language).
  • Project Management: Experience with Agile/Scrum methodologies and Jira.
  • Web Interface Design: HTML, CSS, JavaScript, and Bootstrap.
  • A quick learner with a proclivity for new technology and tools.
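
As an illustration of the Kafka proof of concept mentioned above, the sketch below shows a minimal Java producer publishing a log line; the broker address, topic name, and payload are hypothetical.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LogEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // hypothetical broker
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            // Publish one log line to a hypothetical "app-logs" topic.
            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("app-logs", "host-01", "GET /index 200"));
            }
        }
    }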

TOOLS/METHODS: Java/J2EE, Hue, C++, C, Linux, Windows, UNIX, MapReduce, Hive, Impala, Pig, Sqoop, ZooKeeper, Flume, HBase, Netezza, DB2, Teradata, Servlets, JSP, JSF, Struts, JDBC, jQuery, CSS, SAX, DOM, JSON, Python, Perl, JavaScript, Autosys, WebLogic, WebSphere, JUnit, Log4j, J2ME, Eclipse, NetBeans, MySQL, Oracle.

PROFESSIONAL EXPERIENCE:

Full Stack Java Developer

Confidential, Charlotte, NC

Responsibilities:
  • Coordinated with business users to gather business requirements and interacted with technical leads on application-level design.
  • Implemented custom file upload processes in PySpark.
  • Implemented a common JDBC utility for data sourcing in Spark (see the sketch after this list).
  • Optimized and tuned Spark applications using persist, cache, and broadcast.
  • Improved the performance of Spark jobs by tuning job configuration settings.
  • Involved in edge node migration for an enterprise-level cluster and rebuilt the application to meet the new architecture-level standards.
  • Created workflows in Oozie along with managing/coordinating the jobs and combining multiple jobs sequentially into one unit of work.
  • Imported and exported data between RDBMS systems such as Oracle, Teradata, SQL Server, and Netezza and Linux systems such as SAS Grid.
  • Handled semi-structured data such as Excel and CSV files, imported from SAS Grid to HDFS via SFTP.
  • Ingested data into Hive tables using Sqoop and SFTP processes.
  • Used compression techniques such as Snappy and Gzip for data loads and archival.
  • Performed data-level transformations in intermediate tables before building final tables.
  • Handled data integrity checks using Hive queries, Hadoop, and Spark.
  • Wrote Autosys scripts for job scheduling.
  • Automated daily, monthly, quarterly, and ad hoc data loads in Autosys, running on scheduled calendar dates.
  • Involved in production support, BAU activities, and release management.
  • Expertise in writing custom UDFs in Hive.
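
The JDBC sourcing utility and the persist/cache/broadcast tuning above were built in PySpark; as a minimal sketch of the same pattern, the version below uses Spark's Java API to read two tables over JDBC, broadcast the smaller one, and persist the reused join result. The connection URL, table names, and target Hive table are hypothetical.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.storage.StorageLevel;
    import static org.apache.spark.sql.functions.broadcast;

    public class JdbcSourcing {
        // Reads one table over JDBC; the URL and credentials come from job configuration.
        static Dataset<Row> readTable(SparkSession spark, String url, String table) {
            return spark.read()
                    .format("jdbc")
                    .option("url", url)            // e.g. jdbc:oracle:thin:@//host:1521/svc (hypothetical)
                    .option("dbtable", table)
                    .option("fetchsize", "10000")  // bulk reads benefit from a larger fetch size
                    .load();
        }

        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("jdbc-sourcing").getOrCreate();

            Dataset<Row> accounts = readTable(spark, args[0], "ACCOUNTS");   // hypothetical tables
            Dataset<Row> branches = readTable(spark, args[0], "BRANCHES");

            // Broadcast the small dimension table and persist the reused join result:
            // the persist/cache/broadcast tuning described in the bullets above.
            Dataset<Row> joined = accounts.join(broadcast(branches), "branch_id")
                                          .persist(StorageLevel.MEMORY_AND_DISK());
            joined.write().mode("overwrite").saveAsTable("stage.accounts_enriched");
            joined.unpersist();
            spark.stop();
        }
    }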

Environment: Cloudera Hadoop, PySpark, Hive, Pig, Shell, Sqoop, Oozie Workflows, Teradata, Netezza, SQL Server, Oracle, Hue, Impala.

Senior Hadoop Developer

Confidential, Denver, CO

Responsibilities:

  • Planned, installed, and configured distributed Hadoop clusters.
  • Used Sqoop to load data from MySQL and other sources into HDFS on a regular basis.
  • Configured Hadoop tools such as Hive, Pig, ZooKeeper, Flume, Impala, and Sqoop.
  • Ingested data into Hive tables from MySQL using Sqoop, and from Pig and Hive.
  • Used Tez for faster execution of MapReduce jobs.
  • Wrote batch operations across multiple rows for DDL (Data Definition Language) and DML (Data Manipulation Language) to improve performance using client API calls.
  • Grouped and filtered data using Hive queries (HQL) and Pig Latin scripts.
  • Queried both managed and external Hive tables using Impala.
  • Implemented partitioning and bucketing in Hive for more efficient querying of data.
  • Created workflows in Oozie along with managing/coordinating the jobs and combining multiple jobs sequentially into one unit of work.
  • Designed and created both managed and external Hive tables depending on requirements.
  • Expertise in writing custom UDFs in Hive (a sketch follows this list).
  • Used Piggybank, Pig's SVN repository of user-contributed functions.
  • Integrated Hive tables with visualization tools like Tableau and Microsoft Excel.
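
A minimal sketch of the kind of custom Hive UDF mentioned above; the function name and normalization logic are illustrative only.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Normalizes free-text region codes before grouping; NULL inputs pass through.
    public final class NormalizeRegion extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;
            }
            return new Text(input.toString().trim().toUpperCase());
        }
    }

After packaging the class into a jar, it would be registered with ADD JAR and CREATE TEMPORARY FUNCTION before use in Hive queries.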

Environment: Cloudera distribution CDH4, Hadoop, MapReduce, MySQL, Linux, Hive, Pig, Impala, Sqoop, ZooKeeper.

Senior Hadoop Developer

Confidential, MA

Responsibilities:

  • Worked on analyzing the Hadoop cluster and different big data analytic tools, including Pig, Hive, and Sqoop.
  • Installed Hadoop (MapReduce, HDFS) and developed multiple MapReduce, Pig, and Hive jobs for data cleaning and pre-processing.
  • Coordinated with business customers to gather business requirements, interacted with technical peers to derive technical requirements, and delivered the BRD and TDD documents.
  • Extensively involved in the design phase and delivered design documents.
  • Involved in testing and coordinated user testing with the business.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Wrote Hive jobs to parse logs and structure them in tabular format to facilitate effective querying of the log data (a parsing sketch follows this list).
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Experienced in defining job flows.
  • Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
  • Experienced in managing and reviewing the Hadoop log files.
  • Used Pig as an ETL tool for transformations, event joins, and some pre-aggregations before storing the data in HDFS.
  • Loaded and transformed large sets of structured and semi-structured data.
  • Responsible for managing data coming from different sources.
  • Utilized Cloudera's Apache Hadoop distribution.
  • Created the data model for Hive tables.
  • Involved in unit testing and delivered unit test plans and results documents.
  • Exported data from HDFS into RDBMS using Sqoop for report generation and visualization purposes.
  • Worked on Oozie workflow engine for job scheduling.
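
As a sketch of the log-parsing jobs described above, the map-only step below turns raw access-log lines into tab-delimited rows that a Hive external table can read. The log layout (host, timestamp, request, status) and the regular expression are hypothetical.

    import java.io.IOException;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class LogParseMapper
            extends Mapper<LongWritable, Text, NullWritable, Text> {

        private static final Pattern LOG_LINE =
                Pattern.compile("^(\\S+) \\[([^\\]]+)\\] \"([^\"]+)\" (\\d{3})");

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            Matcher m = LOG_LINE.matcher(value.toString());
            if (!m.find()) {
                return;                                // drop malformed lines
            }
            // Emit host, timestamp, request, and status separated by tabs.
            String row = String.join("\t", m.group(1), m.group(2),
                                     m.group(3), m.group(4));
            context.write(NullWritable.get(), new Text(row));
        }
    }

A Hive external table declared with FIELDS TERMINATED BY '\t' over the job's output directory then makes the parsed logs directly queryable.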

Environment: HDFS, Hive, MapReduce, Shell, Pig, Sqoop, Oozie.

Hadoop Developer

Confidential, Boston, MA

Responsibilities:

  • Installed and configured Apache Hadoop 1.0.1 to test the maintenance of log files in the Hadoop cluster.
  • Worked on Cloudera to analyze data stored in HDFS.
  • Involved in the installation of CDH3 and the upgrade from CDH3 to CDH4.
  • Developed MapReduce programs for data analysis and data cleaning (a driver sketch follows this list).
  • Developed Pig Latin scripts to analyze semi-structured data.
  • Created Hive tables, loaded data, and wrote Hive UDFs.
  • Used Sqoop to import data into HDFS and Hive from other data systems.
  • Analyzed data using Hadoop components Hive and Pig.
  • Migrated ETL processes from MySQL to Hive to simplify data manipulation.
  • Worked on Oozie workflow engine for job scheduling.
  • Developed Hive queries to process data for visualization.
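
A sketch of the driver wiring for the MapReduce data-cleaning programs above, using the Hadoop 1.x API listed in this role's environment. The input/output paths are hypothetical, and the mapper is assumed to be a parsing class like the one sketched earlier.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CleaningJobDriver {
        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "log-cleaning");
            job.setJarByClass(CleaningJobDriver.class);
            job.setMapperClass(LogParseMapper.class);  // cleaning mapper, as sketched earlier
            job.setNumReduceTasks(0);                  // map-only: cleaned rows go straight to HDFS
            job.setOutputKeyClass(NullWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }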

Environment: Apache Hadoop 1.0.1, HDFS, CentOS 6.4, Java, MapReduce, Eclipse Indigo, Hive, Pig, Sqoop, Oozie, and MySQL.

Java Developer

Confidential

Responsibilities:

  • Designed the initial Web/WAP pages for a better UI per the requirements.
  • Involved in developing the functional flow of the mZone application.
  • Integrated social media APIs into the application.
  • Used Ajax and JavaScript to handle asynchronous requests to the server, and CSS to handle the look and feel of the application.
  • Involved in the design of basic class diagrams, sequence diagrams, and event diagrams as part of the documentation.
  • Created Hibernate POJOs and developed Hibernate mapping files (a POJO sketch follows this list).
  • Worked on tuning back-end Oracle stored procedures using TOAD.
  • Used Hibernate, an object/relational mapping (ORM) solution, to map the data representation from the MVC model to the Oracle relational data model with an SQL-based schema.
  • Developed SQL queries and Stored Procedures using PL/SQL to retrieve and insert into multiple database schemas.
  • Performed Unit Testing Using JUnit and Load testing using LoadRunner.
  • Implemented Log4j to trace logs and track information.
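
A minimal sketch of a Hibernate POJO of the kind described above; the entity, its fields, and the paired Customer.hbm.xml mapping file are illustrative assumptions.

    import java.io.Serializable;

    // Plain POJO; the mapping to an Oracle table lives in a separate Customer.hbm.xml.
    public class Customer implements Serializable {
        private Long id;       // primary key, typically fed by an Oracle sequence
        private String name;
        private String email;

        public Customer() { } // Hibernate requires a no-arg constructor

        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

The matching hbm.xml entry binds the class to its table and columns, which keeps the POJO itself free of persistence concerns.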

Environment: JSP, Struts, jQuery, Tomcat, CSS, JUnit, Log4j, SQL/PLSQL, Oracle 9i, Hibernate, Web services.

Java Developer

Confidential

Responsibilities:

  • Involved in Requirements gathering, Requirements analysis, Design, Development, Integration and Deployment.
  • Used JavaScript to perform checks and validations on the client side.
  • Extensively used Spring MVC framework to develop the web layer for the application. Configured DispatcherServlet in web.xml.
  • Designed and developed the DAO layer using Spring and Hibernate, including the Criteria API (a DAO sketch follows this list).
  • Created/generated Hibernate classes and XML configuration, and managed CRUD operations (insert, update, and delete).
  • Involved in writing HQL and SQL queries for the Oracle 10g database.
  • Used Log4j for logging messages.
  • Developed unit test classes using JUnit.
  • Developed Business components using Spring Framework and database connections using JDBC.
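
A sketch of the DAO-layer pattern described above, combining Spring-managed transactions with Hibernate's Criteria API; the Account entity and its status field are hypothetical.

    import java.util.List;
    import org.hibernate.Criteria;
    import org.hibernate.SessionFactory;
    import org.hibernate.criterion.Restrictions;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Repository;
    import org.springframework.transaction.annotation.Transactional;

    @Repository
    @Transactional
    public class AccountDao {

        @Autowired
        private SessionFactory sessionFactory;

        // Criteria-based lookup; Spring opens and commits the surrounding transaction.
        @SuppressWarnings("unchecked")
        public List<Account> findByStatus(String status) {
            Criteria criteria = sessionFactory.getCurrentSession()
                    .createCriteria(Account.class)
                    .add(Restrictions.eq("status", status));
            return criteria.list();
        }

        public void save(Account account) {
            sessionFactory.getCurrentSession().saveOrUpdate(account);
        }
    }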

Environment: Spring Framework, Spring MVC, Hibernate, HQL, Eclipse, JavaScript, AJAX, XML, Log4j, Oracle 9i, WebLogic, TOAD.
