
Java Developer Resume


Portland, OR

SUMMARY

  • 8+ years of experience in Enterprise Application Development, Data Warehousing and Big Data technologies.
  • 2+ years of experience as a Hadoop Developer with hands-on experience in the Hadoop ecosystem.
  • 1+ years of experience installing, configuring and testing Hadoop ecosystem components.
  • Experience in performing Big Data Analytics using MapReduce for Apache Hadoop.
  • In-depth knowledge of the HDFS file system.
  • Writing custom data types and input/output formats (see the Writable sketch after this list).
  • Good understanding of the differences between Apache Hadoop releases, such as the new and old MapReduce APIs and the classic and YARN runtimes.
  • Moving data between HDFS and relational databases using Sqoop.
  • Using Flume to load weblog data into HDFS.
  • Analyzing data using Pig scripts and Hive queries; writing custom UDFs for analysis.
  • Chaining jobs and implementing workflows using Oozie.
  • Installing and administering Hadoop clusters using Cloudera Manager.
  • Working knowledge of HDFS federation and the high-availability features in newer releases.
  • Knowledge of NoSQL databases, with hands-on experience in HBase and MongoDB.
  • Knowledge of SQL, with hands-on experience in MySQL and Microsoft SQL Server.
  • Performing analysis using high-level languages such as Ruby and Python; extending Apache Pig functionality using Python.
  • Experience as a Java Developer building web/intranet and client/server applications using Java, J2EE, Servlets, JSP, JSF, EJB, JDBC and SQL.
  • Experience with application servers and web servers such as BEA WebLogic Server, JBoss, IBM WebSphere and Apache Tomcat.
  • Good understanding of XML technologies (XML, XSL, XSD), including Web Services and SOAP.
  • Extensive experience with IDEs such as NetBeans and Eclipse (Galileo, Helios, Indigo).
  • Good interpersonal and communication skills; a team player with strong problem-solving abilities.
  • Quick to learn and master new technologies and to deliver under tight deadlines.
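
For illustration, a minimal sketch of the kind of custom Hadoop data type mentioned above (a Writable pairing a user id with a hit count); the class and field names are assumptions, not taken from any particular project:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    public class UserHitWritable implements Writable {
        private String userId = "";
        private long hits;

        public UserHitWritable() { }                    // Hadoop requires a no-arg constructor

        public UserHitWritable(String userId, long hits) {
            this.userId = userId;
            this.hits = hits;
        }

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeUTF(userId);                       // serialize fields in a fixed order
            out.writeLong(hits);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            userId = in.readUTF();                      // deserialize in the same order
            hits = in.readLong();
        }

        public String getUserId() { return userId; }
        public long getHits() { return hits; }
    }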

TECHNICAL SKILLS

Hadoop/Big Data: HDFS, MapReduce, HBase, Pig, Hive, Sqoop, Flume, MongoDB, Oozie, ZooKeeper, Spark, Storm and Kafka

Java & J2EE Technologies: Core Java

IDEs: Eclipse, NetBeans

Big Data Analytics: Datameer 2.0.5

Frameworks: MVC, Struts, Hibernate, Spring

Programming Languages: C, C++, Java, Python, Ruby, Ant scripts, Linux shell scripts

Databases: Oracle 11g/10g/9i, MySQL, DB2, MS-SQL Server

Web Servers: WebLogic, WebSphere, Apache Tomcat

Web Technologies: HTML, XML, JavaScript, AJAX, SOAP, WSDL

Network Protocols: TCP/IP, UDP, HTTP, DNS, DHCP

ETL Tools: Informatica, Pentaho

Testing: WinRunner, LoadRunner, QTP

PROFESSIONAL EXPERIENCE

Confidential, Alpharetta, Georgia

Hadoop Developer

Environment: Hadoop, MapReduce, HDFS, Hive, Java, SQL, Cloudera Manager, Pig, Sqoop, Oozie, ZooKeeper, PL/SQL, MySQL, Windows, HBase, Spark

Responsibilities:

  • Responsible for building scalable distributed data solutions using Hadoop
  • Worked extensively with Flume for importing social media data
  • Continuously monitored and managed the Hadoop cluster through Cloudera Manager
  • Upgraded the Hadoop cluster from CDH3 to CDH4, set up a high-availability cluster and integrated Hive with existing applications
  • Analyzed the data by running Hive queries and Pig scripts to understand user behavior
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs
  • Handled importing of data from various data sources using Sqoop, performed transformations using Hive and MapReduce, and loaded the data into HDFS
  • Configured Sqoop and developed scripts to extract data from MySQL into HDFS
  • Hands-on experience productionizing Hadoop applications, including administration, configuration management, monitoring, debugging and performance tuning
  • Created HBase tables to store various formats of PII data coming from different portfolios
  • Processed data using Spark (a brief Java sketch follows this list)
  • Provided cluster coordination services through ZooKeeper
  • Partitioned data streams using Kafka
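
As a hedged illustration of the Spark processing mentioned above, a minimal Java sketch that counts page views in weblog data stored on HDFS; the input/output paths, delimiter and field positions are assumptions, not details from the project:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class PageViewCounts {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("PageViewCounts");
            JavaSparkContext sc = new JavaSparkContext(conf);

            JavaRDD<String> logs = sc.textFile("hdfs:///data/weblogs/");   // hypothetical input path

            // Count views per page: split each tab-delimited line and key by the requested URL.
            JavaPairRDD<String, Long> counts = logs
                    .map(line -> line.split("\t"))
                    .filter(fields -> fields.length > 2)
                    .mapToPair(fields -> new Tuple2<>(fields[2], 1L))
                    .reduceByKey(Long::sum);

            counts.saveAsTextFile("hdfs:///output/pageviews/");            // hypothetical output path
            sc.stop();
        }
    }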

Confidential, Bellevue, Washington

Hadoop Developer

Environment: Hadoop, MapReduce, HDFS, Hive, Java, SQL, Cloudera Manager, Pig, Sqoop, Oozie, ZooKeeper, PL/SQL, MySQL, Windows, HBase, Storm

Responsibilities:

  • Responsible for building scalable distributed data solutions using Hadoop
  • Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, managing and reviewing data backups and Hadoop log files
  • Worked extensively with Flume for importing social media data
  • Continuously monitored and managed the Hadoop cluster through Cloudera Manager
  • Analyzed the data by running Hive queries and Pig scripts to understand user behavior
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs
  • Handled importing of data from various data sources using Sqoop, performed transformations using Hive and MapReduce, and loaded the data into HDFS
  • Configured Sqoop and developed scripts to extract data from MySQL into HDFS
  • Hands-on experience productionizing Hadoop applications, including administration, configuration management, monitoring, debugging and performance tuning
  • Created HBase tables to store various formats of PII data coming from different portfolios
  • Processed streaming data using Storm (a bolt sketch follows this list)
  • Provided cluster coordination services through ZooKeeper
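
A minimal sketch of the kind of Storm bolt used for the streaming work above; the tuple field names and the comma-separated record layout are illustrative assumptions only:

    import java.util.Map;
    import org.apache.storm.task.OutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseRichBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    public class ExtractUserBolt extends BaseRichBolt {
        private OutputCollector collector;

        @Override
        public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void execute(Tuple input) {
            String line = input.getStringByField("line"); // raw event emitted by an upstream spout
            String user = line.split(",")[0];             // assume the user id is the first field
            collector.emit(new Values(user));
            collector.ack(input);                         // acknowledge so the tuple is not replayed
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("user"));
        }
    }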

Confidential, Peoria, IL

Hadoop Developer

Environment: Hadoop, MapReduce, HDFS, Hive, Pig, Sqoop, CouchDB, Flume, HTML, XML, SQL, MySQL, J2EE, Eclipse

Responsibilities:

  • Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleansing and preprocessing (see the mapper sketch after this list)
  • Imported and exported data into HDFS and Hive using Sqoop
  • Used multithreading, synchronization, caching and memory management
  • Used Java/J2EE application development skills with Object-Oriented Analysis, and was involved extensively throughout the Software Development Life Cycle (SDLC)
  • Proactively monitored systems and services, architecture design and implementation of Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures
  • Extracted data from MongoDB through Sqoop, placed it in HDFS and processed it
  • Used Flume to collect, aggregate, and store the web log data from different sources like web servers, mobile and network devices and pushed to HDFS
  • Loaded and transformed large sets of structured, semi-structured and unstructured data
  • Supported MapReduce programs running on the cluster
  • Wrote shell scripts to monitor the health check of Hadoop daemon services and respond accordingly to any warning or failure conditions
  • Involved in loading data from UNIX file system to HDFS, configuring Hive and writing Hive UDFs
  • Utilized Java and MySQL from day to day to debug and fix issues with client processes
  • Managed and reviewed log files
  • Implemented partitioning, dynamic partitioning and bucketing in Hive
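
For illustration, a minimal sketch of the kind of data-cleansing mapper referenced in the first bullet; the tab-delimited layout and the five-column validity rule are assumptions, not the project's actual format:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CleanseMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString().trim();
            String[] fields = line.split("\t");

            // Drop empty lines and records with missing columns; normalize the rest.
            if (line.isEmpty() || fields.length < 5) {
                return;
            }
            context.write(new Text(line.toLowerCase()), NullWritable.get());
        }
    }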

Confidential, Arlington, VA

Data Warehouse/BI Analyst

Environment: Data Collection, Data Warehouse, Development, Excel, IT, Metrics, Modeling, Oracle, PowerPoint, Project, Quality, Release, Scripts, SDLC, SQL, SQL Server, System, Test, ETL

Responsibilities:

  • Created Enterprise Business Intelligence reports and worked as Dashboard designer in a Data Warehouse environment
  • Facilitated analyses of analytical business needs and translated them into system requirements and solution definitions
  • Developed and managed metrics, KPIs and reports based on user needs
  • Experienced with data dictionaries, data analysis, data marts and relational databases
  • Led Hadoop ETL/ELT solution design and development using various industry-standard and custom-built tools
  • Strong experience in designing and developing large-scale, optimized data warehouses using Teradata and Oracle
  • In-depth hands-on experience in database and ETL/ELT design and development, with excellent data analysis skills
  • Experience in developing DW solutions for Sales, Operations, Support and Finance
  • Proficient in SQL, with a working understanding of Informatica and Pentaho (a representative KPI query sketch follows this list)
  • Experience with OLAP products and metadata tools on the market
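
As a hedged example of the kind of KPI query behind the reports above, a small JDBC sketch against a hypothetical fact_sales table; the schema, connection URL and credentials are placeholders, not details from the engagement:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class MonthlySalesKpi {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//dwhost:1521/DW";   // placeholder warehouse URL
            String sql = "SELECT TO_CHAR(sale_date, 'YYYY-MM') AS month, SUM(amount) AS revenue "
                       + "FROM fact_sales GROUP BY TO_CHAR(sale_date, 'YYYY-MM') ORDER BY 1";

            try (Connection conn = DriverManager.getConnection(url, "report_user", "secret");
                 PreparedStatement stmt = conn.prepareStatement(sql);
                 ResultSet rs = stmt.executeQuery()) {
                // Print one line per month: the KPI that would feed a dashboard or report.
                while (rs.next()) {
                    System.out.printf("%s\t%.2f%n", rs.getString("month"), rs.getDouble("revenue"));
                }
            }
        }
    }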

Confidential, Portland, OR

JAVA Developer

Environment: JDK 1.5, J2EE 1.4, Agile Development Process, Struts 1.3, Spring 2.0, Web Services (JAX-WS, Axis 2), Hibernate 3.0, RSA, JMS, JSP, Servlets 2.5, WebSphere 6.1, SQL Server 2005, Windows XP, HTML, XML, IBM Rational Application Developer (RAD), ANT 1.6, Log4J, XSLT, XSD, jQuery, JavaScript, Ext JS, JUnit 3.8, SVN

Responsibilities:

  • Worked in an Agile work environment with Content Management system for workflow management and content versioning
  • Involved in designing user screens and validations using HTML, jQuery, Ext JS and JSP as per user requirements
  • Responsible for validation of Client interface JSP pages using Struts form validations
  • Integrated Struts with Spring IoC
  • Used Spring Dependency Injection properties to provide loose-coupling between layers
  • Implemented the Web Service client for the login authentication, credit reports and applicant information using Apache Axis 2 Web Service
  • Used the Hibernate ORM framework with the Spring framework for data persistence and transaction management
  • Used the Hibernate 3.0 object-relational mapping framework to persist and retrieve data from the database (see the DAO sketch after this list)
  • Wrote SQL queries, stored procedures, and triggers to perform back-end database operations
  • Developed ANT Scripts to do compilation, packaging and deployment in the WebSphere server
  • Implemented the logging mechanism using Log4j framework
  • Wrote test cases in JUnit for unit testing of classes
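
A minimal sketch, assuming a hibernate.cfg.xml with the relevant entity mappings, of the Hibernate 3 persistence code described above; the generic DAO shown is illustrative rather than the project's actual data layer:

    import java.io.Serializable;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;
    import org.hibernate.cfg.Configuration;

    public class GenericDao {
        private final SessionFactory sessionFactory =
                new Configuration().configure("hibernate.cfg.xml").buildSessionFactory();

        /** Persist (or update) any mapped entity inside a transaction. */
        public void save(Object entity) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                session.saveOrUpdate(entity);
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }

        /** Retrieve a mapped entity by primary key, or null if it does not exist. */
        public Object findById(Class<?> type, Serializable id) {
            Session session = sessionFactory.openSession();
            try {
                return session.get(type, id);
            } finally {
                session.close();
            }
        }
    }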

Confidential, Minnetonka, MN

Junior JAVA Developer

Environment: Java, JavaScript, HTML, CSS, JDK 1.5.1, JDBC, Oracle 10g, XML, XSL, Solaris and UML

Responsibilities:

  • Involved in Design, Development and Support phases of Software Development Life Cycle (SDLC)
  • Reviewed the functional, design, source code and test specifications
  • Developed the complete front end using JavaScript and CSS
  • Authored the Functional, Design and Test Specifications
  • Implemented the Backend, Configuration DAO and XML generation modules of DIS
  • Analyzed, designed and developed the component
  • Used JDBC for database access
  • Used the Data Transfer Object (DTO) design pattern (see the JDBC/DTO sketch after this list)
  • Performed unit testing and rigorous integration testing of the whole application
  • Wrote and executed test scripts using JUnit
  • Actively involved in system testing
  • Developed XML parsing tool for regression testing
  • Prepared the installation guide, customer guide and configuration document delivered to the customer along with the product
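
For illustration, a small sketch of the JDBC access and DTO pattern mentioned above; the Customer table, columns and connection details are hypothetical, and the try-with-resources syntax assumes a newer JDK than the 1.5.1 listed for this project:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    /** Plain data carrier passed between layers (the DTO). */
    class CustomerDto {
        private long id;
        private String name;

        public long getId() { return id; }
        public void setId(long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    /** Data access object that maps a result set row into the DTO. */
    class CustomerDao {
        private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder

        public CustomerDto findById(long id) throws SQLException {
            String sql = "SELECT id, name FROM customer WHERE id = ?";
            try (Connection conn = DriverManager.getConnection(URL, "app_user", "secret");
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, id);
                try (ResultSet rs = stmt.executeQuery()) {
                    if (!rs.next()) {
                        return null;                     // no matching row
                    }
                    CustomerDto dto = new CustomerDto(); // copy columns into the DTO
                    dto.setId(rs.getLong("id"));
                    dto.setName(rs.getString("name"));
                    return dto;
                }
            }
        }
    }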
