
SQL/Java Developer Resume


Boston, MA

SUMMARY

  • 8 years of progressive experience across the complete software development life cycle, including requirement gathering, design, development, testing, implementation, and maintenance, using SQL, Python, Java/J2EE, and big data technologies.
  • Extensive knowledge of Node.js, HTML5, CSS, and JavaScript; developed front ends using JavaScript.
  • Around 3 years of work experience as a Hadoop developer, with good knowledge of the Hadoop framework, the Hadoop Distributed File System, parallel processing architecture, and data analytics, along with data warehousing technologies and ETL methodologies.
  • Hands on experience in major Big Data components like HDFS, Map Reduce, Hive, Pig, Sqoop, Oozie, Flume, Apache Kafka, Apache Storm, Apache Spark, Zookeeper, Avro and NoSQL databases like HBase, Cassandra, MongoDB.
  • Hands on Experience on UNIX Shell Scripting for performing all admin operations.
  • Hands on Experience on Impala Scripts for ETL and Analysis.
  • Proficient in the IBM MDM (Master Data Management) Workbench.
  • Hands on experience in import/export of data using the Hadoop data management tool Sqoop.
  • Hands on experience in writing complex Map Reduce programs to perform analytics based on common patterns including joins, sampling, data organization, filtering, and summarization.
  • Extensive experience performing ETL/Data warehousing database testing.
  • Hands on experience in writing Map Reduce programs using Java and Python.
  • Experience with the Apache Crunch framework for testing and running Map Reduce programs.
  • Hands on experience creating Hive tables, working with them using HiveQL, and writing Hive queries for data analysis to meet business requirements.
  • Experience in writing custom UDFs, UDAFs, and UDTFs to extend Hive and Pig core functionality (a minimal UDF sketch appears at the end of this summary).
  • Hands on experience in writing Pig scripts using Pig Latin to perform ETL operations.
  • Hands on experience in performing real time analytics on big data using HBase and Cassandra.
  • Experience in using Flume to stream data into HDFS.
  • Experience with the Oozie workflow engine in running workflow jobs with actions that run Hadoop Map Reduce and Pig jobs.
  • Good practical understanding of cloud infrastructure like Amazon Web Services (AWS).
  • Experienced with configuring and monitoring large clusters using different distributions like Cloudera and Hortonworks.
  • Monitored multiple Hadoop cluster environments using Cloudera Manager and Ganglia.
  • Experience in Software Development Life Cycle (Requirements Analysis, Design, Development, Testing, Deployment and Support).
  • Extensive experience in middle-tier development using J2EE technologies like JDBC, JNDI, JSP, Servlets, JSF, Struts, Spring, Hibernate, and EJB.
  • Experienced with working in SOA architectures by implementing SOAP/REST web services that integrate with multiple applications.
  • Experience with web-based UI development using jQuery UI, jQuery, CSS, HTML, HTML5, XHTML and JavaScript.
  • Experience in using IDEs like Eclipse and NetBeans.
  • Experience with build tools like Maven and Ant.
  • Development experience in DBMS like Oracle, MS SQL Server, Teradata and MySQL.
  • Developed stored procedures and queries using PL/SQL.
  • Expertise in RDBMS like Oracle, MS SQL Server, MySQL and DB2.
  • Support development, testing, and operations teams during new system deployments.
  • Evaluate and propose new tools and technologies to meet the needs of the organization.
  • An excellent team player and self-starter with good communication skills and a proven ability to finish tasks ahead of deadlines.
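
Illustrative code (a minimal sketch): the custom Hive UDFs mentioned above cannot be shared verbatim, so the class below is a hypothetical example of the pattern, assuming the classic org.apache.hadoop.hive.ql.exec.UDF base class; the name and behavior are illustrative only.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical UDF: normalizes a free-text rating value before aggregation in Hive.
    public class NormalizeRating extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;                // pass NULLs through unchanged
            }
            return new Text(input.toString().trim().toUpperCase());
        }
    }

Such a UDF would typically be registered from HiveQL with ADD JAR followed by something like CREATE TEMPORARY FUNCTION normalize_rating AS 'NormalizeRating'.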

PROFESSIONAL EXPERIENCE

Confidential, Minneapolis, MN

Hadoop Developer

Responsibilities:

  • Configured, deployed, and maintained multi-node Dev and Test Kafka clusters.
  • Developed multiple Kafka producers and consumers from scratch to implement the organization’s requirements (a minimal producer sketch appears at the end of this section).
  • Responsible for creating, modifying and deleting topics (Kafka Queues) as and when required with varying configurations involving replication factors, partitions and TTL.
  • Designed and developed tests and POCs to benchmark and verify data flow through the Kafka clusters.
  • Developed code to write canonical model JSON records from various input sources to Kafka Queues.
  • Configured, deployed, and maintained a single-node Storm cluster in the DEV environment.
  • Configured, deployed, and maintained a single-node Zookeeper cluster in the DEV environment.
  • Accessed Hive tables from Java applications using JDBC to perform analytics.
  • Developed Storm bolts and topologies involving Kafka spouts to stream data from Kafka.
  • Configured Hive bolts and wrote data to Hive in the Hortonworks Sandbox as part of a POC.
  • Performed functional testing on updated fields in the application.
  • Experienced with batch processing of data sources using Apache Spark and Elasticsearch.
  • Installed and configured Hortonworks Sandbox as part of POC involving Kafka-Storm-HDFS data flow.
  • Analyzed the data by performing Hive queries on an existing database.
  • Developed a code base to stream data from sample data files > Kafka > Kafka spout > Storm bolt > HDFS bolt.
  • Developed predictive analytics using Apache Spark Scala APIs.
  • Documented the data flow from application > Kafka > Storm > HDFS > Hive tables.

Environment: Hadoop Distributed File System (HDFS), Hive, Hortonworks Sandbox, Apache Kafka, Apache Storm, Java JDK 1.8, Eclipse Luna, Zookeeper, JSON file format.
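
Illustrative code (a minimal sketch): a Kafka producer along the lines of the producers described above; the broker address, topic name, and JSON payload are hypothetical placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class CanonicalRecordProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical dev broker
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            // Write one canonical-model JSON record to a hypothetical topic.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String json = "{\"accountId\":\"42\",\"status\":\"NEW\"}";
                producer.send(new ProducerRecord<>("canonical-records", "42", json));
                producer.flush();
            }
        }
    }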

Confidential, Columbus, OH

Hadoop Developer

Responsibilities:

  • Involved in cluster setup, monitoring, and administration tasks like commissioning and decommissioning nodes and assigning quotas.
  • Experienced with running multi-job workflows using the Oozie client API and Java schedulers.
  • Involved in day-to-day support and development activities/issues on Cassandra servers.
  • Implemented a sentiment analytics tool that performs analysis on large XML files using Map Reduce programs.
  • Performed unit testing and documentation testing of existing Python scripts.
  • Created Cassandra tables using CQL to load large sets of Structured, semi-structured and unstructured data coming from UNIX, NoSQL and a variety of portfolios.
  • Worked on Cassandra for implementing data analytics.
  • Experienced with handling Avro data files and JSON files using the Avro data serialization system.
  • Created new design solution so that analytics and reporting can be directly done through Impala.
  • Experienced with optimizing the sort and shuffle phase in the Map Reduce framework.
  • Implemented custom counters to save log information to an external system.
  • Experienced with implementing map-side, reduce-side, and optimized join implementations.
  • Worked on the Apache Crunch framework for running Map Reduce pipelines for easy testing.
  • Performed data aggregation using Apache Crunch.
  • Involved in creating Hive tables and partitions, loading them with data, and writing Hive queries which run internally as Map Reduce jobs.
  • Implemented an analytical platform that used HiveQL functions and different kinds of join operations like map joins and bucketed map joins.
  • Developed Hive UDFs for rating aggregation.
  • Experienced with optimizing Hive queries and joins, with hands-on experience in Hive performance tuning.
  • Used the Oozie tool for job scheduling.
  • Developed new functions per customer requirements based on existing Python code.
  • Created user defined types to store specialized data structures in Cassandra.
  • Developed HBase Java client API code for CRUD operations.
  • Wrote unit test cases using MRUnit, JUnit, and EasyMock (a minimal MRUnit sketch appears at the end of this section).
  • Importing and exporting data into HDFS and Hive using Sqoop. Involved in loading data from UNIX file system to HDFS.
  • Optimized Map Reduce jobs to use HDFS efficiently by using various compression mechanisms such as LZO and Snappy.
  • Handled importing of data from various data sources, performed transformations using Hive, Map Reduce, loaded data into HDFS and extracted the data into HDFS using Sqoop.
  • Implemented a MongoDB data warehouse for energy trading data.
  • Implemented advanced procedures like text analytics and processing using the in-memory computing capabilities of Spark.
  • Designed and assisted the Informatica ETL developer in creating new aggregates.
  • Experienced in managing and reviewing log files using the web UI and Cloudera Manager.
  • Performed data extraction, transformation, and loading using ETL methodologies.
  • Used the Spark-Cassandra connector to load data to and from Cassandra.
  • Developed ETL processes to transfer data from different sources using Sqoop, Impala, and Bash.

Environment: Hadoop, Map Reduce, HDFS, Hive, Sqoop, HiveQL, Oozie, Avro Data Serialization, Cloudera, Java, MySQL, SQL, Unix, Eclipse, Maven, JUnit, Jenkins
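
Illustrative code (a minimal sketch): an MRUnit test in the spirit of the unit tests noted above, exercising Hadoop's stock TokenCounterMapper as a stand-in for the project's own mappers, which are confidential.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Test;

    public class TokenCounterMapperTest {

        // One input line should yield one (token, 1) pair per token, in order.
        @Test
        public void emitsOneCountPerToken() throws IOException {
            MapDriver<Object, Text, Text, IntWritable> driver =
                    MapDriver.newMapDriver(new TokenCounterMapper());

            driver.withInput(new LongWritable(1L), new Text("hive sqoop"))
                  .withOutput(new Text("hive"), new IntWritable(1))
                  .withOutput(new Text("sqoop"), new IntWritable(1))
                  .runTest();
        }
    }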

Confidential, Concord, NH

Hadoop Developer

Responsibilities:

  • Loaded data into HDFS and extracted the data from MySQL into HDFS using Sqoop.
  • Experienced with performing analytics on time-series data using HBase and its Java API (a minimal client sketch appears at the end of this section).
  • Supported Map Reduce programs running on the cluster and wrote Map Reduce jobs using the Java API.
  • Implemented a POC in HBase to decide between tall vs. narrow table designs.
  • Experienced with accessing NoSQL databases like HBase using different client APIs such as Thrift, Java, and REST.
  • Performed Data Analytics with ETL Methodologies.
  • Understood business scenarios, converted them to MDM business logic, and implemented the business rules as MDM external rules.
  • Performed analysis of Spark Streaming.
  • Analyzed and developed a Spark-Cassandra connector flow to load data from flat files into Cassandra.
  • Created data models for customer data using the Cassandra Query Language (CQL).
  • Integrated the NoSQL database HBase with Map Reduce to move bulk data into HBase.
  • Implemented Hive UDFs to validate against business rules before data moves to Hive tables.
  • Experienced with joining different data sets using Pig join operations and performing queries using Pig scripts.
  • Experienced with Pig Latin operations and writing Pig UDFs to perform analytics.
  • Implemented Unix shell scripts to perform cluster admin operations.
  • Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports.
  • Created a POC to store server log data in Cassandra to identify system alert metrics.
  • Optimized Map/Reduce Jobs to use HDFS efficiently by using various compression mechanisms.
  • Configured Flume to extract the data from the web server output files to load into HDFS.
  • Developed workflow in Oozie to automate the tasks of loading the data into HDFS and pre-processing with Pig.
  • Experienced with monitoring, debugging cluster using Ganglia.

Environment: Hortonworks, HBase, Map Reduce, HDFS, Hive, Pig, Oozie, Sqoop, Flume, Ganglia, Oracle 10g
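
Illustrative code (a minimal sketch): reading and writing time-series data through the HBase Java client, assuming the Connection/Table client API; the table name, column family, and row-key scheme are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetricStore {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("metrics"))) {

                // Row key = sensorId + reversed timestamp so recent readings sort first.
                String rowKey = "sensor-7|" + (Long.MAX_VALUE - System.currentTimeMillis());
                Put put = new Put(Bytes.toBytes(rowKey));
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes("42.5"));
                table.put(put);

                // Scan all readings for the same sensor.
                Scan scan = new Scan();
                scan.setRowPrefixFilter(Bytes.toBytes("sensor-7|"));
                try (ResultScanner scanner = table.getScanner(scan)) {
                    for (Result r : scanner) {
                        System.out.println(Bytes.toString(r.getRow()));
                    }
                }
            }
        }
    }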

Confidential, San Francisco, CA

Java/Hadoop Developer

Responsibilities:

  • Installed and configured Hadoop Map Reduce and HDFS; developed multiple Map Reduce jobs in Java for data cleaning and preprocessing (a minimal cleaning-mapper sketch appears at the end of this section).
  • Migrated existing SQL queries to HiveQL queries to move to big data analytical platform.
  • Integrated Cassandra file system to Hadoop using Map Reduce to perform analytics on Cassandra data.
  • Implemented real-time analytics on Cassandra data using the Thrift API.
  • Responsible for managing data coming from different sources.
  • Supported Map Reduce programs running on the cluster.
  • Involved in loading data from UNIX file system to HDFS.
  • Worked on installing cluster, commissioning & decommissioning of data node, name node recovery, capacity planning, and slots configuration.
  • Loaded and transformed large data sets into HDFS using Hadoop fs commands.
  • Supported setting up and updating configurations for implementing scripts with Pig and Sqoop.
  • Designed the logical and physical data model and wrote DML scripts for the Oracle 9i database.
  • Used the Hibernate ORM framework with the Spring framework for data persistence.
  • Wrote test cases in JUnit for unit testing of classes.
  • Involved in developing templates and screens in HTML and JavaScript.

Environment: Java, HDFS, Cassandra, Map Reduce, Sqoop, JUnit, HTML, JavaScript, Hibernate, Spring, Pig.
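
Illustrative code (a minimal sketch): a map-only cleaning step of the kind described above, which drops malformed records and passes valid ones through; the pipe-delimited, three-field record layout is a hypothetical example.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CleanRecordsMapper
            extends Mapper<LongWritable, Text, NullWritable, Text> {

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\\|");
            if (fields.length < 3 || fields[0].trim().isEmpty()) {
                context.getCounter("clean", "malformed").increment(1); // count dropped rows
                return;
            }
            context.write(NullWritable.get(), value); // pass valid records through unchanged
        }
    }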

Confidential, Boston, MA

SQL/Java developer

Responsibilities:

  • Involved in complete requirement analysis, design, coding and testing phases of the project.
  • Implemented the project according to the Software Development Life Cycle (SDLC).
  • Developed JavaScript behavior code for user interaction.
  • Developed the UI using HTML, JavaScript, and JSP.
  • Used JDBC to manage connectivity for inserting, querying, and data management, including stored procedures and triggers (a minimal stored-procedure call sketch appears at the end of this section).
  • Designed the logical and physical data model, generated DDL scripts, and wrote DML scripts for the SQL Server database.
  • Part of a team, which is responsible for metadata maintenance and synchronization of data from database.
  • Involved in the design and coding of the data capture templates, presentation and component templates.
  • Developed an API to write XML documents from database.
  • Used JavaScript to design the user interface and check validations.
  • Developed JUnit test cases and validated users input using regular expressions in JavaScript as well as in the server side.
  • Developed complex SQL stored procedures, functions and triggers.
  • Mapped business objects to database using Hibernate.
  • Wrote SQL queries, stored procedures and database triggers as required on the database objects.
  • Worked closely with other engineers in designing and developing a REST API using Node.js.

Environment: Java, Spring, XML, Hibernate, SQL Server, JUnit, JSP, JavaScript.
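
Illustrative code (a minimal sketch): calling a stored procedure through JDBC, as described above; the connection URL, credentials, procedure name, and parameters are hypothetical.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class OrderLookup {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost:1433;databaseName=orders"; // hypothetical
            try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
                 CallableStatement stmt = conn.prepareCall("{call usp_GetOrdersByCustomer(?)}")) {

                stmt.setInt(1, 1001); // hypothetical customer id
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("order_id"));
                    }
                }
            }
        }
    }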

Confidential

SQL Server Developer

Responsibilities:

  • Actively involved in different stages of Project Life Cycle.
  • Documented the data flow and the relationships between various entities.
  • Actively participated in gathering of User Requirement and System Specification.
  • Created a new logical and physical database design to fit the new business requirements and implemented it using SQL Server.
  • Created clustered and non-clustered indexes for improved performance (a minimal index-creation sketch appears at the end of this section).
  • Created Tables, Views and Indexes on the Database, Roles and maintained Database Users.
  • Followed and maintained standards and best practices in database development.
  • Provided assistance to development teams on Tuning Data, Indexes and Queries.
  • Developed new Stored Procedures, Functions, and Triggers.
  • Implemented Backup and Recovery of the databases.
  • Actively participated in User Acceptance Testing, and Debugging of the system.

Environment: SQL, PL/SQL, Windows 2000/XP, MS SQL Server 2000, IIS.
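
Illustrative code (a minimal sketch): creating clustered and non-clustered indexes of the kind described above, issued through JDBC to stay in the same language as the other examples; the connection URL, table, and column names are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class IndexSetup {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost:1433;databaseName=sales"; // hypothetical
            try (Connection conn = DriverManager.getConnection(url, "dba_user", "secret");
                 Statement stmt = conn.createStatement()) {

                // Clustered index on the column most queries range-scan on (hypothetical table).
                stmt.execute("CREATE CLUSTERED INDEX IX_Orders_OrderDate "
                        + "ON dbo.Orders (OrderDate)");

                // Non-clustered index supporting a frequent lookup by customer.
                stmt.execute("CREATE NONCLUSTERED INDEX IX_Orders_CustomerId "
                        + "ON dbo.Orders (CustomerId)");
            }
        }
    }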
