
Hadoop Kafka Developer Resume


Windsor, CT

SUMMARY

  • Overall 9 years of professional IT experience, with 4 years of experience in analysis, architectural design, prototyping, development, integration and testing of applications using Java/J2EE technologies and 5 years of experience in Big Data analytics as a Hadoop Developer.
  • Around 3 years of working experience in Talend (ETL tool), developing and leading the end-to-end implementation of Big Data projects; comprehensive experience as a Hadoop Developer across the Hadoop ecosystem: Hadoop, MapReduce, Hadoop Distributed File System (HDFS), Hive, Impala, YARN, Oozie, Hue and Spark.
  • Experience in developing MapReduce programs using Apache Hadoop for analyzing big data as per requirements.
  • Hands-on experience in creating Apache Spark RDD transformations on data sets in the Hadoop data lake (a short PySpark sketch follows this summary).
  • Extensive experience in developing Pig Latin scripts and using Hive Query Language for data analytics.
  • Hands-on experience working with NoSQL databases including HBase and Cassandra and their integration with the Hadoop cluster.
  • Good working experience using Sqoop to import data into HDFS from RDBMS and vice versa.
  • Experienced in data ingestion projects, ingesting data into the data lake from multiple source systems using Talend Big Data.
  • Experience in Hadoop administration activities such as installation and configuration of clusters using Apache, Cloudera and AWS.
  • Good experience in building pipelines using Azure Data Factory and moving the data into Azure Data Lake Store.
  • Hands-on experience in solving software design issues by applying design patterns including the Singleton, Business Delegate, Controller, MVC, Factory, Abstract Factory, DAO and Template patterns.
  • Experienced in creative and effective front-end development using JSP, JavaScript, HTML5, DHTML, XHTML, Ajax and CSS.
  • Experience in analysis, design, development and integration using Big Data/Hadoop technologies such as MapReduce, Hive, Pig, Sqoop, Oozie, Kafka Streaming, HBase, Azure, AWS, Cloudera, Hortonworks, Impala and Avro, along with data processing in Java/J2EE and SQL.
  • Good knowledge of Hadoop architecture and its components such as HDFS, MapReduce, JobTracker, TaskTracker, NameNode and DataNode.
  • Experience in the SCOPE language to communicate with COSMOS for data integrations.
  • Excellent experience and knowledge of machine learning, mathematical modeling and operations research. Comfortable with R, Python, SAS, Weka, MATLAB and relational databases. Deep understanding of and exposure to the Big Data ecosystem.
  • Strong virtualization experience, including datacenter migration and Azure Data Services.
  • Experience in troubleshooting and resolving architecture problems including database and storage, network, security and applications.
  • Extensive experience in developing strategies for Extraction, Transformation and Loading data from various sources into Data Warehouse and Data Marts using DataStage.
  • Having extensive experience in data integration and migration using IBM InfoSphere DataStage (9.1), QualityStage, SSIS, Oracle, Teradata, DB2, SQL and shell scripting, along with technical certifications in ETL development from IBM and Cloudera.
  • Experienced in scheduling sequence, parallel and server jobs using DataStage Director, UNIX scripts and scheduling tools. Designed and developed parallel jobs, server and sequence jobs using DataStage Designer.
  • IBM ETL Talend/DataStage Developer with 8+ years in information technology, having worked in design, development, administration and implementation of various database and data warehouse technologies (IBM Talend Enterprise edition and DataStage v9.x/8.x/7.x) using components such as Administrator, Manager, Designer and Director.
  • Extensive ETL tool experience using IBM Talend Enterprise edition, InfoSphere/WebSphere DataStage, Ascential DataStage, Big Data Hadoop and SSIS. Worked on DataStage client tools: DataStage Designer, DataStage Director and DataStage Administrator.
  • Expertise in working with various databases, writing SQL queries, stored procedures, functions and triggers using PL/SQL and SQL.
  • Experience in NoSQL Column-Oriented Databases like Cassandra, HBase, MongoDB and Filo DB and its Integration with Hadoop cluster.
  • Good exposure to web services using CXF/XFire and Apache Axis for the exposure and consumption of SOAP messages.
  • Working knowledge of databases such as Oracle 8i/9i/10g, Microsoft SQL Server and DB2. Experience in writing numerous test cases using the JUnit framework with Selenium.
  • Leveraged AWS, Informatica Cloud, Snowflake Data Warehouse, the Confidential corp platform, AutoSys, and Rally Agile/Scrum to implement data lake, enterprise data warehouse and advanced data analytics solutions based on data collection and integration from multiple sources (Salesforce, Salesconnect, S3, SQL Server, Oracle, NoSQL and mainframe systems).
  • Strong work ethic with a desire to succeed and make significant contributions to the organization. Strong problem-solving, communication and interpersonal skills, and a good team player.
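
The Spark RDD transformation experience noted above can be illustrated with the minimal PySpark sketch below; the HDFS path, field layout and aggregation are hypothetical placeholders rather than details from any listed project.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-transformations-sketch").getOrCreate()
sc = spark.sparkContext

# Load raw delimited records from a (hypothetical) data-lake path in HDFS.
raw = sc.textFile("hdfs:///data/lake/transactions/*.csv")

# Classic RDD transformations: parse, drop malformed rows, aggregate by key.
parsed = raw.map(lambda line: line.split(","))
valid = parsed.filter(lambda fields: len(fields) == 3)
totals = (valid
          .map(lambda fields: (fields[0], float(fields[2])))  # (customer_id, amount)
          .reduceByKey(lambda a, b: a + b))

# Transformations stay lazy; an action such as take() triggers execution.
print(totals.take(10))
spark.stop()
```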

TECHNICAL SKILLS

Big Data Technologies: Hive, Hadoop, MapReduce, HDFS, Sqoop, R, Flume, Spark, Apache Kafka, HBase, Pig, Elasticsearch, AWS, Oozie, ZooKeeper, Apache Hue, Apache Tez, YARN, Talend, Storm, Impala, Tableau and QlikView.

Programming Languages: Java (JDK 1.4/1.5/1.6), C, C++, Scala, Python, R, MATLAB, SQL, PL/SQL, T-SQL, Pig Latin, HiveQL, HTML, J2EE

Frameworks: Hibernate 2.x/3.x, Spring 2.x/3.x, Struts 1.x/2.x and JPA

Web Services: WSDL, SOAP, Apache CXF/XFire, Apache Axis, REST, Jersey

Client Technologies: jQuery, JavaScript, AJAX, CSS, HTML5, XHTML

Operating Systems: UNIX, Windows, LINUX

Application Servers: IBM WebSphere, Apache Tomcat, WebLogic

Web Technologies: JSP, Servlets, Socket Programming, JNDI, JDBC, JavaBeans, JavaScript, Web Services (JAX-WS)

Databases: Oracle 8i/9i/10g, Microsoft SQL Server, DB2, MySQL 4.x/5.x

Java IDE: Eclipse 3.x, IBM Web Sphere Application Developer, IBM RAD 7.0

Tools: TOAD, SQL Developer, SOAP UI, ANT, Maven, Visio, Rational Rose, DataStage

PROFESSIONAL EXPERIENCE

Hadoop Kafka Developer

Confidential, Windsor CT

Responsibilities:

  • Developed workflows for the complete end-to-end ETL process: getting data into HDFS, validating and applying business logic, storing clean data in Hive external tables, exporting data from Hive to RDBMS sources for reporting, and escalating data quality issues.
  • Working as onsite coordinator, providing technical assistance, troubleshooting and alternative development solutions.
  • Handled importing of data from various data sources, performed transformations using Spark and loaded the data into Hive.
  • Involved in performance tuning of Hive (ORC tables) from design, storage and query perspectives.
  • Developed and deployed using Hortonworks HDP 2.3.0 in production and HDP 2.6.0 in the development environment.
  • Worked extensively with Sqoop for importing and exporting the data from HDFS to Relational Database systems and vice-versa.
  • Configured Spark Streaming to consume Kafka streams and store the information in HDFS (a hedged streaming sketch follows this list).
  • Partitioned data streams using Kafka; designed and configured the Kafka cluster to accommodate heavy throughput of 1 million messages per second. Used the Kafka producer APIs to produce messages (a minimal producer/consumer sketch also follows this list).
  • Handled ingestion of data from different data sources into HDFS using Sqoop, performed transformations using Hive and MapReduce, and then loaded the data into HDFS.
  • Developed a data ingestion pipeline from an Oracle database to Azure Cosmos DB using KafkaUtils.
  • Developed Kafka consumers to upsert documents into Azure Cosmos DB collections.
  • Wrote Azure PowerShell scripts to copy or move data from the local file system to Azure Blob Storage, and implemented OLAP multi-dimensional cube functionality using Azure SQL Data Warehouse.
  • Responsible for federating a large monolithic Kafka cluster into multiple Kafka clusters without any downtime or data loss, to reduce risk, make the system more maintainable and serve the varied needs of our customers.
  • Installed, configured and maintained multiple new Kafka and ZooKeeper clusters easily and effectively using Puppet and Terraform.
  • Set up our pipelines to remain highly available even if an AWS Availability Zone or Region goes down.
  • Led the team managing and monitoring multiple Kafka clusters with hundreds of Kafka brokers.
  • Expanded the ZooKeeper cluster in our production environment from 3 to 5 hosts to improve resiliency.
  • Upgraded multiple Kafka clusters from 0.10.2 to 2.3.1 without any downtime or data loss in the pipeline.
  • Wrote multiple scripts to automate the upgrade process (rolling Kafka restarts and partition reassignment, among others).
  • Researched different AWS instance types and switched our Kafka brokers from r3.xlarge to d2.xlarge, as the cost is significantly lower and the performance is better.
  • Responsible for configuring the cluster in IBM Cloud and maintaining the number of nodes as per requirements.
  • Developed Kafka consumers to consume data from Kafka topics.
  • Developed shell scripts for running Hive scripts in Hive and Impala.
  • Responsible for optimization of data-ingestion, data-processing, and data-analytics.
  • Expertise in developing PySpark applications that connect HDFS and HBase and allow data transfer between them.
  • Worked on RDBMSs such as Oracle, DB2, SQL Server and MySQL.
  • Developed workflows to cleanse and transform raw data into useful information and load it to a Kafka queue, to be loaded into HDFS and a NoSQL database.
  • Responsible for sanity testing of the system once code is deployed to production.
  • Experienced in using IDEs such as Eclipse and IntelliJ to modify code in Git.
  • Involved in quality assurance of the data mapped into production.
  • Involved in code walk through, reviewing, testing and bug fixing.
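
As a rough illustration of the Kafka producer and consumer bullets above, here is a minimal sketch using the open-source kafka-python client; the broker address, topic name, consumer group and record fields are assumptions for the example, and the sink logic (for instance the Cosmos DB upsert) is reduced to a print statement.

```python
import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = ["broker1:9092"]   # placeholder broker address
TOPIC = "vehicle-events"     # hypothetical topic name

# Producer side: serialize dictionaries as JSON and publish them to the topic.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"vin": "1FT123", "speed_mph": 42})
producer.flush()

# Consumer side: read from the earliest offset and hand each record to a sink
# (in the real pipeline this would be an upsert into a document store).
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    group_id="upsert-workers",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for record in consumer:
    print(record.partition, record.offset, record.value)  # replace with the real upsert
```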
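
The Kafka-to-HDFS streaming flow mentioned above could look roughly like the following Spark Structured Streaming sketch; the original jobs used the DStream-based KafkaUtils API, so treat this only as an approximate equivalent, and the broker, topic, paths and package setup are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("kafka-to-hdfs-sketch")
         .getOrCreate())

# Subscribe to a (hypothetical) topic; requires the spark-sql-kafka package on the classpath.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers key/value as bytes; cast to strings before persisting.
decoded = events.select(col("key").cast("string"), col("value").cast("string"))

# Land the stream in HDFS as Parquet, with a checkpoint directory for recovery.
query = (decoded.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/landing/events")
         .option("checkpointLocation", "hdfs:///checkpoints/events")
         .start())
query.awaitTermination()
```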

Environment: Hadoop, MapReduce, HDFS, Sqoop, Flume, Kafka, Hive, Pig, HBase, Eclipse, DBeaver, DataGrip, SQL Developer, IntelliJ, Git, SVN, JIRA, Unix

Big Data Engineer

Confidential, Chandler AZ

Responsibilities:

  • As a Big Data Developer, implemented solutions for ingesting data from various sources and processing the data-at-rest utilizing Big Data technologies such as Hadoop, MapReduce frameworks, HBase, Hive, Oozie, Flume, Sqoop, etc.
  • Designed and implemented real-time Big Data processing to enable real-time analytics, event detection and notification for data-in-motion.
  • Hands-on experience with IBM Big Data product offerings such as IBM InfoSphere BigInsights, IBM InfoSphere Streams and IBM Big SQL.
  • Experienced in working with the Spark ecosystem, using Spark SQL and Scala queries on different formats such as text and CSV files (a hedged PySpark equivalent is sketched after this list).
  • Expertise in implementing Spark using Scala and Spark SQL for faster testing and processing of data; responsible for managing data from different sources.
  • Experienced in creating data pipelines integrating Kafka streaming with Spark Streaming applications, using Scala to write the applications.
  • Used Spark SQL to read data from external sources and processed the data using the Scala computation framework.
  • Created many complex ETL jobs for data exchange to and from the database server and various other systems including RDBMS, XML, CSV and flat-file structures. Integrated Java code inside Talend Studio using components such as tJavaRow, tJava, tJavaFlex and Routines.
  • Experienced in using Talend's debug mode to debug jobs and fix errors.
  • Responsible for development, support and maintenance of the ETL (Extract, Transform and Load) processes using Talend Integration Suite.
  • Extracted, transformed and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL and U-SQL (Azure Data Lake Analytics). Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
  • Set up Kafka Connect with the ability to stream data from any source connector into Kafka and from Kafka out to sink connectors (a hedged connector-registration sketch follows this list).
  • Streamed SAP HANA tables into Kafka and then into Snowflake and PostgreSQL.
  • Implemented KSQL and ran queries on the topics inside Kafka.
  • Set up monitoring for topics, Kafka Connect, broker logs, consumer lag and other JMX properties through Prometheus.
  • Set up a REST Proxy for the Kafka cluster so anyone can post data to Kafka topics.
  • Set up the Schema Registry and worked with Avro and JSON data formats inside Kafka.
  • Well versed with all the bundled scripts: consumer, producer, Kafka server, ZooKeeper, distributed Kafka Connect, KSQL, Schema Registry and REST Proxy setup.
  • Used RabbitMQ (AMQP), CSV files, SFTP, Oracle, SAP HANA, PostgreSQL, Snowflake, Salesforce and many other systems as sources and sinks of data streams.
  • Added several open-source monitoring tools such as Kafka Manager, DROP and AKHQ to look into the data inside topics without SSHing into the server.
  • Single point of contact for setting up Kafka brokers, ZooKeeper, Kafka Connect, KSQL and the REST API.
  • Performed upgrades to Kafka brokers; using the latest version, 2.5.0, in the current production environment.
  • Connected Apache Druid to Kafka for performing analytics.
  • Can spin up a Kafka cluster from Kubernetes containers. Developed software to process, cleanse and report on vehicle data utilizing various analytics and REST APIs with Java, Scala and the Akka asynchronous programming framework.
  • Involved in developing an asset-tracking project, collecting real-time vehicle location data with IBM Streams from a JMS queue and processing that data for vehicle tracking using Esri GIS mapping software, Scala and the Akka actor model.
  • Experienced in loading and transforming large sets of structured, semi-structured and unstructured data from HBase through Sqoop, placed in HDFS for further processing.
  • Installed and configured Flume, Hive, Pig, Sqoop and Oozie on the Hadoop cluster. Involved in creating Hive tables, loading data and running hive queries on the data.
  • Extensive Working knowledge of partitioned table, UDFs, performance tuning, compression-related properties, thrift server in Hive.
  • Worked with NoSQL databases, using HBase to create tables and store data. Developed optimal strategies for distributing the web log data over the cluster and for importing and exporting the stored web log data into HDFS and Hive using Sqoop.
  • Responsible for estimating the cluster size and for monitoring and troubleshooting the Spark Databricks cluster.
  • Developed Java MapReduce programs on log data to transform it into structured form to find user location, age group and time spent.
  • Used Flume to collect, aggregate, and store the web log data from different sources like web servers, mobile and network devices and pushed to HDFS.
  • Analyzed the web log data using the HiveQL to extract number of unique visitors per day, page views, visit duration, most purchased products on the website.
  • Created Databricks notebooks using SQL and Python and automated the notebooks using jobs.
  • Created Spark clusters and configured high-concurrency clusters using Azure Databricks to speed up the preparation of high-quality data.
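
A hedged PySpark equivalent of the Spark SQL work on CSV/text files described above might look like this; the file path, column names and query are invented for illustration, and the original code was written in Scala.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-csv-sketch").getOrCreate()

# Read a (hypothetical) CSV extract with a header row, inferring column types.
orders = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///data/staging/orders.csv"))

# Register the DataFrame as a temporary view so it can be queried with Spark SQL.
orders.createOrReplaceTempView("orders")
daily_totals = spark.sql("""
    SELECT order_date, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date
    ORDER BY order_date
""")
daily_totals.show(20)
spark.stop()
```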
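
Connectors are registered with a Kafka Connect worker through its REST interface; the sketch below posts a JDBC source connector definition using Python's requests library. The worker URL, connector class and connection settings are assumptions, not configuration from the projects above.

```python
import requests

CONNECT_URL = "http://connect-worker:8083/connectors"   # hypothetical worker endpoint

# Example source-connector definition (Confluent JDBC source connector assumed installed).
connector = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db-host:5432/sales",
        "connection.user": "etl_user",
        "connection.password": "********",
        "mode": "incrementing",
        "incrementing.column.name": "order_id",
        "topic.prefix": "sales-",
    },
}

# POST the definition; Kafka Connect returns 201 on success and 409 if it already exists.
resp = requests.post(CONNECT_URL, json=connector)
resp.raise_for_status()
print(resp.json())
```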

Environment: Hadoop 1.x, Hive 0.10, Pig 0.11, Sqoop, HBase, UNIX Shell Scripting, Scala, Akka, IBM InfoSphere BigInsights, IBM InfoSphere Streams, IBM Big SQL, Java

Big Data Engineer

Confidential, Chicago, IL

Responsibilities:

  • Used the REST API to access HBase data and perform analytics. Worked on loading and transforming large sets of structured, semi-structured and unstructured data.
  • Involved in collecting, aggregating and moving data from servers to HDFS using Apache Flume. Wrote Hive jobs to parse the logs and structure them in tabular format to facilitate effective querying of the log data.
  • Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs. Experienced in managing and reviewing the Hadoop log files.
  • Migrated ETL jobs to Pig scripts to do transformations, joins and some pre-aggregations before storing the data in HDFS.
  • Worked with the Avro data serialization system to handle JSON data formats. Worked on different file formats such as sequence files, XML files and map files using MapReduce programs.
  • Involved in unit testing and delivered unit test plans and results documents using JUnit and MRUnit. Exported data from the HDFS environment into RDBMS using Sqoop for report generation and visualization purposes.
  • Worked on the Oozie workflow engine for job scheduling. Created and maintained technical documentation for launching Hadoop clusters and executing Pig scripts.
  • Developed multiple MapReduce jobs in Java for data cleaning and pre-processing. Designed and developed Oozie workflows for automating jobs. Created HBase tables to store variable data formats of data coming from different portfolios.
  • Wrote Hadoop MapReduce programs to get the logs and feed them into Cassandra for analytics. Collected and aggregated large amounts of log data using Apache Flume and staged the data in HDFS for further analysis.
  • Implemented best income logic using Pig scripts. Moving data from Oracle to HDFS and vice-versa using SQOOP. Developed Pig scripts to convert the data from Avro to Text file format.
  • Developed Hive scripts for implementing control-table logic in HDFS. Developed Hive queries and UDFs to analyze/transform the data in HDFS. Designed and implemented static and dynamic partitioning and bucketing in Hive (a hedged HiveQL sketch follows this list).
  • Worked with different file formats and compression techniques to determine standards. Installed and configured Hive and also wrote Hive UDFs.
  • Developed Oozie workflows and they are scheduled through a scheduler on a monthly basis. Designed and developed read lock capability in HDFS.
  • Involved in End-to-End implementation of ETL logic. Involved in designing use-case diagrams, class diagram, interaction using UML model. Designed and developed the application using various design patterns, such as session facade, business delegate and service locator.
  • Worked on Maven build tool. Involved in developing JSP pages using Struts custom tags, JQuery and Tiles Framework. Used JavaScript to perform client side validations and Struts-Validator Framework for server-side validation.
  • Good experience in Mule development. Developed Web applications with Rich Internet applications using Java applets, Silverlight, JavaFX. Involved in creating Database SQL and PL/SQL queries and stored Procedures.
  • Implemented Singleton classes for property loading and static data from DB. Debugged and developed applications using Rational Application Developer (RAD). Developed a Web service to communicate with the database using SOAP.
  • Developed DAO (Data Access Objects) using Spring Framework 3. Deployed the components in to WebSphere Application server 7. Actively involved in backend tuning SQL queries/DB script.
  • Wrote commands using UNIX shell scripting. Used Java to remove an attribute in a JSON file where Scala did not support creating the objects, and then converted back to Scala. Worked on Java and Impala and on master clean-up of data.
  • Worked on accumulators to count the results after executing the job on multiple executors. Worked in the IntelliJ IDE for development and debugging. Worked on Linux/Unix.
  • Wrote a whole set of programs for one of the LOBs in Scala and performed unit testing. Created many SQL schemas and utilized them throughout the program wherever required. Made enhancements to one of the LOBs using Scala programming.
  • Ran spark-submit jobs and analyzed the log files. Used Maven to build .jar files and used Sqoop to transfer data between relational databases and Hadoop.
  • Worked on HDFS to store and access huge datasets within Hadoop. Good hands-on experience with Git and GitHub; created a feature branch on GitHub.
  • Pushed the data to GitHub and made a pull request. Experience in JSON and CFF.
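
The static/dynamic partitioning and bucketing work mentioned above can be sketched as HiveQL issued through a Hive-enabled Spark session; the table, columns and source data below are assumptions rather than the original DDL, and in Hive itself a CLUSTERED BY ... INTO n BUCKETS clause would add the bucketing.

```python
from pyspark.sql import SparkSession

# Hive support lets spark.sql() run HiveQL against the Hive metastore.
spark = (SparkSession.builder
         .appName("hive-partitioning-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical partitioned target table stored as ORC.
spark.sql("""
    CREATE TABLE IF NOT EXISTS weblogs_by_day (
        user_id STRING,
        url     STRING,
        bytes   BIGINT
    )
    PARTITIONED BY (log_date STRING)
    STORED AS ORC
""")

# Dynamic partition insert: the partition value comes from each row of the
# (assumed) raw_weblogs source table instead of being hard-coded.
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("""
    INSERT OVERWRITE TABLE weblogs_by_day PARTITION (log_date)
    SELECT user_id, url, bytes, log_date
    FROM raw_weblogs
""")

# Static partition insert, by contrast, names the partition explicitly.
spark.sql("""
    INSERT INTO TABLE weblogs_by_day PARTITION (log_date = '2017-01-01')
    SELECT user_id, url, bytes FROM raw_weblogs WHERE log_date = '2017-01-01'
""")
```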

Environment: Java EE 6, IBM WebSphere Application Server 7, Apache Struts 2.0, EJB 3, Spring 3.2, JSP 2.0, Web Services, jQuery 1.7, Servlet 3.0, Struts-Validator, Struts-Tiles, Tag Libraries, ANT 1.5, JDBC, Oracle 11g/SQL, JUnit 3.8, CVS 1.2, Rational ClearCase, Eclipse 4.2, JSTL, DHTML.

Hadoop Developer

Confidential

Responsibilities:

  • Involved in the architecture design, development and implementation of Hadoop deployment, backup and recovery systems. Worked on the proof of concept (POC) for the Apache Hadoop framework initiation.
  • Worked on numerous POCs to prove whether Big Data was the right fit for a business case. Developed MapReduce jobs for log analysis, recommendation and analytics (a hedged Hadoop Streaming sketch follows this list).
  • Wrote MapReduce jobs to generate reports on the number of activities created on a particular day from data dumped from multiple sources; the output was written back to HDFS.
  • Reviewed the HDFS usage and system design for future scalability and fault-tolerance. Installed and configured Hadoop HDFS, MapReduce, Pig, Hive, Sqoop.
  • Responsible for building scalable distributed data solutions using Hadoop. Involved in loading data from edge node to HDFS using shell scripting.
  • Created HBase tables to store variable data formats of PII data coming from different portfolios. Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Worked with different kinds of compression techniques such as LZO, Snappy and bzip2 to save data and optimize data transfer over the network.
  • Analyzed large and critical datasets using Cloudera, HDFS, HBase, MapReduce, Hive, Hive UDFs, Pig, Sqoop, ZooKeeper and Spark.
  • Developed custom aggregate functions using Spark SQL and performed interactive querying. Used Sqoop to store the data into HBase and Hive.
  • Worked on installing cluster, commissioning & decommissioning of Data Node, Name Node high availability, capacity planning, and slots configuration.
  • Created Hive tables, dynamic partitions and buckets for sampling, and worked on them using HiveQL. Used Pig to parse the data and stored it in Avro format.
  • Stored the data in tabular formats using Hive tables and Hive SerDes. Collected and aggregated large amounts of log data using Apache Flume and staged the data in HDFS for further analysis.
  • Worked with NoSQL databases like HBase for creating HBase tables to load large sets of semi structured data coming from various sources.
  • Implemented a script to transmit information from Oracle to HBase using Sqoop. Implemented MapReduce programs to handle semi/unstructured data like XML, JSON, and sequence files for log files.
  • Fine-tuned Pig queries for better performance. Involved in writing the shell scripts for exporting log files to Hadoop cluster through automated process.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team. Installed the Oozie workflow engine to run multiple Hive and Pig jobs.
  • Wrote Pig Scripts to generate MapReduce jobs and performed ETL procedures on the data in HDFS. Processed HDFS data and created external tables using Hive, in order to analyze visitors per day, page views and most purchased products.
  • Exported analyzed data to HDFS using Sqoop for generating reports. Used Map-reduce and Sqoop to load, aggregate, store and analyze web log data from different web servers.
  • Developed Hive queries for the analysts. Experience in optimization of MapReduce algorithms using combiners and partitioners to deliver the best results; worked on application performance optimization for an HDFS/Cassandra cluster.
  • Implemented working with different sources using multiple input formats with GenericWritable and ObjectWritable. Implemented best-income logic using Pig scripts and joins to transform data to AutoZone custom formats.
  • Implemented custom comparators and partitioners to implement secondary sorting. Worked on tuning the performance of Hive queries. Implemented Hive generic UDFs to implement business logic.
  • Responsible for managing data coming from different sources. Configured time-based schedulers that get data from multiple sources in parallel using Oozie workflows. Installed the Oozie workflow engine to run multiple Hive and Pig jobs.
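
The log-analysis MapReduce jobs above were written in Java; the same counting pattern can be hedged as a Hadoop Streaming job with a Python mapper and reducer, as sketched below. The log layout (client IP in the first field) and the submission command in the trailing comment are assumptions.

```python
#!/usr/bin/env python
"""Hadoop Streaming sketch: count requests per client IP in web access logs."""
import sys


def run_mapper():
    # Emit one "<ip>\t1" pair per log line; assumes the IP is the first field.
    for line in sys.stdin:
        fields = line.split()
        if fields:
            print(f"{fields[0]}\t1")


def run_reducer():
    # Streaming delivers keys sorted, so a running total per key is enough.
    current_ip, count = None, 0
    for line in sys.stdin:
        ip, value = line.rstrip("\n").split("\t", 1)
        if ip != current_ip and current_ip is not None:
            print(f"{current_ip}\t{count}")
            count = 0
        current_ip = ip
        count += int(value)
    if current_ip is not None:
        print(f"{current_ip}\t{count}")


if __name__ == "__main__":
    # Submitted roughly as (paths and jar location are placeholders):
    #   hadoop jar hadoop-streaming.jar \
    #     -input /logs/raw -output /logs/ip_counts \
    #     -mapper "ip_count.py map" -reducer "ip_count.py reduce" -file ip_count.py
    run_reducer() if len(sys.argv) > 1 and sys.argv[1] == "reduce" else run_mapper()
```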

Environment: MapReduce, Hive, Pig, Sqoop, Oracle 11g, MapR, Informatica, MicroStrategy, Cloudera Manager, Oozie, ZooKeeper, Hadoop, HDFS, YARN, HBase, Kafka Streaming, Flume, Core Java, FiloDB, Spark, Akka, Scala, Hortonworks, Ambari, Azure Data, Talend, Eclipse, Web Services (SOAP, WSDL), Node.js, Unix/Linux, AWS, jQuery, Ajax, Python, Perl, Zookeeper.

Java Developer

Confidential

Responsibilities:

  • Developed the DAO layer using Hibernate, and used Hibernate's caching system for real-time performance. Used the Spring MVC framework and dependency injection for integrating various Java components.
  • Experience in working with Spring MVC controllers and Spring Hibernate templates. Worked on web services using REST, on both the service and the client side.
  • Hands-on experience with data persistence using Hibernate and the Spring Framework. Wrote stored procedures and inner joins using the Oracle RDBMS. Used MongoDB to store specification documents for fulfillment centers.
  • Consumed web services and generated clients using the Jersey and Axis frameworks in the RAD IDE. Configured the Spring, Hibernate and Log4j configuration files.
  • Wrote test cases using the TestNG and Mockito frameworks. Helped the UI integrate Java bean data using JSTL, Spring tags, JSP, jQuery, JSON and tag libraries.
  • Involved in testing and deployment of the application to the Tomcat application server. Designed the application with reusable J2EE design patterns such as Singleton, Front Controller, Session Facade and Session Factory.
  • Used ANT and Maven to build and deploy applications, and helped with deployments for CI using Jenkins and Maven. Wrote SQL queries and stored procedures for interacting with the Oracle database for promo codes and offers.
  • Was part of the production support team resolving production incidents; documented common problems prior to go-live and while actively in a production support role.
  • Understood the complete requirements for how each page in the form should look. Involved in the design and development of the pages (accident coverage page and current vehicle info) using Java, JSP and JSTL.
  • Troubleshoot various software issues using debugging process and code techniques. Prepared unit test cases and reviewed the test results.
  • Involved in the integration of the project. Involved in project design documentation, design reviews and code reviews.
  • Understood the functional specifications and architecture. Implemented MVC architecture using Spring and other J2EE design patterns for the application development. Developed static and dynamic web pages using JSP, HTML, JavaScript and CSS.
  • Developed and coded J2EE components with JSP, JavaBeans, business objects with Hibernate, and servlets, coding in Java, Spring, Hibernate, HTML and JSP.
  • Developed SQL Server stored procedures to execute the back-end processes. Used Eclipse to develop the application.
  • Integrated other sub-systems through XML and XSL. Used WebSphere as the application server in both the development and production environments.
  • Responsible for and mentored the team in complete software development lifecycle (SDLC) tasks: design, coding, testing and documentation.
  • Involved in the design and development of business module applications using J2EE technologies such as servlets, Spring and JDBC. Used the Struts framework to implement MVC architecture.
  • Worked with J2SE technologies such as JNDI, JDBC and RMI. Multithreading was used in the project.

Environment: Oracle DB, Windows NT/XP, Java/J2EE, JSP, HTML, JavaScript, CSS, UML, Spring Framework, Git, Visual Studio Code, MySQL, JDBC, Java 7/8, Servlets, XML, Web Services, WSDL, Jersey, Axis, SOAP UI, RAD, Selenium, Apache HTTP Client, TestNG, SQL, PL/SQL, JSTL, ANT, Maven, Jenkins, WebSphere, Linux.
