Hadoop Solution Architect Resume
SUMMARY:
Over 15 years of IT experience in analysis, architecture, design, development, and project management of n-tier enterprise solutions, including 10 years of experience with the BPM/WebSphere EAI product family, specializing in the IBM SOA/BPM Foundation stack, and around 2 years of experience with the Big Data ecosystem, covering ingestion, storage, querying, processing, and analysis of Big Data.
TECHNICAL SKILLS:
Application/Web Servers: IBM WebSphere Application Server ND 5.1/6.0/6.1/7.5, IBM WebSphere Process Server 6.x/7.5, IBM WebSphere Business Monitor 6.x/7.5, IBM HTTP v6.x/7.x Server, Resin, Tomcat and WebLogic v4.5.
Big Data: Apache Hadoop 1.2.1, Apache Spark 0.9.3, Pig Latin, Hive, HBase, Sqoop, Oozie, Zookeeper, Flume, and Scala
Integration Appliances: IBM DataPower XI52 (3.6.1).
Brokers and Messaging: IBM WPS/BPM 6.x/7.x, IBM WBM 6.x/7.x, IBM WebSphere MQ v5.3/6 (MQSeries), Message Broker MB v5.x/6.x, WBI Adapters Framework 2.6 Adapters, and WBI InterChange Server v4.3/4.2 (ICS).
Operating Systems: UNIX, Red Hat Linux v5.7/v5.9, AIX v5.x/6.x, Windows, and Solaris v5.7.
Languages/Scripts: Java, Jython, Maven, ANT, XML, JavaScript, JSP, Servlets, C/C++, HTML, PL/SQL, and sh/ksh.
Databases/NoSQL: Oracle 10, DB2 v9.x, SQL Server and HBase
Others: F5 load balancer, IBM Support Assistant (ISA), WebSphere Integration Developer 6.x, WSAD 5.0, Eclipse, JBuilder 8.0, IBM ClearCase, and SVN.
PROFESSIONAL EXPERIENCE:
Confidential
Hadoop Solution Architect
Responsibilities:
- Hands-on experience installing, configuring, and using Hadoop ecosystem components such as MapReduce, HDFS, HBase, Zookeeper, Hive, Sqoop, Pig, Flume, and Oozie.
- Experience building and maintaining multiple Hadoop clusters of different sizes and configurations.
- Good understanding of Hadoop architecture and hands-on experience with components such as JobTracker, TaskTracker, NameNode, and DataNode, as well as with MapReduce concepts and the HDFS framework.
- Well versed in developing and implementing MapReduce programs on Hadoop to work with Big Data (see the MapReduce sketch after this list).
- Experience with NoSQL databases such as HBase.
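Illustrative MapReduce sketch (referenced above): a minimal Hadoop 1.x job that counts records per event type. This is a sketch under assumptions, not code from an actual engagement; the class names and the tab-delimited input layout are hypothetical.

  // Minimal Hadoop 1.x MapReduce job; all names are hypothetical placeholders.
  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class EventCount {

    // Emits (eventType, 1) per record; assumes tab-delimited input lines.
    public static class EventMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
      private static final IntWritable ONE = new IntWritable(1);
      private final Text eventType = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context ctx)
          throws IOException, InterruptedException {
        String[] fields = value.toString().split("\t");
        if (fields.length > 1) {
          eventType.set(fields[1]); // hypothetical event-type column
          ctx.write(eventType, ONE);
        }
      }
    }

    // Sums the per-event counts; also reused as the combiner.
    public static class SumReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {
      @Override
      protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) sum += v.get();
        ctx.write(key, new IntWritable(sum));
      }
    }

    public static void main(String[] args) throws Exception {
      Job job = new Job(new Configuration(), "event-count");
      job.setJarByClass(EventCount.class);
      job.setMapperClass(EventMapper.class);
      job.setCombinerClass(SumReducer.class);
      job.setReducerClass(SumReducer.class);
      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(IntWritable.class);
      FileInputFormat.addInputPath(job, new Path(args[0]));
      FileOutputFormat.setOutputPath(job, new Path(args[1]));
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }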
Confidential
Hadoop Solution Architect
Responsibilities:
- Installed, administered and configured WPS 6.1.2/6.2
- Development of Service Data Objects, Business Objects, Interface Maps, Selectors, Business Processes, Rule Groups, and Relationships.
- Created modules with import/export components that access services in other modules over SCA, JMS, HTTP, and Web Services bindings (a JMS sketch follows this list).
- Created BPEL business processes by importing business models from Business Modeler.
- Integration with WebSphere Technology Adapters (JDBC, JText, MQ Series, XML, Email, Web services)
- Integration with WebSphere Application Adapters (SAP, Siebel, PeopleSoft and Oracle)
- Confidential Service Registry and Repository 6.2/7.0
- Installed/upgraded and configured WSRR on an existing WAS system.
- Registered web service artifacts (WSDL, XSD, XML) and assigned them to classifications in WSRR.
- Designed business models, classifications, and life cycles (governance) using OWL and SACL in WSRR Studio.
- Developed and implemented SOA governance (life cycle) in the integration environment.
- Developed a validation plug-in.
- Designed a promotion plan for service artifacts from the integration to the quality control environment.
- Integrated WPS/ESB with WSRR
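Illustrative JMS sketch (referenced above): the SCA JMS binding ultimately rides on the standard javax.jms API, so a plain message send looks roughly like the following. The JNDI names are hypothetical placeholders for WebSphere-administered objects.

  // Hypothetical JMS send via JNDI-administered objects; names are placeholders.
  import javax.jms.Connection;
  import javax.jms.ConnectionFactory;
  import javax.jms.MessageProducer;
  import javax.jms.Queue;
  import javax.jms.Session;
  import javax.jms.TextMessage;
  import javax.naming.InitialContext;

  public class JmsSendSketch {
    public static void main(String[] args) throws Exception {
      InitialContext ctx = new InitialContext();
      ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/SampleCF");
      Queue queue = (Queue) ctx.lookup("jms/SampleQueue");

      Connection conn = cf.createConnection();
      try {
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);
        TextMessage msg = session.createTextMessage("<order id=\"42\"/>");
        producer.send(msg); // the SCA JMS binding performs an equivalent send
      } finally {
        conn.close();
      }
    }
  }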
Confidential
Business Monitor
Responsibilities:
- Installed, administered and configured WBM 6.1.2/6.2/7.5
- Created dashboards with KPIs, dimensions/cubes, and diagrams/visual models.
- Confidential Business Modeler 6.1.2/6.2
- Created and simulated the business models in different business scenarios.
- Confidential DataPower XI50
- Configured XSL transformation scripts for performing message transformations.
- Configured users, domains and groups
- Created and configured processing policies
- Confidential MQ 5.x, 6.x and Message Broker 5.0, 5.1, 6.0, 6.1
- Installation and configuration of WebSphere MQ on Windows and UNIX platforms.
- Developed and deployed Message sets and message flows.
- Confidential Interchange Server (formerly CrossWorlds)
- Development of Business Objects, Maps, Polymorphic Maps, Collaboration templates and objects, Relationships and Database Connection Pools
- WBI Technology Adapters (JDBC, JText, MQ Series, XML, Server Access Interface, Email, Web services)
- WBI Application Adapters (SAP, Siebel, JD Edwards, PeopleSoft and Oracle)
- Data handlers used (Delimited, Fixed Width, NameValue, XML)
- ICS system installation and administration
Hadoop Solution Architect
Confidential, Maryville, TN
Responsibilities:
- Installed, configured, and maintained Apache Hadoop on a 12-node cluster of Ubuntu servers.
- Virtualized Hadoop on Ubuntu to provide safer, scalable analytics.
- Worked on the architecture and solution redesign of new Confidential Solutions offerings in the public cloud for clients.
- Used Zookeeper to manage coordination among the nodes.
- Developed MapReduce jobs to analyze the data and provide heuristics and reports. The heuristics were used for improving campaign targeting and efficiency.
- Assisted in creation of ETL processes for transformation of data sources from existing RDBMS systems.
- Involved in loading and transforming large sets of structured and semi-structured data and analyzing them with Hive queries and Pig scripts.
- Designed and developed a RESTful API to provide access to data in HBase and HDFS.
- Designed and implemented web services integrated with HBase CRUD operations (see the HBase sketch after this list).
- Developed Hadoop jobs for de-normalizing/structuring the data and pushing it into HBase.
- Created Oozie workflows and coordinators (schedulers) to integrate Hadoop jobs (MapReduce, Pig, Hive, Sqoop, HBase).
- Developed Pig and Hive UDFs (see the Hive UDF sketch after this list).
- Used Sqoop to import data from RDBMSs into the Hadoop Distributed File System (HDFS) and later analyzed the imported data using Hadoop components.
- Set up Sqoop jobs and Flume to import data from various data sources and log files.
- Evaluated the available Hadoop compression formats (bzip2, gzip, LZ4, LZO, Snappy) and implemented the best fit, storing Snappy-compressed data in SequenceFiles (see the SequenceFile sketch after this list).
- Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with reference tables and historical metrics.
- Provided design recommendations to sponsors/stakeholders that improved review processes and resolved technical problems.
- Architected, developed, deployed, debugged, and maintained Big Data applications.
- Assisted in cluster maintenance, monitoring, and troubleshooting, and managed and reviewed data backups and log files.
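Illustrative HBase CRUD sketch (referenced above): basic put/get/delete calls with the 0.9x-era Java client, the kind of operations the web services wrapped. The table, column family, and qualifier names are hypothetical.

  // Hypothetical HBase CRUD; table and column names are placeholders.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Delete;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.util.Bytes;

  public class HBaseCrudSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
      HTable table = new HTable(conf, "customers");     // hypothetical table
      try {
        // Create/update: write one cell.
        Put put = new Put(Bytes.toBytes("row-1"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Acme"));
        table.put(put);

        // Read: fetch the cell back.
        Result result = table.get(new Get(Bytes.toBytes("row-1")));
        byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
        System.out.println(Bytes.toString(name));

        // Delete: remove the row.
        table.delete(new Delete(Bytes.toBytes("row-1")));
      } finally {
        table.close();
      }
    }
  }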
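Illustrative Hive UDF sketch (referenced above): a simple scalar UDF that trims and lower-cases a string column. The class name is hypothetical; Hive resolves evaluate() by reflection, and the jar would be registered with ADD JAR followed by CREATE TEMPORARY FUNCTION.

  // Hypothetical Hive UDF; normalizes a string column.
  import org.apache.hadoop.hive.ql.exec.UDF;
  import org.apache.hadoop.io.Text;

  public final class LowerTrimUDF extends UDF {
    private final Text out = new Text();

    // Hive calls evaluate() via reflection; nulls pass through unchanged.
    public Text evaluate(Text input) {
      if (input == null) return null;
      out.set(input.toString().trim().toLowerCase());
      return out;
    }
  }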
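Illustrative compression sketch (referenced above): writing a block-compressed SequenceFile with the Snappy codec through the Hadoop 1.x API. The output path and key/value contents are hypothetical, and Snappy assumes the native Hadoop libraries are installed on the cluster.

  // Hypothetical Snappy-compressed SequenceFile writer (Hadoop 1.x API).
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IOUtils;
  import org.apache.hadoop.io.SequenceFile;
  import org.apache.hadoop.io.SequenceFile.CompressionType;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.io.compress.SnappyCodec;

  public class SnappySeqFileSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      Path path = new Path(args[0]); // hypothetical HDFS output path
      // BLOCK compression batches records before compressing, which suits
      // fast codecs like Snappy and keeps the container splittable.
      SequenceFile.Writer writer = SequenceFile.createWriter(
          fs, conf, path, Text.class, Text.class,
          CompressionType.BLOCK, new SnappyCodec());
      try {
        writer.append(new Text("key-1"), new Text("value-1"));
      } finally {
        IOUtils.closeStream(writer);
      }
    }
  }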