Big Data Architect Resume
Overland Park, KS
SUMMARY:
- 14 years of experience architecting, designing, and implementing solutions for complex business problems involving large-scale data management and processing, real-time analytics, enterprise applications, and data visualization.
- Strong expertise in Big Data ecosystems and technologies, with extensive experience developing applications using Java Enterprise Edition and frameworks such as Spring.
- Highly proficient in programming with Java, Python, Scala, and C. Experienced in leading technical teams and providing technical thought leadership.
- Strong experience installing, configuring, developing & testing applications to process large sets of structured and unstructured data using Hadoop ecosystem components such as Kafka, Flume (Data Ingestion), HDFS, HBase, OpenTSDB (Data Storage), Spark, Spark Streaming, Spark SQL, MapReduce, Hive, Impala (Data Extraction & Processing for batch/real-time) & Sentry (Data Security).
- Good understanding of and hands-on experience applying Machine Learning techniques in Big Data environments using Spark ML (Data Science).
- Good exposure to Big Data and cloud deployment infrastructures, including Cloudera (CDH), Hadoop clusters, and Microsoft Azure.
- Strong experience in Object Oriented Analysis, Design and Development of multi-tier, web-based & enterprise applications using frameworks such as JEE & Spring, and technologies such as Java, JSP, JSF, JPA, Hibernate, EJB, JMS, Web Services (REST & SOAP), XML Parsers (SAX, DOM), Maven, HTML, JavaScript.
- Expert in writing SQL stored procedures in Sybase & MS SQL Server for backend processes and interactive front-end report generation, as well as database design, logical data modeling, and query performance tuning.
- Good knowledge of NoSQL databases such as HBase, with familiarity in MongoDB, Cassandra, and in-memory databases such as MemSQL. Proficient in UNIX Shell Scripting & Perl.
- Hands-on experience with interactive data visualization and dashboarding tools such as Qlik Sense and Tableau. Familiarity with JavaScript frameworks such as D3.js for developing intuitive visualizations.
- Solid experience in designing & developing multi-tier enterprise applications using different design patterns such as Singleton, Strategy, Factory, Abstract Factory, Builder, Intercepting Filter, Session Façade, DAO, Front Controller, MVC, and IOC (Inversion of Control).
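Two of the patterns listed above, Singleton and DAO, can be illustrated with a minimal sketch. This is illustrative only (the class and method names are assumptions, not code from any project described here):

```python
# Illustrative sketch of two design patterns named above:
# Singleton (one shared instance per process) and DAO
# (a narrow interface that hides the storage mechanism).

class ConfigRegistry:
    """Singleton: repeated construction yields the same instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}
        return cls._instance


class TradeDAO:
    """DAO: callers save and find trades without knowing the backing store."""
    def __init__(self, store=None):
        self._store = store if store is not None else {}

    def save(self, trade_id, trade):
        self._store[trade_id] = trade

    def find(self, trade_id):
        return self._store.get(trade_id)
```

In practice the DAO's backing store would be a database session rather than a dict; the pattern's value is that swapping the store does not change caller code.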
- Expertise in improving performance of web-based front end and Java-based middleware processing engines using profiling and APM tools such as AppDynamics, VisualVM & JProfiler.
- Significant exposure to RDBMS SQL Query optimizations by analyzing query plans and data distribution in the DB.
- Excellent team player with very strong analytical, organizational, and problem-solving skills.
- Skilled in handling multiple tasks, managing and meeting deadlines, coordinating project schedules and releases, and meeting with stakeholders to address their concerns.
- Excellent written and oral communication skills.
TECHNICAL SKILLS:
Programming Languages: Java, Scala, Python, C
Big Data: Hadoop, HDFS, HBase, Spark, Flume, Kafka, Hive, Pig, Impala
Hadoop & Cloud Platforms: Cloudera CDH, Hortonworks HDP, Azure
Web Technologies: JSF, JSP, JSTL, Servlets, HTML, XML
Scripting Languages: Python, Shell Scripting, Perl, JavaScript
Middleware Technologies: EJB, JMS, JAX-WS, JAX-RS, Apache Camel
Application Servers: JBoss, Apache Tomcat
RDBMS: Sybase, MS SQL Server
IDEs/Tools: Eclipse, Vi, Autosys
Version Control: Git, CVS, SVN
Build Tools: Maven, Ant
Frameworks: Spring, JEE, Hibernate, JBoss FUSE
Operating Systems: UNIX, Linux, Windows
PROFESSIONAL EXPERIENCE:
Confidential, Overland Park, KS
Big Data Architect
Responsibilities:
- Architected and developed a Big Data platform for processing large volumes of real-time data.
- The platform ingested time-series data from multiple heterogeneous sources using Kafka, Flume, and Sqoop.
- Data ingested from Kafka and Flume was processed using DStreams in Spark Streaming and persisted to OpenTSDB and HBase.
- In addition to data processing, predictive analysis was performed on the streaming data using Machine Learning techniques with Spark MLlib to generate insights.
- Utilized Spark SQL, Impala, and Hive queries to extract data and feed it to visualization tools (Qlik Sense) to generate dashboards.
Programming Languages: Java, Python, Scala
Technologies: Spark, Kafka, OpenTSDB, HBase, Flume, Hive, Sqoop, Oozie, MongoDB
Data Visualization: Qlik Sense
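The micro-batch idea behind the Spark Streaming processing described above can be sketched in plain Python. This is a hedged, stdlib-only simulation of how a DStream groups events into fixed time windows before aggregates are persisted to a store such as OpenTSDB; it is not actual Spark code, and all names are illustrative:

```python
# Stdlib-only sketch of micro-batch windowing: (timestamp, metric, value)
# events are bucketed into fixed-width time windows and each window is
# reduced to a per-metric average, much as a Spark Streaming batch would
# be before being written to a time-series store.
from collections import defaultdict

def micro_batches(events, window_seconds):
    """Return {(window_start, metric): average_value} for the event stream."""
    sums = defaultdict(lambda: [0.0, 0])  # (sum, count) per window/metric
    for ts, metric, value in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        acc = sums[(window_start, metric)]
        acc[0] += value
        acc[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}
```

Real DStream processing adds fault tolerance, partitioning, and incremental state; the windowing arithmetic, however, is essentially this.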
Confidential
Associate Vice President
Responsibilities:
- Managed and provided technical guidance to an agile development team in charge of enhancements and maintenance tasks for a Books & Records system, in support of securities processing for the EMEA region.
- Developed a new set of Web service APIs to store Books and Records information by collecting data from multiple SOA data sources using REST Services and Kafka Messaging.
- Maintained and enhanced the SQL queries behind the extracts/reports generated from this system and sent to downstream data-consumer systems for transaction processing.
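The consolidation step behind a books-and-records service of the kind described above can be sketched as follows. This is an illustrative assumption about the merge logic (latest version of each record wins, similar to compacting a Kafka topic), not the actual system's API:

```python
# Hedged sketch: records for the same key arrive from several upstream
# feeds; the highest-version copy of each record is kept, so the result
# reflects the latest known state per record id.
def consolidate(feeds):
    """feeds: iterable of (record_id, version, payload) tuples from
    multiple sources; returns {record_id: latest_payload}."""
    latest = {}
    for record_id, version, payload in feeds:
        if record_id not in latest or version > latest[record_id][0]:
            latest[record_id] = (version, payload)
    return {rid: payload for rid, (version, payload) in latest.items()}
```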
Technologies: Java, Python, Scala, Web Services & MS SQL Server
Confidential
Technical Manager/Lead
Responsibilities:
- Designed and developed a web-based client-server application using JEE & Spring framework to monitor any exceptions during Straight through Processing (STP) of trades captured in the back office trade settlement platform.
- The product also supported the generation of numerous transaction & position data reports that enabled users to make critical business decisions.
- Responsible for modularizing functional components such as Trade Processing & Settlement, Static Data Processing, Position Management, and Adaptors to SWIFT/CREST in Java and JEE technologies.
- Utilized SOA and JMS messaging to enhance the reusability & throughput of the system.
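The decoupling that JMS messaging provides in the design above can be sketched with Python's standard library. This is illustrative only: the actual system used a JMS broker, not `queue.Queue`, but the producer/consumer separation is the same idea:

```python
# Stdlib-only sketch of JMS-style point-to-point messaging: a producer
# enqueues trade events and a consumer thread drains them asynchronously,
# so neither side blocks on the other's throughput.
import queue
import threading

def run_pipeline(trades):
    q = queue.Queue()
    settled = []

    def consumer():
        while True:
            trade = q.get()
            if trade is None:   # sentinel: no more messages
                break
            settled.append(trade)

    t = threading.Thread(target=consumer)
    t.start()
    for trade in trades:
        q.put(trade)            # producer side: fire and forget
    q.put(None)
    t.join()
    return settled
```

A real broker adds persistence, acknowledgements, and redelivery; the queue boundary is what buys the reusability and throughput noted above.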
Programming Language: Java 7
Technologies: JEE 6, Spring, JSF 2, EJB 3.1, JMS, Web Services, JBoss 7, Hibernate
Database: Sybase 15.7 & MySQL
Confidential
Senior Software Consulting Engineer
Responsibilities:
- Enhanced surveillance automation and the integration of financial surveillance data for the Private Wealth Management business division of Confidential.
- Designed the logical model of the new database, formulated high level data flows and processing flows for the back-end architecture of the system.
- Implemented the database objects and developed the stored procedures required for both backend processing and frontend data retrieval. Developed a reporting framework customized for Compliance users using Core Java, IBM ILOG, and Sybase stored procedures, delivering far better performance than the legacy system.
Technologies: Sybase 12.5, Perl, Java 1.5, Servlets, Hibernate, XML, UNIX
Confidential
Senior Engineer
Responsibilities:
- This was a mission-critical system historically plagued by multiple widespread issues that often had a detrimental impact on the financial side of the business unit.
- Designed, implemented & maintained a novel mapping layer that converted hierarchical source-account data to a flat data structure.
- Developed a new platform for reconciling reference data between different source systems and accounts system, thereby eliminating the possibilities of multiple processing failures and production issues.
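The mapping-layer idea described above (hierarchical account data flattened for downstream consumers) can be sketched with a short recursion. The field names here are illustrative assumptions, not the real schema:

```python
# Hedged sketch: walk a hierarchical account tree and emit one flat
# '/'-joined path per account, so downstream systems can consume a
# flat structure instead of the nested hierarchy.
def flatten_accounts(node, path=()):
    """node: {'id': ..., 'children': [...]} -> list of path strings."""
    current = path + (node["id"],)
    rows = ["/".join(current)]
    for child in node.get("children", []):
        rows.extend(flatten_accounts(child, current))
    return rows
```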
Technologies: Sybase, Perl, C, UNIX, Java 1.4
Confidential
Responsibilities:
- Worked on an Exception Processing system (Smartex) that manages a large number of business exceptions and their daily workflow.
- The system had multiple real-time, file-based, and end-of-day processes that raised exceptions.
- Implemented an STP-based design for real-time processing of the fixed-income and equity trade-fails management process using Java, UNIX Shell scripting, and Sybase stored procedures.
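The exception-workflow idea above can be sketched as a simple routing step. The severity rule and queue names here are assumptions for illustration, not Smartex internals:

```python
# Hedged sketch: business exceptions raised by real-time, file-based,
# and end-of-day processes are routed into per-priority queues for
# the daily workflow, based on a severity threshold.
def route_exceptions(exceptions, urgent_threshold=3):
    """exceptions: iterable of dicts with 'source' and 'severity' keys."""
    queues = {"urgent": [], "standard": []}
    for exc in exceptions:
        bucket = "urgent" if exc["severity"] >= urgent_threshold else "standard"
        queues[bucket].append(exc)
    return queues
```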
Technologies: Java 1.4, Sybase, Perl, C, UNIX