
Big Data Admin Resume


Atlanta, GA

SUMMARY

  • Sr. Big Data Administrator with 10+ years of overall experience in Level 1, Level 2, and Level 3 support, administration, and deployment.
  • Experience in TIBCO software installation, troubleshooting, performance tuning, and maintenance.
  • Worked on different big data tools such as Hadoop, Elasticsearch, Cassandra, Couchbase, Big SQL, Hive, HBase, Spotfire, Flume, Sqoop, Ambari, Oozie, and ZooKeeper.
  • Configured and monitored Hadoop clusters with different Hadoop distributions (IBM BigInsights, Apache).
  • Excellent understanding of software development methodologies and life cycles, including Waterfall, Agile, and Scrum.
  • Expertise in three levels of Data Modeling (Conceptual, Logical, Physical).
  • RDBMS experience includes Oracle, MS SQL, and MySQL, and programming using PL/SQL and SQL.
  • Extensive production support experience with TIBCO and WebSphere MQ products on operating systems including Windows, UNIX, Solaris, and Linux. Capable of developing and incorporating well-integrated solutions.
  • Hands-on experience with Linux shell scripts, Windows PowerShell, Python, and Ruby. Used incident ticketing systems, change approval tools, Web Track, and Test Director bug reporting tools.
  • Prepared Quality Assurance test plans for integration, regression, smoke, and load testing of the base product and functional testing of new or enhanced features.
  • Managed 12 offshore resources across NA, Asia, and Europe.
  • Self-motivated quick learner, willing to adapt to new challenges and technologies.
  • ITIL V3 Foundation course certified.
  • CB020 Fundamentals of NoSQL Data Management.
  • CB030 Essentials of Couchbase NoSQL Technology.
  • Hadoop Fundamentals I (BD001EN).

TECHNICAL SKILLS

Big Data: Couchbase, MongoDB, Redis, Hadoop, Elasticsearch, Cassandra, YARN, Big SQL, Hive, HBase, Sqoop, Flume, Kafka, and ZooKeeper.

TIBCO Messaging: TIBCO Rendezvous 7.x/8.x, TIBCO EMS 4.x/5.x/6.x/7.x, MQ Series

Monitoring: TIBCO Hawk, GEMS, Hermes, Tivoli, RTView; TeamCity for auto-deployment.

ETL/Visualization/DWH: QlikView, Tableau, Splunk

Languages: Java, J2EE, HTML, XML, XPath, XSLT, XSD, SOAP, WSDL, UNIX shell scripting

Databases: Oracle 9i/10g/11g, R12, DB2, MS SQL

Operating Systems: Windows 8/7, UNIX (Ubuntu, AIX, Linux), HP-UX, Mac OS

Web Servers: BEA WebLogic, IBM WebSphere, JBoss, Microsoft IIS, Apache Web Server, Sun ONE Web Server (iPlanet)

Version Control Tools: XMLCanon, ClearCase, PVCS, SVN, RFC

PROFESSIONAL EXPERIENCE

Confidential, Atlanta, GA

Big Data Admin

Environment: Red Hat Linux 6.4, EMS 6.1, Hadoop (IBM BigInsights), Elasticsearch, DSE Cassandra, Groovy, PowerShell, Chef, Jenkins, SVN, TeamCity, Hive, HBase, ZooKeeper, Oozie, Flume, Git, Java tools, GEMS/Hermes tools for EMS, TIBCO BW 5.11/5.10, TRA 5.7, Admin 5.7, StreamBase, Spotfire, ActiveSpaces, Hawk 4.x.

Responsibilities:

  • Architected, designed, installed, and configured a POC for Confidential Search and Browse, delivering a 400 ms average response time for searches across a UPC inventory of around 150 million items.
  • Created two Elasticsearch instances with two shards and stored 150 million records in the index; used the Sense client for queries (see the Elasticsearch sketch after this list).
  • Created POCs and performed benchmarks for different NoSQL databases such as MongoDB, Redis, Couchbase, and MemSQL.
  • Configured shards and ran a two-node Elasticsearch deployment holding 40 million indexed records.
  • Used Groovy scripts for upgrades and configured the Sense client for writing Elasticsearch queries.
  • Configured Cassandra with replication factor 1 for faster write consistency, running a 4-node ring architecture.
  • Installed DataStax DevCenter and OpsCenter and configured them for application queries and monitoring by developers.
  • Enabled log tracing to identify slow-running queries from the application.
  • Configured Flume to ingest real-time data from Oracle GoldenGate.
  • Configured Sqoop to move RDBMS data into Hadoop (see the Sqoop/Flume sketch after this list).
  • Involved in designing, planning, administering, installing, configuring, updating, troubleshooting, performance monitoring, and fine-tuning of the Cassandra cluster.
  • Hands-on experience setting up a multi-rack, multi-data-center Cassandra cluster in production and a single-data-center cluster in the testing environment.
  • Installed DataStax Cassandra 4.5 in production and testing environments per best practices.
  • Upgraded the DataStax Cassandra cluster from 4.5 to 4.6.
  • Experience in monitoring and managing the Cassandra cluster.
  • Installed DataStax OpsCenter and Nagios for monitoring.
  • Administered, monitored, and maintained the multi-data-center Cassandra cluster in production using OpsCenter and Nagios.
  • Worked closely with developers on choosing the right compaction strategies and consistency levels.
  • Experience in data migration from RDBMS to Cassandra.
  • Troubleshot read/write latency and timeout issues using nodetool cfstats, tpstats, cfhistograms, and netstats.
  • Expertise in evaluating, benchmarking, and tuning the data model by running endurance tests using JMeter, the Cassandra stress tool, and OpsCenter.
  • Automated and deployed Cassandra environments using Puppet.
  • Configured and tuned the cluster and operating system software to ensure optimum performance and resource utilization.
  • Wrote scripts to schedule and run repairs on all nodes for data consistency (see the Cassandra sketch after this list).
  • Created the necessary keyspaces and modeled column families based on the queries.
  • Worked with CQL to execute queries on the data persisting in the Cassandra cluster.
  • Designed and implemented a comprehensive backup plan and disaster recovery strategies.
  • Designed and implemented LDAP to enhance the security of the cluster.
  • Hands-on experience in analyzing log files to find the root cause.
  • Provided 24 x 7 on call support as part of a scheduled rotation with other team members.
  • Hands-on experience with the administration tools for Couchbase Server: the Couchbase Web Console, the command-line interface (CLI), and the REST API (see the Couchbase sketch after this list).
  • Good understanding of Couchbase clusters, including the concepts behind the fast and elastic nature, high availability, and high performance of the Couchbase Server database.
  • Hands-on experience setting up security for workloads that process large amounts of unstructured data arriving in big volumes and at high speed.
  • Ensured day-to-day deployment configuration took into account topics such as restricted access, node communication, swap configuration, and connection timeouts.
  • Hands-on experience monitoring Couchbase servers, including the underlying processes, ports, and queuing.
  • Good exposure to setting up XDCR, which replicates data from one cluster to another, primarily for disaster recovery.
  • Hands-on experience with the Couchbase Web Console, the main tool for managing the Couchbase environment.
  • Hands-on experience in installing, configuring, supporting, and managing Hadoop clusters using IBM BigInsights.
  • Expertise in HDFS Architecture and Cluster concepts (HDFS, MapReduce and YARN).
  • Experience in Hadoop administration (HDFS, MapReduce, Hive, HBase, Pig, Spark, Sqoop, Flume, and Oozie).
  • Experience in installing Hadoop clusters using different distributions: Apache Hadoop, Cloudera, and Hortonworks.
  • Hands on experience in provisioning and managing multi-tenant Hadoop clusters on public cloud environment such as Amazon Web Services (AWS)-EC2 and on private cloud infrastructure.
  • Expertise in configuring and implementing Schedulers.
  • Experience in benchmarking and in performing backup and recovery of NameNode metadata and of data residing in the cluster (see the HDFS admin sketch after this list).
  • Good experience designing, configuring, and managing backup and disaster recovery for Hadoop data.
  • Good experience in understanding clients' Big Data business requirements and transforming them into Hadoop-centric technologies.
  • Experience in performing minor and major upgrades. Hands-on experience with Ambari to manage Hadoop components.
  • Experience in administering Linux systems to deploy Hadoop clusters and in monitoring clusters using the Nagios and Ganglia tools.
  • Hands-on experience in analyzing log files for Hadoop ecosystem services and finding root causes.
  • Experience in commissioning, decommissioning, balancing, and managing nodes and in tuning servers for optimal cluster performance.
  • Experience in HDFS data storage and support for running map-reduce jobs.
  • Knowledge of HBase, Pig, Hive, Spark, and ZooKeeper.
  • Experience in importing and exporting data using Sqoop between HDFS and relational database systems/mainframes.
  • Configured rack awareness for quick availability and processing of data.
  • Strong knowledge in configuring NameNode high availability and NameNode federation.
  • Strong knowledge of configuring Flume for efficiently collecting and aggregating large amounts of log data.
  • Installed and configured TIBCO Spotfire and Spotfire Web Player with HA.
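
Elasticsearch sketch (referenced above): a minimal example of creating a sharded index and querying it over the REST API. The host, index name, and field are hypothetical placeholders, and exact endpoint paths vary between Elasticsearch versions.

    # Create an index with 2 primary shards and 1 replica.
    curl -X PUT 'http://es-node1:9200/upc_inventory' \
      -H 'Content-Type: application/json' -d '{
        "settings": { "number_of_shards": 2, "number_of_replicas": 1 }
      }'

    # Run a match query against the index, as one would from Sense.
    curl -X GET 'http://es-node1:9200/upc_inventory/_search' \
      -H 'Content-Type: application/json' -d '{
        "query": { "match": { "description": "sample item" } }
      }'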
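
Cassandra sketch (referenced above): keyspace creation plus the scheduled repair and triage commands named in the bullets. The keyspace, data center name, and script path are assumptions, not the actual production values.

    # Create a keyspace with NetworkTopologyStrategy (names illustrative).
    cqlsh cass-node1 <<'EOF'
    CREATE KEYSPACE IF NOT EXISTS inventory
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
    EOF

    # Primary-range repair per node for data consistency, run weekly
    # from cron, e.g.: 0 2 * * 0 /opt/scripts/run_repair.sh
    nodetool repair -pr inventory

    # Latency/timeout triage commands named above.
    nodetool tpstats              # pending/blocked thread pools
    nodetool cfstats inventory    # per-column-family read/write latency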
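
Sqoop/Flume sketch (referenced above): a one-off RDBMS import into HDFS and starting a Flume agent. The JDBC URL, table, and config path are hypothetical.

    # Pull an Oracle table into HDFS with 4 parallel mappers
    # (assumes the table has a primary key for splitting).
    sqoop import \
      --connect jdbc:oracle:thin:@oradb01:1521/ORCL \
      --username etl_user -P \
      --table ORDERS \
      --target-dir /data/raw/orders \
      --num-mappers 4

    # Start a Flume agent; the name must match the agent defined
    # in the properties file (source -> channel -> sink).
    flume-ng agent --conf /etc/flume/conf \
      --conf-file /etc/flume/conf/agent1.conf --name agent1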
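
Couchbase sketch (referenced above): the same cluster overview through the CLI and the REST API. The host and credentials are placeholders.

    # Node/cluster health from the CLI (default admin port 8091).
    couchbase-cli server-list -c cb-node1:8091 -u Administrator -p 'secret'

    # The equivalent cluster overview via the REST API.
    curl -u Administrator:secret http://cb-node1:8091/pools/default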
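
HDFS admin sketch (referenced above): NameNode metadata backup, node decommissioning, and rebalancing with the standard hdfs commands; paths and the balancer threshold are illustrative.

    # Fetch the latest fsimage from the NameNode as a metadata backup.
    hdfs dfsadmin -fetchImage /backups/namenode/

    # Decommission a node: list its hostname in the file referenced by
    # dfs.hosts.exclude, then refresh and watch its status in the report.
    hdfs dfsadmin -refreshNodes
    hdfs dfsadmin -report

    # Rebalance block placement after adding or removing nodes.
    hdfs balancer -threshold 10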

Confidential, Atlanta, GA

Senior Oracle DBA

Environment: 10g RAC/9i, ASM, AIX, Sun Solaris, Red Hat Enterprise Linux, OEM, Explain Plan, Tkprof, SQL*LOADER, Oracle Express.

Responsibilities:

  • Involved in 24x7 support for production databases and applications with an on-call pager.
  • Installed, created, and maintained 10g RAC and 9i databases for testing, development, and production.
  • Provided support to customers managing operational data ranging from GBs to TBs in Oracle databases, solving space problems, data loading problems, and other database maintenance issues.
  • Involved in upgrades from Oracle 9i to 10g.
  • Performed migration from 9i on AIX to 10g on Linux.
  • Developed startup/shutdown guides, troubleshooting documents, and root cause analysis documents.
  • Worked with Data Pump (expdp/impdp) and Import/Export for logical database maintenance.
  • Performed backup/recovery of all Oracle databases using RMAN and set up an RMAN catalog for the same (see the RMAN/Data Pump sketch after this list).
  • Creating & Managing Database Structures, Storage Allocations, Table/Index segments, Constraints, Database Access, Roles and Privileges.
  • Creating roles and granting to users based on their group requirements.
  • Creating profiles and attaching them with users.
  • Creating and Managing Tablespaces with different block-sizes to prevent row chaining.
  • Developed database system build sheets, code review check lists, code promotion process documents, outage reporting documents.
  • Resolved SQL tuning issues and advised on tuning considerations for different applications.
  • Monitoring the table growth, database growth, current users, and system resource usage and so on.
  • Experience in SQL*Loader, loading data from flat files and from text files.
  • Worked on UNIX shell scripting in csh and ksh to automate database startup/shutdown, generate system alerts, etc.
  • Cloned databases using scripts as well as RMAN. Installed, set up, and configured Data Guard. Used PL/SQL and SQL*Loader as part of the ETL process to populate the operational data store.
  • Implemented OLAP on the Oracle platform using Oracle Express (OLAP server), Relational Access Manager (RAM), and Oracle Sales Analyzer (OLAP front end) in an Oracle 8i/9i environment, along with data migration as part of an enterprise data warehouse project. Performed logical and physical data modeling using Erwin, including dimensional/OLAP and relational/OLTP modeling.
  • Performed scheduling of Change Control-activities and scheduling jobs.
  • Tuning SQL queries using TKPROF, EXPLAIN PLAN, and SQL TRACE.
  • Performed database reorganization regularly to remove row migration and fragmentation using export and import utilities; wrote UNIX shell scripts to schedule these activities.
  • Contacted Oracle MetaLink to resolve database issues and raise SRs.
  • Installed and upgraded MySQL servers from 5.0/5.1 to 5.5/5.6 and configured replication setups (master-slave, master-master); see the MySQL sketch after this list.
  • Configured and maintained MySQL clusters (NDB) and MySQL Fabric on the production MySQL servers.
  • Tested implementations of MEM (MySQL Enterprise Monitor) and MEB (MySQL Enterprise Backup) in the dev and QA environments and discussed implementing them in production.
  • Administered, upgraded, and monitored production MySQL databases in an auto-sharding, in-memory HA environment via MySQL NDB clusters.
  • Effectively communicated with the application teams to use the MySQL slave server for all select queries executed for application reporting needs.
  • Extensively worked on configuring, monitoring, and troubleshooting SQL Server HA solutions: AlwaysOn, clustering, mirroring, and replication. Installed 4-node clusters on SQL Server 2012.
  • Installed, deployed, and administered 22 SQL Server and MySQL servers in 64-bit/32-bit environments hosting high-volume databases up to 3.6 TB.
  • Monitored production MySQL servers using shell scripts and configured automated dumps via cron jobs.
  • Mastered MySQL backup technologies (mysqldump, xtrabackup, LVM snapshots, cold backups) and implemented loading partial backups from xtrabackup full backups in Percona 5.5/5.6 and MySQL 5.6.
  • Implemented federated and archive engine tables in MySQL.
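
RMAN/Data Pump sketch (referenced above), assuming OS authentication and a hypothetical schema and directory object:

    # Full database backup plus archived logs via RMAN.
    rman target / <<'EOF'
    RUN {
      BACKUP DATABASE PLUS ARCHIVELOG;
      DELETE NOPROMPT OBSOLETE;
    }
    EOF

    # Logical schema-level export with Data Pump (prompts for password).
    expdp system schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr_%U.dmp logfile=hr_exp.log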
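
MySQL sketch (referenced above): pointing a slave at its master and a cron-driven dump, in 5.5/5.6-era syntax. The host, credentials, and binlog coordinates are placeholders.

    # Configure and start replication on the slave.
    mysql -u root -p <<'EOF'
    CHANGE MASTER TO
      MASTER_HOST='db-master01',
      MASTER_USER='repl',
      MASTER_PASSWORD='secret',
      MASTER_LOG_FILE='mysql-bin.000042',
      MASTER_LOG_POS=4;
    START SLAVE;
    EOF

    # Nightly consistent dump, scheduled from cron, e.g.:
    #   0 1 * * * /opt/scripts/mysql_dump.sh
    mysqldump --single-transaction --all-databases -u backup -p'secret' \
      | gzip > /backups/all-$(date +%F).sql.gz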

Confidential, Tampa, FL

Senior Tibco Administrator

Environment: Oracle 11g, Red Hat Linux 5.x, TIBCO BW 5.7/5.8, TRA 5.6, Admin 5.6, Hawk 4.x, BEA WebLogic 10.x, WebSphere, Hermes 8.1, WebSphere MQ 7, TIBCO BE 5.x, TIBCO iProcess 11.x, TIBCO iProcess Web Clients, TIBCO iProcess Toolkits, AMX BPM 1.2.0.

Responsibilities:

  • Installed and set up the AMX BPM 1.2.0 product for customer POC projects. Installed Business Studio 3.5.2 and set up the Workspace and Openspace Eclipse-based tools for iProcess migration products. Performed deployment and configuration using the built-in Ant scripts.
  • Performed several data center migrations for TIBCO BW and EMS on hardware moving from RHEL 4.x to 5.x.
  • Installed TIBCO Spotfire Server 3.0 for customer POC projects and set up cloud space environments with library servers.
  • Set up Business Events EE, the RMS server, and virtual rule functions for Business Events customer projects. Helped customers install the Business Events Eclipse-based software and was involved in deploying EAR files to TIBCO Administrator.
  • Experience in capacity planning and estimation for customer applications while onboarding to the GFTS environment.
  • Helped application developers by providing development guidelines and troubleshooting issues.
  • Configured Hermes auto-deployment metadata on XMLCanon and troubleshot deployment issues and product bugs.
  • Evaluated new TIBCO products in our POC clustered environment with A/A and A/P setups. Configured AMX BPM using the third-party cluster manager HP Serviceguard.
  • Helped other teams troubleshoot Java-related issues in customer applications and provided solutions for setting up TIBCO EMS configuration in the WebSphere and WebLogic administrators.
  • Created shell scripts using AppManage commands to deploy and install applications (see the AppManage/SSL sketch after this list).
  • Used the Java keytool utility and OpenSSL to verify SSL digital certificates. Set up an active/active JNDI pair for the TIBCO ESB infrastructure. Helped set up two-way client/server SSL authentication for EMS instances.
  • Escalated critical issues to the TIBCO vendor. Set up an active/active BIG-IP network-level load balancer for both BW and EMS instances.
  • Created the disaster recovery plan and helped the team each year with the COB test for both EMS and BW instances, per audit and security requirements.
  • Provided production on-call support and helped troubleshoot performance and tuning issues in the production environment. Suggested workarounds, code changes, and enhancements based on the type of problem, helping customers avoid business impact and reduce it as quickly as possible.
  • Involved in planning the strategy for a critical production migration from RH Linux 3.x to 5.x. Performed several data center migrations.
  • Administered the domain using Administrator 5.6. Configured groups and users. Configured BW services and adapters.
  • Configured BusinessWorks servers in fault-tolerant/load-balanced mode in NA and EMEA.
  • Deployed the TIBCO Business Events/Works components on TIBCO BW servers in NA and EMEA.
  • Manually deployed/undeployed, deleted, started, and stopped the services and adapters.
  • Used scripts to deploy/undeploy, delete, start, and stop the services and adapters.
  • Used TIBCO utilities like Domain Utility, AppManage, buildear, and Obfuscate.
  • Responsible for preparing and delivering the Deployment Documents.
  • Installed TIBCO components (e.g., TRA, BusinessWorks, BW SmartMapper, Admin, EMS, etc.).
  • Ran the auto-deployment process in all environments using Hermes. Helped customers set Java thread counts and global variables.
  • Maintained customer project metadata files using the XMLCanon database.
  • Installed and configured XMLCanon to maintain TIBCO project libraries, XML, metadata, and canonical data.
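
AppManage/SSL sketch (referenced above): a scripted EAR deployment and certificate checks. The domain, application, file names, and passwords are placeholders.

    # Deploy an EAR with its deployment configuration to a domain.
    AppManage -deploy -ear /release/OrderService.ear \
      -deployconfig /release/OrderService.xml \
      -app OrderService -domain PROD_DOMAIN \
      -user admin -pw 'secret'

    # Verify an EMS server certificate and inspect the truststore.
    openssl x509 -in ems_server_cert.pem -noout -subject -dates
    keytool -list -v -keystore ems_truststore.jks -storepass 'secret'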

Confidential, Dallas, TX

Tibco Administrator

Environment: Oracle 9i, IBM AIX 5.1, PeopleTools 8.19, PeopleSoft HRMS 8.3 SP1 Bundle 6, BEA WebLogic 5.1, Tuxedo 6.5/Jolt 1.2.

Responsibilities:

  • Performed TIBCO administration and disaster recovery functions. Set up load balancing and fault tolerance for the BW engines at runtime.
  • Used Message Selectors on Bridges to route the traffic as per Business Requirements.
  • Involved in generating TIBCO Hawk rule bases for monitoring the BW engines, TIBCO adapters, and log files.
  • Experience with TIBCO Rendezvous, including how to use RVD and RVRD. Installed, configured, and tested TIBCO EMS, Rendezvous, TRA, Administrator, BusinessWorks, SmartMapper, Hawk, adapters, etc.
  • Performed TIBCO implementation and deployment on UNIX.
  • Performed WebSphere MQSeries v6.0 administrative duties such as installing WebSphere MQSeries domains in IT (test), development, pre-prod, and prod environments, and ensured functionality on AIX, UNIX, and Windows.
  • Performed MQSeries administrative duties such as creating and configuring queue managers, channels, queues, process definitions, and clustering in IT, pre-production, and production (see the MQ sketch after this list).
  • Designed, implemented, and managed TIBCO domains including resource management, security policy management, and application management
  • Configured TIBCO EMS highly available/fault tolerant servers, queues, topics, routes, zones, users and groups.
  • Configured, deployed, and migrated TIBCO projects across different lifecycle environments using TIBCO Administrator GUI and scripted deployments.
  • Developed and managed Hawk rule bases and worked with technical support.
  • Experience with TIBCO EMS (JMS), BW, and adapters, as well as TIBCO Administrator.
  • Troubleshot and tuned TIBCO installations, BusinessWorks processes, and other processes.
  • Maintain, test, and execute disaster recovery procedures for TIBCO environment.
  • Provided technical expertise and guidance on TIBCO administration approaches, process re-engineering and design.
  • Experienced in taking projects from initiation and requirements gathering all the way to completion.
  • Delivered expertise and support in resolving application issues.
  • Provided a continual example of high level of service to all co-workers and customers.
  • Provided 24/7 on-call support for the TIBCO environment, working both independently and within the team.
  • Modified UNIX scripts using AppManage commands to deploy and install applications.
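
MQ sketch (referenced above): creating a queue manager and defining objects with the standard MQSeries commands; object names are illustrative.

    # Create and start a queue manager.
    crtmqm ORDERS.QM
    strmqm ORDERS.QM

    # Define a local queue and a server-connection channel.
    runmqsc ORDERS.QM <<'EOF'
    DEFINE QLOCAL(ORDERS.IN) MAXDEPTH(50000)
    DEFINE CHANNEL(ORDERS.SVRCONN) CHLTYPE(SVRCONN)
    EOF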

Confidential

Tibco/Java Developer

Environment: TIBCO BusinessWorks, Hawk, TIBCO MQ Adapter, TIBCO Administrator, IBM WebSphere MQ 5.3, IBM WAS 5.1/5.0, TIBCO Staffware Workflow Suite (now TIBCO iProcess), IBM Content Manager 8.1, IBM DB2.

Responsibilities:

  • Installed the Staffware engine in an AIX environment. Created users, groups, roles, and supervisors; assigned users to groups, identified and assigned roles, and assigned values to attributes. Defined supervisors for user and group queues.
  • Used case prediction to allow cases to jump over different sets of outstanding steps in different forms.
  • Installed TIBCO BusinessWorks, Hawk, TIBCO Administrator, the TIBCO Staffware workflow suite, the MQ adapter, and IBM WebSphere MQ.
  • Involved in development and third-level production support for the Confidential workflow implementation.
  • Configured the TIBCO ActiveDatabase adapters to interact with the Oracle database to maintain information for large and complex structures, and created views and table joins as per requirements.
  • Used TIBCO Rendezvous reliable message delivery as well as Certified Messaging (RVCM) to transport messages.
  • Supported TIBCO MQ adapters to maintain information for large and complex structures and created views and table joins as per requirements.
  • Deployed all the components to the designated machines, configured setup failure and alert options in the deployment configuration using Designer GUI.
  • Implemented Error handling in business process and conducted Unit testing, Component testing and supported system testing.
  • Monitored and managed adapters and process engines using TIBCO Administrator and TIBCO Hawk.
  • Performed web service integration for Confidential to retrieve their data from a third party.
