Storage Administrator Resume
Madison, NJ
SUMMARY
- Over 7 years of IT experience in administration across platforms including cloud, Hadoop, EMC SAN storage and incident management.
- 3+ years of extensive experience working on the Hadoop ecosystem.
- Experience in Configuring, Installing and Managing Apache Hadoop and Cloudera Hadoop.
- Extensive experience with Installing New Servers and rebuilding existing Servers.
- Experience in using Automation Tools like Puppet for Installing, Configuring and Maintaining Hadoop clusters.
- Experience in using Cloudera Manager 3.x, 4.x, 5.x and 5.5.x for Installation and Management of Hadoop Cluster.
- Experience in DRBD implementation for Name Node Metadata backup.
- Experience in HDFS High Availability.
- Experience in configuring, installing, benchmarking and managing Cloudera distribution of Hadoop on AWS, Virtual and Cloud servers.
- Expertise in writing Shell Scripts and Perl Scripts and debugging existing scripts.
- Experience in Performance Management of Hadoop Cluster.
- Experience in using Flume and Kafka to load log files into HDFS.
- Expertise in using Oozie for configuring job flows.
- Experience in OS/Apache/RDBMS tuning.
- Managing cluster configuration to meet the needs of data analysis, whether I/O-bound or CPU-bound.
- Developed Hive queries and automated them to run on an hourly, daily and weekly basis.
- Coordinating Cluster Services through Zookeeper.
- Importing and exporting data into HDFS and Hive using Sqoop.
- Experience in importing and exporting preprocessed data into commercial analytic databases such as Netezza and other RDBMSs.
- Experience in storage provisioning, monitoring and troubleshooting of EMC VNX, VMAX, CLARiiON CX4/CX3 series arrays, Hitachi HDS USP-V/USP-VM and Microsoft StorSimple.
- Creating new fabric configurations and adding zones to them as required; experienced in troubleshooting end-to-end issues.
- Zoning and fabric management for all SAN-related network equipment.
- Worked on the CTA appliance for archiving data to Centera.
- Managing high and critical Incidents to ensure timely completion
- Overseeing all Incidents and user service requests for timely completion
TECHNICAL SKILLS
Hadoop Ecosystem: HDFS, Hive, Pig, Flume, Oozie, ZooKeeper, Sqoop, Hue, Impala, Solr, Kafka, and Spark.
Automation Tools: Puppet, Cloudera Manager, MapR Control System (MCS).
Network administration: TCP/IP fundamentals, wireless networks, LAN and WAN.
Languages: C, SQL, Pig Latin, UNIX Shell Scripting, UML.
Monitoring and Alerting: Nagios, Ganglia, Cloudera Navigator.
PROFESSIONAL EXPERIENCE
Confidential, Los Angeles, CA
Hadoop Admin
Responsibilities:
- Experience in configuring, installing, benchmarking and managing Apache, Hortonworks and Cloudera distributions of Hadoop on AWS cloud and virtual servers.
- Set up Hadoop clusters on Cloudera CDH 5.3.1, 5.4, 5.5.1 and 5.5.2.
- Designed and implemented disaster recovery of the CLDB data.
- Importing and exporting data using Sqoop from Netezza, SQL and Oracle DBs.
- Designed the schema and implemented Hive tables for the most widely used database in the company.
- Cleansed data and exported it to the data warehouse.
- Setting up security for the Hive databases.
- Providing 24x7 support for the team by maintaining the health of the cluster.
- Highly involved in operations and troubleshooting Hadoop clusters.
- Upgrading the cluster to the latest available software releases for bug fixes.
- Developing ETL processes to pull data into the Hadoop cluster from different sources (FTP, data warehouse).
- Managing day-to-day cluster operations, including backups.
- Involved in implementing security on Cloudera Hadoop Cluster using Kerberos by working along with operations team to move unsecured cluster to secured cluster.
- Implemented Hive Scripts according to the requirements.
- Job management using the Fair Scheduler.
- Cluster coordination services through Zookeeper.
- Installed multiple Hadoop and HBase clusters.
- Monitored cluster job performance and capacity planning.
- Implemented and automated shell scripts for day-to-day log-rolling processes.
- Coordinating with other teams for data import and export.
- Implemented Oozie work-flow for ETL Process.
Environment: MapReduce, HDFS, Hive, Impala, Java, SQL, Cloudera Manager 5.5.1, Pig, Sqoop, Oozie, Flume, Kafka and Sentry.
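The Sqoop transfers described above might be scripted along the lines of the sketch below. The JDBC URL, table name and HDFS path are placeholders, not values taken from this resume, and the commands are only composed and printed rather than executed, since no live cluster is assumed:

```shell
# Hypothetical connection details; replace with the real JDBC URL, table and paths.
JDBC_URL="jdbc:netezza://nz-host:5480/salesdb"
TABLE="transactions"
HDFS_DIR="/user/etl/staging/transactions"

# Compose a Sqoop import (RDBMS -> HDFS) and export (HDFS -> RDBMS).
SQOOP_IMPORT="sqoop import --connect $JDBC_URL --table $TABLE \
  --target-dir $HDFS_DIR --num-mappers 4"
SQOOP_EXPORT="sqoop export --connect $JDBC_URL --table $TABLE \
  --export-dir $HDFS_DIR --num-mappers 4"

# Print the commands instead of running them (no cluster available here).
echo "$SQOOP_IMPORT"
echo "$SQOOP_EXPORT"
```

In practice a wrapper like this would be parameterized per source system and driven from cron or Oozie.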
Confidential, Madison, NJ
Hadoop Admin
Responsibilities:
- Installed multiple Hadoop and HBase clusters.
- Installed Cloudera Manager 4.x and 5 on CDH 4 and 5 versions.
- Installed Ganglia to monitor Hadoop daemons; implemented configuration parameter changes and monitored their effects in Ganglia.
- Collected web logs from different sources using Flume and loaded them into HDFS.
- Implemented Oozie work-flow for ETL Process.
- Transferred data from RDBMS to Hive/HDFS and from HDFS back to RDBMS using Sqoop.
- Implemented and automated shell scripts for day-to-day log-rolling processes.
- Implemented DRBD for NameNode metadata backup.
- Coordinated Flume and HBase nodes and masters using ZooKeeper.
- Automated Hadoop cluster installation using Puppet.
- Was part of the CKP (Customer Knowledge Platform) project.
- Involved in ad hoc meetings to understand client requirements.
- Involved in Scrum meetings to provide day-to-day updates.
- Implemented Hive Scripts according to the requirements.
- Implemented designs to overcome the Read-Write complications of billions of records.
- Job management using the Fair Scheduler.
- Cluster coordination services through Zookeeper.
- Importing and exporting data into HDFS and Hive using Sqoop.
- Loading log data directly into HDFS using Flume.
Environment: Java 6, Eclipse, Oracle 10g, Subversion, Hadoop, Hive, HBase, Linux, MapReduce, HDFS, Java (JDK 1.6), Hortonworks, Cloudera and DataStax Hadoop distributions, IBM DataStage 8.1, Oracle 11g/10g, PL/SQL, SQL*Plus, Toad 9.6, Windows NT, UNIX Shell Scripting.
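The automated log rolling mentioned above could be sketched as a small POSIX shell routine. The paths and retention period are illustrative, and the demo operates on a temporary directory rather than real Hadoop daemon logs:

```shell
#!/bin/sh
# Minimal sketch of a daily log-rolling routine (paths are illustrative).
LOG_DIR="${LOG_DIR:-/tmp/logroll-demo}"
LOG_FILE="$LOG_DIR/app.log"
KEEP_DAYS=7

mkdir -p "$LOG_DIR"
: > "$LOG_FILE"                      # ensure the log exists for the demo
echo "sample log line" >> "$LOG_FILE"

# Rotate: rename today's log with a date suffix and compress it.
STAMP=$(date +%Y%m%d)
mv "$LOG_FILE" "$LOG_FILE.$STAMP"
gzip -f "$LOG_FILE.$STAMP"
: > "$LOG_FILE"                      # start a fresh, empty log

# Prune archives older than KEEP_DAYS (cron would run this nightly).
find "$LOG_DIR" -name 'app.log.*.gz' -mtime +"$KEEP_DAYS" -delete
```

Scheduling this from cron per host is what turns the manual process into the automated one the bullet describes.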
Confidential, NJ
Cloud Admin
Responsibilities:
- Assisted in creation of ETL process for transformation of data sources from existing RDBMS Systems.
- Involved in various POC activities using technologies such as MapReduce, Hive, Pig and Oozie.
- Involved in designing and implementation of service layer over HBase database.
- Imported data from various sources such as Oracle and Comptel servers into HDFS using Sqoop and MapReduce.
- Analyzed the data by running Hive queries and Pig scripts to understand user behavior such as call frequency and top calling customers.
- Continuously monitored and managed the Hadoop cluster through Cloudera Manager.
- Developed Hive queries to process the data and generate the data cubes for visualizing.
- Designed and developed scalable and custom Hadoop solutions as per dynamic data needs.
- Coordinated with technical team for production deployment of software applications for maintenance.
- Provided operational support services relating to Hadoop infrastructure and application installation.
- Supported technical team members in management and review of Hadoop log files and data backups.
- Participated in development and execution of system and disaster recovery processes.
- Formulated procedures for installation of Hadoop patches, updates and version upgrades.
- Automated processes for troubleshooting, resolution and tuning of Hadoop clusters.
Environment: Hadoop, Map Reduce, HDFS, Hive, Oozie, Java (JDK 1.6), Cloudera, NoSQL, Oracle 11g/10g, Toad 9.6, Windows NT, UNIX (Linux), Agile.
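Cluster monitoring of the kind described above often scripts around `hdfs dfsadmin -report`. The sketch below parses a captured sample of that report (the values are made up) to pull out the metrics an alerting job might check, since no live cluster is assumed here:

```shell
# A captured sample of `hdfs dfsadmin -report` header output (illustrative values).
REPORT='Configured Capacity: 1099511627776 (1 TB)
DFS Used: 329853488332 (307.21 GB)
DFS Used%: 30.00%
Live datanodes (4):'

# Extract the DFS usage percentage and the live datanode count for alerting.
USED_PCT=$(printf '%s\n' "$REPORT" | awk -F': ' '/^DFS Used%/ {sub(/%$/, "", $2); print $2}')
LIVE_DN=$(printf '%s\n' "$REPORT" | awk -F'[():]' '/^Live datanodes/ {print $2}')

echo "DFS used: ${USED_PCT}% across ${LIVE_DN} datanodes"
```

A cron job comparing these values against thresholds (and paging when a datanode drops out) is one common way such operational support is automated.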
Confidential
Storage Administrator
Responsibilities:
- Exposure to Storage Technologies - SAN, NAS, DAS, CAS, Disaster Recovery and Storage Virtualization.
- Well acquainted with different SAN topologies and protocols such as SCSI, FC, FCIP and iSCSI.
- Administering and monitoring the SAN Storage environment
- Maintained Hitachi and EMC storage provisioning (allocation and reclamation).
- LUN creation, LUN expansion, mapping and masking.
- Troubleshot performance issues and applied fixes.
- Storage management using respective tools namely Navicli, Navisphere Manager, Celerra Manager, Symcli, and Unisphere.
- Creating storage groups and assigning LUNs to servers.
- Reclaiming the unused storage from servers
- Archiving the data to secondary storage through CTA application (Cloud Tiering appliance)
- Storage replication through Snapshots and Clones, SAN Copy and MirrorView.
- Creating initiator groups, port groups, storage groups and masking views.
- Worked as an Incident manager for Confidential, and gave end to end IT support (both Infrastructure and Application support).
- Managing high and critical Incidents to ensure timely completion
- Managing user escalation
- Overseeing all Incidents and user service requests for timely completion
- Central Communication point for Major Incidents managed by their organization
- Respond to user escalations and engage functional escalation and service delivery Management as required
- Responsible for escalating Incidents and User Service Requests within their organization
- Assist the queue managers with the correct rerouting of the misrouted tickets
- Participate in incident management meetings and identify process improvements.
- Provide incident report to problem management
- Ensure their organization is aware of the current incident process and adheres to it.