
Hadoop Administrator Resume


GA

OBJECTIVE:

  • To be part of an organization where I can take on challenges and work toward the organization's goals, while building on my skills with ample scope for growth.

PROFESSIONAL SUMMARY:

  • Around 9 years of IT experience, including 3 years with the Hadoop ecosystem, installing and configuring different Hadoop ecosystem components in existing clusters.
  • Experience in deploying and managing multi-node development, testing and production Hadoop clusters with different Hadoop components (Hive, Pig, Spark, Sqoop, Oozie, Flume, Ranger, Knox, HBase, ZooKeeper) using Apache Ambari.
  • Experience with Hortonworks and Cloudera Manager; strong knowledge of Hadoop HDFS architecture and the MapReduce framework.
  • Experience in improving Hadoop cluster performance by tuning the OS kernel, storage, networking, HDFS and MapReduce configuration parameters.
  • Experience in administering the Linux systems to deploy Hadoop cluster and monitoring the cluster using Ambari.
  • Experience in upgrading Hadoop clusters through both minor and major version upgrades.
  • Experience in using Zookeeper for coordinating the distributed applications.
  • Experience in managing Hadoop infrastructure tasks such as commissioning and decommissioning nodes, log rotation and rack topology implementation.
  • Experience in managing cluster resources by implementing the Fair and Capacity Schedulers.
  • Experience in scheduling jobs using Oozie workflows.
  • Experience in scheduling jobs using crontab (an example entry follows this summary).
  • Experience in benchmarking and in performing backup and disaster recovery of NameNode metadata and sensitive data residing on the cluster.
  • Strong knowledge in configuring Name Node High Availability.
  • Experience in configuring Hadoop security (Ranger and Knox gateway).
  • Experience in handling multiple relational databases: MySQL, SQL Server.
  • Assisted Developers with problem resolution.
  • Ability to play a key role in the team and communicate across the team.
  • Global Service Delivery experience by bringing together resources to accomplish organizational goals using ITIL framework.
  • Effective problem-solving skills and outstanding interpersonal skills. Ability to work independently as well as within a team environment. Driven to meet deadlines. Ability to learn and use new technologies quickly.
  • Worked on setting up NameNode high availability for a major production cluster and designed automatic failover control using ZooKeeper and quorum journal nodes.
  • Authorized to work in the US for any employer.
  • Set up automated 24x7 monitoring and escalation infrastructure for the Hadoop cluster using Ambari.
  • Experienced in Linux administration and TSM administration.
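
As a simple illustration of the crontab scheduling mentioned above, an entry of the following kind runs a housekeeping script on a fixed schedule; the script path and timing are assumptions, not taken from any particular cluster:

    # Run a hypothetical HDFS usage report every day at 01:30 and append its output to a log
    30 1 * * * /opt/scripts/hdfs_usage_report.sh >> /var/log/hdfs_usage_report.log 2>&1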

TECHNICAL SKILLS:

Hadoop Ecosystems: HDFS, Hive, Sqoop, Spark, Zookeeper, HBase, Oozie, Kerberos, Ranger, Knox

Operating Systems: Windows, Linux, AIX, Ubuntu, AWS

RDBMS: MySQL, Oracle, DB2, MS SQL Server

Languages: C, C++, Bash shell scripting, Python, SQL

Other Tools: ServiceNow, VPN, WinSCP, PuTTY, Edit++ and Notepad++

PROFESSIONAL EXPERIENCE:

Confidential, GA

Hadoop Administrator

Responsibilities:

  • Responsible for architecting Hadoop clusters and translating functional and technical requirements into detailed architecture and design.
  • Installed and configured a fully distributed, multi-node Hadoop cluster with a large number of nodes.
  • Addressed and troubleshot issues on a daily basis.
  • File system management and monitoring
  • Provided Hadoop, OS, Hardware optimizations.
  • Installed and configured Hadoop ecosystem components like Map Reduce, Hive, Pig, Sqoop, HBase, Zookeeper and Oozie.
  • Involved in testing HDFS, Hive, Pig and Map Reduce access for the new users.
  • Cluster maintenance as well as creation and removal of nodes using Apache Ambari
  • Worked on setting up high availability for major production cluster and designed automatic failover control using zookeeper and quorum journal nodes.
  • Implemented the Capacity Scheduler to allocate a fair share of resources to small jobs.
  • Performed operating system installation, Hadoop version updates using automation tools.
  • Configured Oozie for workflow automation and coordination.
  • Implemented rack aware topology on the Hadoop cluster.
  • Imported and exported structured data between relational databases and HDFS/Hive using Sqoop (illustrative commands follow this list).
  • Configured ZooKeeper for node coordination and clustering support.
  • Rebalancing the Hadoop Cluster.
  • Allocated name and space quotas to users in case of space problems.
  • Installed and configured the Hadoop security tools Knox and Ranger, and enabled Kerberos.
  • Managed cluster performance issues; created and restored snapshots.
  • Experienced in troubleshooting production-level issues in the cluster and its functionality.
  • Backed up data on regular basis to a remote cluster using DistCp.
  • Regular Commissioning and Decommissioning of nodes depending upon the amount of data.
  • Maintained the cluster to keep it healthy and in optimal working condition.
  • Handle the upgrades and Patch updates.
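
The Sqoop import, quota, rebalancing and DistCp duties above usually come down to commands along the following lines; the database connection, table name, directories, cluster names and quota values here are illustrative assumptions:

    # Import a table from MySQL into HDFS (connection string and table are hypothetical)
    sqoop import --connect jdbc:mysql://dbhost:3306/sales --username etl_user -P \
      --table orders --target-dir /data/raw/orders --num-mappers 4

    # Set a name quota and a space quota on a user directory
    hdfs dfsadmin -setQuota 100000 /user/etl_user
    hdfs dfsadmin -setSpaceQuota 500g /user/etl_user

    # Rebalance the cluster and copy data to a remote cluster for backup
    hdfs balancer -threshold 10
    hadoop distcp hdfs://prodcluster:8020/data/raw hdfs://drcluster:8020/backup/raw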

Environment: Hortonworks (HDP 2.5), Ambari 2.4, HDFS, Java, Shell Scripting, Python, Hive, Spark, Sqoop, Linux, SQL, Cloudera, Zookeeper, AWS, HBase, Oozie, Kerberos, Ranger

Confidential, MA

Hadoop Administrator

Responsibilities:

  • Handle the installation and configuration of a Hadoop cluster.
  • Build and maintain scalable data using the Hadoop ecosystem and other open source components like Hive and HBase.
  • Monitor the data streaming between web sources and HDFS.
  • Close monitoring and analysis of the Map Reduce job executions on cluster at task level.
  • Provided inputs to development on efficient utilization of resources such as memory and CPU, based on the running statistics of Map and Reduce tasks.
  • Changes to the configuration properties of the cluster based on volume of the data being processed and performance of the cluster.
  • Setting up Identity, Authentication and Authorization.
  • Maintained the cluster to keep it healthy and in optimal working condition.
  • Handle the upgrades and Patch updates.
  • Set up automated processes to analyze the system and Hadoop log files for predefined errors and send alerts to the appropriate groups (a minimal sketch follows this list).
  • Balancing HDFS manually to decrease network utilization and increase job performance.
  • Commission and decommission the Data nodes from cluster in case of problems.
  • Set up automated processes to archive/clean the unwanted data on the cluster, in particular on Name node and Secondary name node.
  • Held regular discussions with other technical teams regarding upgrades, process changes, any special processing and feedback.
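
A minimal sketch of the kind of automated log check described above; the log directory, error patterns and alert address are assumptions:

    #!/bin/bash
    # Scan Hadoop logs modified in the last hour for predefined error patterns
    # and mail an alert if any are found (paths and address are placeholders).
    LOG_DIR=/var/log/hadoop
    PATTERNS='FATAL|OutOfMemoryError|Connection refused'
    ALERT_TO=hadoop-ops@example.com

    MATCHES=$(find "$LOG_DIR" -name '*.log' -mmin -60 -exec grep -El "$PATTERNS" {} + 2>/dev/null)
    if [ -n "$MATCHES" ]; then
        echo "Errors found in: $MATCHES" | mailx -s "Hadoop log alert on $(hostname)" "$ALERT_TO"
    fi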

Environment: Hortonworks (HDP 2.2), HDFS, Hive, Spark, Sqoop, SQL, Cloudera, Linux, Java, AWS, Zookeeper, HBase, Oozie.

Confidential

Tivoli Administrator

Responsibilities:

  • Support customer accounts on Backup & Storage technologies.
  • Planning TSM backups with the required retention periods and defining policy domains and management classes accordingly; binding client data to the required management class so that it is stored in predefined storage pools (disk and sequential), and copying data from primary storage pools to a copy storage pool for offsite protection (illustrative commands follow this list).
  • Configuring TSM operations like expiration, migration, reclamation, collocation and media management.
  • Define and configure the client and administrative schedules, checking the status.
  • Checking error reports and performing health checks on the servers; troubleshooting any recovery log, database or storage pool related issues based on their criticality.
  • Ensuring that all backup server and tape library hardware and software are maintained at current levels, including system firmware, and that all critical hardware and corresponding software is placed on service/maintenance contracts.
  • Documentation of infrastructure, software, systems configuration, process and policies.
  • Detecting, diagnosing and resolving hardware issues (server, tape library, etc.) and interfacing with the vendor or manufacturer for hardware and software as necessary.
  • Creating daily and monthly reports on backup status for all customer accounts.
  • Working on EMC Avamar: installation, backup, restore, and configuring policies and schedules.
  • Working on Data Domain 990: creating and managing NFS and CIFS shares for backup and troubleshooting issues.
  • Working on DD OS code upgrades as suggested by the vendor.
  • Basic knowledge of EMC SAN (Celerra): creating new file systems, exporting NFS and CIFS shares and providing the required access.
  • Basic knowledge of Symantec NetBackup appliances: configuring backups, creating new policies and working on master and media server issues.
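
For illustration, the policy domain, management class and schedule work described above is typically issued from the TSM administrative command line; a minimal sketch, run from the shell via dsmadmc, might look as follows (the administrator ID, domain, pool, schedule and client names and retention values are all assumptions):

    # Define a policy domain, management class and a daily incremental schedule
    DSMADMC="dsmadmc -id=admin -password=XXXXXXXX"
    $DSMADMC "define domain PROD_DOM"
    $DSMADMC "define policyset PROD_DOM PROD_PS"
    $DSMADMC "define mgmtclass PROD_DOM PROD_PS MC_30DAY"
    $DSMADMC "define copygroup PROD_DOM PROD_PS MC_30DAY type=backup destination=DISKPOOL verexists=3 retextra=30"
    $DSMADMC "assign defmgmtclass PROD_DOM PROD_PS MC_30DAY"
    $DSMADMC "activate policyset PROD_DOM PROD_PS"
    $DSMADMC "define schedule PROD_DOM DAILY_INCR action=incremental starttime=20:00"
    $DSMADMC "define association PROD_DOM DAILY_INCR CLIENT01"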

Confidential

Tivoli Administrator

Responsibilities:

  • Planning TSM backups with the required retention periods and defining policy domains and management classes accordingly; binding client data to the required management class so that it is stored in predefined storage pools (disk and sequential), and copying data from primary storage pools to a copy storage pool for offsite protection.
  • Configuring TSM operations like expiration, migration, reclamation, collocation and media management.
  • Define and configure the client and administrative schedules, checking the status.
  • Checking error reports and performing health checks on the servers; troubleshooting any recovery log, database or storage pool related issues based on their criticality.
  • Ensuring that all backup server and tape library hardware and software are maintained at current levels, including system firmware, and that all critical hardware and corresponding software is placed on service/maintenance contracts.
  • Documentation of infrastructure, software, systems configuration, process and policies.
  • Detecting, diagnosing and resolving hardware issues (server, tape library, etc.) and interfacing with the vendor or manufacturer for hardware and software as necessary.
  • Working on EMC Avamar: installation, backup, restore, and configuring policies and schedules.
  • Working on Data Domain 990: creating and managing NFS and CIFS shares for backup and troubleshooting issues.
  • Working on DD OS code upgrades as suggested by the vendor.
  • Basic knowledge of EMC SAN (Celerra): creating new file systems, exporting NFS and CIFS shares and providing the required access.
  • Basic knowledge of Symantec NetBackup appliances: configuring backups, creating new policies and working on master and media server issues.
  • Checking administrative schedules such as DB BACKUP and BACKUP STG.
  • Configuring server processes such as expiration, storage pool migration, collocation and reclamation of tape storage pool volumes (example commands follow this list).
  • Configuring TSM server scripts to make administration easier.
  • Configuring libraries and library paths, drives and drive paths, and device classes.
  • Running AUDIT LIBRARY when there is a mismatch between the library inventory and the TSM inventory.
  • Running AUDIT VOLUME when there is an inconsistency in a volume's data, and RESTORE VOLUME when required.
  • Maintaining minimum number of scratch volumes.
  • Upgrading the library and drive firmware to the latest code levels as suggested by the vendor.
  • Working on installation of client agents for backup and restores.
  • Configuring the policy sets and groups with respective client type and managing schedules.
  • Working on backup issues and client management activities like move and retire.
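
As a sketch of the server-side housekeeping listed above, the following dsmadmc calls cover database backup, storage pool backup, expiration, reclamation and migration thresholds, and audits; the device class, pool, library and volume names are assumptions:

    # Routine TSM server maintenance issued from the shell (credentials are placeholders)
    DSMADMC="dsmadmc -id=admin -password=XXXXXXXX"
    $DSMADMC "backup db type=full devclass=LTODEV"
    $DSMADMC "backup stgpool DISKPOOL COPYPOOL"
    $DSMADMC "expire inventory"
    $DSMADMC "update stgpool TAPEPOOL reclaim=60"
    $DSMADMC "update stgpool DISKPOOL highmig=70 lowmig=30"
    $DSMADMC "audit library LIB01 checklabel=barcode"
    $DSMADMC "audit volume VOL001 fix=yes"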

Confidential

Tivoli Administrator

Responsibilities:

  • Performing health status checks on TSM server and rectifying errors on the same.
  • Extending TSM database and storage pools whenever required.
  • Creating/modifying policy domains, storage pools and management class on request.
  • Creating new nodes and associating them to backup schedule.
  • Backup status checking and troubleshooting client backup failures.
  • Restoring user data on request using general restores and point-in-time restores (a short example follows).
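
A minimal example of the point-in-time restore mentioned above, using the TSM backup-archive client; the paths, date and time are illustrative:

    # Restore a user's directory as it existed at a given date and time
    dsmc restore "/home/user1/*" /restore/user1/ -subdir=yes \
         -pitdate=03/15/2016 -pittime=18:00:00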
