
Hadoop Admin Resume


Charlotte, NC

SUMMARY

  • 6 years of professional IT experience, including around 3 years of hands-on experience in Hadoop administration using Cloudera (CDH) and Hortonworks (HDP) distributions on large distributed clusters.
  • Hands-on experience installing, configuring, and using Hadoop ecosystem components such as HDFS, MapReduce, YARN, ZooKeeper, Sentry, Sqoop, Flume, Hive, HBase, Pig, and Oozie.
  • Good working experience with Hadoop architecture and its components, including HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm.
  • Experience importing and exporting data between database tables (MySQL, Oracle) and HDFS using Sqoop.
  • Good working experience with Hadoop architecture, HDFS, MapReduce, and other components of the Cloudera Hadoop ecosystem.
  • Experience writing scripts for automation.
  • Experience in benchmarking, and in backup and disaster recovery of NameNode metadata.
  • Experience performing minor and major upgrades of Hadoop clusters (Hortonworks Data Platform 1.7 to 2.1, CDH 5.5.5 to 5.8.3).
  • Experience with multiple Hadoop distributions: Apache, Cloudera, MapR, and Hortonworks.
  • Experience securing Hadoop clusters using Kerberos and Sentry.
  • Experience with distributed computation tools such as Apache Spark and Hadoop.
  • Experience as a deployment engineer and system administrator on Linux (CentOS, Ubuntu, Red Hat).
  • Well versed in installing, configuring, and tuning Hadoop distributions (Cloudera, Hortonworks) on Linux systems.
  • Experience with Red Hat Package Manager (RPM) packaging and RPM deployments.
  • Experience with Nagios, including writing Nagios plugins to monitor Hadoop clusters.
  • Experience supporting users in debugging their job failures.
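The Nagios plugin work mentioned above can be illustrated with a minimal check sketch. The function name, the thresholds, and the idea of passing the usage percentage in as an argument (in practice it would be parsed from `hdfs dfsadmin -report`) are assumptions for illustration, not an actual plugin from these clusters.

```shell
# check_hdfs_usage: minimal Nagios-style check, sketched as a shell function.
# In real use the percentage would come from `hdfs dfsadmin -report`;
# the 75/90 thresholds are assumed values.
check_hdfs_usage() {
    usage=$1
    warn=75   # warning threshold (assumed)
    crit=90   # critical threshold (assumed)
    if [ "$usage" -ge "$crit" ]; then
        echo "CRITICAL - HDFS usage at ${usage}%"; return 2
    elif [ "$usage" -ge "$warn" ]; then
        echo "WARNING - HDFS usage at ${usage}%"; return 1
    else
        echo "OK - HDFS usage at ${usage}%"; return 0
    fi
}

check_hdfs_usage 80   # prints a WARNING line
```

Nagios interprets exit codes 0, 1, and 2 as OK, WARNING, and CRITICAL respectively, so a standalone plugin would `exit` with these codes rather than `return`.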

TECHNICAL SKILLS

Hadoop Ecosystem: MapReduce, YARN, Hive, Pig, Sqoop, Oozie, Sentry, Spark, Kafka, ZooKeeper

Hadoop Management: Cloudera Manager, Ambari

Hadoop Paradigms: MapReduce, YARN, High Availability

Other Relevant Tools: SVN, Tableau, MS Office Suite

RDBMS: Oracle 10g, MS SQL Server 2000/2003/2008 R2/2012, DB2, Teradata, MySQL.

Programming Languages: Unix/Linux shell scripting, SQL

Monitoring and Alerting: Nagios, Ganglia

Operating Systems: CentOS 5/6, Red Hat 6, Ubuntu Server 14.04 (Trusty), Windows Server 2012

PROFESSIONAL EXPERIENCE

Confidential, Charlotte,NC

Hadoop Admin

Responsibilities:

  • Responsible for cluster maintenance, monitoring, commissioning and decommissioning of nodes, troubleshooting, managing and reviewing data backups, and managing and reviewing log files.
  • Responsible for day-to-day activities, including HDFS support and maintenance, cluster maintenance, creation and removal of nodes, cluster monitoring and troubleshooting, managing and reviewing Hadoop log files, backup and restore, and capacity planning.
  • Worked with Hadoop developers and operating system admins to design scalable, supportable infrastructure for Hadoop.
  • Involved in troubleshooting issues across the Hadoop ecosystem, with an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks.
  • Responsible for scheduling jobs in Hadoop using the FIFO, Fair, and Capacity schedulers.
  • Periodically reviewed Hadoop-related logs, fixed errors, and prevented errors by analyzing warnings.
  • Installed and tested Hadoop security monitoring tools such as Imperva, Dataguise, and IBM Guardium for auditing the activities performed by users on Hadoop platforms.
  • Designed and tested Hadoop audit processes from Guardium to the Operations team's analytics tool.
  • Designed and tested Hadoop policies in Guardium to improve data accuracy and system performance.
  • Worked closely with IBM to resolve issues and improve the Hadoop monitoring process.
  • Conducted a feasibility analysis of the Guardium database-blocking preventative control, including architectural changes and initial design of blocking use cases.
  • Performed native logging and vendor tool assessments to determine alignment of DBAM monitoring with functional requirements.

Environment: Hadoop, HDFS, MapReduce, YARN, Hive, Hue, Sqoop, Kafka, ZooKeeper, Sentry, Kerberos, Cloudera CDH 5.8.3, Red Hat 6.8 Linux, Cloudera Navigator, Oracle 11g, Informatica, Guardium (IBM)
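Periodic log review of the kind described above is often partially scripted. The sketch below counts ERROR and WARN lines in a Hadoop daemon log; the sample log content is fabricated for illustration, and a real run would point at a path such as the NameNode log under /var/log/hadoop-hdfs/.

```shell
# Minimal log-triage sketch: count ERROR and WARN lines in a daemon log.
# The sample log below is fabricated so the sketch is self-contained.
log=$(mktemp)
cat > "$log" <<'EOF'
2017-03-01 10:00:01 INFO  namenode.FSNamesystem: Roll Edit Log
2017-03-01 10:00:05 WARN  hdfs.DFSClient: Slow ReadProcessor read fields
2017-03-01 10:00:09 ERROR datanode.DataNode: Block pool failure
EOF

errors=$(grep -c ' ERROR ' "$log")     # count ERROR entries
warnings=$(grep -c ' WARN ' "$log")    # count WARN entries
echo "errors=$errors warnings=$warnings"
rm -f "$log"
```

A cron-driven version of this could alert (or feed Nagios) whenever the ERROR count for the day is non-zero.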

Confidential, Dallas,TX

Hadoop Admin

Responsibilities:

  • Responsible for installing and upgrading the Hadoop environment and integrating it with other components.
  • Worked with Hadoop developers and operating system admins to design scalable, supportable infrastructure for Hadoop.
  • Responsible for operating system and Hadoop cluster monitoring using tools like Nagios, Ganglia, and Cloudera Manager.
  • Implemented NameNode high availability (HA) to avoid a single point of failure.
  • Involved in troubleshooting issues across the Hadoop ecosystem, with an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks.
  • Involved in setup, configuration, and management of security for Hadoop clusters using Kerberos, with integration with LDAP/AD at an enterprise level.
  • Responsible for scheduling jobs in Hadoop using the FIFO, Fair, and Capacity schedulers.
  • Possess good Linux and Hadoop system administration and networking skills, with familiarity with open-source configuration management and deployment tools such as Salt and Ansible.

Environment: Hadoop distribution (HDP 2.1), Red Hat Linux 6.x, shell scripting, Nagios, Ganglia, Kerberos, Hive, Pig, Sqoop, Flume, HBase, ZooKeeper, Oozie, YARN, Cloudera Manager

Confidential

System Administrator

Responsibilities:

  • Responsible for handling tickets raised by end users, including package installation, login issues, and access issues.
  • User management, including adding, modifying, deleting, and grouping accounts.
  • Responsible for preventive maintenance of the servers on a monthly basis.
  • Configured RAID for the servers.
  • Resource management using disk quotas.
  • Documented issues daily in the resolution portal.
  • Responsible for change management releases scheduled by service providers.
  • Generated weekly and monthly reports for the tickets worked on and sent them to management.
  • Managed systems operations with final accountability for smooth installation, networking, operation, and troubleshooting of hardware and software in a Linux environment.
  • Identified operational needs of various departments and developed customized software to enhance system productivity.
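Disk-quota resource management of the kind listed above typically starts with quota mount options in /etc/fstab. The device, mount point, and username below are hypothetical, sketched purely for illustration; the quota commands themselves are the standard Linux tools.

```
# Hypothetical /etc/fstab line enabling user and group quotas on /home:
/dev/sda3  /home  ext4  defaults,usrquota,grpquota  0 2

# After remounting, build the quota files and set per-user limits:
#   quotacheck -cug /home    # create aquota.user / aquota.group files
#   edquota -u someuser      # edit soft/hard block and inode limits
#   repquota -a              # report current usage against limits
```

The soft limit warns the user; the hard limit is enforced after the grace period expires.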

Confidential

Linux Administrator

Responsibilities:

  • Created and managed user accounts and assisted several users with resolving issues.
  • Set ACLs and directory and file system permissions.
  • Troubleshot Unix/Linux operational issues and network connectivity issues.
  • Troubleshot SELinux and configured and administered the local firewall (iptables).
  • Installed and configured YUM repositories and maintained Linux physical servers using Kickstart and tcpdump/network sniffer tools.
  • Managed and created file systems using fdisk and LVM.
  • Configured SSH/FTP access, Samba shares, web server, NFS server, MTA, iSCSI, and the kernel.
  • Configured and automated Puppet for managing Linux infrastructure.
  • Scheduled jobs and administrative tasks using cron.
  • Performed daily performance monitoring on Linux servers for CPU, memory, and disk utilization using the top, vmstat, mpstat, iotop, htop, free, and sar commands.
  • Diagnosed and debugged performance bottlenecks, identified missing indexes, and performed backups/restores.
  • Monitored health and performance of production systems using Nagios and HP OpenView.
  • Managed JBoss servers (maintenance, monitoring, application deployment).
  • Planned, scheduled, and implemented OS patches on RHEL.
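Cron-based scheduling of the kind described above can be sketched as crontab entries (installed via `crontab -e`). The script paths and times here are hypothetical, for illustration only; the five leading fields are minute, hour, day of month, month, and day of week.

```
# m  h  dom mon dow  command
30 1   *   *   *    /opt/scripts/backup_config.sh >> /var/log/backup_config.log 2>&1   # nightly backup at 01:30
0  *   *   *   *    df -h >> /var/log/disk_usage.log 2>&1                              # hourly disk-usage snapshot
0  8   *   *   1    /opt/scripts/weekly_report.sh                                      # weekly report, Mondays 08:00
```

Redirecting both stdout and stderr to a log file keeps failed runs visible, since cron otherwise only mails the output.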
