
Hadoop/Linux Admin Resume


Minneapolis, MN

SUMMARY

  • Around 7 years of Information Technology experience.
  • 3 years of experience in Hadoop administration and 4 years of experience in Red Hat Linux, SUSE Linux, and UNIX administration, VMware on Cisco UCS and HP servers, and SUSE Manager for mission-critical and enterprise applications.
  • Experience in installing, configuring, and using Hadoop ecosystem components like Hadoop HDFS, Yarn, MapReduce, HBase, Oozie, Hive, Sqoop, Pig, Flume, SmartSense, Storm, Kafka, Ranger, Spark, Falcon and Knox.
  • Experience in deploying Hadoop clusters on public and private cloud environments such as Cloudera, Hortonworks, Amazon AWS, Rackspace, ECS, and Isilon.
  • Experience in managing and reviewing Hadoop log files.
  • Experience in setting up High-Availability Hadoop clusters.
  • Ability to prepare documents including Technical Design, testing strategy, and supporting documents.
  • Experience in installation, configuration, support, and management of Hadoop clusters using Apache and Cloudera distributions (CDH 3, CDH 4, and YARN-based CDH 5.x).
  • Experience in installation, configuration, support, and management of Hadoop clusters using Apache and Hortonworks distributions (HDP 2.2, 2.3, 2.4, 2.5, and 2.6).
  • Hadoop cluster capacity planning, performance tuning, monitoring, and troubleshooting.
  • Experience in designing, configuring, and managing backup and disaster recovery for Hadoop data.
  • Experience in analyzing log files for Hadoop and ecosystem services and finding the root cause.
  • Experience in understanding security requirements for Hadoop and integrating with a Kerberos authentication infrastructure: KDC server setup, realm/domain creation, and ongoing management.
  • Experience in commissioning, decommissioning, balancing, and managing nodes, and tuning servers for optimal cluster performance (see the decommissioning sketch after this list).
  • As an admin, involved in cluster maintenance, troubleshooting, and monitoring, and followed proper backup and recovery strategies.
  • Experience in HDFS data storage and support for running MapReduce jobs.
  • Installing and configuring Hadoop ecosystem components like Sqoop, Pig, and Hive.
  • Knowledge of HBase and ZooKeeper.
  • Experience in importing and exporting data between HDFS and relational database systems/mainframes using Sqoop.
  • Experience with the Ambari, Nagios, and Ganglia tools.
  • Scheduling all Hadoop/Hive/Sqoop/HBase jobs using Oozie.
  • Good knowledge of evaluating big data analytics libraries (MLlib) and using Spark SQL for data exploration.
  • Experienced in Linux storage management: configuring RAID levels and logical volumes.
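
A minimal sketch of the node decommissioning flow mentioned above; the hostname and exclude-file path are illustrative and assume dfs.hosts.exclude is already set in hdfs-site.xml:

```bash
# Add the outgoing DataNode to the HDFS exclude file (path is
# illustrative; it must match dfs.hosts.exclude in hdfs-site.xml).
echo "worker07.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read its include/exclude lists; the node
# enters the Decommissioning state while its blocks re-replicate.
hdfs dfsadmin -refreshNodes

# Watch until the node reports "Decommissioned", then power it off.
hdfs dfsadmin -report | grep -A 2 "worker07"
```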

TECHNICAL SKILLS

Hadoop Framework: Hadoop MapReduce, HDFS, Hive, Pig, HBase, ZooKeeper, Oozie, Sqoop, Ranger, Storm, Kafka, Spark, Flume, Knox, and Hue.

Database: Oracle 9i/10g, DB2, SQL Server, and MySQL

Network Security: Kerberos

Monitoring Tools: Cloudera Manager, Ambari, Nagios, Ganglia.

Operating System: Red Hat Linux (5/6), UNIX, Windows 98/2000/NT, Solaris 10/11

Programming languages: C, C++, Java, Python, Linux shell scripts, VB.NET

Hardware: Cisco UCS C240 M3 and C240 M4; HP Gen8 blade and rack-mount servers; Sun Fire 280R/V480/4800/3800/12K/15K; Sun Enterprise 6500/5000/450/420R; HP 9000 K-, L-, and N-class servers; rp8xxx/7xxx; IBM RS/6000 series; HP/IBM blade servers; IBM BladeCenter platform.

Virtualization: VMware vSphere 6.0/5.5/5.1/5.0/4.1/4.0, vCenter Server 6.0/5.5/5.1/5.0/4.1/4.0, ESXi 6.0/5.5/5.1/5.0/4.1/4.0, ESX 4.0, VMware Update Manager

Storage: NetBackup; EMC Symmetrix 800, DMX 1000/2000/3000 SAN; EMC CLARiiON 700; NetApp NAS 2000/3000 series

Cluster: Oracle Real Application Cluster.

PROFESSIONAL EXPERIENCE

Confidential, Minneapolis, MN

Hadoop/Linux Admin

Responsibilities:

  • Configuring, maintaining, and monitoring Hadoop clusters using Cloudera Manager.
  • Monitoring Hadoop production clusters using Cloudera Manager and providing 24x7 on-call support.
  • Performed both major and minor upgrades to the existing Cloudera Hadoop cluster.
  • Upgraded Cloudera Manager from 5.8 to 5.12.
  • Applied patches and bug fixes on Hadoop Clusters.
  • Day-to-day responsibilities included solving developer issues, deploying code from one environment to another, provisioning access for new users, providing immediate workarounds to reduce impact, and documenting fixes to prevent recurrence.
  • Installed non-Hadoop services on the production servers.
  • Troubleshooting HBase issues.
  • Kernel Patching on data nodes using BMC tools.
  • Requested vendors (HP and Dell) to replace failed hardware on servers.
  • File system creation and extension (see the sketch after this list).
  • Commissioning and decommissioning of Hadoop nodes.
  • Involved in all maintenance activities on Hadoop production clusters.
  • Debugging and restarting non-Hadoop services.
  • Troubleshooting cluster issues and preparing runbooks.
  • Reviewing and onboarding applications to the cluster.
  • Worked on providing user support and application support on the Hadoop infrastructure.
  • Implemented schedulers on the Resource Manager to share the resources of the cluster.
  • Developed custom aggregate functions using Spark SQL to create tables per the data model and performed interactive querying.
  • Developed iterative algorithms using Spark Streaming in Scala to build near-real-time dashboards.
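
A minimal sketch of the file-system extension task noted above, assuming LVM-backed storage; the volume-group and logical-volume names are illustrative:

```bash
# Grow the logical volume by 50 GiB (VG/LV names are illustrative).
lvextend -L +50G /dev/vg_data/lv_hadoop

# Grow the filesystem online to use the new space:
resize2fs /dev/vg_data/lv_hadoop   # ext3/ext4
# xfs_growfs /data                 # xfs variant (takes the mount point)
```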

Environment: HDFS, MapReduce, YARN, HBase, Hive, Kafka, Spark, Kerberos, Pig, Sqoop, Solr, Cloudera Manager.

Confidential, Austin, TX

Hadoop Admin

Responsibilities:

  • Provided infrastructure support for multiple clusters like Production (Prod), Pre-Production (Pre-prod), Quality (QA) and Disaster Recovery (DR)
  • Installed and configured Hadoop cluster across various environments through Cloudera Manager
  • Installed and configured MySQL and enabled High Availability.
  • Installed and configured the Sentry server to enable schema-level security.
  • Installed and configured Hadoop services HDFS, Yarn, MapReduce, Spark, HBase, Oozie, Hive, Sqoop, Flume, Kafka and Sentry.
  • Configured the Fair Scheduler on the cluster, created resource pools, and managed dynamic resource allocation while monitoring resource-intensive jobs.
  • Involved in implementing High Availability and automatic failover for the NameNode using ZooKeeper services, removing it as a single point of failure.
  • Day-to-day responsibilities included solving Hadoop developer issues, providing immediate workarounds to reduce impact, and documenting fixes to prevent recurrence.
  • Interacted with Cloudera support, logged issues in the Cloudera portal, and fixed them per recommendations.
  • Experienced in upgrade, patching, and rolling-upgrade activities without data loss and with proper backup plans.
  • Integrated external components like TIBCO and Tableau with Hadoop using HiveServer2.
  • Implemented the HDFS snapshot feature and migrated data across clusters using DistCp.
  • Performed both major and minor upgrades to the existing Cloudera Hadoop cluster.
  • Integrated Hadoop with Active Directory and enabled Kerberos for Authentication.
  • Built a new sandbox cluster for testing and moved data from the secure cluster to the insecure sandbox cluster using DistCp (distributed copy), as sketched after this list.
  • Installed Kafka cluster with separate nodes for brokers.
  • Performed Kafka operations on regular basis.
  • Expertise in performance tuning; optimized Hadoop clusters to achieve high performance.
  • Implemented schedulers on the Resource Manager to share the resources of the cluster.
  • Monitoring Hadoop clusters using Cloudera Manager and providing 24x7 on-call support.
  • Expertise in designing and implementing a disaster recovery plan for the Hadoop cluster.
  • Extensive hands-on experience with Hadoop file system commands for file-handling operations.
  • Worked on providing user support and application support on the Hadoop infrastructure.
  • Prepared a System Design document with all functional implementations.
  • Worked with Sqoop import and export functionality to handle large dataset transfers between traditional databases and HDFS.
  • Experience working with Amazon EC2, S3, and Glacier.
  • Experience creating lifecycle policies in AWS S3 for backups to Glacier.
  • Created monitors, alarms, and notifications for EC2 hosts using CloudWatch.
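
A minimal sketch of the cross-cluster copy described above; NameNode URIs and paths are illustrative. Copying from a Kerberized cluster to an insecure one typically also needs the fallback-to-simple-auth property shown here:

```bash
# Copy a dataset from the secure production cluster to the sandbox;
# -update skips files that already match at the destination.
hadoop distcp \
  -D ipc.client.fallback-to-simple-auth-allowed=true \
  -update \
  hdfs://prod-nn.example.com:8020/data/warehouse \
  hdfs://sandbox-nn.example.com:8020/data/warehouse
```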

Environment: Hadoop HDFS, MapReduce, Hive, Pig, Oozie, Sqoop, Cloudera Manager, Storm, AWS S3, EC2, IAM, ZooKeeper, Spark

Confidential, Indianapolis, IN

Hadoop/Linux Admin

Responsibilities:

  • Manage several Hadoop clusters in production, development, Disaster Recovery environments.
  • Work with engineering software developers to investigate problems and make changes to the Hadoop environment and associated applications.
  • Expertise in recommending hardware configurations for Hadoop clusters.
  • Installing, upgrading, and managing Hadoop clusters on Hortonworks.
  • Troubleshooting cluster-related issues such as DataNode failures, network failures, and missing data blocks.
  • Managing and reviewing Hadoop and HBase log files.
  • Proven results-oriented person with a focus on delivery
  • Built and configured log data loading into HDFS using Flume.
  • Performed imports and exports of data into HDFS and Hive using Sqoop (see the Sqoop sketch after this list).
  • Managed cluster coordination services through ZooKeeper.
  • Provisioning, installing, configuring, monitoring, and maintaining HDFS, YARN, HBase, Flume, Sqoop, Oozie, Pig, Hive, Ranger, Falcon, SmartSense, Storm, and Kafka.
  • Recovering from node failures and troubleshooting common Hadoop cluster issues.
  • Scripting Hadoop package installation and configuration to support fully-automated deployments.
  • Supporting Hadoop developers and assisting in the optimization of MapReduce jobs, Pig Latin scripts, Hive scripts, and HBase ingestion as required.
  • Implemented Kerberos for authenticating all the services in Hadoop Cluster.
  • System/cluster configuration and health check-up.
  • Continuous monitoring and managing the Hadoop cluster through Ambari.
  • Created user accounts and granted users access to the Hadoop cluster.
  • Resolved tickets submitted by users; troubleshot, documented, and resolved the errors.
  • Performed HDFS cluster support and maintenance tasks like adding and removing nodes without any effect on running jobs or data.
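
A minimal sketch of the Sqoop import/export flow above; the JDBC URL, credentials, table names, and HDFS paths are illustrative:

```bash
# Import a MySQL table into HDFS with four parallel mappers.
sqoop import \
  --connect jdbc:mysql://db01.example.com:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders \
  --num-mappers 4

# Export curated results from HDFS back to the database.
sqoop export \
  --connect jdbc:mysql://db01.example.com:3306/sales \
  --username etl_user -P \
  --table order_summary \
  --export-dir /data/curated/order_summary
```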

Environment: Hadoop HDFS, MapReduce, Hive 0.10, Pig, Puppet, ZooKeeper, HBase, Flume, Ganglia, Sqoop, Linux, CentOS, Ambari

Confidential

Linux/Unix Admin

Responsibilities:

  • Installed and configured RHEL 5.x and 6.x on virtual machines as well as physical servers.
  • Worked on building new SUSE and Red Hat Linux servers, supported lease replacements, and implemented system patches using the HP Server Automation tool.
  • Created virtual machines, cloned virtual machines, and converted physical machines to virtual (P2V) using the standard VMware Converter tool.
  • Created VM sessions on ESX servers through the Virtual Infrastructure Client.
  • Experience in installation and implementation of SLES 10, SLES 11 and Red Hat Operating Systems.
  • Configuration, implementation, and administration of clustered servers in a SUSE Linux environment.
  • Experienced in system administration, system planning, coordination, and group-level and user-level management.
  • Experience with Adobe products on Microsoft devices such as the Surface Book and Surface Pro 4.
  • Proficient in using UNIX performance tools such as top, vmstat, iostat, netstat, and sar (see the triage sketch after this list).
  • Experience in configuring NIS, NIS+, DNS, DHCP, NFS, LDAP, Samba, Squid, Postfix, Sendmail, FTP, and remote access, with security management and security troubleshooting skills.
  • Knowledge and understanding of Red Hat Satellite and BMC BladeLogic UNIX/Linux management tools.
  • Expertise in creating and managing logical volumes on SLES Linux.
  • Performed automated installations of the SUSE Linux operating system using AutoYaST.
  • Possesses expert-level knowledge of UNIX operating systems and tools.
  • System administration experience in a UNIX production business environment with substantial knowledge of the UNIX operating system.
  • Experience with backup and recovery software such as NetBackup in a Linux environment.
  • Systems and network Planning and Administration.
  • Supported production systems 24 x 7 on a rotational basis.
  • Worked on resolving production issues and documenting Root Cause Analysis and updating the tickets using BMC Remedy.
  • Experience working in a high-performance production environment.
  • Troubleshot various system problems, including application-related, network-related, and hardware-related issues.
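
A minimal triage sketch using the standard UNIX performance tools listed above; sample counts and intervals are arbitrary:

```bash
top -b -n 1 | head -20   # one batch sample of CPU/memory consumers
vmstat 5 3               # run queue, swapping, and I/O wait over 15 seconds
iostat -x 5 3            # extended per-device utilization and latency
sar -n DEV 5 3           # per-interface network throughput
netstat -s | head -30    # protocol-level error and retransmit counters
```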

Environment: Red Hat Enterprise Linux servers (HP ProLiant DL585, BL465/BL485, ML series), Solaris 8/9/10, AIX 5.3/6.1, SAN (NetApp), BladeLogic, Veritas Cluster Server 5.0, Windows 2003 Server, shell programming, JBoss 4.2, JDK 1.5/1.6, VMware Virtual Client 3.5, VMware Infrastructure 3.5.

Confidential

Linux Systems Engineer

Responsibilities:

  • Installation, configuration, and upgrade of Linux, Solaris, AIX, Oracle Linux, and HP-UX operating systems.
  • Installation of patches and packages
  • Upgraded Solaris 8 to Solaris 9 and 10. Configured Sun SAN storage through Brocade SilkWorm switches, with fiber-optic switches for redundancy and multipathing.
  • Experience with VMware Virtualization using ESX hypervisor of vSphere 4.0.
  • Configured a Solaris JumpStart server. Backed up server data using NetBackup 6.0 and maintained security of the Solaris servers.
  • Involved in Implementing and Administrating enterprise level data backup and recovery.
  • Designed and Implemented Backup solution for the Network.
  • Installed and configured Lucene/Solr on Linux servers for Oracle database and middleware applications.
  • Installed and configured file and application servers running on Sun servers.
  • Configured and maintained network services such as LDAP, DNS, NIS, NFS, web, mail, and FTP.
  • Managed network troubleshooting for TCP/IP, including Ethernet, IP addressing and subnetting, and routing.
  • Worked on creating user accounts, user administration, and local and global groups on the Solaris platform.
  • Experience with Samba, autofs, Kerberos, LDAP, SSL certificates, and Apache HTTPD.
  • Prepared servers for Oracle RAC installation, which included tuning the kernel, installing agents, and adding NAS storage on 2-, 3-, and 4-node clusters.
  • Maintained high availability of data using Oracle Real Application Clusters.
  • Experience with shell/Bash scripting and with the Perl and Python languages.
  • Created User Accounts and Network Shares. Configured SUN Workstations as Domain Clients.
  • Administered NFS, NIS, DHCP, DNS, and Samba services running on AIX, Sun Solaris, and Red Hat Linux (see the NFS sketch after this list).
  • Experience with Veritas Volume Manager, Veritas File System, Veritas NetBackup, and Veritas Clustering in SAN and NAS environments.
  • Worked with the storage team to configure EMC SAN, NAS, and iSCSI.
  • Managed users on AIX, Solaris, HP-UX, and Linux servers and assigned rights to access network resources.
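
A minimal sketch of the NFS administration above on a Red Hat-style server; the export path, client network, and hostnames are illustrative:

```bash
# Publish a directory over NFS (read-write, synchronous writes).
echo "/export/shared 10.0.0.0/24(rw,sync)" >> /etc/exports
exportfs -ra          # re-read /etc/exports and apply the changes
service nfs restart   # SysV init; newer systems use systemctl

# On a client, mount the share:
mount -t nfs nfs01.example.com:/export/shared /mnt/shared
```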

Environment: Red Hat Linux, Veritas NetBackup, Korn shell, Bash scripting, Veritas Volume Manager, web servers, LDAP directory, Active Directory, WebLogic servers, SAN switches, Apache, Tomcat, WebSphere, WebLogic application server.
