Hadoop Administrator Resume
NC
SUMMARY:
- 8+ years of professional experience, including around 5 years as a Unix Administrator and 3+ years in Big Data analytics as a Hadoop Administrator. Experience in all phases of the data warehouse life cycle: requirement analysis, design, coding, testing, and deployment.
- Experience working with business analysts to identify, study, and understand requirements and translate them into ETL code during the requirement analysis phase. Experience in architecting, designing, installing, configuring, and managing Apache Hadoop clusters and the Cloudera Hadoop distribution, including managing Hadoop infrastructure with Cloudera Manager. Practical knowledge of the function of every Hadoop daemon, the interaction between them, resource utilization, and dynamic tuning to keep the cluster available and efficient.
- Experience in understanding and managing Hadoop log files. Experience with Hadoop's multiple data processing engines on YARN, such as interactive SQL, real-time streaming, data science, and batch processing, to handle data stored on a single platform.
- Experience in adding and removing nodes in a Hadoop cluster. Experience in managing Hadoop clusters with IBM BigInsights and HDP. Experience in extracting data from RDBMS into HDFS using Sqoop and in collecting logs from log collectors into HDFS using Flume.
- Experience in setting up and managing the Oozie batch scheduler. Experience in analyzing data in HDFS through MapReduce, Hive, and Pig. Designed, implemented, and reviewed features and enhancements to Cassandra, and deployed a Cassandra cluster in a cloud environment per requirements.
- Experience with UNIX commands and shell scripting. Extensively worked on ETL mappings and on analysis and documentation of OLAP report requirements. Solid understanding of OLAP concepts and challenges, especially with large data sets. Proficient in Oracle … SQL, MySQL, and PL/SQL.
- Experience in integrating various data sources such as Oracle, DB2, Sybase, SQL Server, and MS Access, as well as non-relational sources like flat files, into a staging area. Experience in data analysis, data cleansing (scrubbing), data validation and verification, data conversion, data migration, and data mining. Excellent interpersonal, communication, documentation, and presentation skills.
TECHNICAL SKILLS:
Hadoop/Big Data Technologies: HDFS, MapReduce, HBase, Pig, Hive, Sqoop, YARN, Flume, and Oozie
Programming Languages: Java, SQL, PL/SQL, Shell Scripting, Python
Frameworks: MVC, Spring, Hibernate.
Web Technologies: HTML, XML, JavaScript, Ajax, SOAP and WSDL
Databases: Oracle 10g/11g, SQL Server, MySQL
Database Tools: TOAD, Chordiant CRM tool, Billing tool, Oracle Warehouse Builder (OWB)
Operating Systems: Linux, Unix, Windows, Mac, CentOS
Other Concepts: OOP, Data Structures, Algorithms, Software Engineering, ETL
PROFESSIONAL EXPERIENCE:
Hadoop Administrator
Confidential, NC
Responsibilities:
- Installed, configured, and managed the Hadoop cluster using Puppet. Supported data analysts in running Pig and Hive queries. Managed the cluster configuration to meet the needs of the analysis workload, whether I/O-bound or CPU-bound.
- Documented the system configuration. Implemented NameNode High Availability to avoid a single point of failure. Used Flume to load log data into HDFS (see the agent sketch after this list).
- Managed and reviewed Hadoop log files as part of administration for troubleshooting purposes. Decommissioned failed nodes and commissioned new nodes as cluster usage grew and to accommodate more data on HDFS.
- Installed and configured Hive, a remote Hive metastore, and Apache Pig for both development and production jobs as required.
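A minimal sketch of the kind of Flume agent used to land log data in HDFS, as referenced above; the agent name, log path, and NameNode URI are hypothetical placeholders rather than details from this engagement.

```bash
# Hypothetical Flume agent definition: tail an application log and write it to HDFS.
cat > /etc/flume-ng/conf/weblog-agent.conf <<'EOF'
a1.sources  = r1
a1.channels = c1
a1.sinks    = k1

# Source: follow an application log file (path is a placeholder).
a1.sources.r1.type     = exec
a1.sources.r1.command  = tail -F /var/log/app/app.log
a1.sources.r1.channels = c1

# Channel: in-memory buffer between source and sink.
a1.channels.c1.type     = memory
a1.channels.c1.capacity = 10000

# Sink: write events into date-partitioned HDFS directories.
a1.sinks.k1.type                   = hdfs
a1.sinks.k1.channel                = c1
a1.sinks.k1.hdfs.path              = hdfs://namenode:8020/data/logs/%Y-%m-%d
a1.sinks.k1.hdfs.fileType          = DataStream
a1.sinks.k1.hdfs.useLocalTimeStamp = true
EOF

# Start the agent in the background.
flume-ng agent --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/weblog-agent.conf \
  --name a1 &
```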
Environment: Hadoop, Spark, MapReduce, Hive, HDFS, PIG, Sqoop, Oozie, Flume, Impala, ZooKeeper, Cassandra, Oracle, NoSQL and Unix/Linux
Hadoop Administrator
Confidential, Dallas, TX
Responsibilities:
- Handle the installation and configuration of a Hadoop cluster.
- Build and maintain scalable data pipelines using the Hadoop ecosystem and other open source components like Hive and Impala.
- Handle the data exchange between HDFS and different Web Applications and databases using Flume and Sqoop.
- Monitor the data streaming between web sources and HDFS.
- Closely monitor and analyze MapReduce job execution on the cluster at the task level.
- Provide input to the development team on efficient utilization of resources such as memory and CPU, based on the running statistics of Map and Reduce tasks.
- Adjust cluster configuration properties based on the volume of data being processed and on cluster performance.
- Setting up Identity, Authentication, and Authorization.
- Maintain the cluster to keep it healthy and in optimal working condition.
- Handle upgrades and patch updates.
- Set up automated processes to analyze the system and Hadoop log files for predefined errors and send alerts to the appropriate groups (a hedged cron sketch follows this list).
- Balance HDFS manually to decrease network utilization and increase job performance (see the balancing sketch after this list).
- Commission and decommission DataNodes from the cluster in case of problems.
- Set up automated processes to archive and clean unwanted data on the cluster, in particular on the NameNode and Secondary NameNode.
- Set up and manage NameNode High Availability and NameNode federation on Apache Hadoop 2.0 to avoid single points of failure in large clusters.
- Hold regular discussions with other technical teams regarding upgrades, process changes, any special processing, and feedback.
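A minimal sketch of the kind of cron-driven log scan behind the automated alerts above; the log directory, error patterns, and alert address are illustrative assumptions, not values from this cluster.

```bash
#!/bin/bash
# Hypothetical log-scan job: look for predefined error patterns in the Hadoop
# daemon logs and mail the matching lines to the on-call group.
LOG_DIR=/var/log/hadoop-hdfs             # assumed log location
PATTERNS='FATAL|ERROR|OutOfMemoryError'  # assumed error patterns
ALERT_TO=hadoop-oncall@example.com       # placeholder address

MATCHES=$(grep -E -h "$PATTERNS" "$LOG_DIR"/*.log 2>/dev/null | tail -n 50)
if [ -n "$MATCHES" ]; then
    echo "$MATCHES" | mail -s "Hadoop log alert on $(hostname)" "$ALERT_TO"
fi

# Example crontab entry to run the scan every 15 minutes:
#   */15 * * * * /usr/local/bin/hadoop_log_alert.sh
```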
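The manual HDFS balancing and node decommissioning above typically reduce to a few administrative commands; a hedged sketch, with the exclude-file path and hostname as assumptions:

```bash
# Rebalance HDFS so no DataNode deviates more than 10% from average utilization.
hdfs balancer -threshold 10

# Decommission a problem DataNode: add it to the excludes file referenced by
# dfs.hosts.exclude (path and hostname are placeholders), then have the
# NameNode re-read the list.
echo "datanode07.example.com" >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes

# Verify decommissioning progress in the DataNode report.
hdfs dfsadmin -report | grep -A 2 "Decommission"
```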
Environment: Hadoop, Spark, MapReduce, Hive, HDFS, PIG, Sqoop, Oozie, Cloudera, Flume, Impala, ZooKeeper, CDH5.4.5, Cassandra, Oracle, NoSQL and Unix/Linux.
Hadoop Administrator
Confidential, Dallas, TX
Responsibilities:
- Installed and configured Hadoop and responsible for maintaining cluster and managing and reviewing Hadoop log files.
- Load data from various data sources into HDFS using Flume.
- Worked on the Cloudera platform to analyze data stored in HDFS.
- Worked extensively on Hive and PIG.
- Worked on large sets of structured, semi-structured, and unstructured data.
- Used Sqoop to import and export data between HDFS and RDBMS (a command-line sketch follows this list).
- Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs (see the Hive sketch after this list).
- Participated in design and development of scalable and custom Hadoop solutions as per dynamic data needs.
- Coordinated with technical team for production deployment of software applications for maintenance.
- Good knowledge of reading data from and writing data to Cassandra.
- Provided operational support services relating to Hadoop infrastructure and application installation.
- Handled the imports and exports of data onto HDFS using Flume and Sqoop.
- Supported technical team members in management and review of Hadoop log files and data backups.
- Participated in development and execution of system and disaster recovery processes.
- Formulated procedures for installation of Hadoop patches, updates and version upgrades.
- Automated processes for troubleshooting, resolution and tuning of Hadoop clusters.
- Set up automated processes to send alerts in case of predefined system and application level issues.
- Set up automated processes to send notifications in case of any deviations from the predefined resource utilization.
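A hedged sketch of the Sqoop round trip mentioned above; the JDBC URL, credentials, table names, and directories are placeholders rather than details from this project.

```bash
# Import an RDBMS table into HDFS with four parallel mappers.
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username etl_user --password-file /user/etl/.db_password \
  --table CUSTOMERS \
  --target-dir /data/staging/customers \
  -m 4

# Export processed results from HDFS back into an RDBMS table.
sqoop export \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username etl_user --password-file /user/etl/.db_password \
  --table CUSTOMER_SUMMARY \
  --export-dir /data/output/customer_summary
```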
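A minimal illustration of the Hive workflow described above, with a made-up table and paths; the aggregation query is compiled by Hive into MapReduce jobs.

```bash
# Create a Hive table, load staged data into it, and run an aggregation that
# Hive executes as MapReduce. Table name and paths are illustrative only.
hive -e "
  CREATE TABLE IF NOT EXISTS web_logs (
    ip     STRING,
    ts     STRING,
    url    STRING,
    status INT
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

  LOAD DATA INPATH '/data/staging/web_logs' INTO TABLE web_logs;

  SELECT status, COUNT(*) AS hits
  FROM web_logs
  GROUP BY status;
"
```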
Environment: Red Hat Linux/CentOS 4, 5, 6, Logical Volume Manager, Hadoop, VMware ESX 5.1/5.5, Apache and Tomcat web servers, Oracle 11, 12, Oracle RAC 12c, HPSM, HPSA.
Hadoop Admin
Confidential, GA
Responsibilities:
- Installed/Configured/Maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Zookeeper and Sqoop.
- Wrote shell scripts to monitor the health of Hadoop daemon services and respond to any warning or failure conditions (a simplified health-check sketch follows this list).
- Managing and scheduling Jobs on a Hadoop cluster.
- Deployed the Hadoop cluster in pseudo-distributed and fully distributed modes.
- Implemented NameNode backup using NFS for high availability.
- Worked on importing and exporting data from Oracle and DB2 into HDFS and HIVE using Sqoop.
- Developed PIG Latin scripts to extract the data from the web server output files to load into HDFS.
- Created Hive external tables, loaded data into the tables, and queried the data using HQL.
- Wrote shell scripts to automate rolling day-to-day processes.
- Collected log data from web servers and ingested it into HDFS using Flume.
- Implemented the Fair Scheduler on the JobTracker to share cluster resources among users' MapReduce jobs (a hedged configuration sketch follows this list).
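A simplified version of the daemon health check referred to above; the daemon list (an MRv1-style node) and the alert address are assumptions, and in practice the list would be tailored to each node's role.

```bash
#!/bin/bash
# Hypothetical health check: verify that the expected Hadoop daemons are
# running on this node and mail an alert for any that are missing.
DAEMONS="NameNode SecondaryNameNode DataNode JobTracker TaskTracker"
ALERT_TO=hadoop-admins@example.com   # placeholder address

for d in $DAEMONS; do
    if ! jps | grep -qw "$d"; then
        echo "$(date): $d is not running on $(hostname)" |
            mail -s "Hadoop daemon down: $d" "$ALERT_TO"
    fi
done
```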
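A hedged sketch of enabling the MRv1 Fair Scheduler on the JobTracker; the pool name, minimum shares, and restart command are illustrative assumptions.

```bash
# Two properties in mapred-site.xml select the Fair Scheduler and point it at
# an allocations file:
#   mapred.jobtracker.taskScheduler      = org.apache.hadoop.mapred.FairScheduler
#   mapred.fairscheduler.allocation.file = /etc/hadoop/conf/fair-scheduler.xml

# Example allocations file: one pool with a guaranteed slot share and extra weight.
cat > /etc/hadoop/conf/fair-scheduler.xml <<'EOF'
<?xml version="1.0"?>
<allocations>
  <pool name="analytics">
    <minMaps>10</minMaps>
    <minReduces>5</minReduces>
    <weight>2.0</weight>
  </pool>
</allocations>
EOF

# Restart the JobTracker so it picks up the scheduler change
# (service name varies by distribution).
service hadoop-0.20-mapreduce-jobtracker restart
```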
Environment: Solaris 9/10, Red Hat Linux 4/5, BMC Tools, NAGIOS, Veritas NetBackup, Bash scripting, Veritas Volume Manager, web servers, LDAP directory, Active Directory, BEA WebLogic servers, SAN switches, Apache, Tomcat servers, WebSphere Application Server.
Linux Administrator
Confidential, Mooresville, NC
Responsibilities:
- Installing and upgrading OE & Red Hat Linux and Solaris 8/ on SPARC, on servers such as HP DL380 G3, G4, and G5 and Dell PowerEdge servers.
- Experience with LDOMs; created sparse-root and whole-root zones, administered the zones for web, application, and database servers, and worked with SMF on Solaris 10.
- Experience working in AWS cloud environments with EC2 & EBS.
- Implemented and administered VMware ESX 3.5 and 4.x for running Windows, CentOS, SUSE, and Red Hat Linux servers on development and test hardware.
- Installed and configured Apache on Linux and Solaris, configured virtual hosts, and applied SSL certificates.
- Implemented Jumpstart on Solaris and Kickstart for Red Hat environments.
- Experience working with HP LVM and Red Hat LVM.
- Experience in implementing P2P and P2V migrations.
- Involved in installing and configuring CentOS and SUSE 11 and 12 servers on HP x86 servers.
- Implemented HA using Red Hat Cluster and VERITAS Cluster Server 5.0 for the WebLogic agent.
- Managing DNS, NIS servers and troubleshooting the servers.
- Troubleshooting application issues on Apache web servers as well as database servers running on Linux and Solaris.
- Experience in migrating Oracle and MySQL data using Double-Take products.
- Used Sun Volume Manager on Solaris and LVM on Linux and Solaris to create volumes with layouts such as RAID 1, 5, 10, and 51.
- Recompiling the Linux kernel to remove services and applications that are not required.
- Performed performance analysis using tools such as prstat, mpstat, iostat, sar, vmstat, truss, and DTrace.
- Experience working with LDAP user accounts and configuring LDAP on client machines.
- Upgraded ClearCase from 4.2 to 6.x running on Linux (CentOS & Red Hat).
- Worked on patch management tools like Sun Update Manager.
- Experience supporting middleware servers running Apache, Tomcat, and Java applications.
- Worked on day-to-day administration tasks and resolved tickets using Remedy.
- Used HP Service Center and the change management system for ticketing.
- Worked on administration of WebLogic 9 and JBoss 4.2.2 servers, including installation and deployments.
- Worked on F5 load balancers to load balance and reverse proxy WebLogic servers.
- Shell scripting to automate regular tasks such as removing core files, backing up important files, and transferring files among servers (a hedged sketch follows this list).
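A minimal sketch of that kind of housekeeping script; the retention period, paths, and backup host are illustrative assumptions.

```bash
#!/bin/bash
# Hypothetical housekeeping job: purge old core files, archive key configuration,
# and ship the archive to a backup host. All paths and hosts are placeholders.
BACKUP_HOST=backup01.example.com
BACKUP_DIR=/var/backups
STAMP=$(date +%Y%m%d)

# Remove core files older than 7 days.
find / -xdev -type f \( -name core -o -name 'core.[0-9]*' \) -mtime +7 -delete 2>/dev/null

# Archive important configuration files.
tar -czf "$BACKUP_DIR/etc-$STAMP.tar.gz" /etc 2>/dev/null

# Transfer the archive to the backup server.
scp "$BACKUP_DIR/etc-$STAMP.tar.gz" "$BACKUP_HOST:/backups/$(hostname)/"
```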
Environment: Solaris 8/9/10, VERITAS Volume Manager, web servers, LDAP directory, Active Directory, BEA WebLogic servers, SAN switches, Apache, Tomcat servers, WebSphere Application Server.
Linux/Systems Administrator
Confidential, Jacksonville, FL
Responsibilities:
- Installing, configuring, and updating Solaris 7 and 8, Red Hat 7.x, 8, and 9, and Windows NT/2000 systems using installation media, Jumpstart, and Kickstart.
- Installing and configuring Windows 2000 Active Directory servers and Citrix servers.
- Published and administered applications via Citrix MetaFrame.
- Creating and Authenticating Windows user accounts on Citrix server.
- Creating System Disk Partition, mirroring root disk drive, configuring device groups in UNIX and Linux environment.
- Working with VERITAS Volume Manager 3.5 and Logical Volume Manager for file system management, data backup and recovery.
- User administration, which included creating backup accounts for new users and deleting accounts for retired or removed users. Implemented a backup solution using a Dell T120 autoloader and CA ARCserve 7.0; managed tape drives and recycled tapes after a specified period per the firm's policies.
- Worked with DBAs on writing scripts to take database backups and scheduled the backups using cron jobs (a hedged crontab sketch follows this list). Created UNIX and Perl scripts for automated data backups and storage status reporting.
- Installed and configured Oracle 8i databases and a Sybase server on Solaris after creating the file systems and users and tuning the kernel. Installed and configured SSH Gate for remote, secured connections.
- Set up labs from scratch: tested hardware and installed and configured various hardware devices such as printers, scanners, modems, and network and communication devices. Configured DHCP, DNS, NFS, and the automounter.
- Creating, troubleshooting, and mounting NFS file systems on different OS platforms. Installing, configuring, and troubleshooting various software packages such as Windd, Citrix - Clarify, Rave, VPN, SSH Gate, Visio 2000, Star Application, Lotus Notes, mail clients, Business Objects, Oracle, and Microsoft Project.
- Troubleshooting and solving problems related to users, applications, hardware, etc. Worked 24/7 on call for application and system support. Experience working with and supporting the SIBES database running on Linux servers.
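A hedged example of the cron scheduling described above; the script paths, schedule, and log locations are assumptions rather than the firm's actual setup.

```bash
# Hypothetical crontab entries (edit with: crontab -e): nightly database backup
# at 02:00 and a weekly storage-status report, each logging its output.
0 2 * * *  /usr/local/dba/bin/db_backup.sh       >> /var/log/db_backup.log       2>&1
0 6 * * 1  /usr/local/adm/bin/storage_report.pl  >> /var/log/storage_report.log  2>&1
```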
Environment: HP Proliant Servers, SUN Servers (6500, 4500, 420, Ultra 2 Servers), Solaris 7/8, VERITAS Net Backup, VERITAS Volume Manager, Samba, NFS, NIS, LVM, Linux, Shell Programming.