
BI and Unix System Administrator Resume


Dallas, TX

SUMMARY

  • 9+ years of IT experience, including 4+ years in Hadoop administration and 5+ years in Business Intelligence and Unix administration.
  • Experience in Hadoop administration (HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, HBase) and NoSQL administration.
  • Hands-on experience installing, configuring, and using Hadoop ecosystem components such as MapReduce, Hive, Pig, Sqoop, HBase, Zookeeper, and Oozie.
  • Expertise in multiple Hadoop distributions, including Hortonworks Data Platform (HDP 2.1 - 2.3) and Cloudera Distribution Hadoop (CDH 4, CDH 5).
  • Experience installing Hadoop patches and performing major and minor version upgrades of Apache, Cloudera, and Hortonworks distributions.
  • Experience monitoring workload and job performance and collecting metrics for Hadoop clusters using Ganglia, Nagios, and Cloudera Manager.
  • Experienced in Hadoop 1.x and 2.x installation and configuration, including setting up Hadoop clients both online and through package installation.
  • Capacity planning; hands-on experience in various aspects of database systems as well as installing, configuring, and maintaining Hadoop clusters.
  • Good understanding of network protocols, routing and switching equipment, network operating systems, servers, and different overall network architectures.
  • Handled issues related to cluster startup, node failures, and Java-specific errors on the system.
  • Adept in Hadoop cluster expansion and planning.
  • Experience in the architecture, design, and development of Big Data platforms, including large clusters.
  • Worked on Hadoop ecosystem projects and custom MapReduce jobs; handled user management, cluster management, NoSQL database setup, and security design and implementation.
  • Experience importing and exporting data between HDFS and relational database systems/mainframes using Sqoop (see the Sqoop sketch after this list).
  • Experienced in service monitoring, service and log management, auditing and alerts, Hadoop platform security, and Kerberos configuration.
  • Experience importing and exporting logs using Flume.
  • Optimized the performance of HBase, Hive, and Pig jobs.
  • Experience in the installation and configuration of AIX IBM pSeries machines.
  • Experience in HACMP configuration and management, with knowledge of VIO.
  • Performed TL upgrades through a NIM server on the AIX operating system.
  • Handled boot-related issues, installed OS on LPARs using NIM, and restored servers using mksysb images.
  • Used maintenance mode to fix boot-related issues: running fsck on rootvg file systems, repairing jfs or jfs2 log devices, and running bosboot.
  • Replaced faulty disks and ran diag against server hardware to diagnose issues.
  • Exported volume groups from one node to another, added new disks to volume groups, and replaced failed devices.
  • Took server backups such as mksysb, savevg, and other file- or directory-level backups using the backup utility; managed file systems, created file systems, and checked error reports on servers.
  • Administration experience with Cognos 8.2/8.4; applied data-level and user-level security to Framework Manager packages and reports.
  • Experience installing the Cognos 8.4 suite of tools and configuring LDAP for secured authentication.
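
A minimal sketch of the Sqoop import/export pattern referenced above; the JDBC URL, credentials, tables, and HDFS paths are hypothetical placeholders.

    # Import a relational table into HDFS (all names are illustrative)
    sqoop import \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /data/raw/orders \
      --num-mappers 4

    # Export a result set from HDFS back to the database
    sqoop export \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user -P \
      --table order_summary \
      --export-dir /data/out/order_summary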

TECHNICAL SKILLS

Big Data Technologies: Hadoop distributions (CDH 4.7, 5.3 and HDP 2.1), Red Hat Linux 6.x/7.x, Solaris 11, Nagios, Ganglia monitoring, Kerberos, Shell scripting, Python scripting, Java, Hive, Pig, Sqoop, Flume, HBase, Zookeeper, Oozie, YARN, Cloudera Manager, Ambari Console, Hue, AWS, EC2.

Servers: Apache Tomcat server, Apache HTTP web server.

Scripting Languages: Shell Scripting, Python.

Languages: C, SQL, PL/SQL, Java, PHP.

Operating Systems: AIX 5L, 6.1, Red Hat Linux, UNIX.

PROFESSIONAL EXPERIENCE

Confidential, Irving, TX

Sr. Hadoop Administrator

Responsibilities:

  • Experience supporting multiple large-scale development, QA, and production Hadoop cluster environments.
  • Installed and configured Cloudera Manager, Hive, Pig, Sqoop, and Oozie on the CDH4 cluster.
  • Installed NameNode, Secondary NameNode, YARN (ResourceManager, NodeManager, ApplicationMaster), and DataNode services.
  • Monitored multiple Hadoop cluster environments using Cloudera Manager; monitored workload and job performance and collected metrics when required.
  • Experience configuring spouts and bolts in various Apache Storm topologies and validating data in the bolts.
  • Implemented Storm builder topologies to perform cleansing operations before moving data into Cassandra.
  • Configured Spark Streaming to receive real-time data from Kafka and store the stream data to HDFS using Scala.
  • Maintained and troubleshot Hadoop core and ecosystem components (HDFS, MapReduce, NameNode, DataNode, JobTracker, TaskTracker, Zookeeper, YARN, Oozie, Hive, Hue, Flume, HBase, and the Fair Scheduler).
  • Worked on cluster installation, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slot configuration in the MapR Control System (MCS).
  • Added new nodes to an existing cluster and recovered from NameNode failures.
  • Performed an upgrade of the development environment from CDH 4.2 to CDH 4.6.
  • Collected and aggregated large amounts of log data using Apache Flume and staged it in HDFS for further analysis.
  • Installed various Hadoop ecosystem components and Hadoop daemons, and maintained and monitored Hadoop clusters using Cloudera Manager.
  • Recovered from node failures and troubleshot common Hadoop cluster issues.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Performed load balancing on the Hadoop cluster and moved data from HDFS into HBase using Pig and Hive.
  • Designed and developed ETL workflows using Oozie for business requirements, including automating the extraction of data from a MySQL database into HDFS using Sqoop scripts.
  • Set up automated processes to archive and clean unwanted data on the cluster, particularly on the NameNode and Secondary NameNode.
  • Configured the replication factor and block size for HDFS in the Hadoop cluster (see the replication sketch after this list).
  • Checked HBase region consistency and table integrity problems and repaired corrupted HBase tables (see the hbck sketch after this list).
  • Developed Sqoop scripts to move data between Hive and the MySQL database.
  • Loaded data into the cluster from dynamically generated files using Flume and from relational database management systems using Sqoop.
  • Provisioned, configured, monitored, and maintained HDFS, YARN, HBase, Flume, Sqoop, Oozie, Pig, and Hive.
  • Used Oozie for job scheduling in the Hadoop cluster (see the Oozie sketch after this list).
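
A sketch of the replication and block-size configuration mentioned above; the path and values are illustrative, and the property names assume a Hadoop 2.x/CDH-era hdfs-site.xml.

    # Raise the replication factor of existing files and wait for completion
    hdfs dfs -setrep -w 3 /data/raw

    # Cluster-wide defaults are set in hdfs-site.xml, e.g.:
    #   dfs.replication = 3
    #   dfs.blocksize   = 134217728   # 128 MB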
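
A sketch of the HBase consistency checks mentioned above; repair flags differ across HBase versions, so treat these as illustrative rather than definitive.

    # Report region consistency and table integrity problems
    hbase hbck

    # Typical repair flags on 0.9x-era hbck; verify against your release first
    hbase hbck -fixAssignments -fixMeta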
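
A sketch of Oozie job scheduling as used above; the Oozie URL, properties file, and job ID are hypothetical.

    # Submit and start a workflow whose definition lives in HDFS
    oozie job -oozie http://oozie.example.com:11000/oozie \
      -config job.properties -run

    # Check the status of a running workflow (ID is illustrative)
    oozie job -oozie http://oozie.example.com:11000/oozie \
      -info 0000012-200101010000001-oozie-oozi-W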

Confidential, Seattle, WA

Hadoop Administrator

Responsibilities:

  • Installed and configured Hive, Pig, Sqoop, Flume, Cloudera Manager, and Oozie on the Hadoop cluster.
  • Planned production cluster hardware and software installation and coordinated with multiple teams to complete it.
  • Installed Hadoop patches, updates, and version upgrades when required.
  • Involved in implementing high availability and automatic failover for the NameNode, using ZooKeeper services to remove the single point of failure.
  • Worked with big data developers, designers, and scientists to troubleshoot MapReduce and Hive jobs and tune them for high performance.
  • Monitored Hadoop cluster connectivity and security; managed and reviewed Hadoop log files; handled file system management and monitoring; provided HDFS support and maintenance.
  • Designed and developed ETL workflows using Oozie for business requirements, including automating the extraction of data from a MySQL database into HDFS using Sqoop scripts.
  • Automated the end-to-end workflow from data preparation to the presentation layer for the Artist Dashboard project using shell scripting.
  • Developed MapReduce programs to extract and transform data sets; the resulting data sets were loaded into Cassandra.
  • Performed monthly Linux server maintenance, including controlled shutdowns of Hadoop NameNode and DataNode services.
  • Secured the Hadoop cluster by implementing Kerberos (see the Kerberos sketch after this list).
  • Checked HBase region consistency and table integrity problems and repaired corrupted HBase tables.
  • Developed Sqoop scripts to move data between Hive and the MySQL database.
  • Loaded data into the cluster from dynamically generated files using Flume and from relational database management systems using Sqoop.
  • Collaborated with the infrastructure, network, database, application and BI teams to ensure data quality and availability.
  • Balanced the Hadoop cluster using the balancer utility to spread data evenly across the cluster (see the balancer sketch after this list).
  • Implemented data ingestion and processing with tools such as Pig and Hive in the production environment.
  • Commissioned and decommissioned Hadoop nodes (see the decommissioning sketch after this list).
  • Involved in cluster capacity planning and expansion of the existing environment.
  • Scripted regular health checks of the system using Hadoop metrics.
  • Provided 24x7 support for the Hadoop environment.
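
A minimal sketch of the Kerberos setup implied above, assuming an MIT KDC; the realm, hostname, and keytab path are placeholders.

    # Create a service principal for an HDFS daemon host
    kadmin.local -q "addprinc -randkey hdfs/dn1.example.com@EXAMPLE.COM"

    # Export its keytab for the daemon to authenticate with
    kadmin.local -q "xst -k /etc/hadoop/conf/hdfs.keytab hdfs/dn1.example.com@EXAMPLE.COM"

    # Verify a ticket can be obtained from the keytab
    kinit -kt /etc/hadoop/conf/hdfs.keytab hdfs/dn1.example.com@EXAMPLE.COM
    klist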
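
A sketch of the balancer run referenced above; the threshold and bandwidth values are illustrative.

    # Optionally raise the per-DataNode balancing bandwidth (bytes/sec)
    hdfs dfsadmin -setBalancerBandwidth 104857600

    # Move blocks until every DataNode is within 10% of average utilization
    hdfs balancer -threshold 10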
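
A sketch of the decommissioning flow above, assuming the classic exclude-file approach; the file path and hostname are placeholders.

    # Add the host to the file referenced by dfs.hosts.exclude
    echo "dn3.example.com" >> /etc/hadoop/conf/dfs.exclude

    # Ask the NameNode to re-read its host lists and begin decommissioning
    hdfs dfsadmin -refreshNodes

    # Watch until the node reports "Decommissioned", then power it down
    hdfs dfsadmin -report | grep -A 2 dn3.example.com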

Confidential, Wilmington, DE

BI Administrator

Responsibilities:

  • Configured and supported the Cognos web and application server environments from non-prod through production via the SDLC.
  • Experience in Service provisioning and implementing the Cognos authentication/authorization model for Users, Groups, Roles, and Capabilities.
  • Configuring Windows IIS and IBM HTTP web services.
  • Experience implementing security in the Cognos Connection Portal for Public folders, Configuration, Security, Status, etc.
  • Hands on experience creating objects throughout the full spectrum of the Cognos Connection Portal.
  • Knowledge of the Cognos 8-10 cube building processes and its deployment features.
  • Experience working with product vendors to provide application support and resolving multiple PMR tickets simultaneously.
  • Working knowledge of the various monitoring features and metrics incorporated in the Cognos Connection Portal
  • Ability to monitor multiple Cognos environments simultaneously.
  • Experienced with Change Control procedures and software version control.
  • Basic knowledge of LDAP design and experience with LDAP administration (see the LDAP sketch after this list).
  • Ability to work on simultaneous projects and capable of rapidly responding to production related issues, daily tasks and responsibilities.
  • Available for routine production support duties during off-hours and weekends.
  • Performed product upgrades and patching, as well as deployment of application code and the required security within Cognos features and functions.
  • Working knowledge of integrating the software components into a working environment, accounting for desktop/server components as well as surrounding systems that utilize Cognos.
  • Experienced with Cognos tools and diagnostics to assist the application teams as needed.
  • Experienced in performing these functions on both Windows and UNIX platforms.
  • Interfaced with both the application teams and the vendor to gather and present details on requirements, problems, and diagnostics for root cause analysis to peers, partners, coworkers, vendors, and management.
  • Identified risk impact and provided proactive plans to ensure platform and host performance.
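
A sketch of the kind of LDAP lookup used when validating directory entries for Cognos authentication; the host, base DN, filter, and attributes are hypothetical.

    # Confirm a user entry exists before mapping it into a Cognos group/role
    ldapsearch -x -H ldap://ldap.example.com \
      -b "dc=example,dc=com" "(uid=jdoe)" cn mail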

Confidential, Dallas, TX

BI and Unix System Administrator

Responsibilities:

  • Installed patches and brought AIX servers up to date.
  • Built servers, installed and migrated operating systems, and installed new software/filesets, packages, and other third-party applications when required by the business.
  • Created users and groups; worked with user security settings, home directories, and password administration.
  • Used the alternate disk installation method when upgrading or installing TLs on the AIX operating system.
  • Configuring NIM Master, creating NIM resources for network installation on clients.
  • Handled boot-related issues, installed OS on LPARs using NIM, and restored servers using mksysb images.
  • Used maintenance mode to fix boot-related issues: running fsck on rootvg file systems, repairing jfs or jfs2 log devices, and running bosboot.
  • Worked on access control lists, configured sudo, and granted identified groups of users access to defined sets of commands.
  • Configured secure shell: generated public/private key pairs for authentication and set up passwordless logins (see the SSH sketch after this list).
  • Built NFS servers, configured NFS exports and automounts, and troubleshot them.
  • Scheduled cron jobs and checked their success or failure in the log files (see the cron sketch after this list).
  • Created new logical volumes and file systems; handled file system changes such as changing mount points and resizing file systems (see the LVM sketch after this list).
  • Excellent troubleshooting skills with the Logical Volume Manager; good hands-on knowledge of high-level LVM commands.
  • Took server backups such as mksysb, savevg, and other file- or directory-level backups using the backup utility (see the backup sketch after this list); managed file systems, created file systems, and checked error reports on servers.
  • HACMP: monitored servers, started and stopped cluster services, moved resource groups across nodes, grew file systems within the cluster, and synchronized and verified cluster resources.
  • Performed VIO server operations: created virtual adapters, configured Ethernet or LUNs for LPARs, and performed other troubleshooting; created EtherChannels with link aggregation.
  • Configured and supported the Cognos web and application server environments from non-prod through production via the SDLC.
  • Configuring Windows IIS and IBM HTTP web services.
  • Experience implementing security in the Cognos Connection Portal for Public folders, Configuration, Security, Status, etc.
  • Hands on experience creating objects throughout the full spectrum of the Cognos Connection Portal.
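
A sketch of the passwordless SSH setup above; the user and hostname are placeholders, and ssh-copy-id is assumed available (otherwise append the public key to ~/.ssh/authorized_keys by hand).

    # Generate a key pair (set a passphrase if policy requires one)
    ssh-keygen -t rsa -b 4096

    # Install the public key on the target host
    ssh-copy-id admin@aixhost1.example.com

    # Subsequent logins no longer prompt for a password
    ssh admin@aixhost1.example.com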
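
A sketch of the cron scheduling above; the script and log paths are hypothetical, and the cron log location shown is the AIX default.

    # Entry added via `crontab -e`: nightly health check at 02:30
    30 2 * * * /usr/local/bin/healthcheck.sh >> /var/log/healthcheck.log 2>&1

    # Confirm the entry and check cron's own log for success or failure
    crontab -l
    grep healthcheck /var/adm/cron/log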
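
A sketch of the AIX logical volume and file system tasks above; the volume group, logical volume, mount point, and sizes are placeholders.

    # Create a JFS2 logical volume (10 LPs) and a file system on it
    mklv -y lv_app -t jfs2 datavg 10
    crfs -v jfs2 -d lv_app -m /app -A yes
    mount /app

    # Grow the file system by 1 GB in place (size units supported on newer AIX)
    chfs -a size=+1G /app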
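
A sketch of the backup commands above; the target paths are illustrative.

    # rootvg system image, regenerating /image.data first (-i)
    mksysb -i /backup/$(hostname).mksysb

    # Back up a user volume group to a file
    savevg -f /backup/datavg.savevg datavg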

Confidential

System Administrator

Responsibilities:

  • Installation, configuration and disk management.
  • Attended to daily issues such as file system management, housekeeping, and checking disk usage.
  • Analyzing log files, checking error report and taking pro-active steps.
  • User and password administration, OS Hardening and working with security related settings.
  • Used the alternate disk installation method when upgrading or installing TLs or MLs on the AIX operating system.
  • Interacting with other teams such as DB team and Web team to solve and fix any related issues.
  • Performed daily health checks and kept track of change records.
  • Disk and volume management with AIX LVM; replaced faulty physical volumes in rootvg and other user-defined volume groups (see the disk-replacement sketch after this list).
  • Configured redundant logical volumes and file systems to ensure high availability.
  • Fixed boot-related issues: worked with the boot list in SMS mode and ran bosboot to repair boot problems.
  • Worked with Network File Systems: configuration and troubleshooting of network share issues.
  • Scheduling jobs using Crontab - editing, removing and checking cron logs.
  • Troubleshooting all aspects of the operating environments.
  • Performance tuning, networking, system security, IO monitoring and analysis.
  • Planned and implemented system upgrades, patches, software installations, and hardware additions; stayed current with technical developments in the area of expertise.
  • Configured new devices using cfgmgr; replaced disks in case of failures.
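
A sketch of the rootvg disk-replacement flow above, assuming a mirrored rootvg; the disk names are placeholders.

    # Drop the failing mirror and remove the disk from the volume group
    unmirrorvg rootvg hdisk1
    reducevg rootvg hdisk1
    rmdev -dl hdisk1

    # After the physical swap: rediscover, re-add, re-mirror, rebuild boot
    cfgmgr
    extendvg rootvg hdisk1
    mirrorvg rootvg hdisk1
    bosboot -ad /dev/hdisk1
    bootlist -m normal hdisk0 hdisk1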
