
Oracle DBA Resume


Minneapolis, MN

SUMMARY

  • Over 7 years of professional IT experience, including 4 years of experience with the Hadoop ecosystem, installing and configuring Hadoop ecosystem components in existing clusters.
  • Expertise in Big Data technologies as an administrator, with proven capability in project-based teamwork as well as individual contribution, and good communication skills.
  • Experience with Cloudera CDH and Apache distributions of Hadoop clusters; installed, configured, and deployed Cloudera clusters with AD/LDAP.
  • Experience in installation, configuration, supporting and monitoring Hadoop clusters.
  • Experience in working with MapReduce using Apache Hadoop for working with Big Data.
  • Hands-on experience maintaining Hadoop ecosystem components such as YARN, Hive, Impala, HBase, Hue, and Sentry.
  • Involved in installing Hadoop ecosystem components and using Sqoop to import data into the cluster.
  • Knowledge of the Capacity Scheduler and its configuration.
  • Experience in designing, developing and implementing connectivity products that allow efficient exchange of data between our core database engine and the Hadoop ecosystem.
  • Good knowledge of Windows server support and maintenance; analyzed the hardware and software requirements of Active Directory.
  • Installation and configuration of Linux servers (RedHat).
  • Experience in Hadoop administration (HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, and HBase) and NoSQL administration.
  • Experience in working with Hadoop clusters using AWS EMR, Cloudera (CDH5), and HortonWorks Distributions.
  • Hands-on experience installing, configuring, and using Hadoop ecosystem components such as MapReduce (MR), HDFS, HBase, Oozie, Hive, Sqoop, Spark, Spark Streaming, Kafka, Cassandra, Scala, Pig, Knox, and Flume.
  • Hands-on implementation experience in a Big Data Management Platform (BMP) using HDFS, MapReduce, Hive, Pig, Oozie, Apache Kite, and other ecosystem components as data storage and retrieval systems.
  • Performed importing and exporting data into HDFS and Hive using Sqoop.
  • Experience in managing and reviewing Hadoop log files.
  • Experience in analyzing clients' Hadoop infrastructure, identifying performance bottlenecks, and tuning performance accordingly.
  • Good experience installing, configuring, and testing Hadoop ecosystem components.
  • Capacity planning; hands-on experience with various aspects of database systems, as well as installing, configuring, and maintaining Hadoop clusters.
  • Well-experienced in Hadoop cluster expansion and planning.
  • Experience in designing both time driven and data driven automated workflows using Oozie.
  • Experience in installation, configuration, supporting and managing - Cloudera Hadoop platform along with CDH5 clusters.
  • Worked on Disaster Management with Hadoop Cluster.
  • Experienced in service monitoring, service and log management, auditing and alerts, Hadoop platform security, and Kerberos configuration.
  • Experience in understanding the security requirements for Hadoop and integrating with Kerberos authentication infrastructure: KDC server setup, creating and managing the realm domain.
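A representative Sqoop import of the kind referenced above might look like the following sketch; the JDBC URL, credentials, table name, and target directory are placeholders, not details from any actual engagement:

```shell
# Illustrative only: connection details and paths are placeholders.
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username etl_user -P \
  --table SALES.ORDERS \
  --target-dir /data/raw/orders \
  --num-mappers 4 \
  --as-parquetfile
```

Running such a command requires a live cluster and source database, so it is shown here only as a command-shape sketch.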

TECHNICAL SKILLS

Languages: Shell Scripting (C, Korn, Bash, and Bourne shells), Python, MySQL, Perl, WLST

Big Data Framework and Eco Systems: Hadoop, MapReduce, Hive, Pig, Kafka, HDFS, Zookeeper, Sqoop, Spark, Scala, HUE, Cloudera Manager, Oozie and Flume

Linux / Unix: RedHat, CentOS, Ubuntu

Databases: Oracle 10g/11g, MySQL, DB2

Operating Systems: Windows XP/2000/NT, Linux (Red Hat, CentOS), Macintosh, UNIX

Networking & Protocols: TCP/IP, Telnet, HTTP, HTTPS, FTP, SNMP, LDAP, DNS, DHCP

PROFESSIONAL EXPERIENCE

Confidential, Minneapolis, MN

Hadoop Administrator

Responsibilities:

  • Involved in the design and planning phases of the Hadoop cluster.
  • Responsible for regular health checks of the Hadoop cluster using custom scripts.
  • Installed and configured a multi-node, fully distributed Hadoop cluster with a large number of nodes.
  • Provided Hadoop, OS, and Hardware optimizations.
  • Installed and configured Cloudera Manager for easy management of existing Hadoop cluster.
  • Performed monthly Linux server maintenance, including planned shutdowns of Hadoop NameNode and DataNode hosts.
  • Collaborated with the infrastructure, network, database, application and BI teams to ensure data quality and availability
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Experienced in managing and reviewing the Hadoop log files.
  • Balancing Hadoop cluster using balancer utilities to spread data across the cluster equally.
  • Implemented data ingestion techniques like Pig and Hive in the production environment.
  • Performed routine cluster maintenance every weekend to make required configuration changes, installations, etc.
  • Implemented Kerberos Security Authentication protocol for existing cluster.
  • Worked extensively with Sqoop for importing metadata from Oracle; used Sqoop to import data from SQL Server to Cassandra.
  • Implemented Spark2 data processing project to handle data from RDBMS and streaming sources.
  • Monitored and debugged Hadoop jobs/applications running in production.
  • Provided user support and application support on the Hadoop infrastructure.
  • Created Kerberos keytabs for ETL application use cases before onboarding to Hadoop.
  • Responsible for adding users to the Hadoop cluster.
  • Evaluated and compared different tools for test data management with Hadoop.
  • Helped and directed the testing team to get up to speed on Hadoop application testing.
  • Hands-on experience with ServiceNow (ticketing and change management system).
  • Worked and Supported Hadoop clusters using AWS EMR, Cloudera (CDH5), and HortonWorks Distributions
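The custom health-check scripts mentioned above might follow a pattern like this minimal sketch; the 85% threshold and the df-based check are illustrative assumptions, and on a real cluster the same logic would wrap `hdfs dfsadmin -report` output instead:

```shell
#!/usr/bin/env bash
# Sketch of a health-check helper (threshold and df usage are illustrative).

# flag_usage PCT THRESHOLD -> prints WARN when PCT >= THRESHOLD, else OK.
flag_usage() {
  local pct=$1 threshold=$2
  if [ "$pct" -ge "$threshold" ]; then
    echo "WARN"
  else
    echo "OK"
  fi
}

# On a real cluster this would parse `hdfs dfsadmin -report`; here the same
# threshold logic is applied to local filesystem usage from POSIX `df -P`.
df -P | tail -n +2 | awk '{print $5, $6}' | while read -r pcent mount; do
  echo "$(flag_usage "${pcent%\%}" 85) $mount $pcent"
done
```

A cron entry would then run the script and mail or page on any WARN lines.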

Environment: Hadoop HDFS, MapReduce, Cloudera, Sentry, Spark2, Ansible, Hive, Kafka, Oozie, Sqoop

Confidential, New York, NY

Hadoop Administrator

Responsibilities:

  • Cluster maintenance, monitoring, commissioning and decommissioning of data nodes, troubleshooting, and managing and reviewing log files.
  • Deployed a Spark cluster and other services in AWS using the console.
  • Installed new components and removed existing ones through Cloudera Manager.
  • Configured Zookeeper to coordinate the servers in clusters to maintain the data consistency.
  • Periodically reviewed Hadoop-related logs and fixed errors.
  • Commissioned new cluster nodes for increased capacity and decommissioned servers with hardware problems.
  • Responsible for adding new ecosystem components and custom configurations based on requirements, and for managing Hadoop daemons.
  • Developed Python, shell, and PowerShell scripts for automation.
  • Implemented Kerberos Security Authentication protocol for existing cluster.
  • Worked on Sentry configuration to provide centralized security for Hadoop services.
  • Copied data from local file system to HDFS using Apache NiFi
  • Worked on performance tuning of Apache NiFi workflow to optimize the data ingestion speeds.
  • Working experience maintaining MySQL databases: creating databases, setting up users, and maintaining backups.
  • Managed and reviewed Hadoop log files as part of administration for troubleshooting purposes.
  • Performed Linux systems administration on production and development servers (Red Hat Linux, CentOS, and other UNIX variants).
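The log review described above often starts with simple counting of error-level lines per daemon log. The sketch below uses a made-up sample log and path; on a CDH node the real logs live under directories such as /var/log/hadoop-hdfs:

```shell
#!/usr/bin/env bash
# Sketch of a log-review helper; sample log content and path are made up.

# count_errors FILE -> number of lines at the ERROR level.
count_errors() {
  grep -c ' ERROR ' "$1"
}

# Stand-in for a real NameNode log file.
cat > /tmp/sample-namenode.log <<'EOF'
2019-03-01 10:00:01,123 INFO  namenode.NameNode: STARTUP_MSG
2019-03-01 10:05:42,001 ERROR namenode.FSNamesystem: Disk quota exceeded
2019-03-01 10:06:03,447 WARN  hdfs.StateChange: Under-replicated blocks: 12
2019-03-01 10:07:11,902 ERROR namenode.FSNamesystem: Lease recovery failed
EOF

count_errors /tmp/sample-namenode.log   # prints 2
```

In practice the same counting would feed a daily report across all daemon logs rather than a single file.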

Environment: Hadoop HDFS, MapReduce, Cloudera, Sentry, Spark2, Ansible, Hive, Kafka, Oozie, Sqoop

Confidential

Hadoop Administrator

Responsibilities:

  • Helped the team to increase cluster size from 35 nodes to 113 nodes. The configuration for additional data nodes was managed using Puppet.
  • Installed, configured, and deployed a 60-node Cloudera Hadoop cluster for development and production.
  • Worked on setting up high availability for major production cluster and designed automatic failover.
  • Performed both major and minor upgrades to the existing CDH cluster.
  • Performance-tuned the Hadoop cluster to achieve higher throughput.
  • Used Bash and Python, including Boto3, to supplement automation provided by Ansible and Terraform for tasks such as encrypting EBS volumes backing AMIs and scheduling Lambda functions for routine AWS tasks.
  • Configured Hive metastore with MySQL, which stores the metadata of Hive tables.
  • Configured Flume for efficiently collecting, aggregating and moving large amounts of log data.
  • Benchmarked Hadoop clusters using DFSIO, TeraGen, and TeraSort.
  • Enabled Kerberos for Hadoop cluster authentication and integrated it with Active Directory for managing users and application groups.
  • Wrote Nagios plugins to monitor Hadoop NameNode health status and the number of TaskTrackers and DataNodes running.
  • Involved in processing large volumes of data in Teradata.
  • Developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
  • Moved data from HDFS to RDBMS and vice versa using Sqoop.
  • Developed Hive queries and UDFs to analyze the data in HDFS.
  • Analyzed and transformed data with Hive and Pig.
  • Implemented a Hadoop Float equivalent to the Teradata Decimal type.
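The DFSIO/TeraGen/TeraSort benchmarking mentioned above typically uses the example and test jars shipped with the distribution. A representative sequence looks roughly like this; the jar paths, row counts, and file sizes are illustrative and vary by distribution and version:

```shell
# Illustrative only: jar names/paths and sizes vary by distribution and version.
# Generate ~10 GB of input (100 M rows x 100 bytes), sort it, then run DFSIO.
hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-examples.jar teragen 100000000 /benchmarks/teragen
hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-examples.jar terasort /benchmarks/teragen /benchmarks/terasort
hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-test.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
```

These jobs need a running cluster, so the commands are shown only as a shape for the benchmark runs.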

Environment: Apache Hadoop, HDFS, Cloudera Manager, Kerberos, Java, MapReduce, Hive, AWS, Sqoop, Oozie, and MySQL.

Confidential

Oracle DBA

Responsibilities:

  • Hands-on experience in administration and maintenance of Oracle Applications 11i/R12 and Database 10g/11g.
  • Good experience in maintenance activities like patching and cloning.
  • Experience with AD utilities like adadmin, adpatch, adctrl, adautocfg, etc.
  • Raised space requests when no space was left on database mount points and reported the status of space additions for the shift to the managers.
  • Involved in the daily operations of RMAN backup, monitoring, patching, cloning.
  • Performed DBA activities such as restarting databases, applying patches, tablespace maintenance, database upgrades, and configuring databases for adequate performance and availability.
  • Supported and administered standby databases.
  • Experience in sysadmin activities like creating application users, assigning responsibilities, defining menus, responsibilities, concurrent managers, etc.
  • Good exposure on backup and recovery tasks. Good knowledge of HOT, COLD and RMAN backups.
  • Flexible to work on any backup strategy.
  • Applied RDBMS patches using the OPatch utility and application patches using adpatch.
  • Good experience in installing Oracle RDBMS software.
  • Concurrent Manager setup and troubleshooting in Oracle Applications 11i/R12.
  • Good exposure to Oracle logical backup strategies like exp/imp and Data Pump.
  • Troubleshooting and diagnosis of sysadmin issues (including user management, Concurrent Manager administration, APPL_TOP management, and report/forms server management).
  • Responsible for OBIEE and BI Apps installation and configuration, as well as OBIEE system administration.
  • Involved in the E-Business Suite upgrade from 11.5.10.x to R12 (12.1.1), including the database upgrade from 10.2.0.4 to 11.2.0.2.
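The daily RMAN backup operations above can be sketched as a nightly script of roughly this shape; OS authentication to the target database, the single disk channel, and the obsolete-backup cleanup are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Illustrative nightly RMAN backup; assumes OS authentication to the target DB
# and that a retention policy is already configured for DELETE OBSOLETE.
rman target / <<'EOF'
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  BACKUP DATABASE PLUS ARCHIVELOG;
  DELETE NOPROMPT OBSOLETE;
}
EOF
```

Such a script would typically run from cron with its output captured for the monitoring described above.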

Environment: Oracle 10g, 11g, ERP R12, RHEL 7
