Oracle DBA Resume
Janesville, WI
SUMMARY
- 8+ years of professional experience, including flawless preparation of presentations, preparation of facility reports, and maintaining the utmost confidentiality.
- Experienced in Linux with Hadoop big data, as well as Windows, Active Directory, Exchange, and Office 365.
- Expert in implementing solutions across the Big Data landscape.
- Hands-on experience with and thorough understanding of UNIX/Linux operating systems.
- Extensive hands-on administration of Cloudera Hadoop.
- Extensive hands-on administration of Hortonworks.
- Experience in configuring and working with the MapR distribution of Hadoop.
- Strong knowledge of open-source tools such as the Hadoop ecosystem (Hadoop, Kibana, Sqoop, Hive, Oozie, Ambari, etc.), Python, and bash.
- Expert in enabling security on Hadoop clusters using Kerberos.
- Experience in understanding and managing Hadoop log files.
- Experience in adding and removing nodes on a Hadoop cluster.
- Experience in managing Hadoop clusters with IBM BigInsights and Hortonworks.
- Experience in extracting data from RDBMS into HDFS using Sqoop (a minimal import sketch follows this summary).
- Experience in collecting logs from log collectors into HDFS using Flume.
- Experience in analyzing data in HDFS through MapReduce, Hive, and Pig.
- Experience in setting up and managing the Oozie batch scheduler.
- Strong knowledge of automation, such as developing Sqoop scripts and batch processing.
- Practical knowledge of the functionality of every Hadoop daemon, the interactions between them, resource utilization, and dynamic tuning to keep the cluster available and efficient.
- Experience with Hadoop's multiple data-processing engines, such as interactive SQL, real-time streaming, data science, and batch processing, handling data stored on a single platform under YARN.
- Good understanding of NoSQL databases such as HBase and MongoDB.
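A minimal sketch of the RDBMS-to-HDFS extraction with Sqoop referenced above; the JDBC URL, credentials, table, and target directory are illustrative placeholders, not details from an actual engagement.

```bash
# Hypothetical Sqoop import of one table from a relational source into HDFS.
sqoop import \
  --connect jdbc:mysql://dbhost.example.com:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders \
  --num-mappers 4
# The same command pattern can be wrapped in a shell script and scheduled
# as a batch job, as noted in the automation bullet above.
```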
PROFESSIONAL EXPERIENCE
Confidential, Janesville, WI
Hadoop Admin
RESPONSIBILITIES:
- Working with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, Spark, and MapReduce access for the new users.
- Provided user support and application support on the Hadoop infrastructure.
- Worked on HUE integration with LDAP for authentication/authorization.
- Loaded data from various data sources into HDFS using Flume.
- Used Oozie to orchestrate MapReduce jobs.
- Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs (a sketch follows this list).
- Closely monitored and analyzed MapReduce job executions on the cluster at the task level.
- Installed, configured, and monitored MapR Hadoop on 10 AWS EC2 instances and configured MapR on Amazon EMR.
- Managed the Hadoop cluster with IBM BigInsights.
- Set up and managed the Oozie batch scheduler and also worked on Spark.
- Analyzed data in HDFS through MapReduce, Hive, and Pig.
- Used Sqoop to efficiently transfer data from DB2 and Oracle Exadata to HDFS.
- Responsible for implementation and ongoing administration of Hadoop infrastructure.
- Migrated Apache Hadoop from 1.0 to 2.0.
- Environment: Linux, MapReduce, HDFS, Hive, Pig, shell scripting.
- Installed and configured Flume, Hive, Pig, Sqoop and Oozie on the Hadoop cluster.
- Launched and set up the Hadoop/HBase cluster, which included configuring the different components of the Hadoop and HBase clusters.
- Experienced in loading data from the UNIX file system to HDFS.
- Created HBase tables to load large sets of structured, semi-structured, and unstructured data coming from UNIX, NoSQL, and a variety of portfolios.
- Worked on writing transformer/mapping MapReduce pipelines using Java.
- Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
- Migrated ETL jobs to Pig scripts to do transformations, joins, and some pre-aggregations before storing the data in HDFS.
- Worked on different file formats like sequence files, XML files, and map files using MapReduce programs.
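A hedged sketch of the Hive table creation and querying mentioned above; the table, column, and path names are assumptions made only for the example, and the query runs as a MapReduce job under the classic Hive execution engine.

```bash
# Illustrative Hive DDL, load, and query executed through the hive CLI.
hive -e "
CREATE TABLE IF NOT EXISTS web_logs (
  ip     STRING,
  ts     STRING,
  url    STRING,
  status INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

LOAD DATA INPATH '/data/raw/web_logs' INTO TABLE web_logs;

-- Aggregation that Hive compiles into a MapReduce job.
SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status;
"
```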
Confidential, HARTFORD, CT
Hadoop Admin
RESPONSIBILITIES:
- Ongoing administration of Big Data platforms on Cloudera and Hortonworks.
- Installed, configured, and deployed a 50-node MapR Hadoop cluster for development and production.
- Tested and validated new datasets in HDFS and published the results. Mentored other admins on Hadoop administration.
- Maintained and troubleshot Hadoop core and ecosystem components (HDFS, MapReduce, NameNode, DataNode, JobTracker, TaskTracker, ZooKeeper, YARN, Oozie, Hive, Hue, Flume, HBase, and the Fair Scheduler).
- Implemented NameNode backup using NFS for high availability.
- Set up automated processes to archive/clean unwanted data on the cluster, in particular on the NameNode and Secondary NameNode.
- Worked on Hadoop cluster capacity planning and management.
- Monitored and debugged Hadoop jobs/applications running in production.
- Provided user support and application support on the Hadoop infrastructure.
- Worked on HUE integration with LDAP for authentication/authorization.
- Loaded data from various data sources into HDFS using Flume.
- Used Oozie to orchestrate MapReduce jobs.
- Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
- Closely monitored and analyzed MapReduce job executions on the cluster at the task level.
- Installed, configured, and monitored MapR Hadoop on 10 AWS EC2 instances and configured MapR on Amazon EMR, making AWS S3 the default filesystem for the cluster.
- Managed the Hadoop cluster with IBM BigInsights.
- Set up and managed the Oozie batch scheduler and also worked on Spark.
- Analyzed data in HDFS through MapReduce, Hive, and Pig.
- Used Sqoop to efficiently transfer data from DB2 and Oracle Exadata to HDFS.
- Responsible for implementation and ongoing administration of Hadoop infrastructure.
- Migrated Apache Hadoop from 1.0 to 2.0.
- Working with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, Spark, and MapReduce access for the new users (see the Kerberos sketch after this list).
- Successfully managed Hadoop clusters, with an emphasis on production systems, from installation to configuration management, service monitoring, troubleshooting, and support integration.
- Worked on Azure to configure the clusters and build the environments using JSON.
- Worked on the Hadoop CDH upgrade from CDH 3.x to CDH 4.x.
- Developed Pig Latin scripts to extract data from web server output files and load it into HDFS.
- Performed cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise, and Dell OpenManage.
- Worked on performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screened Hadoop cluster job performance and worked on capacity planning.
- Worked on automated monitoring of Hadoop cluster connectivity and security.
- Managed and reviewed Hadoop log files.
- Configured Flume to efficiently collect, aggregate, and move large amounts of log data (a configuration sketch follows this list).
- Worked on large sets of structured, semi-structured, and unstructured data.
- Worked on File system management and monitoring.
- Worked on ecosystem components (HDFS, MapReduce, NameNode, DataNode, JobTracker, TaskTracker, ZooKeeper, YARN, Oozie, Hive, Hue, Flume, HBase, and the Fair Scheduler).
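A hedged sketch of onboarding a new Hadoop user on a Kerberized cluster, as referenced in the user-setup bullet above; the username, realm, keytab paths, and HDFS directories are placeholders.

```bash
# Hypothetical example: create the OS account, Kerberos principal, and HDFS home,
# then smoke-test HDFS access as the new user.
useradd -m jdoe                                                   # local Linux account on the gateway node
kadmin.local -q "addprinc -randkey jdoe@EXAMPLE.COM"              # Kerberos principal
kadmin.local -q "xst -k /home/jdoe/jdoe.keytab jdoe@EXAMPLE.COM"  # export a keytab for the user

# As the HDFS superuser, create and hand over the user's HDFS home directory.
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
hdfs dfs -mkdir -p /user/jdoe
hdfs dfs -chown jdoe:jdoe /user/jdoe

# Verify the new user can authenticate and reach HDFS.
su - jdoe -c "kinit -kt /home/jdoe/jdoe.keytab jdoe@EXAMPLE.COM && hdfs dfs -ls /user/jdoe"
```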
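A configuration sketch for the Flume log collection described above; the agent name, source command, and HDFS path are assumptions for illustration only.

```bash
# Hypothetical Flume agent: tail an application log and land events in HDFS.
cat > /etc/flume-ng/conf/weblog-agent.conf <<'EOF'
agent.sources  = tail-src
agent.channels = mem-ch
agent.sinks    = hdfs-sink

agent.sources.tail-src.type     = exec
agent.sources.tail-src.command  = tail -F /var/log/app/access.log
agent.sources.tail-src.channels = mem-ch

agent.channels.mem-ch.type     = memory
agent.channels.mem-ch.capacity = 10000

agent.sinks.hdfs-sink.type                   = hdfs
agent.sinks.hdfs-sink.hdfs.path              = /data/raw/weblogs/%Y-%m-%d
agent.sinks.hdfs-sink.hdfs.fileType          = DataStream
agent.sinks.hdfs-sink.hdfs.useLocalTimeStamp = true
agent.sinks.hdfs-sink.channel                = mem-ch
EOF

# Start the agent with the configuration above.
flume-ng agent --name agent --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/weblog-agent.conf
```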
Confidential
Oracle DBA
Responsibilities:
- Supported and maintained Oracle Applications 11i (11.5.10.2) and R12 (12.1.3).
- Cloned Applications instances using Rapid Clone.
- Applied application patches using adpatch and troubleshot patch issues.
- Utilized AD utilities including adadmin, adctrl, and adautocfg.
- Performed CPU (Critical Patch Update) patching for test and prod instances every quarter.
- Worked on 24x7 support.
- Performed schema refreshes.
- Performed DB refreshes from production to test.
- Deployed code migrations.
- Troubleshot problems with the 11i web server, forms server, concurrent managers, and database.
- Interacted with Oracle Support for technical assistance requests.
- Created users, responsibilities, and site-level profiles from the front end.
- Created application users and assigned responsibilities.
- Disabled/enabled concurrent programs and forms.
- Checked the database alert log and trace files manually to detect problems.
- Monitored space, sessions, and concurrent managers.
- Performed hot (online) and cold (offline) backups of databases.
- Monitored the application and database tiers through the front end.
- Scheduled and troubleshot concurrent requests and managers.
- Used expdp and impdp (Data Pump) for logical backups of the database (a sketch follows this list).
- Performed AutoConfig in the Apps environment.
- Monitored CPU utilization and checked for long-running/runaway processes.
- Monitored and detected lock contention and resolved it by killing user sessions.
- Created Oracle objects (tables, views, sequences, indexes, database links, etc.).
- Used Oracle utilities like EXPLAIN PLAN and TKPROF to tune the application and database.
- Applied one-off patches, mini packs, family packs, and maintenance packs on the instances.
- Applied patches to the database using the OPatch utility (a sketch follows this list).
- Killed runaway Oracle Apps processes and resolved table lock conflicts.
- Troubleshot patching issues.
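A hedged sketch of the expdp/impdp-based logical backup and test refresh mentioned above; the connect strings, schema, directory object, and file names are placeholders.

```bash
# Hypothetical schema export from production and import into test with Data Pump.
expdp system@PRODDB \
  schemas=APPS_OWNER \
  directory=DATA_PUMP_DIR \
  dumpfile=apps_owner_%U.dmp \
  logfile=apps_owner_exp.log

impdp system@TESTDB \
  schemas=APPS_OWNER \
  directory=DATA_PUMP_DIR \
  dumpfile=apps_owner_%U.dmp \
  logfile=apps_owner_imp.log \
  table_exists_action=replace
```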
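An OPatch sketch for the database patching bullet above; the ORACLE_HOME path and patch number are hypothetical.

```bash
# Hypothetical one-off database patch applied with OPatch (database and listener stopped first).
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/OPatch:$PATH

opatch lsinventory                      # record the current patch inventory
cd /stage/patches/12345678              # unzipped patch directory (placeholder patch number)
opatch apply                            # apply the patch
opatch lsinventory | grep 12345678      # confirm the patch is now registered
```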
Confidential
Oracle DBA
Responsibilities:
- Provided user support and application support on the Hadoop infrastructure.
- Worked on HUE integration with LDAP for authentication/authorization.
- Utilized AD utilities including adadmin, adctrl, and adautocfg.
- Performed CPU (Critical Patch Update) patching for test and prod instances every quarter.
- Worked on 24x7 support.
- Performed Schema refreshes.
- Performed DB refreshes from production to test.
- Deployed the code migrations.
- Responsible for implementation and ongoing administration of Hadoop infrastructure.
- Migrated Apache Hadoop from 1.0 to 2.0.
- Working with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, Spark, and MapReduce access for the new users.