
Oracle Developer Resume


Dallas, TX

SUMMARY

  • Over 9 years of extensive experience in the information technology industry, providing strong business solutions with excellent technical, communication and customer service expertise.
  • Worked on all phases of the software development life cycle (SDLC), from requirements gathering to programming, testing and maintenance.
  • Working experience in Hadoop ecosystem technologies such as Apache Pig, Apache Hive, Apache Sqoop, Apache Flume, Apache HBase, Apache Spark and Cloudera Manager.
  • Strong Experience in AWS Cloud services like EC2 and S3.
  • Strong experience in guiding and building CI/CD pipelines.
  • Strong coding experience in Python and Unix shell scripting.
  • Expertise in Creating, Debugging, Scheduling and Monitoring jobs using Airflow and Oozie.
  • Experience working with Hive data warehouse system, developing data pipelines, implementing complex business logic and optimizing hive queries.
  • Good knowledge on relational databases like MySQL, Oracle and NoSQL databases like Hbase.
  • Very strong in data modeling techniques in normalized (OLTP) modeling.
  • Experience in Performance tuning Informatica mappings and workflows.
  • Provide metrics and project planning updates for the development effort in Agile projects.
  • Strong knowledge and use of development methodologies, standards and procedures.
  • Strong leadership qualities with excellent written and verbal communications skills.
  • Ability to multi-task and provide expertise for multiple development teams across concurrent project tasks.
  • Excellent interpersonal skills, an innate ability to motivate others, and openness to new and innovative ideas for the best possible solution.

TECHNICAL SKILLS

OPERATING SYSTEMS: Sun Solaris 5.6, UNIX, Red Hat Linux 3, Windows NT, 95, 98, 2000, XP

LANGUAGES: C, C++, PL/SQL, Shell Scripting, HTML, XML, Java, Python, HQL, PIG, U-SQL, PowerShell

DATABASES: Oracle 7.3, 8, 8i, 9i, 10g, 11g, SQL Server CE, HBase, Cassandra, DocumentDB

TOOLS & UTILITIES: TOAD, SQL Developer, SQL Navigator, Erwin, SQL*Plus, PL/SQL Editor, SQL*Loader, Informatica, Autosys, Airflow, Subversion, Bitbucket, Jenkins.

Hadoop Distributions: Cloudera, Amazon Web Services, Hortonworks, Azure

PROFESSIONAL EXPERIENCE

Confidential, Seattle, WA

Hadoop Engineer

Responsibilities:

  • Developed Sqoop scripts to migrate data from Oracle to the big data environment
  • Migrated the functionality of Informatica jobs to HQL scripts using HIVE
  • Developed ETL jobs using PIG, HIVE and SPARK
  • Extensively worked with Avro and Parquet files and converted data between the two formats
  • Parsed semi-structured JSON data and converted it to Parquet using DataFrames in PySpark (see the sketch after this list).
  • Created Python UDFs for use in Spark
  • Created Hive DDL on Parquet and Avro data files residing in both HDFS and S3 bucket
  • Created Airflow scheduling scripts in Python, as sketched below
  • Worked extensively on Sqoop ingestion of a wide range of data sets
  • Worked extensively in a Sentry-enabled system that enforces data security
  • Involved in file movements between HDFS and AWS S3
  • Extensively worked with S3 bucket in AWS
  • Created Oozie workflows for scheduling
  • Created tables and views in RedShift Database
  • Imported data from S3 buckets to Redshift
  • Created data partitions on large data sets in S3 and DDL on partitioned data.
  • Converted all Hadoop jobs to run in EMR by configuring the cluster according to the data size.
  • Independently drove multiple small projects with quality output
  • Extensively used Stash (Bitbucket) for code control
  • Monitored and troubleshot Hadoop jobs using the YARN Resource Manager
  • Monitored and troubleshot EMR job logs using Genie
  • Provided mentorship to fellow Hadoop developers
  • Provided Solutions to technical issues in Big data
  • Explained issues in layman's terms to help BSAs understand
  • Worked simultaneously on multiple tasks.
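
A minimal PySpark sketch of the JSON-to-Parquet conversion and partitioned S3 output described in this list; the paths, bucket name and column names below are illustrative assumptions, not the actual project code.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Sketch only: parse semi-structured JSON and write partitioned Parquet to S3.
# The HDFS path, S3 bucket and column names are hypothetical placeholders.
spark = (SparkSession.builder
         .appName("json_to_parquet")
         .enableHiveSupport()
         .getOrCreate())

# Read semi-structured JSON from HDFS; the schema is inferred here for brevity.
raw = spark.read.json("hdfs:///data/raw/events/")

# Flatten the nested payload and derive a partition column from the timestamp.
events = (raw
          .withColumn("event_date", F.to_date("event_ts"))
          .select("event_id", "event_date", "payload.*"))

# Write partitioned Parquet to an S3 location suitable for external Hive DDL.
(events.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3a://example-analytics-bucket/curated/events/"))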

Environment: SQOOP, ETL, PIG, HIVE, SPARK, Python, HDFS, AWS S3, Airflow, Redshift, EMR, Bitbucket, YARN, Genie, Unix.
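
The Airflow scheduling scripts mentioned above followed the usual DAG pattern; the sketch below is a minimal, assumed example written against the Airflow 2.x API, with a hypothetical DAG id, schedule and commands rather than the actual pipeline code.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",   # hypothetical owner
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="oracle_to_s3_daily",   # hypothetical pipeline name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    # Land the day's extract into HDFS (placeholder Sqoop command).
    ingest = BashOperator(
        task_id="sqoop_ingest",
        bash_command=(
            "sqoop import --connect jdbc:oracle:thin:@//dbhost:1521/ORCL "
            "--table ORDERS --target-dir /data/raw/orders/{{ ds }} -m 4"
        ),
    )

    # Transform the extract and publish curated Parquet to S3 with a Spark job.
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/json_to_parquet.py {{ ds }}",
    )

    ingest >> transform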

Confidential, Los Angeles, CA

Hadoop/Big data Developer

Responsibilities:

  • Installed and configured Hadoop and the Hadoop stack on a 7-node cluster.
  • Developed MapReduce programs to parse the raw data, populate staging tables and store the refined data in partitioned tables.
  • Involved in data ingestion into HDFS using Sqoop and Flume from a variety of sources.
  • Responsible for managing data from various sources.
  • Gained good experience with the NoSQL database HBase.
  • Designed and implemented a MapReduce-based large-scale parallel relation-learning system.
  • Worked with NoSQL databases like HBase, creating HBase tables to load large sets of semi-structured data coming from various sources.
  • Evaluated the use of Zookeeper in cluster co-ordination services.
  • Installed and configured Hive and wrote Hive UDAFs that helped spot market trends.
  • Used Hadoop Streaming to process terabytes of data in XML format (a minimal Python mapper is sketched after this list).
  • Involved in loading data from UNIX file system to HDFS.
  • Implemented Fair Schedulers on the JobTracker to share cluster resources among the users' MapReduce jobs.
  • Involved in creating Hive tables, loading data into them and writing Hive queries to analyze the data.
  • Gained very good business knowledge on health insurance, claim processing, fraud suspect identification, appeals process etc.
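
As a rough illustration of the Hadoop Streaming work above, here is a minimal Python mapper sketch for line-delimited XML records; the element and field names (memberId, claimAmount) are hypothetical and not taken from the actual project. A mapper like this would be submitted with the hadoop-streaming jar together with a reducer that aggregates the emitted key/value pairs.

#!/usr/bin/env python
# Streaming mapper sketch: one XML record per input line is assumed.
import sys
import xml.etree.ElementTree as ET

def main():
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        try:
            record = ET.fromstring(line)
        except ET.ParseError:
            continue  # skip malformed records
        # Hypothetical fields; a real claim record would carry many more.
        member_id = record.findtext("memberId", default="UNKNOWN")
        amount = record.findtext("claimAmount", default="0")
        # Emit member id and claim amount for the reducer to aggregate.
        sys.stdout.write("%s\t%s\n" % (member_id, amount))

if __name__ == "__main__":
    main()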

Environment: HDFS, Pig, Hive, HBase, MapReduce, Java, Sqoop, Flume, Oozie, Linux, UNIX Shell Scripting and Big Data.

Confidential, Dallas, TX

Hadoop Developer

Responsibilities:

  • Worked on analyzing the Hadoop cluster using different big data analytic tools including Pig, Hive and MapReduce.
  • Collected and aggregated large amounts of log data using Apache Flume and staged the data in HDFS for further analysis.
  • Used Flume to collect, aggregate, and store the web log data from different web servers and pushed to HDFS.
  • Wrote complex HiveQL and Pig Latin queries for data analysis to meet business requirements.
  • Designed and implemented complete end-to-end Hadoop infrastructure including Pig, Hive, Sqoop, Oozie and Zookeeper.
  • Imported and exported terabytes of data using Sqoop from relational database systems to HDFS.
  • Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
  • Worked on different set of tables like External Tables and Managed Tables.
  • Worked with the Hue GUI for easy job scheduling, file browsing, job browsing and metastore management.
  • Provided support to data analysts in running Pig and Hive queries.
  • Performed Hive partitioning, bucketing and different types of joins on Hive tables, and implemented Hive SerDes such as RegEx, JSON and Avro.
  • Expert in writing HiveQL queries and Pig Latin scripts.
  • Good understanding of ETL tools and how they can be applied in a Big Data environment.
  • Developed customized Hive UDFs and UDAFs in Java, set up JDBC connectivity with Hive, and developed and executed Pig scripts and Pig UDFs.
  • Used Sqoop to migrate data to and from HDFS and MySQL or Oracle, and deployed Hive and HBase integration to perform OLAP operations on HBase data.
  • Assisted in exporting data into Cassandra and writing column families to provide fast listing outputs.
  • Moved data from Oracle, Teradata and MS SQL Server into HDFS using Sqoop and imported flat files of various formats into HDFS.
  • Used the Spark SQL Scala interface, which automatically converts RDDs of case classes to schema RDDs.
  • Used Spark SQL to read and write tables stored in Hive, as sketched after this list.
  • SQL, streaming and complex analytics in the company were handled with Spark.
  • Streamed data to Hadoop using Kafka.
  • Responsible for the design and creation of Hive tables and worked on various performance optimizations like partitioning and bucketing in Hive. Handled incremental data loads from RDBMS into HDFS using Sqoop.
  • Used the Oozie scheduler to automate the pipeline workflow and orchestrate the Sqoop, Hive and Pig jobs that extract the data in a timely manner.
  • Responsible for the design and creation of Hive tables, partitioning, bucketing, loading data and writing Hive queries.
  • Worked with the Data Science team to gather requirements for various data mining projects.
  • Exported analyzed data to downstream systems using Sqoop for generating end-user reports, Business Analysis reports and payment reports.
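
A minimal PySpark sketch of the Spark SQL usage described above: read an existing Hive table, apply some business logic, and write the result back as a partitioned Hive table. The database, table and column names are assumed for illustration only.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("hive_read_write")
         .enableHiveSupport()
         .getOrCreate())

# Read an existing Hive table through Spark SQL (names are hypothetical).
orders = spark.sql(
    "SELECT order_id, customer_id, amount, order_date FROM sales.orders"
)

# Example business logic: daily totals per customer.
daily_totals = (orders
                .groupBy("customer_id", "order_date")
                .agg(F.sum("amount").alias("total_amount")))

# Write back to Hive as a table partitioned by order_date.
(daily_totals.write
             .mode("overwrite")
             .partitionBy("order_date")
             .saveAsTable("sales.daily_customer_totals"))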

Environment: Apache Hadoop, Apache Kafka, HDFS, Hive, Pig, MapReduce, Java, Sqoop, Cassandra, Apache Spark, Flume, Oozie, MySQL, UNIX, Core Java.

Confidential, Dallas, TX

Oracle Developer

Responsibilities:

  • Gathered requirements and system specifications from the business users.
  • Developed PL/SQL Packages, Procedures, Functions, Triggers, Views, Indexes, Sequences and Synonyms.
  • Extensively involved in tuning slow performing queries, procedures and functions.
  • Extensively used collections and collection types to improve the data upload performance.
  • Worked with the ETL team on loading data from Oracle 10g into Teradata
  • Coordinated with the QA team regularly for test scenarios and functionality.
  • Organized knowledge-sharing sessions with the PS team.
  • Identified and created missing DB links and indexes, and analyzed tables, which helped improve the performance of poorly running SQL queries.
  • Involved in both logical and physical model design.
  • Extensively worked with DBA Team for refreshing the pre-production databases.
  • Worked closely with the JBoss team to provide for their data needs.
  • Worked on the APEX tool, which is used to create and store customer store information.
  • Created index-organized tables
  • Closely worked with SAP systems.
  • Simultaneously worked on multiple applications.
  • Involved in estimating the effort required for the database tasks
  • Involved in fixing production bugs, both within and outside assigned projects
  • Explained issues in layman's terms to help BSAs understand
  • Executed jobs in the Unix environment
  • Involved in many dry-run activities to ensure a smooth production release
  • Involved extensively in creating a release plan during the project go-live
  • Coordinated with the DBA team to gather Statspack reports for a given time frame, showing the database load and activity during that period.

Environment: PL/SQL, SQL, ETL, Oracle 10g, JBOSS, APEX, SAP, Unix.

Confidential

Oracle Developer

Responsibilities:

  • Worked on designing the content and delivering the solutions based on understanding the requirements.
  • Wrote a web service client for order tracking operations that accesses the web services API and is utilized in our web application.
  • Developed the user interface using JavaScript, jQuery and HTML.
  • Used the AJAX API for intensive user operations and client-side validations.
  • Worked with Java, J2EE, SQL, JDBC, XML, JavaScript and web servers.
  • Utilized servlets for the controller layer and JSP with JSP tags for the interface
  • Worked on Model View Controller Pattern and various design patterns.
  • Worked with designers, architects, developers for translating data requirements into the physical schema definitions for SQL sub-programs and modified the existing SQL program units.
  • Designed and Developed SQL functions and stored procedures.
  • Involved in debugging and bug fixing of application modules.
  • Efficiently dealt with exceptions and flow control.
  • Worked on Object Oriented Programming concepts.
  • Added Log4j to log the errors.
  • Used Eclipse for writing code and SVN for version control.
  • Installed and used MS SQL Server 2008 database.
  • Spearheaded coding for site management, which included change requests for enhancements and bug fixes pertaining to all parts of the website.

Environment: Java, JavaScript, JSP, JDBC, Servlets, MS SQL, XML, Windows XP, Ant, SQL Server database, Red Hat Linux, Eclipse Luna, SVN.
