PL/SQL Developer Resume
SUMMARY
- Overall 10+ years of work experience in the IT industry.
- Extensive hands-on experience with PL/SQL, Informatica, Oracle, SQL Server, PostgreSQL, Python, and Unix, and strong experience with the business intelligence tool MicroStrategy.
- Experience building analytics platforms using Python.
- Very strong Python coding experience with a solid understanding of Python internals.
- Expertise in NumPy, SciPy, and Pandas.
- Profiled Python code for optimization and memory management and implemented multithreading functionality.
- Manipulated and cleaned data using missing-value treatment in Pandas and performed standardization.
- Prepared shell scripts to perform file monitoring and file operations, FTP files between systems, and send files to the target system after the load process completed.
- Oversaw the entire data lifecycle from acquisition to reporting through implementation of various ETL techniques and data management methodologies.
- Created, managed, and modified logical and physical data models using a variety of data modelling philosophies and techniques, including Inmon and Kimball.
- Collaborated with cross-functional teams in support of business case development and identified modelling methods to provide business solutions.
- Performed data analysis and design; created and maintained large, complex logical and physical data models and metadata repositories using Erwin.
- Involved in designing data models such as star schemas to incorporate the fact and dimension tables.
- Expert in building early warning systems (C-PEWS, MC-EWS) that help business owners analyse regular business transactions and make decisions, and that enable lower-level management to take immediate action.
- Designed the data model, schema, and data processing flow for the C-PEWS and MC-EWS applications.
- Proficient in data warehouse concepts (OLAP), Oracle PL/SQL, and business intelligence concepts.
- Provided estimations for ETL deliverables and oversaw progress to ensure quality ETL deliverables.
- Worked with business stakeholders, analysts, and other programmers to develop new solutions or enhance existing ones.
- Implemented change data capture (CDC) methods for incremental loads.
- Implemented Slowly Changing Dimension Type 1, Type 2, and Type 3 methods on SMART using Informatica.
- Created several repository mappings and queries facilitating rapid analysis, troubleshooting, code verification, and deployment.
- Extensive knowledge of developing Informatica mappings, mapplets, and workflows, and of scheduling jobs in serial and parallel mode.
- Performed data manipulations using various Informatica transformations like Filter, Expression, Lookup (connected and unconnected), Aggregator, Update Strategy, Normalizer, Joiner, Router, Sorter, and Union.
- Worked with databases like Oracle and SQL Server, with experience integrating various data sources such as Oracle, flat files, and SQL Server.
- Traced query execution plans and identified whether the cost-based or rule-based optimizer was used for a query.
- Reduced the number of I/O cycles and the cost of queries by applying appropriate Oracle hints where required.
- Loaded data into tables using TOAD and SQL*Loader.
- Identified long-running sessions using TOAD and killed them.
- Kept Oracle indexes up to date to improve performance.
- Worked on ref cursors, both strongly and weakly typed.
- Built anonymous PL/SQL scripts as part of load processing.
- Generated spool files to share the data with external users.
- Gathered statistics and used different types of hints to improve the performance of existing queries.
- Extensive experience in Oracle back-end objects such as database triggers, stored procedures, views, materialized views, synonyms, constraints, sequences, functions, packages, and exception handling using PL/SQL.
- Converted conventional load processes to bulk collect with exception handling and error logging (see the first sketch following this summary).
- Extensive knowledge of PL/SQL object-oriented programming.
- Expert in building data pipelines with Oracle pipelined table functions (see the second sketch following this summary).
- Exported data to files using the Oracle UTL_FILE package.
- Imported flat-file data using Oracle external tables (see the third sketch following this summary).
- Good knowledge of Oracle performance tuning, indexing, partitioning techniques, and ref cursors.
- Good knowledge of Oracle bulk collect and collections (varrays, nested tables, associative arrays).
- Extensive knowledge of SQL, UNIX commands, and shell scripting.
- Expert in data warehouse development from inception through implementation and ongoing support, with a strong understanding of BI application design and development principles.
- Strong knowledge of table partitioning, indexing, and materialized views.
- Worked on error logging and materialized view logs for incremental refreshes.
- Scheduled Unix scripts using the crontab scheduler.
- Prepared shell scripts and SQL scripts to auto-generate SQL*Loader control files, removing manual effort from writing control files for data loads.
- Knowledge of Spark architecture, including Spark Core, Spark SQL, DataFrames, and Spark Streaming.
- Experience using RDD caching for Spark Streaming.
- Experience using Spark SQL with Hive as a data source.
- Migrated code from Hive to Spark and Python using Spark SQL and RDDs.
- Implemented Spark using Python and Spark SQL for faster testing and processing of data.
- Converted MapReduce programs into Spark transformations using Spark RDDs in Python.
- Proficient in AWS services such as VPC, EC2, S3, ELB, Auto Scaling Groups (ASG), EBS, and RDS.
- Experienced in creating multiple VPCs with public and private subnets as required and distributing them across the various availability zones of the VPC.
- Configured S3 buckets with various lifecycle policies to archive infrequently accessed data to appropriate storage classes.
- Good knowledge of creating and launching EC2 instances using AMIs for Linux, Ubuntu, RHEL, and Windows, and wrote shell scripts to bootstrap instances.
- Used IAM to create roles, users, and groups, and implemented MFA to provide additional security for the AWS account and its resources.
- Created EBS volumes for storing application files, mounted to EC2 instances as needed.
- Experienced in creating RDS instances to serve data for applications responding to requests.
- Good experience migrating applications from internal data centers to AWS.
- Extensive experience with the Ora2Pg (Oracle to PostgreSQL) migration tool.
- Strong knowledge of PostgreSQL's PL/pgSQL and SQL languages.
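A minimal sketch of the bulk-collect pattern with exception handling and error logging referenced in the summary above (first sketch); the table names stg_claims, fact_claims, and load_error_log are illustrative assumptions, not actual project objects.

```sql
-- Illustrative only: all table names here are hypothetical.
DECLARE
  TYPE t_claims IS TABLE OF stg_claims%ROWTYPE;
  l_rows      t_claims;
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);        -- ORA-24381: errors in array DML
  CURSOR c_src IS SELECT * FROM stg_claims;
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;  -- load in batches
    EXIT WHEN l_rows.COUNT = 0;
    BEGIN
      FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
        INSERT INTO fact_claims VALUES l_rows(i);
    EXCEPTION
      WHEN bulk_errors THEN
        -- log each failed row instead of aborting the whole load
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          INSERT INTO load_error_log (err_code, err_row_index, logged_at)
          VALUES (SQL%BULK_EXCEPTIONS(j).ERROR_CODE,
                  SQL%BULK_EXCEPTIONS(j).ERROR_INDEX,
                  SYSDATE);
        END LOOP;
    END;
    COMMIT;
  END LOOP;
  CLOSE c_src;
END;
/
```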
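A minimal sketch of an Oracle pipelined table function, the pipeline approach referenced above (second sketch); the object type, collection type, and table names are assumptions.

```sql
-- Illustrative only: type and table names are hypothetical.
CREATE OR REPLACE TYPE t_amount_row AS OBJECT (claim_id NUMBER, claim_amt NUMBER);
/
CREATE OR REPLACE TYPE t_amount_tab AS TABLE OF t_amount_row;
/
CREATE OR REPLACE FUNCTION converted_amounts RETURN t_amount_tab PIPELINED IS
BEGIN
  FOR r IN (SELECT claim_id, claim_amt FROM stg_claims) LOOP
    -- rows are streamed to the caller as they are produced
    PIPE ROW (t_amount_row(r.claim_id, r.claim_amt * 1.05));
  END LOOP;
  RETURN;
END;
/
-- Consumed like a regular table:
SELECT * FROM TABLE(converted_amounts);
```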
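A minimal external-table definition of the kind referenced above for importing flat files (third sketch); the directory object, file name, and columns are assumptions.

```sql
-- Illustrative only: data_dir, members.csv, and the columns are hypothetical.
CREATE TABLE ext_members (
  member_id   NUMBER,
  member_name VARCHAR2(100),
  state_cd    VARCHAR2(2)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('members.csv')
)
REJECT LIMIT UNLIMITED;

-- The flat file can then be queried like a table:
SELECT * FROM ext_members;
```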
TECHNICAL SKILLS
ETL Tools: Informatica, Data Stage
Languages: SQL, PL/SQL, T-SQL, NOSQL, HQL, Python, Sqoop, Scala, PySpark
Reporting Tools: MicroStrategy
Scripting languages: Python, Unix shell, JavaScript, sed & awk, VBA, XML, JSON
Systems: Windows, UNIX, Linux
Databases: Oracle 11g/12c/19c, Oracle Exadata, MySQL, SQL Server 2012, Teradata 14.0, Hive, PostgreSQL
Data Modelling & DB Tools: Erwin Data Modeler, TOAD, PL/SQL Developer, DB Viewer
Job Schedulers: Crontab, Control-M, Tivoli Workload Scheduler (TWS)
Version Control Tools: Git, Tortoise SVN, Bitbucket
AWS Technologies: EC2, S3, Lambda, RDS, EMR
PROFESSIONAL EXPERIENCE
Confidential
PL/SQL Developer
Environment: PL/SQL, Informatica 9.6.1, Python, TOAD, Oracle Exadata 19c, Unix, MicroStrategy 9.3
Responsibilities:
- Working as a technical lead for Altruista clinical data processing.
- Mentoring the team in development work and helping resolve technical issues.
- Developed the Altruista clinical data model structure to support SMART Analytics.
- Loaded data into tables using TOAD and SQL*Loader.
- Identified long-running sessions using TOAD and killed them.
- Created external tables to import flat-file data.
- Implemented business rules using constraints such as check, referential integrity, unique, and initially deferred deferrable constraints, as well as database triggers.
- Implemented Oracle error logging using audit tables and the built-in DBMS_ERRLOG package (see the first sketch following this list).
- Loaded all new source data into Hive using Sqoop and Talend; Hive serves as a staging area, and ad hoc reports are generated from the Hive server.
- Loaded data from Hive to SMART using Sqoop, Unix scripts, and Talend.
- Read and wrote data to Parquet files for columnar storage using PySpark.
- Read and wrote data to JSON files for API output using PySpark.
- Read and wrote data to XML files using PySpark.
- Read data from CSV files using PySpark parallel data processing, performed ETL operations, and converted the PySpark DataFrames into Pandas DataFrames.
- SFTPed files from the Ingenix server to the SMART server and loaded them into SMART using SQL*Loader.
- Prepared UNIX shell scripts to run SQL*Loader and to send the bad-records list back to the Ingenix team through SFTP.
- Performed customer requirements gathering, requirements analysis, design, development, testing, end-user acceptance presentations, implementation, and post-production support for BI projects.
- Automated and improved existing manual processes; optimized the server platforms using caching, scheduling, and clustering.
- Worked with back-end database administrators to provide requirements for necessary back-end tables.
- Extensively worked in both ad hoc and standard reporting environments and was involved in creating reports scalable to large volumes of data.
- Managed projects through the entire systems development life cycle, including creating timely, thorough status reports and leading status meetings.
- Handled incidents and provided solutions.
- Migrated existing Unix and SQL scripts to Informatica jobs whenever required.
- Developed new Informatica jobs to extract data from source systems, replacing existing Java code and SQL scripts.
- Improved the performance of current SQL scripts by using bulk collect.
- Converted full-load data cycles into incremental loads where feasible.
- Developed various parallel and sequential jobs, created done files for completed jobs, and sent the done files to the MicroStrategy server to trigger reports based on event triggers.
- Involved in various back-end ETL development tasks and production support.
- Fixed data integrity issues on the ETL side and provided ad hoc solutions.
- Developed technical components such as PL/SQL procedures, Unix shell scripts, and SQL*Loader control files to meet exact business needs.
- Involved in writing procedures, functions, triggers, packages, cursors, and views, and in loading data using the SQL*Loader utility for new and enhancement requirements.
- Worked on ad hoc requests such as analysis of data inconsistencies, incorrectly populated data, code fixes, and improvements to existing processes.
- Performed data manipulations using various Informatica transformations like Filter, Expression, Lookup (connected and unconnected), Aggregator, Update Strategy, Normalizer, Joiner, Router, Sorter, and Union.
- Created data flows using Informatica to extract data from Teradata sources.
- Loaded XML and JSON files using Python.
- Modified existing Teradata queries and SQL scripts as part of performance tuning.
- Created mappings and mapplets per business requirements using the Informatica big data edition, deployed them as applications, and exported them to PowerCenter for scheduling.
- Developed workflows using Task Developer, Worklet Designer, and Workflow Designer in Workflow Manager, and monitored the results using Workflow Monitor.
- Generated JSON files to send to the API teams, received input JSON files from source API teams, and loaded them using Python.
- Designed various mappings and mapplets using transformation techniques such as Key Generator, Match, Labeler, Case Converter, Standardizer, and Address Validator.
- Developed new mapping designs using various Informatica tools like Source Analyzer, Warehouse Designer, Mapplet Designer, and Mapping Designer.
- Loaded Excel-formatted revenue file data into Pandas DataFrames and generated trend reports using linear regression methods and Matplotlib.
- Loaded data from web applications to the database using the Python Django framework and generated statistical reports.
- Cleaned unstructured data, formatted it into structured data using Unix and Python scripts, and loaded it into the database.
- Built modules and packages in Python for data manipulation and to perform ETL operations as required.
- Developed merge jobs in Python to extract and load data into a MySQL database and used a test-driven approach for developing applications.
- Migrated existing Python scripts to multithreading approaches to improve performance where I/O operations dominate.
- Improved the performance of the existing ICD-10 code migration scripts by using Python multiprocessing.
- Migrated a couple of Informatica ETL jobs to Python and Unix as part of cost reduction.
- Recently started migrating the C-PEWS application from Oracle to PostgreSQL.
- Converted existing Oracle data types to PostgreSQL equivalents and replaced existing bitmap indexes, since PostgreSQL does not support bitmap indexes.
- Changed the existing Oracle SQL scripts to PostgreSQL format without changing the business logic.
- Replaced all existing Oracle sequences with PostgreSQL sequences.
- Converted Oracle joins into ANSI-standard joins to support PostgreSQL; because PostgreSQL does not support the (+) outer-join syntax, we converted those joins to ANSI format (see the second sketch following this list).
- Migrated applications from the internal data centre to AWS.
- Configured the AWS CLI and performed the necessary actions on AWS services using shell scripting.
- Created SSH connections between existing on-premises servers and the AWS cloud.
- Created data backups in AWS S3 buckets based on the custom rules defined.
- Maintained versioning of scripts through AWS S3 versioning.
- Maintained storage classes and transitioned objects between storage classes by applying S3 lifecycle rules.
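A minimal sketch of the DBMS_ERRLOG pattern mentioned in the error-logging bullet above (first sketch); fact_claims and stg_claims are assumed names.

```sql
-- Illustrative only: table names are hypothetical.
-- One-time setup: creates the error table ERR$_FACT_CLAIMS.
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'FACT_CLAIMS');
END;
/

-- DML that logs rejected rows instead of failing the whole statement.
INSERT INTO fact_claims (claim_id, member_id, claim_amt)
SELECT claim_id, member_id, claim_amt
FROM   stg_claims
LOG ERRORS INTO err$_fact_claims ('nightly_load')
REJECT LIMIT UNLIMITED;
```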
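An illustrative before/after of the (+) outer-join conversion described above (second sketch); the members and claims tables are hypothetical.

```sql
-- Oracle-specific outer-join syntax (not supported by PostgreSQL):
SELECT m.member_id, c.claim_id
FROM   members m, claims c
WHERE  m.member_id = c.member_id(+);

-- Equivalent ANSI join that runs on both Oracle and PostgreSQL:
SELECT m.member_id, c.claim_id
FROM   members m
LEFT JOIN claims c ON c.member_id = m.member_id;
```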
Confidential
ETL Developer
Environment: Oracle 11g/12c, Unix, Informatica, TOAD
Responsibilities:
- Understood the business requirements through complete analysis and prepared possible approaches for each requirement.
- Used various transformations like Aggregator, Expression, Filter, Joiner, Lookup, Router, Union, etc.
- Developed technical components such as PL/SQL procedures, Unix shell scripts, and SQL*Loader control files to meet exact business needs.
- Analyzed data quality, performed impact analysis, monitored data loads, and resolved production issues.
- Created materialized views with the ENABLE QUERY REWRITE option to reduce the number of query hits on the claim fact table and to run queries faster against the aggregated data (see the sketch following this list).
- Ran the monthly loads into the data warehouse, monitored them, and resolved errors whenever a load failed.
- Involved in writing procedures, functions, triggers, packages, cursors, and views, and in loading data using the SQL*Loader utility for new and enhancement requirements.
- Identified long-running sessions using TOAD and killed them.
- Imported and exported data using TOAD.
- Used TOAD to perform data validations and analysis.
- Analyzed complex business requirements, converting functional specifications into technical specifications to design and develop ETL processes.
- Performed detailed analysis of business requirements to determine architectural requirements and worked on the ETL framework that supports data manipulation, data quality, and data vulnerability.
- Designed the transformation rules for all target systems supporting the application in order to develop, debug, maintain, and support test applications for business units or functions.
- Provided a metrics work stream to facilitate the ETL for vulnerability metric automation.
- Reviewed metrics and load performance trends, identified root causes and process improvements, and recommended new approaches and techniques that drive continuous improvement in the application development process.
- Worked with the Informatica admin team to set up an Informatica environment for the application.
- Responsible for developing ETL processes within the system and maintaining the copy/run book for the process.
- Engaged with the DBA teams to implement strategic DDL changes across environments (IT/UAT/PROD).
- Developed stored procedures to automate the metric reports based on dates, i.e., weekly, monthly, and quarterly reports.
- Implemented SCD-type mappings through Informatica PowerCenter 9 to maintain transactional and historical data.
- Wrote complex queries against the source tables and applied transformation logic such as Lookup, Joiner, Router, Sequence Generator, and Update Strategy.
- Developed mapplets, reusable transformations, user-defined functions, and reusable sessions that are used extensively across the workflows.
- Developed test scripts for data validation and compared test data results with pre-prod data.
- Created Unix shell scripts for the file validation process and used the Email task to send the team job-status notifications.
- Worked on the workflow/job shell script that kicks off the job from the command line.
- Worked in an Agile environment and used VersionOne as a bug/defect tracking tool.
- Implemented the analytical formulas in Informatica through transformations while developing the ETL jobs.
- Loaded XML data sets using Informatica.
- Automated and improved existing manual processes; optimized the server platforms using caching, scheduling, and clustering.
- Worked with back-end database administrators to provide requirements for necessary back-end tables.
- Extensively worked in both ad hoc and standard reporting environments and was involved in creating reports scalable to large volumes of data.
- Managed projects through the entire systems development life cycle, including creating timely, thorough status reports and leading status meetings.
- Handled incidents and provided solutions.
- Involved in various back-end ETL development tasks and production support.
- Fixed data integrity issues on the ETL side and provided ad hoc solutions.
- Worked on ad hoc requests such as analysis of data inconsistencies, incorrectly populated data, and code fixes.
- Performed GoldenGate initial loads using Data Pump (expdp and impdp) from source to target.
- Knowledge of the Oracle Support process for technical requests using MetaLink and other support procedures.
- Participated in a 24x7 on-call rotation and off-hours production problem resolution activities.
- Excellent communication skills, with strong troubleshooting and problem-solving ability.
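A minimal sketch of the query-rewrite materialized view described above; fact_claim, its columns, and the materialized view log are assumed names, shown with a fast-refreshable aggregate as one possible setup.

```sql
-- Illustrative only: fact_claim and its columns are hypothetical.
CREATE MATERIALIZED VIEW LOG ON fact_claim
  WITH SEQUENCE, ROWID (member_id, claim_dt, claim_amt)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW mv_claim_monthly
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT member_id,
       TRUNC(claim_dt, 'MM') AS claim_month,
       COUNT(*)              AS claim_cnt,
       COUNT(claim_amt)      AS amt_cnt,
       SUM(claim_amt)        AS total_amt
FROM   fact_claim
GROUP  BY member_id, TRUNC(claim_dt, 'MM');

-- With query rewrite enabled, monthly aggregate queries against fact_claim
-- can be answered transparently from mv_claim_monthly.
```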
Confidential
PL/SQL Developer
Environment: Oracle 11g, SQL, PL/SQL, Informatica 8.6, TOAD
Responsibilities:
- Coded application programs from specifications using SQL.
- Developed complex SQL queries using joins, subqueries, and correlated subqueries.
- Prepared analysis documents for the existing code.
- Wrote procedures, functions, triggers, packages, cursors, and views, and loaded data using the SQL*Loader utility for new and enhancement requirements.
- Improved the performance of existing processes.
- Involved in writing data migration scripts to load data successfully from the client database into the target database.
- Wrote UNIX shell scripts to FTP data files and to run SQL*Loader scripts.
- Wrote SQL*Loader scripts for importing data from flat files, CSV files, and delimited files (see the control-file sketch following this list).
- Developed test plans and participated in unit testing through production implementation.
- Experience taking logical backups of the database using Data Pump (expdp/impdp).
- Implemented Flashback technology.
- Managed database structures, converted logical models to physical designs, and handled storage allocation, table/index segments, rollback and undo segments, constraints, database access, roles and privileges, and database auditing.
- Used Source Qualifier and Lookup SQL overrides, persistent caches, incremental aggregation, partitioning, and parameter files for better performance.
- Created ETL mappings with complex business logic for high-volume data loads using various transformations such as Aggregator, Sorter, Filter, Normalizer, SQL Transformation, Lookup, Joiner, Router, Update Strategy, Union, and Sequence Generator, along with transformation-language features such as expressions, constants, system variables, and data format strings.
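An illustrative SQL*Loader control file of the kind described above; the file, table, and column names are hypothetical.

```sql
-- claims.ctl: illustrative control file for a comma-delimited load.
LOAD DATA
INFILE 'claims.csv'
BADFILE 'claims.bad'
APPEND
INTO TABLE stg_claims
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  claim_id,
  member_id,
  claim_dt   DATE "YYYY-MM-DD",
  claim_amt
)
-- Invoked from the shell, e.g.: sqlldr control=claims.ctl log=claims.log
```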