AWS RDS Engineer Resume
San Francisco, CA
SUMMARY
- 8 years of experience in data mining with large structured and unstructured datasets, data acquisition, data validation, predictive modeling, and data visualization.
- 2+ years of experience with Azure Data Factory, database management, and Azure data platform services (Azure Data Lake Storage (ADLS), Data Lake Analytics, Stream Analytics, Azure SQL DW, HDInsight/Databricks, etc.).
- 3+ years of experience in Amazon Web Services.
- Extensive experience in using Microsoft BI studio products like SSIS, SSAS, SSRS for implementation of ETL methodology in data extraction, transformation, and loading.
- Demonstrated history of working in the healthcare industry, with expertise in ICD-9, ICD-10, and HIPAA EDI 5010 implementation; member enrollment, billing, and claims processing; business process modeling; data modeling; data mapping; data warehousing; business intelligence; Pharmacy Benefit Management (PBM); and application development and enhancement.
- Experience with Oracle technologies including OBIEE, Hyperion Interactive Reporting, and E-Business Suite.
- Migrated data from MS SQL Server to Hadoop.
- Experience in managing Hadoop clusters using Cloudera Manager Tool.
- Experienced in installing and configuring SQL Server and in newer features such as Query Store, Always Encrypted, row-level security, Stretch Database, temporal tables, SQL Server Always On with round-robin load balancing, and enhanced In-Memory OLTP.
- Excellent at high-level design of ETL DTS and SSIS packages for integrating data over OLE DB connections from heterogeneous sources (Excel, CSV, Oracle, flat files, text-format data) using SSIS transformations such as Data Conversion, Conditional Split, Bulk Insert, Merge, and Union All.
- Expertise in Hive optimization techniques such as partitioning, bucketing, vectorization, map-side joins, bucket-map joins, skew joins, and index creation (see the sketch at the end of this summary).
- Experience in managing security of SQL Server 2019/2016/2014/2012/2008 R2 databases by creating database users and roles and assigning permissions according to business requirements.
- Extracted data from different sources such as Oracle tables, views, flat files, and Hyperion; implemented business logic; and loaded it into the target Oracle warehouse.
- Experience in report writing using SQL Server Reporting Services (SSRS) and creating various report types: drill-down, drill-through, parameterized, cascading, conditional, table, matrix, chart, and sub-reports.
- Experience in developing, monitoring, extracting, and transforming data using DTS/SSIS, the Import/Export Wizard, and Bulk Insert.
- Utilized Power BI (Power View) to create analytical dashboards depicting critical KPIs such as legal case matters, billing hours, and case proceedings, with slicers and dicers enabling end users to apply filters.
- Implemented the Power BI gateway for different sources to match analysis criteria and support future modeling in Power Pivot and Power View.
- Excellent knowledge in creating databases, tables, stored procedures, DDL/DML triggers, views, user-defined data types, functions, cursors, and indexes using T-SQL.
- Experience working with JNLP files, including modifying the Java scripts within them.
- Experience in data conversion and data migration using SSIS and DTS services across different databases such as Oracle, MS Access, and flat files.
- Experience in Performance Tuning and Query Optimization.
- Transferred data between servers using tools such as Bulk Copy Program (BCP), Data Transformation Services (DTS), and SQL Server Integration Services (SSIS).
- Experience in creating packages to transfer data between Oracle, MS Access, and flat files and SQL Server using DTS/SSIS.
- Experience in Birst BI data modeling (Admin), Live Access, reporting (Designer/Visualizer/Dashboards), and space management.
- Hands-on experience in creating data marts using star and snowflake schemas.
- Knowledge of scripting languages such as Python and Java, and strong knowledge of GCP.
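As a concrete illustration of the Hive optimization bullet above, here is a minimal sketch assuming a Spark session with Hive support; the tables (web_logs, staging_web_logs, web_logs_bucketed) and their columns are hypothetical placeholders, not artifacts of any engagement described below.

```python
# A minimal sketch of Hive partitioning and bucketing through PySpark.
# All table and column names here are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-partitioning-demo")
         .enableHiveSupport()
         .getOrCreate())

# Partitioning by event_date lets Hive prune whole directories when a
# query filters on that column.
spark.sql("""
    CREATE TABLE IF NOT EXISTS web_logs (
        user_id BIGINT,
        url     STRING,
        ts      TIMESTAMP
    )
    PARTITIONED BY (event_date STRING)
    STORED AS ORC
""")

# Dynamic-partition insert: each row is routed to its event_date partition.
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("""
    INSERT OVERWRITE TABLE web_logs PARTITION (event_date)
    SELECT user_id, url, ts, CAST(to_date(ts) AS STRING) AS event_date
    FROM staging_web_logs
""")

# Bucketed variant: CLUSTERED BY pre-hashes rows on the join key so Hive
# can use bucket-map joins; tables joined this way should share the same
# bucket column and count. Typically populated from Hive itself, since
# Spark's writer does not guarantee a Hive-compatible bucket layout.
spark.sql("""
    CREATE TABLE IF NOT EXISTS web_logs_bucketed (
        user_id BIGINT,
        url     STRING
    )
    CLUSTERED BY (user_id) INTO 32 BUCKETS
    STORED AS ORC
""")
```

Partition pruning and bucket-map joins are complementary: pruning cuts I/O on the scan side, while matching bucket layouts let Hive join without a full shuffle.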
PROFESSIONAL EXPERIENCE
Confidential, San Francisco, CA
AWS RDS Engineer
Responsibilities:
- Created and configured Oracle DB instance in AWS RDS and created Roles for databases.
- Created S3 buckets, managed their policies, and used S3 for storage and backup on AWS.
- Worked on Fine-Grained Auditing (FGA) for new RDS Oracle databases and created IAM policies for users.
- Developed a Lambda function script for copying audit files to S3 buckets (see the sketch after this list).
- Developed Lambda functions to compress and decompress gzip and zip files.
- Developed a shell script to enable/disable auditing on RDS Oracle tables across multiple databases.
- Processed JSON, Excel, CSV, and other data files from S3 buckets into Oracle.
- Created JSON tables to process data from JSON files into Oracle tables and worked with the CLOB datatype.
- Developed and configured dataflow pipelines between the MuleSoft environment and AWS RDS.
- Created Tables, Materialized Views, Functions, Packages, Directories, Indexes, Synonyms, and Database Links.
- Tested auditing by truncating tables in the Oracle database and validating the results.
- Provided support for High Availability Environment and monitored those databases for performance tuning.
- Worked on the Oracle LogMiner function and documented it on a Confluence page.
- Documented audit functions on GitHub.
- Published documentation of RDS capabilities and limitations to GitHub.
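The Lambda bullet above lends itself to a short example. Below is a minimal sketch of an S3-triggered copy function using boto3; the destination bucket and prefix are hypothetical placeholders.

```python
# A minimal sketch of a Lambda that copies newly landed audit files into an
# archive bucket. DEST_BUCKET and DEST_PREFIX are hypothetical placeholders.
import urllib.parse

import boto3

s3 = boto3.client("s3")

DEST_BUCKET = "audit-archive-bucket"
DEST_PREFIX = "rds-audit/"

def lambda_handler(event, context):
    # Invoked by an S3 ObjectCreated notification; copy each new object
    # into the archive bucket under the audit prefix.
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        src_key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=DEST_PREFIX + src_key,
            CopySource={"Bucket": src_bucket, "Key": src_key},
        )
    return {"copied": len(event["Records"])}
```

In practice the function would be wired to an S3 ObjectCreated notification on the source bucket, with an IAM role granting s3:GetObject on the source and s3:PutObject on the archive.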
Environment: AWS RDS, AWS S3, MuleSoft, Oracle PL/SQL, GitHub.
Confidential
Sr Data Engineer
Responsibilities:
- Involved in big data requirements analysis and in designing and developing solutions for ETL and business intelligence platforms.
- Demonstrable expertise in core IT processes, utilizing ETL tools to query, validate, and analyze data.
- Experience in development and design of various scalable systems using Hadoop technologies in various environments.
- Extensive experience in analyzing data using the Hadoop ecosystem, including HDFS, MapReduce, Spark, Hive, and Pig.
- Acted as SME for Data Warehouse related processes.
- Performed Data analysis for building Reporting Data Mart.
- Worked with Reporting developers to oversee the implementation of report/universe designs.
- Tuned performance of Informatica mappings and sessions for improving the process and making it efficient after eliminating bottlenecks.
- Created AWS infrastructure for Salesforce syncs to/from Redshift.
- Worked on AWS services including DynamoDB, Lambda, EMR, IAM, and S3, among others.
- Designed and deployed data pipelines using AWS services such as EMR, DynamoDB, Lambda, Glue, EC2, S3, RDS, EBS, Elastic Load Balancer (ELB), and Auto Scaling groups.
- Migrated an existing on-premises application to AWS, using services such as DynamoDB, EC2, and S3 for small-dataset processing and storage; experienced in maintaining the Hadoop cluster on AWS EMR.
- Used Spark SQL to load data, created schema RDDs on top of it, loaded the results into Hive tables, and managed structured data with Spark SQL.
- Worked on AWS EC2 instance creation, setting up the AWS VPC and launching EC2 instances in different private and public subnets based on the requirements of each application.
- Involved in converting HQL queries into Spark transformations using Spark RDDs, with support for Python and Scala.
- Expertise in Informatica Cloud apps: Data Synchronization (DS), Data Replication (DR), task flows, and mapping configurations.
- Worked on a migration project that moved webMethods code to Informatica Cloud.
- Installed Kafka on the Hadoop cluster and configured producers and consumers in Java to establish a connection from the source to HDFS, tracking popular hashtags.
- Loaded real-time data from various data sources into HDFS using Kafka.
- Worked on reading multiple data formats on HDFS using Python.
- Implemented Spark using Python (PySpark) and SparkSQL for faster testing and processing of data.
- Loaded the data into Spark RDDs and performed in-memory computation.
- Involved in converting Hive/SQL queries into Spark transformations using APIs such as Spark SQL, DataFrames, and Python.
- Analyzed the SQL scripts and designed solutions implemented in Python.
- Explored Spark by improving performance and optimizing existing Hadoop algorithms using Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
- Performed transformations, cleaning, and filtering on imported data using the Spark DataFrame API, Hive, and MapReduce, and loaded the final data into Hive.
- Involved in converting MapReduce programs into Spark transformations using Spark RDDs in Python.
- Developed Spark scripts using the Python shell (PySpark) as per requirements.
- Worked with NoSQL databases such as HBase, creating tables to load large sets of semi-structured data coming from source systems.
- Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
- Extracted real-time feeds using Kafka and Spark Streaming, converted them to RDDs, processed the data as DataFrames, and saved it in Parquet format in HDFS (see the sketch after this list).
- Designed and developed the HBase target schema.
- Used the Oozie workflow scheduler to manage Hadoop jobs with control flows.
- Visualized reports using Tableau.
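The Kafka/Spark Streaming bullet above maps onto the following minimal sketch. It uses Spark Structured Streaming rather than the older DStream API, and assumes the spark-sql-kafka connector package is on the classpath; the broker address, topic, schema, and HDFS paths are hypothetical placeholders.

```python
# A minimal sketch: consume a Kafka topic and persist it as Parquet on HDFS.
# Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-to-parquet").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Kafka delivers bytes; cast the value to a string and parse the JSON
# payload into typed columns before writing.
parsed = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/events")
         .option("checkpointLocation", "hdfs:///checkpoints/events")
         .start())
query.awaitTermination()
```

The checkpoint location is what gives the pipeline exactly-once file output: on restart, Spark replays only the Kafka offsets that were not yet committed.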
Environment: AWS, Hadoop, Apache Spark, HDFS, Java, MapReduce, Hive, HBase, Sqoop, SQL, SSIS, SSRS, Knox, Oozie, Cloudera Manager, Cloudera.
Confidential, Santa Clara, California
Data Engineer
Responsibilities:
- Installed, configured, monitored, and maintained the Hadoop cluster on a big data platform.
- Provided support for the further evolution of the data architecture for new and existing data warehouses.
- Created multiple Hive tables and implemented partitioning, dynamic partitioning, and bucketing in Hive for efficient data access.
- Performed end-to-end performance tuning of Hadoop clusters and Hadoop MapReduce routines against large datasets.
- Created user accounts and granted users access to the Hadoop cluster.
- Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
- Imported data into Hive and HBase from an existing SQL Server database using Sqoop.
- Responsible for installing, configuring, upgrading, and maintaining database instances.
- Performed database/infrastructure physical design & applied software patches to databases as per application/client requirements.
- Developed and implemented Hive scripts for transformations such as evaluation, filtering, and aggregation (see the sketch after this list).
- Participated in developing security and monitoring for data architecture.
- Migrated and upgraded from SQL Server.
- Implemented and managed database clustering, failover, and load-balancing technologies per client requirements.
- Handled operating system fine-tuning in coordination with server and network administrators to maximize resources for databases and applications.
- Provided 24x7 support.
- Monitored data activities such as database status, logs, space utilization, extents, Checkpoints, locks, and long transactions.
- Performed performance monitoring and tuning using the Automatic Workload Repository (AWR) and Automatic Database Diagnostic Monitor (ADDM).
- Maintained virtual machines and managed users and their permissions.
- Demonstrated technical leadership and influence outside the immediate team by contributing innovative solutions to complex problems; contributed to the team's strategic direction and mentored junior staff on database technical details.
- Created various database objects and created users with appropriate roles and levels of security.
- Scheduled full, differential, and transactional log backups for user created databases in production environments.
- Maintained Java-based applications and websites.
- Worked on JNLP files to create a database link between the on-premises database and the Birst Cloud application.
- Scripted, scheduled, and monitored several types of database backups and refreshes in use, including streaming backups.
- Responsible for troubleshooting all SQL Server databases issues on production environments.
- Responsible for setting up SQL Server jobs to monitor disk space, CPU activity and Index fragmentation.
- Developed new processes to facilitate import and normalization, including data files for counterparties.
- Wrote complex queries, stored procedures, batch scripts, triggers, indexes, and functions.
- Designed ETL packages dealing with different data sources (SQL Server, flat files, Excel files, etc.) and loaded the data into target data stores, performing various transformations using SQL Server Integration Services (SSIS).
- Integrated data from SQL Server databases into the Birst UI and Birst Cloud.
- Developed Power BI reports and dashboards from multiple data sources using data blending.
- Explored data in a variety of ways and across multiple visualizations using Power BI. Strategic expertise in design of experiments, data collection, analysis, and visualization.
- Designed Power BI data Visualization utilizing cross tabs, maps, scatter plots, pie, bar, and density charts.
- Performed advanced Birst data modeling, creating scripted sources, hierarchies, levels, variables, custom attributes and measures, and packages.
- Designed and developed Birst reports and dashboards in Designer/Visualizer/Dashboards 7.0, using report designs such as results views, pivot controls, sorting, filtering, drill behaviors, and charts.
- Developed reports in Birst Reporting tool.
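As an illustration of the Hive transformation scripts mentioned above, here is a minimal filtering-and-aggregation sketch, assuming Spark with Hive support; the claims table and its columns are hypothetical placeholders.

```python
# A minimal sketch of a Hive filtering/aggregation transformation run
# through Spark with Hive support. The claims table and its columns
# (claim_date, member_id, paid_amount) are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-aggregation-demo")
         .enableHiveSupport()
         .getOrCreate())

# Filter first (cheap, and prunes partitions if claim_date is the
# partition column), then aggregate per member.
monthly = spark.sql("""
    SELECT member_id,
           COUNT(*)         AS claim_count,
           SUM(paid_amount) AS total_paid
    FROM claims
    WHERE claim_date >= '2020-01-01' AND claim_date < '2020-02-01'
    GROUP BY member_id
    HAVING SUM(paid_amount) > 0
""")

monthly.write.mode("overwrite").saveAsTable("claims_monthly_summary")
```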
Environment: Hive, Business Objects, MS SQL Server 2019/2016, PL-SQL, Birst Enterprise Business Intelligence, SQL Server Integration Services (SSIS), Microsoft Visual Studio, Power BI, SSMS, MS Excel, SharePoint, T-SQL, JIRA.
Confidential, Norfolk, Virginia
MSBI Developer
Responsibilities:
- Involved in analyzing report design requirements; actively participated and interacted with the team lead, technical manager, business owners, stakeholders, and lead business analyst to understand the business requirements.
- Implemented complex business requirements in the backend using efficient stored procedures and flexible functions, facilitating easy integration with the front-end application (see the sketch after this list).
- Involved in converting Hive/SQL queries into Spark transformations using APIs such as Spark SQL, DataFrames, and Python.
- Worked on normalization and denormalization techniques for both OLTP and OLAP systems, creating database objects such as tables, constraints (primary key, foreign key, unique, default), and indexes.
- Worked on FACETS data tables and created audit reports using queries; manually loaded data in FACETS and have good knowledge of FACETS business rules.
- Developed Complex Stored Procedures, Views, and Temporary Tables as per the requirement.
- Excellent report creation skills using Microsoft SQL Server 2008/2012/2014/2016 Reporting Services (SSRS), with proficiency in Report Designer as well as Report Builder.
- Performed forward and backward data mapping between mainframe fields and FACETS.
- Expertise in defining the OBIEE repository, building the Physical, Business Model, and Presentation layers, creating hierarchies, and implementing data security.
- Migrated data from various sources (text-based files, Excel spreadsheets, Sybase, Oracle) to SQL Server databases using SQL Server Integration Services (SSIS).
- Performance monitoring and tuning with EXPLAIN PLAN, TKPROF, STATSPACK, SQL TRACE, 10046 trace, AWR, and ADDM.
- Created SSIS packages to load data from flat file (2 GB-4 GB) to flat file and from flat file to SQL Server 2012 using transformations including Lookup, Derived Column, and Conditional Split.
- Administered Hyperion IR servers, set up security, and created jobs.
- Developed Hyperion reports in the sales and manufacturing business areas.
- Transformed Hyperion IR reports into OBIEE dashboards before the client retired the product.
- Deployed and scheduled SSIS packages using SQL Server Agent for incremental loads of the data warehouse.
- Helped Scrum masters across the company customize JIRA to their requirements and move tasks from one activity to another.
- Implemented automated jobs for Database Backup on a weekly and monthly basis.
- Implemented complex conceptual database designs and architecture in SQL Server 2014/2012/2008 R2 using various constraints and triggers.
- Performed daily database administration tasks, including backup and recovery using RMAN and Import/Export.
- Loaded data into Oracle using SQL*Loader, took RMAN backups, exported/imported database objects and schemas, and enforced database accuracy and integrity.
- Increased query performance needed for statistical reporting by more than 25% through monitoring, tuning, and index optimization using Performance Monitor, Profiler, and the Index Tuning Wizard.
- Created Error and Performance reports on SSIS Packages, Jobs, Stored procedures, and Triggers.
- Experience in resolving ongoing maintenance issues and bug fixes; monitored Informatica sessions and performance-tuned mappings and sessions.
- Executed workflows using the Tidal scheduler.
- Used the Scrum Agile methodology (daily Scrum meetings, planning poker, sprint backlog, 1-on-1 meetings).
- Utilized Agile Scrum practices to help the team increase their velocity by 63% within the first year of agile adoption.
- Responsible for maintaining SharePoint platform and reporting any issues to the IT director.
- Created reports from the data warehouse using SSRS, including drill-down, drill-through, sub-reports, and charts.
- Created ad-hoc reports using SQL server 2014/2012/2008 R2 Reporting Services (SSRS).
- Deployed 2008 R2 Reporting Services (SSRS) RDL files to Reports server.
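The backend stored-procedure work above can be sketched from the calling side. Below is a minimal illustration using pyodbc against SQL Server; the connection string, procedure name (usp_GetClaimsByMember), and result columns are hypothetical placeholders.

```python
# A minimal sketch of calling a parameterized T-SQL stored procedure via
# pyodbc. The server, database, procedure, and columns are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db-host;DATABASE=ClaimsDW;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# The ODBC call escape binds parameters server-side, which avoids SQL
# injection and lets SQL Server reuse the procedure's cached plan.
cursor.execute("{CALL dbo.usp_GetClaimsByMember (?, ?)}", 12345, 2015)

for row in cursor.fetchall():
    print(row.ClaimId, row.PaidAmount)  # hypothetical result columns

cursor.close()
conn.close()
```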
Environment: Oracle, MS SQL Server 2014/2012/2008 R2, SQL Server Reporting Services 2014/2012(SSRS), Hyperion Interactive Reporting, PowerShell Scripting, SQL Server Integration Services 2014/2012(SSIS), Microsoft Visual Studio, SSMS, MS Excel, SharePoint, T-SQL, JIRA, Tidal.
Confidential
SQL Server Developer
Responsibilities:
- Maintain database dictionaries, monitor overall standards and procedures, and integrate enterprise systems through database design.
- Develop, review, and optimize queries for data retrieval and update.
- Developed Perl scripts for the application server.
- Wrote and developed shell scripts for application performance, system maintenance, and backup.
- Performed T-SQL programming, writing stored procedures, user-defined functions, triggers, cursors, and views on SQL Server 2008.
- Implemented the data access layer using the Entity Framework Code First approach.
- Installed and configured the IIS 4.0 web server and SQL Server 7.0, and hosted websites.
- Developed and created data dictionaries, tables, views, indexes, functions, and advanced queries for databases using Query Analyzer and Enterprise Manager.
- Developed necessary data access programs using JDBC and Data Express, and developed web pages using HTML and JavaScript.
- Used data access objects such as DataReader, DataSet, and DataAdapter for consistent access to SQL data sources.
- Created alerts, notifications, and emails for system errors, insufficient resources, fatal database errors, hardware errors, and security breaches.
- Configured securities and access to product databases in SQL Server 2008, Windows 2003 Server.
Environment: SQL Server 2008 Enterprise Edition, SQL Query Analyzer, T-SQL, JavaScript, C++, Microsoft IIS 4.0, VB 6.0, HTML, XML, Windows 2003, Perl, shell scripting.