Apache NiFi Consultant Resume
Alexandria, VA
SUMMARY
- Over 10 years of extensive experience in data analysis with a strong background in databases and data warehousing, and more than 9 years of experience in data integration, migration, and loading (ETL) using Informatica PowerCenter 9.1/8.x/7.x/6.x, Apache NiFi, SSIS, and Confidential.
- Strong working experience with clients across the telecom, financial, banking, and web (domain space) industries.
- Experience in integrating various Operational Data Sources (ODS) with multiple relational databases such as Oracle, DB2, and SQL Server. Worked on integrating data from flat files, both fixed-width and delimited.
- Excellent analytical skills in understanding the client systems and Organizational Structure.
- Designed and architected ETL jobs for the DW team.
- Well acquainted with Informatica Designer Components - Source Analyzer, Warehouse Designer, Transformation Developer, Mapplet and Mapping Designer.
- Provided high-level optimization ideas to the ETL team to improve code performance, including multiprocessing.
- Possess excellent documentation skills, prepared best-practices documents, and have experience in data modeling and data modeling tools like Erwin.
- Experience in Oracle supplied packages, Dynamic SQL, Records and PL/SQL Tables.
- Experience using automation tools like Autosys and scheduling workflows with the CA7 scheduler.
- Experience in loading XML and JSON files into NoSQL databases such as MarkLogic and MongoDB using Apache NiFi 1.8/1.3.
- Consumed data from client APIs hosted on AWS, then ingested, transformed, and loaded it into the Enterprise Data Hub using Apache NiFi processors such as InvokeAWSGatewayApi.
- Extensively used the out-of-the-box Kafka processors in NiFi, which are built against the Kafka consumer API, to consume data from Apache Kafka (a conceptual command-line equivalent is sketched after this list).
- Hands on experience in Performance Tuning of sources, targets, transformations and sessions.
- Strong knowledge of data warehouse architecture and of designing star schemas, snowflake schemas, fact and dimension tables, and physical and logical data models using Erwin.
- Strong knowledge in optimizing database performance in Oracle, Postgres, Netezza, and DB2.
- Good knowledge in UNIX shell scripting, cron, and file management in various UNIX environments.
- Partitioned fact tables and materialized views to enhance performance.
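The ingestion flows above were built with NiFi processors in the NiFi UI; purely as an illustration, the shell sketch below shows a conceptually equivalent pull from an API-Gateway-fronted client API and from a Kafka topic. All endpoint, topic, broker, and key names are hypothetical placeholders.

```sh
# Hypothetical endpoint, topic, and broker names; the actual ingestion used
# NiFi processors (InvokeAWSGatewayApi, ConsumeKafka) rather than these CLIs.

# Pull a batch of records from a client API fronted by AWS API Gateway
curl -s -H "x-api-key: $API_KEY" \
  "https://example.execute-api.us-east-1.amazonaws.com/prod/records" \
  -o records.json

# Read a Kafka topic with the console consumer (built on the Kafka consumer API)
kafka-console-consumer.sh \
  --bootstrap-server broker1:9092 \
  --topic source-events \
  --group edh-ingest \
  --from-beginning
```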
TECHNICAL SKILLS
- ETL Tools: Informatica 9.x/8.x/7.x/6.x/5.x (PowerCenter/PowerMart/PowerExchange: Designer, Workflow Manager, Workflow Monitor, Informatica Data Quality (Informatica Developer), PowerConnect, MDM), Apache NiFi 1.9/1.7, Apache Kafka, Talend, SSIS
- Data Modeling: Erwin 4.0/3.5, Star Schema Modeling, Snowflake Modeling, MS Visio, Rational Rose Suite (UML)
- Databases: Oracle 11g/10g/9i/8i, MS SQL Server 2005/2000, Netezza, DB2, Teradata V2R5, MySQL, Sybase, Big Data, Hadoop, HDFS, MapReduce, DB2/400, AWS
- Reporting Tools: MicroStrategy 8.0/8.1, OBIEE, SSRS, SAP BO Reports
- Languages: T-SQL, PL/SQL, Unix Shell Script, Perl, Visual Basic, XML, C, and Java
- DB Tools: Oracle SQL Developer, DB Visualizer, Toad, SQL*Loader, Autosys, SQL Navigator
- Operating Systems: Windows 2003/2000/XP/NT, UNIX, Linux, AIX, Sun Solaris
- Automation Tools: Autosys, CA7 scheduler
- Project Methodologies: Waterfall, Agile/Scrum
PROFESSIONAL EXPERIENCE
Confidential, Alexandria, VA.
Apache NiFi Consultant
Responsibilities:
- Involved in meetings with business stakeholders to understand the requirements and provide solutions from an architecture standpoint.
- Developed process groups according to subject areas at the enterprise level.
- Involved in identifying the correct processors and their corresponding controller services to meet the technical requirements.
- Worked with processors such as GetFile, PutFile, ExecuteSQL, SplitRecord, and other complex processors within a process group.
- Involved in loading files and converting them to their respective JSON files.
- Converted SQL output to JSON files and loaded them to an interface where the front-end application web services use them as input files.
- Utilized Kafka and NiFi to bring real-time streaming data into one of the source systems for EPM.
- Converted CSV input files to JSON using SplitRecord and then loaded them into the MarkLogic database.
- Involved in accepting different formats of flat files generated from PostgreSQL and Oracle databases.
- Responsible for invoking remote HTTPS endpoints on AWS using NiFi processors such as InvokeAWSGatewayApi.
- Responsible for loading JSON and XML datasets into the MarkLogic and MongoDB document databases using PutMarkLogic, PutMongo, and PutMongoRecord, with the help of other basic processors within a process group (a conceptual command-line equivalent of this flow is sketched below).
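As a rough sketch of the CSV-to-JSON-to-document-store flow described above (the production pipeline used SplitRecord, PutMongoRecord, and PutMarkLogic; the file names, database names, hosts, and credentials below are hypothetical):

```sh
# Hypothetical names throughout; mirrors the CSV -> JSON -> document-store path
# that the NiFi flow implemented with SplitRecord, PutMongoRecord, PutMarkLogic.

# Convert a delimited input file into a JSON array of records
python3 -c 'import csv, json, sys; print(json.dumps(list(csv.DictReader(sys.stdin))))' \
  < accounts.csv > accounts.json

# Load the JSON array into a MongoDB collection
mongoimport --host localhost --db edh --collection accounts \
  --file accounts.json --jsonArray

# Load the same JSON file into MarkLogic as a document through its REST API
curl -s -X PUT --digest -u "$ML_USER:$ML_PASS" \
  -H "Content-Type: application/json" -d @accounts.json \
  "http://localhost:8000/v1/documents?uri=/accounts/accounts.json"
```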
Environment: Apache NiFi 1.8/1.3, Power BI, MarkLogic 9.0-8, Version One, DB Visualizer, Linux, MongoDB 4, Oracle 11g, AWS (EC2), DBeaver.
Confidential, Elkridge, MD.
Informatica /Apache NiFi Consultant
Responsibilities:
- Involved in business meetings to understand and analyze the Apache NiFi process group requirements.
- Developed process groups according to subject areas at the enterprise level.
- Involved in identifying the correct processors and their corresponding controller services to meet the technical requirements.
- Worked with processors such as GetFile, PutFile, ExecuteSQL, SplitRecord, and other complex processors within a process group.
- Involved in loading files and converting them to their respective JSON files.
- Converted SQL output to JSON files and loaded them to an interface where the front-end application web services use them as input files.
- Converted CSV input files to JSON using SplitRecord and then loaded them into the MarkLogic database.
- Implemented error handling and data inventory mechanisms successfully.
- Implemented a solution to integrate NiFi 1.9.2 with an LDAP server and generate the certificates needed to secure the instance.
- Involved in accepting different formats of flat files generated from PostgreSQL and Oracle databases.
- Profiled the source data to understand the ratio of bad data to good data (a sample profiling query is sketched below).
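A minimal example of the kind of profiling query this involved, assuming a hypothetical PostgreSQL staging table and columns (the actual profiling was driven by IDQ and ad hoc SQL):

```sh
# Hypothetical table and column names; counts staged rows that fail basic rules
# so the bad-to-good ratio can be reported before the load.
psql -h pg-staging -d edw -c "
  SELECT COUNT(*) FILTER (WHERE customer_id IS NULL OR email NOT LIKE '%@%') AS bad_rows,
         COUNT(*) AS total_rows
  FROM stg_customer;"
```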
Environment: Informatica (IDQ), Apache NiFi 1.8/1.3, Version One, IBM Data Studio, DB Visualizer, UNIX, PostgreSQL, MarkLogic 9.0-8, MongoDB 4, Oracle 11g.
Confidential, McLean, VA.
Confidential Developer
Responsibilities:
- Involved in understanding the data model changes, which consist of database object definitions and additional supporting scripts.
- Responsible for understanding the ETL dependencies from source files to target reporting tables.
- Involved in building the T2Ts that load data from sources to targets.
- Responsible for unit testing the T2T results in the DEV environment and for migrating code to the QA environment.
- Involved in configuring the batches, runs, and rules in Confidential.
- Met deadlines within each sprint of the Agile methodology.
- Tuned the performance of Informatica sessions by increasing block size, data cache size, sequence buffer length, and target-based commit interval, and tuned mappings by dropping and recreating indexes (a sketch of the index step is shown below).
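The session-level settings above are adjusted in Workflow Manager; the index handling can be scripted as pre- and post-load steps around the session. A minimal sketch with hypothetical index and table names:

```sh
# Hypothetical index and table names; the index is dropped before the Informatica
# session runs and recreated afterwards so the bulk load avoids index maintenance.

# Pre-load step
sqlplus -s "$ORA_USER/$ORA_PASS@ORCL" <<'SQL'
DROP INDEX idx_fact_balances_acct;
EXIT;
SQL

# ... Informatica session loads FACT_BALANCES here ...

# Post-load step
sqlplus -s "$ORA_USER/$ORA_PASS@ORCL" <<'SQL'
CREATE INDEX idx_fact_balances_acct ON fact_balances (account_id);
EXIT;
SQL
```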
Environment: Confidential, Oracle 11g, SQL, PL/SQL, Oracle SQL Developer, UNIX, Version One.
Confidential
Sr. ETL Informatica Developer
Responsibilities:
- Involved in data analysis for business users' requests and in providing solutions.
- Developed ETL Informatica mappings per the business requirements.
- Involved in small projects that require minor enhancements to the code.
- Involved in fixing data issues in the AUM warehouse.
- Maintained data in Oracle and DB2 databases to avoid inconsistencies in reporting.
- Supported system development efforts including database analysis, design, performance tuning and migrations.
- Responsible for documenting at a functional level how the procedures work within the data quality applications.
- Used the Debugger to validate transformations by creating breakpoints to analyze and monitor data flow.
- Tuned the performance of Informatica sessions by increasing block size, data cache size, sequence buffer length, and target-based commit interval, and tuned mappings by dropping and recreating indexes.
- The primary tasks were to use Informatica Data Quality (IDQ) to profile the project source data, define or confirm the definition of the metadata, cleanse and validate the project data, check for duplicate or redundant records, and provide data-consuming parties with concrete proposals on how to proceed with solution development.
- Involved in implementing the Land process of loading the customer/product data set into MDM from various source systems using NiFi.
Environment: Informatica 9.x, MDM, Informatica Data Quality, Informatica Power Exchange, Oracle 11g, DB2, Mainframes, SQL, PL/SQL, DBVisualizer, Business Objects, HP Quality Center, Autosys, BMC Remedy, Linux, UNIX, Hadoop, Apache NiFi