Sr. SAP Solution Architect & Integration Platinum Consultant Resume
WI
TECHNICAL SKILLS:
Sr. SAP Solution Architect & Integrator of BW on HANA, S/4HANA, HADOOP, PAL (Predictive Algorithm Library) expert modeling via R; MDG 6.1 - 8.0 (profiling, enrichment, single version of the truth, including dark data) and 9.0* (*MDG100 Jan '17 class); Information Steward, BODS (Data Services), IntelliCorp LiveModel/LiveCompare, SAP Commodity Management (CPE), SAP ILM, ECM, TDMS; EIM - SDI, SDQ; AI-algorithm-driven data volume reduction via metrics at the row/columnar database ‘footprint’ level, typically 20 - 60+% pre/post HANA Go-Live; HANA In-Memory 50% threshold optimization techniques; data management options - Dynamic Tiering, NLS SQL IQ, SAP IQ, PBS-CL, HANA SDA with HADOOP integration of SAP VORA/SPARK, SLT; SAP BO Predictive Analytics (PAL) models designed in R & Python; BOTS, ML, DL, NLP, RPA (Robotic Process Automation) - Blue Prism; SAP ECC, BW, CRM, SRM and XI/PI (classical archiving); ILM - ADK enablement with Retention Management and decommissioning (structured, semi-structured, and unstructured data); TJC Software EU legal/compliance integration; DART; project management (20 - 50% PM assist depending on scope, budget, and skill sets being integrated); IBM ALM (Application Lifecycle Management) framework; deployment via SAP RDS (Rapid Deployment Solution), Agile, Waterfall, and S/4HANA using SAP Activate
PROFESSIONAL EXPERIENCE:
Confidential, WI
Sr. SAP Solution Architect & Integration Platinum Consultant
Responsibilities:
- As Sr. SAP Solution Architect & Integrator, re-architected Kohler's current SAP data management scope, approach, and strategy, creating a new set of robust, innovative DVM (Document Volume Management) standards and guidelines projected to reduce the overall database ‘footprint’ by as much as 20 - 60+%, based on AI algorithm metrics/measurements created during initial discovery of Kohler's 20-year-old production environment, in which archiving had been live since 1996.
- Re-designed the current data management scope in light of yearly database growth of 2.4+ TB and over 100 million documents left in ‘open’ status since the original 1996 Go-Live. Due to mergers, acquisitions, consolidation of companies, and explosive growth, the existing data management model had no real reduction impact or traction, so a new scope, approach, and strategy was designed with metrics/measurements to get Confidential below the 50% database baseline prior to, and in preparation for, the upcoming ‘HANA Readiness’ migration. Created a set of 3 options developed via an AI algorithm called ‘Process and Time’, a series of metrics/measurements that improved and introduced new, innovative ways to reduce data, create data consistency, remove duplication, re-engineer process flows, enable complete object-based automation, add over 35 - 50 new techno-functional objects, and reduce residence while increasing retention, following Confidential Leadership, business lead, and end-user sign-off. Managed retention via ILM precepts along with NLS SQL IQ, HADOOP, and PBS-CL options, with architecture and logic integration geared toward future ‘HANA Readiness’.
- Introduced Kbyte-level volume metrics through integration; established GB/TB size-reduction percentages per FYR vs. CC vs. PP vs. open/closed vs. OIM vs. incomplete vs. in-error vs. phantom data discovered; identified over 74 Z-table developments; ADK/REO statistical session purging; a forensic business process flow checker process and algorithm; and realignment of residence vs. retention vs. business rules vs. policy compliance vs. identification of ‘null/phantom’ data.
- Created/architected MDM/MDG data management governance around the client's ‘rules’ and ‘policies’ for master data, i.e., material, financial, and business partner, centered on a. de-duplication, b. data veracity, c. data consistencies/inconsistencies, and d. data volume as the primary drivers, creating the foundational paradigm of central governance as a current/future project of establishing a ‘single version of the truth’ with MDM/MDG techniques prior to future ‘HANA Readiness’.
- Created/developed a retirement strategy for the DART 2.7 tax extraction tool, replacing it with TJC Software RJCAEC (Audit Extraction Cockpit) and TJCFRGL (France Tax Audit Tool) for international audit compliance on a single production instance.
- Upgraded the ADK (Archive Development Kit), ARC-IS (Archive Information System), and archive-enabled (AE) SAP transaction codes to read data from ADK via OSS and custom ABAP/4 development, along with PBS-CL release APIs, C++ binary libraries, and connectors; designed a new EOL (‘End of Life’) model.
- MDM/MDG: incorporated industry practices for master data management and governance around the principles of data consistency, data cleanup, data de-duplication, and forensic data review of process flow for ‘open business documents and open item management G/L accounts’, with data volume management per industry best practices; covered the OIM (Object Integration Model) for the Material, Customer, Vendor, and Business Partner domains.
- Participated in, led, and presented workshops on data aging techniques for BW on HANA and on classical archiving for CRM 7.0 and ECC 6.05.
- Led design discussions on an integration strategy using PBS-CL (Content Link ‘light’ ECM) in parallel with several options in preparation for the future HANA migration, i.e., introduced options using NLS SQL IQ, SAP IQ, and HADOOP via MapReduce, YARN, and HDFS (Hadoop Distributed File System) from the top three HADOOP distributors, i.e., HDP (Hortonworks Data Platform), Cloudera, and MapR, along with PBS NLS SQL (columnar) options.
- Revised/recommended a global RRS (Records Retention Schedule) template and SLA (Service Level Agreement) for residence vs. retention vs. destruction-based data policy, replacing the antiquated RRS for SAP statutory business document compliance and business requirements.
- Re-designed the PBS-CL (Content Link) production edition for NLS (Nearline Storage), covering the ‘life’ of SAP business documents at the retention level and ‘EOL’ beyond total retention, with documents classified as either a. to be pruned/purged/destroyed or b. re-stored as Tier 3 ‘life documents’ kept forever on 3rd-tier commodity servers that are SAP ArchiveLink certified per the SAP PAM (Product Availability Matrix) listing. In the event a view is requested after the retention life cycle, this requirement is part of the overall design and integration, so the ‘cradle to grave’ concept is in play at this client for the ‘forever’ document classification.
- PMO/PM: project managed, reporting to Confidential Leadership via dashboard updates, daily scrum, and weekly status; managed a team of 10+.
Confidential, FL
Sr. SAP Solution Architect & Integration Platinum Consultant
Responsibilities:
- As Sr. SAP Solution Architect and Integrator, created a DVM (Document Volume Management) strategy covering scope and approach, with innovative techniques using new SAP data management strategies for both cold-store (‘aged’, rarely viewed) and warm-store (‘seldom viewed’) data, integrated with EMC InfoArchive solutions. EMC InfoArchive was selected because it is compliant with SAP HANA and S/4HANA throughout ALM (Application Lifecycle Management); in preparation for future ‘HANA Readiness’, created and designed a sustainable migration and integration path forward in anticipation of the S/4HANA in-memory database migration and architecture.
- Created a robust set of data management industry best practices for master data consistency and transactional data volume growth, aging, and destruction, based on an ILM (Information Lifecycle Management) tiered approach for SAP ECC 6.05 structured and unstructured data, with full retention management and decommissioning of data and/or its destruction via EOL (End of Life) policies defined, created, and signed off by Confidential Leadership and the business using the HP-QC UAT toolset.
- Designed metrics and measurements using AI/ML (machine learning) models and algorithms based on data volume vs. growth projection vs. codepage/Unicode vs. Kbyte size of header records, among other KPIs, to build a robust data management strategy around the various data structures and types, i.e., data veracity vs. de-duplication vs. data quality, data governance, and data volume of structured vs. unstructured data for transactional, control, and master data constructs. FI (Financials) at 3.4 TB and SD (Sales & Distribution) at 1.925 TB were growing at the fastest rate, and together with the other 8 core functional areas the metrics produced a 58.9% overall reduction from the 10.5 TB baseline to 4.8 TB in preparation for the future HANA / S/4HANA migration, based on the AI algorithm ‘Process and Time’ metrics built with custom C++ libraries and R/Python models converted into XLS.*. Provided 3 - 5 options for Confidential Leadership to choose from based on how aggressive they wanted to be in overall data volume management prior to the S/4HANA migration; as consultant, recommended the most aggressive option to achieve the desired results given new/additional plants going live and new business expansion.
- Transactional, control, and master data in SAP ECC 6.05 (OLTP), along with imaged (i.e., BLOB/binary) attachments such as invoices attached to financial accounting documents, were included in the overall scope, strategy, and architecture to establish feasibility of sizing actual vs. projected growth of database size and volume.
- Upgraded the ADK (Archive Development Kit), ARC-IS (Archive Information System), and archive-enabled (AE) SAP transaction codes to read data from ADK via OSS, custom-coded 15 FICO reports in ABAP/4, and built custom InfoStructures across every core functional module.
- Designed an ILM tiered architecture for the EOL (‘End of Life’) model beyond the retention life cycle. EOL documents are business documents that must remain for the life of the established legal entity; many laws per industry, sector, and client requirement also dictate what constitutes ‘life’ documents that are excluded from destruction and/or purging.
- PMO/PM: project managed, reporting to Confidential Leadership via dashboard updates, daily scrum, and weekly status for a team of 30+.
Confidential, San Diego, CA
Sr. SAP Solution Architect & Integrator Platinum Consultant
Responsibilities:
- As Sr. SAP Solution Architect & Integrator Platinum Consultant, project-managed a successful Go-Live of BW 7.4 on HANA SP12 with 90+ Confidential FTE, IBM, SAP, and HP resources and a budget of $9.5 - $11.5, using SAP RDS (Rapid Deployment Solution) and an Agile project deployment methodology.
- Project-managed and ‘owned’ the Sempra BW on HANA project, instrumental in its success by managing, integrating, and consolidating all teams and coordinating under a very tight window with a flux of technical H/W issues, delivery delays, and so on, while still delivering on schedule. Reported bi-weekly to the Confidential Steering Committee, weekly to its Directors, and daily with the entire project team via scrum/stand-up meetings for the life cycle of the project, in addition to performing budgeting, resourcing, and FTP (Financial Transaction Processing) work on accruals, billing, projections, overruns, funding, etc.
- Primary Responsibilities included:
- Supported/created the Request for Information, Quotation, and Proposal (RFI, RFQ, RFP) process; designed a scorecard via AI algorithms weighing pros vs. cons for each H/W, S/W, SI (system integrator), and vendor selected per the algorithms developed.
- H/W selection via the RFP process and SAP Quick Sizer: selected HP ConvergedSystem (CS) 500 HANA appliances in a scale-out architecture; performed setup (i.e., build, configuration, installation, and integration) of HANA SP09 on RHEL 6 for the database upgrade; consolidation of BW 3.5 - 7.0 into BW 7.4; Unicode upgrade; BOBJ 3.1 - 4.1 and Universe upgrades; clean-up and BW object remediation of DSOs/CUBEs; SAPLOGON upgrade via auto-script rollout; HP-QC critical UAT / business sign-off; and security-critical components for the HANA appliances/app servers in the data center.
- Supported data center integration for the HP CS500 appliance build-out (i.e., supported the HP technical/infrastructure team with the build, rack/stack, earthquake bracing and data center compliance, and network configuration to the HP CS500 frame); acted as liaison between the data center, HP, cabling and electrical contractors, and the rack/stack vendor, coordinating infrastructure with Basis, architecture, storage, and the system integrator(s) to deliver on time despite H/W failures and delays.
- Supported the Blueprint phase (WPDD: Work Product Detail Design) and development of the RTM (Requirements Traceability Matrix); finalized the conversion/migration of BW objects against requirements and created a detailed, systematic migration guide with checklists, including test scripts (i.e., test cases and end-to-end test scenarios).
- BW 7.01 to BW 7.4 migration via the DMO/SUM upgrade/migration procedure
- BW Upgrade, Unicode conversion, Migration of DB Oracle to HANA, Object Validation, Data load & Unit Testing
- Consolidation of BW 7.3 into BW 7.4 HANA
- Led HP-QC UAT (User Acceptance Testing) with business users, validating selected BW objects and reports based on ranking and priority
- BOBJ migration from 3.1 to 4.1, comprising 4 BOBJ migrations, i.e., Shadow Box #1, QA #2, PRD #3, and DEV #4
- Pre-Go-Live support into Hypercare: implementation of Z-Analyzer and DBA Cockpit, SAP Basis housekeeping introducing non-BW objects and processes, and comparison of BW structures/tables in PROD with remediation options.
- Responsibilities into post Go-Live: involved in the cut-over plan and Go/No-Go decision matrix, the Java stack upgrade, and the DB copy from Production to Sandbox. Supported issues during Hypercare with the IBM system integrator and the SAP Support organization in the live HANA Production system. No major issues post Go-Live; a successful BW on HANA integration.
Confidential, San Diego, CA
Sr. SAP Solution Architect & Integrator Platinum Consultant
Responsibilities:
- As Sr. SAP Solution Architect and Integrator, led the BW on HANA data aging/archiving project using NLS SQL IQ (cold store) and DT (Dynamic Tiering, warm store) displacement options to reduce the overall ‘footprint’ of the live HANA in-memory database by as much as 35.8%.
- H/W selection for the BW NLS SQL IQ optimization/data aging project; setup, installation, configuration, and integration of 3 commodity servers and their connectors from HANA SP09 to the SQL IQ (columnar) database for storing cold-store ‘aged’ data.
- Development and execution of a. assessment/deep dive, b. project plan, c. blueprinting, d. realization phase of H/W acquisition for the NLS (Nearline Storage) commodity servers, including installation, configuration, and integration, e. EIM - SDQ incorporation for data profiling, cleansing, enrichment, de-duplication, and record-type matching, f. creation and prototyping of the BW DAP (Data Archive Process) with full integration to SAP HANA, g. UAT via HP-QC for business-user test script/case sign-off on reports, and h. pre/post-cutover steps for i. the BW 7.4 Go-Live with NLS SQL IQ (Nearline Solution) along with DT (Dynamic Tiering), enabling full automation using process chains.
- Architected the BW DAP (Data Archive Process) using various InfoProviders, i.e., DSOs and CUBEs, demonstrating ‘time slice’ archiving with connectivity to NLS SQL IQ (Nearline Storage), considered cold store, while BW objects that still need to be viewed ‘as needed/requested’ were placed via Dynamic Tiering configuration techniques into warm-store secondary disk storage in a SQL IQ database. Warm data can be ‘pulled’ back, if activated, into the live HANA hot-store in-memory database for viewing and processing.
- Developed Dynamic Tiering using the data aging concept of ‘displacement’, i.e., removing data from the live ‘hot’ HANA database and placing it in secondary storage via Dynamic Tiering displacement configuration options (a hedged SQL sketch of this pattern appears after this list).
- The NLS data strategy and usage rests on the fact that the OLAP/reporting layer can be controlled by ‘Near-Line Storage’ settings found in BEx query properties, MultiProvider properties, and Cube/DSO properties. Recommended activating nearline access at the MP (MultiProvider) level by enabling the InfoProvider setting ‘Nearline access switched on’, instead of at the BEx report or DSO/CUBE level.
- Recommended/developed configuration sizing for the NLS (Nearline Storage) commodity server specification, with end-to-end integration options to reduce what is live in the HANA in-memory (hot-store) database, keeping Sempra well below the SAP-recommended 50% threshold of main memory and avoiding memory overflow issues/warnings (the sketch after this list includes a simple memory-threshold check).
- Validation performed on all BW DAP (Data Archive Process) event steps, i.e., #10 Initiate, #40 Write, #50 Verification Trigger, and #60 Delete, including the final ‘Restore’ step in case the Sempra BW project team ever needed BW objects already archived in NLS SQL IQ brought back into the live HANA in-memory database. This final integration process was also demonstrated and integrated as part of the rollout at Sempra via the BW DAP framework under Archiving Tab > Status > Reload, once deletion had completed successfully.
- Created technical and functional SBS (step-by-step) documentation on a. NLS SQL IQ installation, configuration, and integration, b. DT (Dynamic Tiering), c. displacement to secondary storage, d. creation/development of the TCSLA (Temperature Control Service Level Agreement), e. setting the NLS (Nearline) MP (MultiProvider) strategy on the defined InfoProviders at the MP level, f. lessons learned, and finally g. next steps for Phase 2 - new innovations/processes, e.g., partitioning BW objects.
- Project managed, reporting to Sempra sponsors/directors and leadership via daily scrum, budget, and resourcing updates, while also serving as lead Solution Architect and Integrator.
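A minimal SQL sketch of the warm-store displacement and 50% main-memory check described above, assuming a HANA system with the Dynamic Tiering (extended storage) option installed; the schema and table names (SAPBW, ZSALES_HIST, ZSALES_2012) are hypothetical placeholders, not Sempra objects.

    -- Create a warm-store table directly in extended storage (Dynamic Tiering)
    CREATE TABLE "SAPBW"."ZSALES_HIST" (
        "FISCPER" NVARCHAR(7),
        "DOC_NO"  NVARCHAR(10),
        "AMOUNT"  DECIMAL(15,2)
    ) USING EXTENDED STORAGE;

    -- Displace an existing hot (in-memory) table into extended storage
    ALTER TABLE "SAPBW"."ZSALES_2012" USING EXTENDED STORAGE;

    -- Rough check against the 50% main-memory guideline, per host
    SELECT HOST,
           ROUND(INSTANCE_TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 1)           AS USED_GB,
           ROUND(ALLOCATION_LIMIT / 1024 / 1024 / 1024, 1)                          AS LIMIT_GB,
           ROUND(100.0 * INSTANCE_TOTAL_MEMORY_USED_SIZE / ALLOCATION_LIMIT, 1)     AS PCT_USED
    FROM M_HOST_RESOURCE_UTILIZATION;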
Confidential, San Diego, CA
Sr. SAP Solution Architect & Integrator
Responsibilities:
- Set up the environment on HANA SP09/12 H/W: in the HANA SBX (Sandbox/copy of Dev), configured odbc.ini (DSN name: HDB; driver path: /hdbclient/libodbcHDB.so), the /etc/hosts files, customer.sh, host IP r3dbxxx.client.com / 10.192.xxx.xx, schema, Hive UID/password, and the Simba Shark driver path //sharkodbc/lib/64/libsimbasharkodbc.so.
- Set up the environment on 5 VM (virtual machine) HADOOP servers, i.e., odbcinst.ini ODBC drivers (Simba Hive ODBC Driver 64-bit=Installed; driver path /Hana/shared/SS1/hdbclient/libodbcHDB.so), ODBC data sources (a. myodbc3 = MyODBC 3.51, b. HDB = SAP HANA driver DSN), simba.hiveodbc.ini (LogLevel=0, DriverManagerEncoding=UTF-16, ODBCInstLib=libodbcinst.so), and simbashark.odbc.ini (LogLevel=0, ErrorMessagesPath=/my odbc drivers/simba/hiveodbc/ErrorMessages/, ODBCInstLib=libodbcinst.so).
- Created an SDA (Smart Data Access) remote connection (name: SPARK03) in HANA to SPARK (see the SQL sketch after this list).
- HIVE Connection Validation
- /usr/sap/ss1/home> isql HDB system Hanasbx
- SQL> select * from BOBJ.BOBJ_USERS (Note: the isql connection to the HANA SBX (Sandbox) returned 21,406 records from content schema/table BOBJ_USERS; validated in HANA Studio)
- SPARK Connection Validation
- /usr/sap/ss1/home> isql -v SPARK
- SQL> Note: the SQL prompt is displayed and the same sequence of checks can be performed, this time validating SPARK content from HANA, with validation on the HADOOP side at the HDFS (Hadoop Distributed File System) level; this was demonstrated to Sempra technical and BW functional HADOOP/BW developers and the business.
- Developed and architected HADOOP ingestion and retrieval processes using various techniques, for example:
- Created, in Data Services, Hive, Pig, or MR (MapReduce) jobs that allow ‘ingestion’ from a HANA schema/table down to HADOOP, for example via Hive jobs. In the example used, Data Services converted the job into Hive SQL commands and ran it against the HADOOP cluster as a MapReduce job (via UDFs), so the entire job could be ‘pushed down’ to the HADOOP cluster at the HDFS level (see the HiveQL sketch after this list).
- Integrated PIG, HDFS, and MapReduce as a prototype: the Hive, HDFS, and Pig integrations all eventually get converted to MapReduce jobs and executed in the cluster. In addition, used the ‘Text Data Processing’ transform within Data Services, which is likewise pushed down to MapReduce jobs based on several factors established during the prototyping phase, primarily whether the job's source and target both reside in Hadoop.
- Prototyped SAP Lumira with the Hive JDBC connector to read from HADOOP, and used SAP BO (Business Objects) and Crystal Reports to read data from HADOOP HDFS.
- Created/developed a workshop for the BW project team and the HADOOP developers/admins covering installation, configuration, and integration, along with ingestion and extraction using HADOOP tools, i.e., HANA SDA (Smart Data Access) creating VT (virtual tables) in HANA, Spark, Hive, Sqoop, Pig scripting, HDFS, MapReduce, etc.
- Project managed, reporting to the Sempra project lead & Sr. BI Analytics lead with weekly scrum/stand-up project status reports, while also serving as Sr. Solution Architect and primary integrator.
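A minimal SQL sketch of the SDA remote source and virtual table pattern referenced above, assuming the unixODBC DSNs (HDB, SPARK) from the odbc.ini setup described earlier; the remote source name SPARK03 comes from the project, while the Hive credentials, the target schema SEMPRA_STG, and the Hive table default.bobj_users are hypothetical placeholders.

    -- Register the Spark/Hive endpoint as an SDA remote source over ODBC
    -- (adapter and DSN follow the odbc.ini setup described above)
    CREATE REMOTE SOURCE "SPARK03" ADAPTER "hiveodbc"
      CONFIGURATION 'DSN=SPARK'
      WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hive;password=********';

    -- Expose a Hive/HDFS table in HANA as a virtual table
    CREATE VIRTUAL TABLE "SEMPRA_STG"."VT_BOBJ_USERS"
      AT "SPARK03"."<NULL>"."default"."bobj_users";

    -- Query the virtual table from HANA like any local table
    SELECT COUNT(*) FROM "SEMPRA_STG"."VT_BOBJ_USERS";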
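The Data Services push-down described above ultimately generates HiveQL that runs as MapReduce on the cluster; a hand-written equivalent might look like the following sketch. The database, table, and HDFS path names (sempra_stage, sales_raw, sales_hist, /data/staging/sales) are hypothetical placeholders, not objects from the project.

    -- External table over files already landed in HDFS (the ingestion target)
    CREATE EXTERNAL TABLE IF NOT EXISTS sempra_stage.sales_raw (
      doc_no  STRING,
      fiscper STRING,
      amount  DECIMAL(15,2)
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/data/staging/sales';

    -- INSERT ... SELECT is compiled by Hive into MapReduce jobs and executed
    -- entirely on the cluster (assumes sempra_stage.sales_hist already exists)
    INSERT INTO TABLE sempra_stage.sales_hist
    SELECT doc_no, fiscper, SUM(amount) AS amount
    FROM sempra_stage.sales_raw
    GROUP BY doc_no, fiscper;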