
Computer Systems Architect, Senior Api / Python Developer Resume


Irving, Texas

SUMMARY

  • 8+ years of IT experience with expertise in Application Development, Maintenance/Support and Incident/Problem Management, with strong analytical skills and extensive knowledge of the Banking and Financial Services domain.
  • Worked as part of the Technology team for the Discretionary Strategy Trading Team, Trading Front Office Systems, Post-Trade Clearing Systems, Derivatives Back Office Platforms and Regulatory Reporting Systems
  • Experienced in all aspects of technology projects including Business Requirements, Technical Architecture, Design Specification, Development and Deployment
  • Experienced in Application Development using Python, RDBMS, Linux shell scripting and Performance Tuning
  • Expertise in SDLC models and agile methods (Scrum, Extreme Programming); ticketing systems using JIRA and ServiceNow.
  • Proficient in developing complex SQL queries, Stored Procedures, Functions, Packages along with performing DDL and DML operations on the database
  • Worked on various Operating Systems such as UNIX (Sun Solaris), Linux and Windows.
  • Good knowledge of version control software - SVN, GIT
  • Proficient with editors such as PyCharm, Atom and Notepad++ while developing different applications
  • Proficient in automating manual tasks using Shell Scripting / Python
  • Good experience in debugging issues using pdb and ipdb.
  • Familiarity with development best practices such as Code Reviews, Unit Testing, System Integration Testing (SIT) and User Acceptance Testing (UAT)
  • Experience of Build & Deployment phase and usage of Continuous Integration (CI/CD) tools
  • Performed code reviews and implemented Pythonic best practices.
  • Excellent communication, interpersonal and analytical skills and a highly motivated team player with the ability to work independently
  • Hands-on experience in data processing automation using Python
  • Good Knowledge in Object Oriented Concepts, Data Structures and Design patterns

TECHNICAL SKILLS

Cloud: Microsoft Azure

Languages: Python, Perl, SQL, PL/SQL, Apache Spark (PySpark).

Scripting: Shell Scripting, Python, Jupyter Notebooks

Scheduling Tools: Control-M, Autosys, Teevra, Crontab, BatchWeb (proprietary), Kubernetes

Databases: Oracle, Sybase, MS SQL Server, MongoDB, Cosmos DB.

IDEs: PyCharm, Atom

Version Control: Git, SVN, VSTS

DB Query Tools: Oracle SQL Developer, Sybase Central, Rapid SQL, Interactive SQL, MS SQL Server Management Studio, Robo 3T.

Ticketing Tools: ServiceNow, PAC, DESFLOW (proprietary), Remedy.

Monitoring Tools: Geneos-ITRS, Triage, IR-360, HP-Sitescope, Azure Log Analytics, SonarQube.

Content Management: Confluence, SharePoint, Wiki

Operating System: MS-Windows XP/Vista/7; Unix/Linux/Solaris OS

Utilities: PuTTY, FileZilla, WinSCP, Beyond Compare

Cloud Tools: Azure Logic Apps, Function Apps, Azure Data Factory, Azure Data Lake, Blob Storage, Azure Container Registry, Azure Kubernetes Services, Logz.io.

Visualization Tools: ThoughtSpot

PROFESSIONAL EXPERIENCE

Confidential, Irving, Texas

Computer Systems Architect, Senior API / Python Developer

Responsibilities:

  • Perform POCs on niche technologies and implement pilot projects.
  • Architect the Technical Design for the projects and guide the team towards quality development.
  • Develop Python libraries and modules to enable faster code development and reduce code redundancy.
  • Collaborate with vendors on technical debt and drive issues to closure.
  • Conduct inter-team and intra-team code reviews.
  • Conduct meetings with Business Users to gather requirements and formulate them into technical roadmaps for the team.
  • Conducted weekly huddles with the API Development team to bridge the technical gaps and foster code quality.
  • Service Now - Real Time Data Integration:
  • Designed and developed the end-to-end framework for ServiceNow data integration with the Data Lake.
  • Developed Python code to make REST calls to the ServiceNow API via the requests module and store the responses as JSON files in Azure Data Lake using the Azure Python libraries (a minimal sketch of this pattern appears after this list). Developed the framework for storing watermarks and audit logs in MS SQL Server. The processing logs were pushed to Azure Blob Storage containers using the Azure Blob libraries. The entire Data Downloader framework was designed to run in Docker containers hosted on Azure Kubernetes Service.
  • Developed Python big data code using Apache Spark to persist the downloaded data from Azure Data Lake to Databricks Hive tables. The data for the Hive tables was stored as Parquet files.
  • Data is ingested in near real time from ServiceNow to the Data Lake.
  • ThoughtSpot - Store 360:
  • ThoughtSpot is an AI-based search, analytics and visualization tool that provides fast data insights and on-the-spot analysis. Implemented the Store-360 project on ThoughtSpot, connecting all data that can be linked back to the Confidential stores. The key objective is to understand each store better and thereby improve store operations and sales revenue. Data sources for a store included Sales, Inventory, Orders, Customer Feedback, Incidents, Weather and Events, among others.
  • Procured, instantiated and orchestrated the required infrastructure stack for the tool, such as Azure VMs, storage disks, network cards, load balancers, Azure Blob mounts, CNAME setup and SSL certificate installation.
  • Designed and developed a Python framework to ingest data from various sources as required by ThoughtSpot. The framework can ingest data from MS SQL Server using the pyodbc and pymssql libraries, connect to the Spark Hive tables through JDBC connections to Databricks, and connect to Azure Data Lake using the Azure Python libraries to ingest data residing in raw files.
  • Developed SQL for database and table creation within ThoughtSpot.
  • Implemented Single Sign On (SSO) for centralized user authentication via Azure Enterprise Applications along with LDAP Sync for User Access Management.
  • Prepared training materials and conducted in-person training sessions to familiarize the Business Users with the tool.
  • Designed and Automated Reports using the tool with specialization on Confidential Fuels.
  • Real Time Data Streaming: Mongo DB to Databricks:
  • The 7Now app's data is stored on AWS Mongo instances and needed to be integrated with the Data Lake. Developed the Python framework for real-time data ingestion from MongoDB's change stream watch cursor to the queuing system (Azure Event Hub); see the change-stream sketch after this list. A persistent copy of the data is stored in Azure Data Lake via the Azure Python APIs. An Apache Spark module was developed using the Spark Streaming APIs to consume the data from Event Hub and persist it into Spark tables.
  • POC: Databricks Hive Meta-Store:
  • Performed a POC on the metastore for Databricks, where the metadata for Apache Spark Hive tables was moved from the local Databricks workspace to a remote SQL Server. This enabled accessing the data from multiple compute/Databricks environments.
  • 7Now Delivery Data Integration:
  • Developed Python code to integrate the delivery (7Now) data residing on the MongoDB hosted on AWS. Data was pulled in and stored as Parquet files in Azure Data Lake via a Python program running on Azure Kubernetes Service. This data was further pushed down to Databricks Hive tables using the big data Python program.
  • Eventful Data Integration:
  • Eventful is a data vendor providing data related to events all over the world. The digital marketing team aims to use the Eventful data to manage inventory and roll out custom offers based on geo grabbing and customer insights.
  • Designed and Developed the Data Integration Framework for Eventful Data Integration with Enterprise Data.
  • Used Python requests module to make REST calls to Eventful APIs. The data was stored in JSON format in Azure Data Lake System using the Azure python libraries. The python program was configured to run on Kubernetes via Docker Containers.
  • Developed Python Big Data Code using Apache Spark to persist the data from JSON files to Hive Tables using pyspark libraries.
  • POC and Library Creation: Sendgrid Mailer
  • Performed a POC on the mailer service SendGrid and developed Python modules/libraries that are used by the team for mailing services. SendGrid's Python libraries were used here.
  • Mixpanel Data Ingestion
  • Mixpanel is a vendor capturing users' behaviour while they interact with the mobile apps (iOS/Android).
  • The App Development team makes use of these data to understand user interaction and design user retention strategies.
  • Designed and Developed Python Framework for Data Ingestion of Mixpanel Data into Enterprise Data’s Data Lake. REST calls are being made to Mixpanel APIs via python modules running on Azure Kubernetes which in turn stores data into Azure Data Lake.
  • Azure Logic Apps were used to design the Flow of Data Processing followed by Azure Data Factory for ingestion of data into Spark Hive Tables via Databricks.
  • POC: Dynamic Data Loader
  • One of the major challenges during the early stages of data integration with Enterprise Data was the varying Data Schema from Individual Sources. Designed the data ingestion Framework that had the flexibility to dynamically adapt to the Source Data Schema and Destination Data Schema by making the required alterations and enabling a smooth data load process.
  • This framework was developed in Apache Spark using Python and Spark Libraries.
  • DDL Generator:
  • Designed a utility to automatically parse through a raw data file and generate the DDLs to be run if a table had to be created for the file. This utility was created using Python and Apache Spark Libraries along with Databricks Dashboard.
  • This utility saved development efforts for the ETL developers.
  • POC: Azure Logic Apps
  • Performed POC and trained the team on the use of Azure Logic App for systematic Program Execution Flow.
  • Meta-Data Tables: Databricks
  • Unlike SQL Server and other databases, Databricks does not have system tables that store the metadata of its objects. Developed a Python module to parse through a given workspace, scan all the available databases and tables, and generate table metadata such as column names, partition names and table size (see the catalog-scan sketch after this list).
  • This was achieved by querying the details of the Hive tables and storing the results in dedicated metadata tables.
  • POC: Azure Function Apps
  • Performed POC and trained the team on the use of Azure Function Apps for systematic Program Execution Flow as a serverless service.
  • POC: Azure Data Factory:
  • Performed POC and trained the team on the use of Azure Data Factory for ingestion/movement of data from various sources to various Azure Solutions.
  • ADFs are being heavily used in case of Application Migrations.
  • Kochava Data Ingestion:
  • Kochava is a vendor capturing the Campaign Data for the various Confidential Apps. Integrated data streams such as Customer Influences and Impressions.
  • Designed and Developed the Data Ingestion Framework using Python to download data from Kochava via hosted APIs.
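
A minimal, hypothetical sketch of the ServiceNow download-and-land pattern described in the bullets above: it pages through the ServiceNow Table API with the requests module and writes the responses as JSON to Azure Data Lake Storage Gen2. The instance URL, credentials, table name, container and file path are placeholders, and the real framework also handled watermarks and audit logging.

```python
# Hypothetical sketch only: instance, credentials, table and paths are placeholders.
import json

import requests
from azure.storage.filedatalake import DataLakeServiceClient

SN_TABLE_URL = "https://example.service-now.com/api/now/table/incident"  # placeholder instance/table
ADLS_ACCOUNT_URL = "https://exampleaccount.dfs.core.windows.net"         # placeholder storage account


def download_records(user, password, batch_size=1000):
    """Page through the ServiceNow Table API and collect every record."""
    records, offset = [], 0
    while True:
        resp = requests.get(
            SN_TABLE_URL,
            auth=(user, password),
            params={"sysparm_limit": batch_size, "sysparm_offset": offset},
            headers={"Accept": "application/json"},
            timeout=60,
        )
        resp.raise_for_status()
        batch = resp.json().get("result", [])
        if not batch:
            break
        records.extend(batch)
        offset += batch_size
    return records


def land_in_datalake(records, account_key, path="servicenow/incident/latest.json"):
    """Write the records as one JSON file into a 'raw' file system in ADLS Gen2."""
    service = DataLakeServiceClient(account_url=ADLS_ACCOUNT_URL, credential=account_key)
    file_client = service.get_file_system_client("raw").get_file_client(path)
    file_client.upload_data(json.dumps(records), overwrite=True)
```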
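
The MongoDB-to-Event Hub ingestion for the 7Now data could look roughly like the following: a pymongo change stream is watched and each change document is forwarded to Azure Event Hubs. Connection strings, database, collection and hub names are placeholders; the real framework also persisted a copy to Azure Data Lake and was consumed downstream by Spark Streaming.

```python
# Hypothetical sketch only: connection strings and names are placeholders.
import json

from azure.eventhub import EventData, EventHubProducerClient
from pymongo import MongoClient

MONGO_URI = "mongodb://example-host:27017/"       # placeholder
EVENTHUB_CONN_STR = "Endpoint=sb://example/..."   # placeholder
EVENTHUB_NAME = "delivery-orders"                 # placeholder


def stream_changes_to_eventhub():
    """Forward every MongoDB change-stream event to Event Hubs as a JSON message."""
    collection = MongoClient(MONGO_URI)["delivery"]["orders"]
    producer = EventHubProducerClient.from_connection_string(
        EVENTHUB_CONN_STR, eventhub_name=EVENTHUB_NAME
    )
    # watch() opens a change stream; each document describes one insert/update/delete
    with producer, collection.watch(full_document="updateLookup") as stream:
        for change in stream:
            batch = producer.create_batch()
            batch.add(EventData(json.dumps(change, default=str)))
            producer.send_batch(batch)
```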
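
The Databricks metadata module mentioned above can be approximated with the Spark catalog API, as in this sketch. It assumes a running Spark session inside a Databricks job; the target metadata table name is a placeholder, and table-size collection is omitted for brevity.

```python
# Hypothetical sketch only: assumes a Databricks/Spark session; target table is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

rows = []
for db in spark.catalog.listDatabases():
    for tbl in spark.catalog.listTables(db.name):
        cols = spark.catalog.listColumns(tbl.name, dbName=db.name)
        rows.append((
            db.name,
            tbl.name,
            [c.name for c in cols],                   # column names
            [c.name for c in cols if c.isPartition],  # partition columns
        ))

# Persist the scanned metadata into a dedicated metadata table
meta_df = spark.createDataFrame(rows, ["database", "table", "columns", "partitions"])
meta_df.write.mode("overwrite").saveAsTable("ops_meta.table_metadata")  # placeholder target
```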

Confidential

Senior Python Developer

Responsibilities:

  • Involved in the complete Software Development Life Cycle including gathering Requirements, Analysis, Design, Implementation, Testing and Maintenance
  • Refactor Python modules to deliver Code reusability and scalability.
  • Utilized Python libraries such as Scrapy, Beautiful Soup, NumPy, Requests, urllib/urllib2, Pandas, Matplotlib and SQLAlchemy.
  • Wrote Python scripts to parse data files and load the data in database.
  • Created multiple Python Jupyter Notebooks to automate the day-to-day tasks carried out by the Tech Operations Team
  • Developed Python APIs to automate the execution of Jupyter Notebooks, pre-fetch the output/results and share them via nbviewer
  • Developed a python utility to log and analyze the usage of the production scripts.
  • Developed Python Scripts to scrape the data from various Public/Private Websites using modules such as Requests, Beautiful Soup
  • Used the Pandas API to arrange data as time series and in tabular format for easy data manipulation and retrieval
  • Implemented Pandas DataFrames to handle complex data such as positions and transactions (see the DataFrame sketch after this list)
  • Implemented Server Status monitoring, process monitoring using Dynatrace, ITRS, Sitescope, etc
  • Developed Python utilities to check logs and report any issues
  • Designed and configured database and backend applications and programs.
  • Implemented code in Python to retrieve and manipulate data.
  • Used SQL/PLSQL to perform complex database operations and generated various stored procedures, functions for DML operations.
  • Developed multiple python scripts for reporting purposes
  • Scraping of required data from various financial sites using vendor modules
  • Use Python unit and functional testing modules such as unittest, unittest2, mock and custom frameworks, in line with Agile software development methodologies
  • Automation of regular manual Tasks
  • Performing the production code release and Disaster Recovery Tests
  • Automating Data Monitoring Tasks for the ease of operations
  • Responsible for implementing the Continuous Integration (CI) and Continuous Delivery (CD) process using CI Tracker, along with shell scripts to automate routine jobs.
  • Regular fixing of breaking changes and test cases
  • Responsible for gathering requirements, system analysis, design, development, testing and deployment
  • Developed, tested and debugged software tools utilized by clients and internal customers
  • Created code repository and added the project to Git.
  • Worked on Segregation of software users for different uses/applications
  • Ensuring System Sanity and Availability
  • Ensuring Deliverables meet SLA.
  • Addressing all system alerts with priority.
  • Addressing User Queries/Issues on time with the required level of escalation
  • Keeping the request system acknowledged and clean at all times.
  • Working with teams to stabilize the Production Environment
  • Fixing Production Issues along with bug fixing of code
  • Setup and scheduling of batches for programs
  • Documentation of Issues/Processes/Services, maintaining an updated Knowledge base
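
As a rough illustration of the Pandas usage called out above (time-series indexing and handling positions/transactions), the sketch below builds a daily running-position view from a transactions file; the file name and column names are assumptions, not the actual production schema.

```python
# Illustrative sketch: file name and columns (trade_date, instrument, side, quantity) are assumed.
import pandas as pd

# Parse the trade timestamp and use it as a DatetimeIndex for time-series operations
transactions = pd.read_csv(
    "transactions.csv", parse_dates=["trade_date"], index_col="trade_date"
)

# Sign the quantity by trade direction so buys and sells net out
transactions["signed_qty"] = transactions["quantity"].where(
    transactions["side"] == "BUY", -transactions["quantity"]
)

# Daily net quantity per instrument, then a cumulative sum for the running position
daily_positions = transactions.pivot_table(
    index=pd.Grouper(freq="D"),
    columns="instrument",
    values="signed_qty",
    aggfunc="sum",
    fill_value=0,
).cumsum()

print(daily_positions.tail())
```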

Confidential

Python Developer

Responsibilities:

  • Automate various workflows that were previously initiated manually, using Python scripts and Unix shell scripting.
  • Worked on integrating Python 2.7 with web development tools and web services
  • Utilize PyUnit, the Python Unit test framework, for all Python applications.
  • Wrote python scripts to parse XML documents and load the data in database.
  • Create, activate and program in Anaconda environment.
  • Implemented and modified various SQL queries and Functions, Cursors and Triggers as per the client requirements.
  • Used the Pandas API to put the data in time series and tabular format for manipulation and retrieval of data
  • Used various IDE's like Eclipse, Jupyter/IPython Notebooks, IDLE and Notepad++ for Python developments
  • Troubleshot, fixed and deployed many Python bug fixes for the two main applications that were a primary source of data for both customers and the internal customer service team.
  • Write Python scripts to parse JSON documents and load the data in database.
  • Design and develop ETL APIs which will automate the data mining in different database sources.
  • Generating various capacity planning reports (graphical) using Python packages such as NumPy and Matplotlib.
  • Scraped websites using Beautiful Soup and then parsed the results with an XML parser (see the scraping sketch after this list)
  • High-level understanding of financial reporting tools and their core applications
  • Used Pandas for a data alignment and data manipulation
  • Worked on Python OpenStack APIs and used Numpy for Numerical analysis.
  • Autosys - Job Monitoring and Investigating Failures
  • Handling User Access
  • Resolving Trade Flow issues between Front Office and Back Office
  • Troubleshooting the Issues related to financial records, applications, data quality.
  • Used Python modules such as NumPy, Matplotlib and Pandas for statistical analysis and generating complex graphical data
  • Report generation and handling Autosys job failures
  • Web development, including standardizing the toolset, from PyCharm as the IDE to Git for source control.
  • Used Git repository for version control
  • Developing UNIX shell scripts for automation to reduce the regular manual effort
  • Created Python and Bash tools to increase efficiency of application system
  • Logging and providing timely updates in PAC
  • Handling major releases and migrations
  • Carrying out root cause analysis (RCA)
  • Maintaining and working on the Production Databases.
  • Providing enhancements and process improvements
  • Communication from upstream to downstream for the process flow
  • Creating wiki for production issues for references
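
A minimal example of the scraping flow referenced above might look like the following; the URL, table selector and output file are placeholders, since the actual sites and fields are not named here.

```python
# Hypothetical sketch: URL, selector and output path are placeholders.
import csv

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/market-data"  # placeholder site


def scrape_table(out_path="scraped_rows.csv"):
    """Fetch the page, pull rows out of its HTML tables and save them as CSV."""
    resp = requests.get(URL, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    rows = []
    for tr in soup.select("table tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all(["td", "th"])]
        if cells:
            rows.append(cells)

    with open(out_path, "w", newline="") as fh:
        csv.writer(fh).writerows(rows)


if __name__ == "__main__":
    scrape_table()
```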

Confidential

Software Developer

Responsibilities:

  • Worked on major bug fixes, which included UI issues and functionality issues as well
  • Worked on development of backend services using Python, SQL and Linux. Created many APIs for the project, which involved creating and maintaining a proprietary code base.
  • Wrote Python scripts to parse data files and load the data in database.
  • Use Python unit and functional testing modules.
  • Involved in development using Python, bug fixing and unit testing of the layout commands
  • Used Python modules such as Scrapy and Beautiful Soup to write web scraping scripts that scrape data from various financial institutions' websites.
  • Refactor Python modules to deliver Code reusability and scalability.
  • Created Python and Bash tools to increase efficiency of application system and also to implement server monitoring.
  • Issue Escalation and Detailed understanding of Application Process Flow.
  • Developing UNIX shell scripts for automation to reduce the regular manual effort
  • Version control using SVN
  • Created data access modules in python.
  • Troubleshooting the issues related to financial records, applications, data quality.
  • Resolving trade flow/cash flow issues.
  • Deployment of code to the QA and then to the Prod server is performed using a fabfile written in Python (see the deployment sketch after this list).
  • Monthly Status Report Generation.
  • Involved in building database Model, APIs and Views utilizing Python, in order to build an interactive web based solution
  • Automated the deployment using Shell scripting for build and release operations.
  • Performed Design, involved in code reviews and wrote unit tests in Python.
  • Managed datasets using Pandas DataFrames and Oracle SQL; queried the Oracle database from Python using the Python Oracle-SQL connector and Oracle-SQL DB package to retrieve information.
  • Maintaining time efforts logs details and issues resolution steps in Service Now tool.
  • Creating fix logs and knowledge articles to keep track of issues
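
The fabfile-based deployment mentioned above could be sketched along these lines using Fabric's Connection API; host names, paths and the service name are placeholders, and the original fabfile may have used the older fabric.api interface and additional release steps.

```python
# Hypothetical fabfile sketch: hosts, paths and service name are placeholders.
from fabric import Connection, task

QA_HOST = "qa.example.internal"      # placeholder
PROD_HOST = "prod.example.internal"  # placeholder
APP_DIR = "/opt/app"                 # placeholder


def _deploy(host, archive="release.tar.gz"):
    """Copy the build artifact to the target host, unpack it and restart the service."""
    conn = Connection(host)
    conn.put(archive, remote=f"{APP_DIR}/{archive}")
    conn.run(f"cd {APP_DIR} && tar xzf {archive}")
    conn.sudo("systemctl restart app.service")  # placeholder service name


@task
def deploy_qa(c):
    _deploy(QA_HOST)


@task
def deploy_prod(c):
    _deploy(PROD_HOST)
```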
