Machine Learning Engineer, Full Stack Application Developer Resume
Los Angeles, CA
TECHNICAL SKILLS:
Stress resistance, team building, customer service, time management, JavaScript (1 year), Python (2 years), R, data science (1.5 years), Tableau, MySQL, C++, machine learning, data analysis, system administration, databases (SQL, MongoDB), Confidential (S3, Lambda, SNS, CloudWatch, SQS, DynamoDB), Spark NLP, Scala, Confidential EMR, React, React Hooks, custom Hooks, serverless full-stack application development, SQL database automation using Python, heavily cloud oriented, strong computer networking background, Confidential certified
PROFESSIONAL EXPERIENCE:
Confidential,
Machine Learning Engineer, Full Stack Application Developer
Responsibilities:
- Constructed an ML multi-label classification model with the Spark NLP library; classified 21 million FCC comments with the classifier I built (a pipeline sketch follows this list).
- Performed EDA on the FCC comment data on Confidential EMR using Scala (spark-shell) as well as PySpark.
- Researched NLP and text classification approaches using Spark NLP.
- Developed a full-stack internal data catalog for the FCC on a 100% serverless modern stack: React, JavaScript, S3, Confidential DynamoDB, Node.js, API Gateway, CloudWatch, and more.
- Worked on single sign-on for an Azure Power BI application utilizing Okta integration.
- Built an automation script in Python that enabled our ETL team to automatically create SQL tables from Excel spreadsheets each time new spreadsheets were generated with fresh data (a sketch follows this list).
- Constructed an ML linear regression model for real estate price prediction.
- Performed exploratory data analysis on numerical as well as categorical variables.
- Applied feature engineering techniques to reduce the number of features (dimensions) of the dataset.
- Implemented text preprocessing for the sentiment analysis project.
- Developed a robust logistic classification model for natural language processing using the scikit-learn library.
- Performed research on a Yelp Reviews business case implementation.
- Made a Flask app to serve the deep learning model (a serving sketch follows this list).
- Built a deep learning model for computer vision using convolutional neural networks.
- Deployed the deep learning image classification model using TensorFlow Extended (TFX).
- Established a custom Virtual Private Cloud on Confidential.
- Used CI/CD tools like Jenkins.
- Worked with Hadoop distributed computing, specifically using the PySpark library to manipulate data during the data preprocessing phase.
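A minimal sketch of the Spark NLP multi-label pipeline described above. The S3 path, column names, and training settings are placeholders for illustration, not the actual FCC project configuration.

    from pyspark.ml import Pipeline
    import sparknlp
    from sparknlp.base import DocumentAssembler
    from sparknlp.annotator import UniversalSentenceEncoder, MultiClassifierDLApproach

    spark = sparknlp.start()

    # Hypothetical input: a DataFrame with a free-text "text" column and an
    # array-of-strings "labels" column (placeholder names, not the real schema).
    comments = spark.read.parquet("s3://example-bucket/fcc_comments/")  # placeholder path

    document = DocumentAssembler().setInputCol("text").setOutputCol("document")
    embeddings = (UniversalSentenceEncoder.pretrained()
                  .setInputCols(["document"])
                  .setOutputCol("sentence_embeddings"))
    classifier = (MultiClassifierDLApproach()
                  .setInputCols(["sentence_embeddings"])
                  .setOutputCol("category")
                  .setLabelColumn("labels")
                  .setMaxEpochs(10))

    pipeline = Pipeline(stages=[document, embeddings, classifier])
    model = pipeline.fit(comments)           # train on the labeled subset
    predictions = model.transform(comments)  # score the full comment set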
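The Excel-to-SQL automation could look roughly like the following; the connection string, spreadsheet directory, and table-naming rule are assumptions for illustration.

    from pathlib import Path
    import pandas as pd
    from sqlalchemy import create_engine

    # Placeholder connection string and spreadsheet directory; the real values
    # were environment-specific.
    engine = create_engine("mysql+pymysql://user:password@host/etl_db")

    for path in Path("/data/incoming").glob("*.xlsx"):
        df = pd.read_excel(path)            # load the fresh spreadsheet
        table = path.stem                   # name the table after the file
        # Create (or replace) the SQL table with the spreadsheet contents.
        df.to_sql(table, engine, if_exists="replace", index=False)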
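A minimal sketch of the Flask app serving the deep learning model, assuming a Keras model artifact and a JSON image payload; the endpoint, file name, and payload shape are placeholders.

    from flask import Flask, jsonify, request
    import numpy as np
    from tensorflow import keras

    app = Flask(__name__)
    # Placeholder model path; the actual artifact came from the CNN training step.
    model = keras.models.load_model("model.h5")

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON body like {"image": [[...], ...]} with pixel values.
        pixels = np.array(request.get_json()["image"], dtype="float32")
        batch = np.expand_dims(pixels, axis=0)   # add a batch dimension
        probs = model.predict(batch)[0]
        return jsonify({"class": int(np.argmax(probs)),
                        "probabilities": probs.tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)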
Confidential, Los Angeles, CA
Python Developer
Responsibilities:
- Responsible for designing, developing, and implementing efficient information systems and operations systems in support of core enterprise functions.
- Used Python libraries such as pandas and NumPy with Watson Natural Language Understanding to analyze customer satisfaction.
- Implemented topic modeling on customer ticket descriptions from IMS and ServiceNow using topic-modeling libraries (a sketch follows this list).
- Migrated MapReduce programs to Spark transformations using Spark and Scala; the initial versions were written in Python (PySpark).
- Using Chef, deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics, full-text search, and application monitoring, integrated with Confidential Lambda and CloudWatch. Built an ELK setup to store logs and metrics in an S3 bucket using a Lambda function.
- Created test cases to provision resources in the cloud and monitor them to detect and resolve problems in the pre-production environment before deploying to production.
- Monitored logs and generated visual representations of them using the ELK stack as part of the CI/CD tooling.
- Wrote Python scripts using Boto3 to automatically spin up instances in Confidential EC2 and OpsWorks stacks, and integrated with Auto Scaling to automatically launch servers from configured AMIs (a sketch follows this list).
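As one possible implementation of the topic-modeling bullet above, here is a small sketch using scikit-learn's LatentDirichletAllocation; the ticket texts, topic count, and vocabulary size are placeholders, and the actual project may have used a different topic-modeling library.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Hypothetical ticket descriptions pulled from IMS / ServiceNow.
    descriptions = [
        "VPN keeps disconnecting after the latest update",
        "Password reset request for new contractor",
        "Email client crashes when opening large attachments",
    ]

    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    doc_term = vectorizer.fit_transform(descriptions)

    lda = LatentDirichletAllocation(n_components=2, random_state=42)
    lda.fit(doc_term)

    # Print the top words per topic so support leads can label the themes.
    terms = vectorizer.get_feature_names_out()
    for idx, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-5:][::-1]]
        print(f"Topic {idx}: {', '.join(top)}")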
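A minimal Boto3 sketch for the EC2 provisioning bullet above; the region, AMI ID, instance type, key pair, and tags are placeholder values, not the real environment configuration.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder region

    # Placeholder AMI, instance type, and key pair; the real values were
    # environment-specific configured AMIs.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=2,
        KeyName="ops-key",
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Team", "Value": "ops"}],
        }],
    )
    for instance in response["Instances"]:
        print("Launched", instance["InstanceId"])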
Confidential, Los Angeles, CA
Python Developer
Responsibilities:
- Developed a full-stack Python web framework with an emphasis on simplicity, flexibility, and extensibility, built atop proven components rather than reinventing them: WSGI, routing, templating, forms, data, plugins, config, events, CouchDB, OpenID, App Engine, jQuery, etc.
- Developed consumer-facing features and applications using Python, Django, Pyramid, Flask, Web2py, HTML, and other web technologies. Designed, developed, and integrated a MySQL database of ontological information with a proprietary Scala-based NLP pipeline.
- Implemented advanced procedures such as text analytics and processing using the in-memory computing capabilities of Apache Spark, written in Scala.
- Installed Python libraries with pip, including BeautifulSoup, NumPy, SciPy, python-twitter, RabbitMQ, Celery, matplotlib, and pandas DataFrames, and followed the PEP 8 coding convention.
- Used Spark Streaming APIs to perform transformations and actions on the fly to build common learner data models that get data from Kafka in near real time and persist it to Cassandra (a sketch follows this list).
- Used the Selenium library to write a fully functioning test automation process that simulated submitting different requests from multiple browsers to web applications (a sketch follows this list).
- Designed and implemented solutions with open-source AI frameworks (PyTorch, TensorFlow, scikit-learn) and Apache open-source projects (Kafka, Storm, Spark) for NLP and ML algorithms.
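A minimal sketch of the Kafka-to-Cassandra flow described above, written against Spark Structured Streaming with the DataStax Cassandra connector; the topic name, schema, keyspace, and table are placeholders, it assumes the Kafka and Cassandra connector packages are on the classpath, and the original work may have used the older DStream API instead.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, IntegerType

    spark = SparkSession.builder.appName("learner-stream").getOrCreate()

    # Placeholder event schema for the learner data model.
    schema = StructType([
        StructField("learner_id", StringType()),
        StructField("course_id", StringType()),
        StructField("score", IntegerType()),
    ])

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "learner-events")
              .load()
              .select(from_json(col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    def write_batch(batch_df, batch_id):
        # Persist each micro-batch to Cassandra via the DataStax connector.
        (batch_df.write.format("org.apache.spark.sql.cassandra")
         .mode("append")
         .options(keyspace="learning", table="learner_models")
         .save())

    events.writeStream.foreachBatch(write_batch).start().awaitTermination()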
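The Selenium automation could look roughly like this Selenium 4 sketch; the URL, element IDs, and credentials are placeholders, and each browser driver must be installed locally.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Simulate the same login flow from multiple browsers.
    for driver_cls in (webdriver.Chrome, webdriver.Firefox):
        driver = driver_cls()
        try:
            driver.get("https://example.com/login")          # placeholder URL
            driver.find_element(By.ID, "username").send_keys("test_user")
            driver.find_element(By.ID, "password").send_keys("test_password")
            driver.find_element(By.ID, "submit").click()
            assert "Dashboard" in driver.title, "login flow failed"
        finally:
            driver.quit()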
Confidential, New Brunswick, NJ
Software Developer
Responsibilities:
- Created Python and Bash tools to increase the efficiency of the call center application system and operations: data conversion scripts, AMQP/RabbitMQ, REST, JSON, and CRUD scripts for API integration.
- Used Celery with RabbitMQ, MySQL, and Django to create a distributed worker framework (a sketch follows this list).
- The application was based on a service-oriented architecture and used Python 2.7, Django 1.5, JSF 2, Spring 2, Ajax, HTML, and CSS for the front end.
- Migrated data from SQLite3 to an Apache Cassandra database; designed, implemented, maintained, and monitored the Cassandra data model using DSE, DevCenter, and DataStax OpsCenter.
- Responsible for extracting and integrating data from different data sources into Hadoop by creating ETL pipelines using Spark, YARN, and Hive.
- Deployed the project to Heroku using the Git version control system.
- Improved coding standards and code reuse; increased performance of the extended applications by making effective use of various design patterns (Front Controller, DAO).
- Built various graphs for business decision-making using the Python matplotlib library.
- Fetched Twitter feeds for certain important keywords using the python-twitter library.
- Used the Python library BeautifulSoup for web scraping to extract data for building graphs (a sketch follows this list).
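A minimal Celery worker sketch for the distributed framework bullet above; the broker URL, result backend, and task body are placeholders (a hypothetical convert_record task stands in for the real data-conversion work).

    from celery import Celery

    # Placeholder broker and result backend: RabbitMQ brokers the tasks and
    # MySQL (via SQLAlchemy) stores results, mirroring the stack described above.
    app = Celery(
        "callcenter",
        broker="amqp://guest:guest@localhost//",
        backend="db+mysql://user:password@localhost/celery_results",
    )

    @app.task
    def convert_record(record_id):
        # Placeholder body: the real worker ran a data-conversion step here.
        return {"record_id": record_id, "status": "converted"}

    # Enqueued from Django view code, e.g.:
    # convert_record.delay(42)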
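The scraping-for-graphs bullet could be sketched as follows; the URL, table id, and CSS selectors are placeholders for whatever pages the project actually targeted.

    import requests
    from bs4 import BeautifulSoup

    # Placeholder URL and selectors.
    resp = requests.get("https://example.com/metrics")
    soup = BeautifulSoup(resp.text, "html.parser")

    rows = []
    for row in soup.select("table#metrics tr")[1:]:   # skip the header row
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if cells:
            rows.append(cells)

    print(rows)  # this extracted data fed the matplotlib graphs mentioned above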
Confidential
System Administrator
Responsibilities:
- Install and configure software and hardware
- Manage network servers and technology tools
- Set up accounts and workstations
- Monitor performance and maintain systems according to requirements
- Troubleshoot issues and outages
- Ensure security through access controls, backups and firewalls
- Upgrade systems with new releases and models
- Develop expertise to train staff on new technologies
- Build an internal wiki with technical documentation, manuals and IT policies