
Sr DevOps/Cloud Engineer GCP/AWS Resume


Monroe, NC

SUMMARY

  • A cloud-enthusiastic team player with around 8 years of experience in the IT industry as a DevOps Engineer, with proven expertise in automation, build/release engineering, and software development on cloud computing platforms such as Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP).
  • Extensively worked on AWS Cloud services such as EC2, VPC, IAM, RDS, ELB, EMR, ECS, Auto Scaling, S3, CloudFront, Glacier, Elastic Beanstalk, Lambda, ElastiCache, Route53, OpsWorks, CloudWatch, CloudFormation, Redshift, DynamoDB, SNS, SQS, SES, Kinesis Firehose, API Gateway, and Cognito.
  • Around 4 years of extensive experience in designing User Interface (UI) applications and professional web applications using HTML 4.0/5, XHTML, CSS2/CSS3, JavaScript, jQuery, NodeJS, AngularJS, JSON, and XML, and IDE tools like WebStorm, Brackets, Sublime Text, and Eclipse.
  • Expertise in developing front-end systems with JavaScript, Bootstrap, HTML5, CSS3, and MVC frameworks such as AngularJS.
  • Provisioned AWS EC2 instances with Auto Scaling groups and load balancers in a newly defined VPC, and used Lambda functions to trigger events based on requests to DynamoDB.
  • Over 6 years of working experience as a VMware Administrator in data center environments running vSphere 5.1, 5.0, and 4.1 and VMware Infrastructure 4.1/4.0, 3.5, and 3.0, as well as other VMware products such as vCloud Director, Site Recovery Manager (SRM) 4.0/4.5/5.0, vCenter Lifecycle Manager 4.0, VMware View 5.0/4.5/4.0, ThinApp 4.5, and vCenter Operations Manager (vCOps).
  • Good understanding of machine learning tools in GCP like Vision API, Speech API, Translate API.
  • Experience in migrating existing AWS infrastructure to a serverless architecture (AWS Lambda, Kinesis) through the creation of a serverless stack using Lambda, API Gateway, Route53, and S3 buckets.
  • Experience in migrating production infrastructure into the Amazon Web Services cloud using AWS Server Migration Service (SMS), AWS Database Migration Service, Elastic Beanstalk, CloudFormation, CodeDeploy, CodeCommit, EBS, and OpsWorks.
  • Expertise in implementing the Amazon Inspector security service for running scheduled tests on infrastructure, using WAF to stop SQL injection and cross-site scripting, and using Certificate Manager to manage SSL certificates.
  • Experience in writing Terraform configurations and deploying them through Cloud Deployment Manager to spin up resources such as cloud virtual networks and Compute Engine instances in public and private subnets, along with autoscalers, on Google Cloud Platform.
  • Experience in Designing, Architecting, and implementing scalable cloud-based web applications using AWS and GCP.
  • Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configurations, and used GCP Cloud CDN (content delivery network) to deliver content from GCP cache locations, drastically improving user experience and latency.
  • Extensively worked on various GCP infrastructure design and implementation strategies.
  • Configured and deployed instances in GCP environments and data centers, and built/maintained Docker container clusters managed by Kubernetes, using Bash, Git, and Docker on Google Cloud Platform (GCP).
  • Experience in implementing Python web frameworks like Django, Flask, Pylons, Web2py, and Python Servlet Engine (PSE).
  • Experienced in working with various Python Integrated Development Environments like IDLE, PyCharm, and Sublime Text.
  • Experience in Object Oriented Design and Programming concepts in Python and java.
  • Excellent knowledge of Python Collections and Multi-Threading.
  • Experience in working with various Python packages such as NumPy, SQLAlchemy, matplotlib, Beautiful Soup, pickle, PySide, SciPy, and PyTables.
  • Extensive ETL tool experience with IBM DataStage; worked on DataStage client tools like DataStage Designer, DataStage Director, and DataStage Administrator.
  • Experienced in scheduling sequence, parallel and server jobs using DataStage Director, UNIX scripts, and scheduling tools.
  • Container-Based Orchestration through Google Kubernetes in Creating Mock Environment for Bulk Requests to the Main Machines on the cloud.
  • Experience in developing AI Chatbot mobile application using Microsoft Bot Framework, LUIS model, and Azure Cloud. Also writing direct line API for Chat-Bot to call conversation history and BLE device data to pass as a conversation data feed for ChatBot.
  • Knowledge of rolling updates using the Deployments feature in Kubernetes; implemented blue-green deployment to maintain zero downtime.
  • Experience in providing highly available and fault-tolerant applications utilizing orchestration technologies like Kubernetes and Apache Mesos on Google Cloud Platform.
  • Experience in Blue/green deployment strategy by creating new applications that are identical to the existing production environment using CloudFormation templates & Route53 weighted recordsets to redirect traffic from the old environment to the pristine environment via DNS.
  • Experience with various Azure services like Compute (Web Roles, Worker Roles), Azure Websites, Caching, SQL Azure, NoSQL, Storage, Network Services, Azure Active Directory, API Management, Scheduling, Auto Scaling, and PowerShell automation.
  • Experience in creating and maintaining containerized microservices, configuring/maintaining a private container registry on Microsoft Azure for hosting images, and using Windows Active Directory to secure an Azure AD Domain Services managed domain with LDAPS.
  • Worked with SAS Predictive Modeling for preparing data, building predictive models, assessing models, scoring new data sets, and implementing models; certified Predictive Modeler.
  • Experience in migrating on-premises workloads to Windows Azure, and built an Azure Disaster Recovery environment and Azure backups from scratch using PowerShell scripts.
  • Experience working with the ELK architecture and its components: Elasticsearch, Logstash, and Kibana. Handled installation, administration, and configuration of the ELK stack on AWS. Used AWS Import/Export with careful optimization, reducing billing for different environments as per requirements.
  • Designed, developed, provisioned, and configured Google Cloud Platform, using APIs and services like Compute Engine, App Engine, Cloud Scheduler, cloud testing, and cloud monitoring, with appropriate IAM/admin and security configurations and optimized billing options.
  • Configured the ELK stack in conjunction with AWS, using Logstash to output data to AWS S3.
  • Expertise in creating Kubernetes clusters with CloudFormation templates, deploying them in the AWS environment, and monitoring the health of pods using Helm charts.
  • Expertise in setting up Kubernetes (k8s) clusters for running microservices and pushing microservices into production with Kubernetes-backed infrastructure; developed automation of Kubernetes clusters via Ansible playbooks.
  • Experience using tools like Docker Compose and Kubernetes to orchestrate and deploy container-related services, and with container-based deployments using Docker, Docker images, and Docker Hub.
  • Expertise in virtualization of servers using Docker; ran Docker Swarm, worked with Docker Engine and Docker Machine to deploy microservices-oriented environments, and automated configuration using Docker containers.
  • Proficient in using Docker swarm and Kubernetes for container orchestration, by writing Docker files and setting up the automated build on Docker HUB.
  • Experience in data modeling of Cassandra.
  • Unique experience with Pivotal Cloud Foundry and OpenShift/Kubernetes architecture and design, troubleshooting issues with platform components (PCF), and developing global/multi-regional deployment models and patterns for large-scale development and deployments on Cloud Foundry and OpenShift/Kubernetes.
  • Vast knowledge of cloud technologies including Amazon Web Services (AWS), Microsoft Azure, and Pivotal Cloud Foundry (PCF).
  • Experience working with Apache Hadoop, Kafka, Spark, and Logstash. Developed services that utilize the Cloud Foundry and Azure client libraries (SDKs) for Java.
  • Worked with Apache Kafka for high-throughput publishing and subscribing, with disk structures that provide constant performance even with many terabytes of stored messages.
  • Expertise in writing Ansible playbooks from scratch in YAML and using them to set up and automate the CI/CD pipeline and deploy microservices; provisioned load balancers, Auto Scaling groups, and launch configurations for microservices using Ansible.
  • Experience in working with Ansible Tower to manage multiple nodes and manage inventory for different environments and automated the cloud deployments using Ansible, and AWS Cloud Formation Templates.
  • Expertise in deploying Ansible playbooks in the AWS environment using Terraform, as well as creating Ansible roles using YAML. Used Ansible to configure and maintain Tomcat servers.
  • Experience in Deploying and configuring Chef server including bootstrapping of Chef-Client nodes for provisioning and created roles, recipes, cookbooks, and uploaded them to Chef-server, Managed On-site OS, Applications, Services, Packages using Chef as well as AWS for EC2, S3, Route53 and ELB with Chef Cookbooks.
  • Highly proficient in configuring Chef to build up services and applications on instances once they have been provisioned using CloudFormation, and wrote Ruby scripts for Chef automation.
  • Tested Chef cookbook modifications on cloud instances in AWS using Test Kitchen and ChefSpec, and utilized Ohai to collect attributes on the node.
  • Experience in deploying Puppet, Puppet Dashboard, and Puppet DB for configuration management to existing infrastructure and created Modules for protocol configuration and managing them using Puppet automation.
  • Highly Proficient in developing Puppet module for Automation using a combination of Puppet Master, R10K Wrapper, GIT Enterprise, Vagrant, and Simple UI (Jenkins).
  • Extensively worked on Hudson, Jenkins, and Bamboo for continuous integration and end-to-end automation of all builds and deployments, including setting up pipeline jobs and upstream/downstream job configurations in Jenkins.
  • Hands-on experience in using Bamboo modules such as Build Complete Action, Build Plan, definition, and Administration configuration. Setup Continuous Deployment for the various test environments utilizing Bamboo.
  • Experience in System Administration, Configuration, upgrading, Patches, Troubleshooting, Security, Backup, Disaster Recovery, Performance Monitoring, and Fine-tuning on Unix & Linux Systems.
  • Expertise in working with different bug-tracking tools like JIRA, Bugzilla, ServiceNow, ClearQuest, and Quality Center.
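The Python collections and multi-threading expertise listed above can be illustrated with a minimal sketch. The function name, inputs, and worker count are illustrative, not taken from any specific project:

```python
import threading
import queue
from collections import Counter

def count_words_concurrently(documents, num_workers=4):
    """Count word frequencies across documents with a small worker pool.

    Each worker pulls documents from a thread-safe queue and updates its
    own local Counter; partial results are merged at the end to avoid
    contention on one shared structure.
    """
    work = queue.Queue()
    for doc in documents:
        work.put(doc)

    results = []
    lock = threading.Lock()

    def worker():
        local = Counter()
        while True:
            try:
                doc = work.get_nowait()
            except queue.Empty:
                break  # queue drained, this worker is done
            local.update(doc.lower().split())
        with lock:
            results.append(local)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    total = Counter()
    for partial in results:
        total += partial
    return total
```

Because each thread keeps a private `Counter`, the only synchronized step is the final append, which keeps lock contention minimal.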

TECHNICAL SKILLS

Cloud Environment: Amazon Web Services (AWS), Google Cloud Platform (GCP), Azure, Pivotal Cloud Foundry.

Infrastructure as code: Terraform, Cloud Formation.

AWS Services: VPC, IAM, S3, Elastic Beanstalk, CloudFront, Redshift, Lambda, Kinesis, DynamoDB, Direct Connect, Storage Gateway, API Gateway, DMS, SMS, SNS, and SWF.

Operating Systems: Linux (Red Hat, CentOS & SUSE), Ubuntu, Solaris, Debian, HP-UX, Windows

Scripting: Shell scripting, Groovy, Python, Ruby, Perl, PowerShell, and YAML.

Version Control Tools: Git, GitHub, GitLab, Subversion (SVN), and Bitbucket.

Build Tools: Maven, Gradle, Sonar, Nexus, and Ant.

Containerization Tools: AWS ECS, Docker, Kubernetes, Mesos

Application Servers: WebSphere Application Server, Apache Tomcat, JBoss, WebLogic, Nginx, Kafka.

Automation & Configuration Tools: Chef, Puppet, Ansible, Jenkins.

Orchestration Tools: Kubernetes, Docker Swarm, and Apache Mesos, Marathon, and Google Cloud Engine.

Networking Protocols: TCP/IP, DNS, DHCP, Cisco Routers/Switches, WAN, LAN, FTP/TFTP, SMTP

Monitoring Tools: Nagios, AWS Cloud Watch, Splunk, and ELK

Bug Tracking Tools: JIRA, Bugzilla, and Redmine.

PROFESSIONAL EXPERIENCE

Sr DevOps/Cloud Engineer GCP/AWS

Confidential -Monroe, NC

Responsibilities:

  • Worked on Google Cloud Platform (GCP) services like Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.
  • Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configurations, and used GCP Cloud CDN (content delivery network) to deliver content from GCP cache locations, drastically improving user experience and latency.
  • Mentored the Dev team in setting up infrastructure using PostgreSQL for troubleshooting, and orchestrated using Mesos on Google Cloud Platform (GCP).
  • Delivered OpenText EIM products as an EMS on a secure, globally scaled platform from Google.
  • Optimized OpenText performance with Kubernetes container management.
  • Coordinated with the development team as a backup, and resolved issues using configuration management tools like Chef.
  • Defined branching strategies in Bitbucket and implemented best practices on GCP.
  • Created projects, VPCs, subnetworks, and GKE clusters for the QA3, QA9, and prod environments using Terraform.
  • Created clusters in Google Cloud and managed them using Kubernetes (k8s).
  • Developed, and implemented architectural solutions involving multiple Pivotal Cloud Foundry (PCF) foundations on VMware virtual infrastructure (on-premises).
  • Utilized Virtualization technologies like VMWare, Virtual Box, and worked with containerizing applications like Docker, Kubernetes, and worked on Apache and Firewalls in both development and production.
  • Worked with storage networking teams to ensure allocated SAN, fiber and networking infrastructure reflects specifications laid out in the initial VMware farm design to ensure successful deployment
  • Experience in developing and deploying microservices applications on the Pivotal Cloud Foundry (PaaS) platform using the CF command-line interface.
  • Created deployment models for Cloud Foundry, explaining the underlying VM, container, and application layout across multiple PCF foundations spread across the globe.
  • Created a dashboard to show server status, metrics, and errors in logs, plus scripts to analyze log files for troubleshooting; monitored applications in production using Logstash.
  • Worked with the ELK (Elasticsearch, Logstash, Kibana) stack to analyze Docker logs, displayed through Kibana dashboards.
  • Worked on installing, configuring, and managing Docker containers and Docker images for web servers and applications; implemented the docker-maven-plugin in the Maven POM to build Docker images for all microservices, and later used a Dockerfile to build the Docker images from the Java JAR files.
  • Strong development and design experience with various Java and JEE frameworks like Spring, Spring Boot, Groovy, Grails, JAX-RS, JAX-WS, Apache CXF, Jersey, Apache Axis, JPA, Hibernate, MyBatis, Struts, JSF, EJB 3.1, EJB 2.1, and JMS.
  • Defined global data architecture using MS Azure HDInsight, SQL DW, Azure Blobs, DocumentDB, HBase, Neo4j, Spark, Kafka, PolyBase, IBM DataStage/BigIntegrate, and InfoSphere IGC.
  • Worked with SAS Predictive Modeling for preparing data, building predictive models, assessing models, scoring new data sets, and implementing models. Used Linear and Logistics regression modeling along with ANOVA and ARIMA modeling to provide analytical consultation, modeling, and solutions to optimize business goals, objectives, and priorities of the credit business.
  • Using SAS Predictive Modeling created data sources in Enterprise Miner, explored and assessed data sources, build predictive models using regression analysis (linear and logistics), decision trees and neural networks, used fit statistic for different predictions, used decision processing for adjusting oversampling, used profit/loss information for assessing model performance and for comparison of models and forecasting.
  • Created Python and Bash tools to increase the efficiency of call center application system and operations; data conversion scripts, AMQP/RabbitMQ, REST, JSON, and CRUD scripts for API Integration.
  • Wrote Python code embedded with JSON and XML to produce HTTP GET requests, parsing HTML data from websites.
  • Strong knowledge of Python Web Frameworks such as Django and Flask
  • Debugging the failure issues by capturing the array and register dumps using Python scripts, traces and performing several experiments by interacting with the design team
  • Implemented the multithreading module and complex networking operations such as traceroute, an SMTP mail server, and a web server using Python.
  • Used the existing Deal Model in Python to inherit and create an object data structure for regulatory reporting.
  • Extracted source data from multiple legacy applications and heterogeneous technologies, such as z/OS mainframes, mainframe DB2 LUW, and SQL Server files, into the HDFS ingestion zone using Sqoop, DataStage jobs, and Spark jobs.
  • R&D on AWS ECS, AWS IoT, and AWS Lambda to Integrate a Complete IoT Service working on Cloud
  • Worked on a Jenkinsfile with multiple stages: checking out a branch, building the application, testing, pushing the image into GCR, deploying to QA3, deploying to QA9, acceptance testing, and finally deploying to prod.
  • Responsible for setting up and building AWS infrastructure using resources such as VPC, EC2, S3, RDS, DynamoDB, IAM, EBS, Route53, SNS, SES, SQS, CloudWatch, CloudTrail, security groups, and Auto Scaling via CloudFormation templates.
  • Deployed and monitored scalable infrastructure on Amazon Web Services (AWS), managed servers on the AWS platform using Chef configuration management tools, created instances in AWS, and migrated data to AWS from the data center.
  • Involved in the design and deployment of a multitude of cloud services on the AWS stack, such as EC2, Route53, S3, RDS, DynamoDB, SNS, SQS, and IAM, while focusing on high availability, fault tolerance, and auto-scaling in AWS CloudFormation.
  • Developed strategy for cloud migration and implementation of best practices using AWS services like database migration service, AWS server migration service from On-Premises to cloud.
  • Implemented and maintained the monitoring and alerting of production and corporate servers/storage using AWS CloudWatch / Splunk and assigned AWS elastic IP addresses to work around host or availability zone failures by quickly remapping the address to another running instance
  • Provisioned the highly available EC2 Instances using Terraform and Cloud Formation and wrote new Python scripts to support new functionality in Terraform.
  • Worked in CloudFormation to automate AWS environment creation, along with the ability to deploy on AWS using build scripts (Boto3 and the AWS CLI), and automated solutions using Python and shell scripting.
  • Managed AWS infrastructure as code (IaC) using Terraform. Wrote new Python scripts to support new functionality in Terraform, provisioned highly available EC2 instances using Terraform and CloudFormation, and set up build and deployment automation for Terraform scripts using Jenkins.
  • Designed AWS Cloud Formation templates to create custom sized VPC, to set up IAM policies for users, subnets, NAT to ensure successful deployment of Web applications, database templates, and Security groups.
  • Used Flume, Kafka to aggregate log data into HDFS.
  • Developed a stream filtering system using Spark streaming on top of Apache Kafka.
  • Designed a system using Kafka to auto-scale the backend servers based on the events throughput.
  • Managed Docker orchestration and Docker containerization using Kubernetes. Used Kubernetes to orchestrate the deployment, scaling, and management of Docker Containers
  • Created and deployed Kubernetes pod definitions, tags, labels, multi-pod container replication. Managed multiple Kubernetes pod containers scaling, and auto-scaling.
  • Secured the GCP infrastructure using private subnets, security groups, NACLs (VPC), WAF, etc.
  • Contributed to chatbot projects and ML to build internal tools for Google Cloud users/PSO.
  • Deployed pods using Replication Controllers by interacting with Kubernetes API server defining through declarative YAML files.
  • Implementation of new tools such as Kubernetes with Docker to assist with auto-scaling and continuous integration (CI) and Upload a Docker image to the registry so the service is deployable through Kubernetes. Use the Kubernetes dashboard to monitor and manage the services.
  • Created Docker images using a Docker file, worked on Docker container snapshots, removing images and managing Docker volumes, and also virtualized servers in Docker as per QA and Dev-environment requirements and configured automation using Docker containers.
  • Designed and developed ETL processes using DataStage to load data from Teradata, Flat Files to a staging database, and from staging to the target Data Warehouse database.
  • Creating and Tracking defects in IBM Rational ClearQuest, Remedy Action Request System.
  • Worked on troubleshooting, performance tuning and performances monitoring for enhancement of DataStage jobs and builds across Development, QA and PROD environments
  • Configuring with different artifacts to make an image and deploy Docker image to install the application on an instance, maintain and troubleshoot for any user issues or network problems
  • Worked on Docker-Compose, Docker-Machine to create Docker containers for testing applications in the QA environment and automated the deployments, scaling and management of containerized applications
  • Installed and implemented the Ansible configuration management system; used Ansible to manage web applications, environment configuration files, users, mount points, and packages, and created playbooks in YAML to automate the development processes.
  • Implemented Infrastructure automation through Ansible for auto-provisioning, code deployments, software installation, and configuration updates.
  • Familiar with all the internal tools of Cassandra.
  • Involved in the process of Cassandra data modeling and building efficient data structures
  • Created pre-commit hooks in Perl/shell/Bash for authentication with a JIRA pattern ID while committing code in SVN, limiting committed file size and type and restricting the development team's check-ins at commit time.
  • Deployed and configured JIRA, both hosted and local instances for issue tracking, workflow collaboration, and tool-chain automation
  • Used monitoring tools like Nagios and Splunk to improve application performance and gain the visibility and business context needed to meet business demands, and implemented a chatbot in the SPARK communication tool for triggering alerts.
  • Data modeled the new solution based on Cassandra and the use case.
  • Good command of CQL for running queries on the data present in a Cassandra cluster with multiple DCs of 8 nodes each.
  • Analyzed the performance of the Cassandra cluster using nodetool tpstats and cfstats for thread and latency analysis.
  • Used the DataStax OpsCenter for monitoring the health of the cluster for the keyspaces and the tables designed.
  • Modified the cassandra.yaml and cassandra-env.sh files to set configuration properties like node addresses, memtable sizes, flush times, etc.
  • Good understanding of the principles and best practices of software configuration management (SCM) in agile, scrum, and waterfall methodologies.
  • Well versed in the software development life cycle (SDLC), test life cycle (STLC), and bug life cycle, and worked with testing methodologies like waterfall and agile (Scrum), with an in-depth understanding of the principles and best practices of Software Configuration Management (SCM).
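The CloudFormation work above (custom-sized VPCs, subnets, security groups via templates) can be sketched by generating a minimal template programmatically in Python. The logical resource names, CIDR blocks, and tag values are illustrative assumptions, not taken from the actual projects:

```python
import json

def make_vpc_template(cidr="10.0.0.0/16", env_tag="dev"):
    """Build a minimal CloudFormation template (as a dict) declaring a
    VPC and one public subnet. Resource names and defaults are
    illustrative; a real template would add routes, IGW, NACLs, etc."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Example VPC with a public subnet",
        "Resources": {
            "AppVpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {
                    "CidrBlock": cidr,
                    "EnableDnsSupport": True,
                    "Tags": [{"Key": "Environment", "Value": env_tag}],
                },
            },
            "PublicSubnet": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {
                    # Ref ties the subnet to the VPC declared above
                    "VpcId": {"Ref": "AppVpc"},
                    "CidrBlock": "10.0.1.0/24",
                    "MapPublicIpOnLaunch": True,
                },
            },
        },
    }

# Serialize to JSON, ready to pass to the CloudFormation API or CLI
print(json.dumps(make_vpc_template(), indent=2))
```

Building templates as plain dicts and serializing to JSON keeps the VPC layout reviewable in code and easy to parameterize per environment (QA3, QA9, prod).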

Environment: AWS, EC2, RDS, ELB (Elastic Load Balancing), S3, CloudWatch, CloudFormation, Route53, Lambda, Maven, Nexus, Chef, Terraform, Jenkins CI/CD, Nagios, Jira, Shell, Python, VPC, Auto Scaling, Apache, JBoss, Nginx, Tomcat, Git, Docker, Kubernetes, GCP, ServiceNow, Cassandra, Kafka, blue/green deployment.

Sr DevOps/Cloud Engineer AWS/Azure

Confidential - Austin, TX

Responsibilities:

  • Worked with AWS services using S3, RDS, EBS, Elastic Load Balancer, and Auto-scaling groups, EC2 instances with optimized volumes, and achieved cloud automation and deployments using Chef, Python, and AWS Cloud Formation Templates.
  • Worked with the AWS CLI and AWS APIs to manage resources on AWS for services such as EC2, S3, VPC, CloudWatch, ELB, and Auto Scaling, and created Python scripts using AWS API calls to manage all resources deployed on AWS.
  • Configured AWS IAM and security groups in public and private subnets in the VPC; managed IAM accounts (with MFA) and IAM policies to meet security audit and compliance requirements.
  • Provided high durability of the available data using data storage in the AWS S3 bucket, versioning S3, lifecycle policies. Also, web hosting the data from the S3 bucket by creating URLs.
  • Developed various helper classes needed following Core Java multi-threaded programming and Collection classes
  • Developed the Presentation and Controller layers using JSP, HTML, JavaScript, Business layer using Spring (IOC, AOP), DTO, JTA, and Persistent layer DAO, Hibernate for all modules.
  • Implemented the user interface (UI) for the entire application using JSP, JSTL, custom tag libraries, JavaScript, XML/XSLT, HTML, and CSS.
  • Utilized the AWS CLI to automate backups of ephemeral data stores to S3 buckets and EBS and to create AMIs of mission-critical production servers as backups, and used AWS Elastic Beanstalk to deploy and scale web applications and services developed with Java, Node.js, Python, and Ruby on familiar servers like Apache, Nginx, and Tomcat.
  • Extensive experience working with DataStage Designer to develop jobs and DataStage Director to view log files for execution errors.
  • P2V conversions using VMware Converter and Double-Take.
  • Experience with installing, configuring and troubleshooting VMware Lab Manager
  • Created various standard/reusable jobs in DataStage using various active and passive stages like Sort, Lookup, Filter, Join, Transformer, Aggregator, Change Capture, Sequential File, and Datasets.
  • Architected and developed a mobile web application publishing framework component library based on AngularJS.
  • Utilized Agile Methodology (SDLC) to manage the development lifecycle.
  • Worked with Microsoft team and developed a data-driven conversation engine for proprietary Chatbot mobile application using Microsoft Bot Framework, LUIS model, and Azure Cloud.
  • Leading a development team to create a Xamarin Form-based tool app to call Azure IoT Machine Learning API, and display health predictive analysis results on the app dashboard. Also wrote direct line API calls to invoke Azure Machine Learning (AML). Worked and created a machine learning algorithm data feed model for AML Studio.
  • Developed Chatbot for Ordering SIM cards from Confidential with Python.
  • Responsible for the overall layout design that meets cross-device compatibility using Bootstrap, a color scheme of the web site using HTML5, and CSS3 and Responsible for creating detailed wireframes and process flows.
  • Involved in developing object-oriented JavaScript and experienced with AJAX, JSON, HTML5, Angular.js, Node.js, and CSS3
  • Involved in writing application level code to interact with Restful Web APIs, Web Services using AJAX, JSON, XML, and jQuery.
  • Used Terraform for infrastructure as code, execution plans, resource graphs, and change automation; managed AWS infrastructure as code using Terraform.
  • Deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics, full-text search, application monitoring in integration with AWS Lambda, and CloudWatch.
  • Designed and configured Azure Virtual Networks (VNets), subnets, Azure network settings, DHCP address blocks, DNS settings, and Security policies & configured BGP routes to enable ExpressRoute connections between on-premise data centers & Azure cloud.
  • Led implementation of Azure Active Directory for single sign-on access to thousands of cloud SaaS applications like Office 365, Dropbox. Also configured Azure Role-based Access Control (RBAC) to segregate duties within our team and grant only the amount of access to users that they need to perform their jobs.
  • Created and deployed VMs on the Microsoft cloud service Azure, created and managed the virtual networks to connect all the servers, and designed ARM templates for Azure platform.
  • Configured the three types of blobs (block blobs, page blobs, and append blobs) in Azure for storing large amounts of unstructured object data, such as text or binary data, that can be accessed from anywhere via HTTP or HTTPS.
  • Worked on Azure Fabric, microservices, IoT, and Docker containers; involved in setting up a Terraform continuous build integration system; used the Azure Internal Load Balancer to provide high availability for IaaS VMs and PaaS role instances.
  • Configuring the Docker containers and creating Docker files for different environments.
  • Worked with container-based deployments using Docker, working with Docker images, Docker Hub and Docker-registries, and Kubernetes.
  • Setting up Chef Infra, bootstrapping nodes, creating and uploading recipes, node convergence in Chef SCM.
  • Created a private cloud using Kubernetes that supports DEV, TEST, and PROD environments.
  • Implemented a production-ready, load-balanced, highly available, fault-tolerant, auto-scaling Kubernetes AWS infrastructure, and microservice container orchestration.
  • Creating clusters using Kubernetes and worked on creating many pods, replication controllers, deployments, labels, health checks, and ingress by writing YAML files.
  • Used Jenkins and pipelines to drive all microservices builds out to the Docker-registry and then deployed to Kubernetes, Created Pods, and managed using Kubernetes.
  • Installed Workstation, Bootstrapped Nodes, Wrote Recipes and Cookbooks and uploaded them to Chef Server, Managed On-site OS Applications, Services, Packages using Chef as well as AWS for EC2, S3 & ELB with Chef Cookbooks.
  • Worked on Ansible setup, managing hosts file, Using YAML linter, authoring various playbooks, and custom modules with Ansible.
  • Created inventory in Ansible for automating the continuous deployment and wrote playbooks using YAML scripting.
  • Wrote Terraform templates and Chef cookbooks and pushed them to Chef for configuring EC2 instances; solved an ELB gateway timeout issue and moved all logs to an S3 bucket using Terraform.
  • Used the ELK stack to monitor logs for detailed analysis, and worked on dashboards using Elasticsearch, Logstash & Kibana (ELK) to set up real-time logging and analytics for continuous delivery pipelines and applications.
  • Configured ELK stack in conjunction with AWS and using LogStash to output data to AWS S3 and reduced ElasticSearch disk space usage by 66% by automating Elasticsearch maintenance using Jenkins.
  • Automated the cloud deployment using Chef, Python, and AWS Cloud Formation Templates. Used Chef for unattended bootstrapping in AWS.
  • Responsible for managing AWS resources in the cloud and maintaining a Continuous Integration and Continuous Deployment (CI/CD) pipeline for a fast-paced, robust application development environment.
  • Created Chef cookbooks and wrote recipes in Ruby to install and configure infrastructure across environments, automating the process with Python scripts.
  • Involved in Setting up Chef Workstation, bootstrapping various enterprise nodes, setting up keys.
  • Worked with a Git branching strategy that included development branches, feature branches, staging branches, and master; pull requests and code reviews were performed.
  • Coordinated with developers in establishing and applying appropriate branching, labeling, and naming conventions using Git source control, and analyzed and resolved merge conflicts in Git.
  • Configured Jenkins as the Continuous Integration tool, installing and configuring the Jenkins master and hooking it up with different build slaves. Automated Java application builds using Ant and Maven.
  • Used Nagios as a monitoring tool to identify and resolve infrastructure problems before they affected critical processes, and worked on Nagios event handlers to automatically restart failed applications and services.
  • Configured automation and maintained build and deployment CI/CD tools GIT, Jenkins, Ant, MAVEN, Docker-registry/daemon, Nexus, and JIRA for Multi-Environment (Local/POC/NON-PROD/PROD) with high degrees of standardization for both infrastructure and application stack automation in AWS cloud platform.
  • Utilized virtualization technologies like VMware and VirtualBox, worked with containerization tools like Docker and Kubernetes, and worked on Apache and firewalls in both development and production.
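The per-environment Dockerfile work described above might look like the following minimal sketch; the base image, file paths, and the `APP_ENV` build argument are illustrative assumptions, not the actual project files.

```dockerfile
# Hypothetical multi-environment Dockerfile for a Tomcat-hosted Java app
FROM tomcat:7-jre7

# Build argument selects the environment-specific properties file
# (e.g. dev, test, prod), so one Dockerfile serves every environment
ARG APP_ENV=dev
COPY config/${APP_ENV}.properties /usr/local/tomcat/conf/app.properties

# Deploy the packaged application as the root web app
COPY target/app.war /usr/local/tomcat/webapps/ROOT.war

EXPOSE 8080
CMD ["catalina.sh", "run"]
```

Building with `docker build --build-arg APP_ENV=prod .` would then bake in the production configuration.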
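A Kubernetes manifest of the kind mentioned above (deployments, labels, health checks) could be sketched as follows; the app name, registry URL, and probe path are hypothetical placeholders.

```yaml
# Hypothetical Deployment with labels and a readiness health check
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
    env: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0.0  # hypothetical registry
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health   # assumed health endpoint
              port: 8080
```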
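An Ansible playbook in the style described above might look like this minimal sketch; the `webservers` host group and the Apache package are example choices, not the original inventory.

```yaml
# Hypothetical playbook: install and enable Apache on a host group
- hosts: webservers
  become: true
  tasks:
    - name: Install the Apache HTTP server
      yum:
        name: httpd
        state: present

    - name: Ensure Apache is running and enabled at boot
      service:
        name: httpd
        state: started
        enabled: true
```

Run against an inventory with `ansible-playbook -i inventory site.yml`.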

Environment: AWS, AZURE, Chef, Docker, Ansible, Jenkins, Terraform, Kubernetes, ANT, MAVEN, Ruby, SHELL, Python, WebLogic Server 11g, Load Balancers, WLST, Apache Tomcat 7.x, Virtualization, Configured plug-ins for Apache HTTP server 2.4, Nginx, LDAP, JDK1.7, XML, GitHub, Nagios, Splunk.

DevOps AWS Engineer

Confidential

Responsibilities:

  • Developed and implemented Software Release Management strategies for various applications according to the Agile Process.
  • Configured monitoring and alerting tools per requirements, including AWS CloudWatch, CloudTrail, Dynatrace, Nagios, Splunk Enterprise, and SNMP monitoring for the VPN connections.
  • Wrote CloudFormation Templates (CFTs) in JSON and YAML format to build AWS services under the Infrastructure-as-Code paradigm.
  • Involved in AWS EC2, VPC, S3, SQS, SNS based automation through Terraform, Python, Bash Scripts. Adopted new features as they were released by Amazon, including ELB & EBS.
  • In a DevOps role, converted existing AWS infrastructure to a serverless architecture (AWS Lambda, Kinesis) deployed via Terraform templates.
  • Created automation and deployment templates for relational and NoSQL databases, including MSSQL, MySQL, Cassandra, and MongoDB, in AWS.
  • Part of the developer team supporting the automation of software delivery across multiple AWS regions and availability zones, while helping migrate over 1,000 workloads to AWS to reduce the data center footprint and support an agile disaster recovery system.
  • Coordinated with and assisted developers in establishing and applying appropriate branching, labeling, and naming conventions using Git source control.
  • Worked on the Puppet server and workstation to manage and configure nodes; experienced in writing Puppet manifests to automate the configuration of a broad range of services.
  • Deployed Puppet for configuration management to existing infrastructure and implemented puppet modules for server housekeeping.
  • Actively involved in Architecting the puppet infrastructure to manage servers in different environments.
  • Utilized Puppet for configuration management of hosted Instances within AWS.
  • Configured the ELK stack in conjunction with AWS, using Logstash to output data to AWS S3.
  • Understanding of developing and testing enterprise products, with a current focus on cloud-based applications, and of solving the challenges imposed by multi-data-center SaaS deployments and their need for DevOps tooling and automation.
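The Terraform-driven serverless conversion described above could be sketched roughly as below; the bucket name, function name, runtime, and artifact filename are hypothetical stand-ins, not the original infrastructure.

```hcl
# Hypothetical sketch: an S3 bucket and a Lambda function managed by Terraform
resource "aws_s3_bucket" "logs" {
  bucket = "example-app-logs" # placeholder bucket name
}

resource "aws_lambda_function" "processor" {
  function_name = "log-processor" # placeholder name
  runtime       = "python3.9"
  handler       = "handler.main"
  filename      = "lambda.zip"    # built artifact, assumed to exist
  role          = aws_iam_role.lambda_exec.arn
}

# Execution role allowing the Lambda service to assume it
resource "aws_iam_role" "lambda_exec" {
  name               = "lambda-exec-role"
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}
```

A `terraform plan` / `terraform apply` cycle then creates or updates these resources idempotently.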

Environment: AWS CloudWatch, CloudTrail, Nagios, Splunk, EC2, VPC, S3, SQS, SNS, Terraform, ELB & EBS, Lambda, Kinesis, NoSQL, MSSQL, MySQL, MongoDB, PowerShell, SVN, GIT, Puppet, MAVEN, Nexus, WebLogic, Tomcat, Python Scripts, Perl Scripts, Ruby Scripts, Bash Scripts, Puppet, XML, Unix, JIRA.

Build/Release Engineer

Confidential

Responsibilities:

  • Release Engineer for a team that spanned multiple development teams and simultaneous software releases.
  • Developed and implemented Software Release Management strategies for various applications according to the agile process.
  • Responsible for designing and deploying the best SCM processes and procedures.
  • Supported and developed tools for Integration, Automated Testing, and Release Management.
  • Used Subversion as the source code repository.
  • Managed SVN repositories for branching, merging, and tagging.
  • Coordinated with and assisted developers in establishing and applying appropriate branching, labeling, and naming conventions using Git source control.
  • Developed and executed functional test cases and assisted users with User Acceptance Testing (UAT).
  • Worked on SWIFT gateway applications like SWIFT Alliance Access (SAA).

Environment: Chef, MAVEN, Nagios, Subversion, AWS, Power SHELL, SHELL/Perl, SCM, SVN, Jenkins, Tomcat.

JAVA/J2EE Developer

Confidential

Responsibilities:

  • Set up a Jenkins server and build jobs to provide continuous automated builds, based on polling the Git source control system during the day and periodic scheduled builds overnight, to support development needs using Jenkins, Git, JUnit, Selenium, and Maven.
  • Coordinated with QA and DEV teams for validating the CRs.
  • Created conditional logic in pages using JSF tags and JSTL.
  • Developed the application web pages using HTML, CSS, JSP, JavaScript, and jQuery.
  • Worked as a full-stack developer responsible for Web, Middle Tier, and Databases in the development of the application.
  • Developed use case diagrams, Object diagrams, Class diagrams, and Sequence diagrams using UML.
  • Implemented the presentation layer using Struts and jQuery.
  • Worked on SWIFT gateway applications like SWIFT Alliance Access (SAA), Foreign Exchange (FX) applications FSS Spectrum and Axletree.
  • Hands-on experience in database testing, stored procedure testing, and executing SQL queries to check data validity and data integrity.
  • Skilled in bug reporting and tracking using Quality Center, Team Foundation Server, Jira, and performing Root Cause Analysis.
  • Implemented email notification using Spring and the Java Mail API.
  • Designed and developed RESTful APIs using CXF and validated them using REST clients, Swagger, and Postman.
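The Jenkins setup described above (daytime SCM polling plus overnight scheduled builds driving Maven) might be expressed as a declarative pipeline like this sketch; the repository URL and schedules are placeholders.

```groovy
// Hypothetical Jenkinsfile matching the polling + nightly-build setup above
pipeline {
    agent any
    triggers {
        pollSCM('H/15 * * * *')  // poll Git for changes during the day
        cron('H 2 * * *')        // periodic scheduled build overnight
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://example.com/repo.git'  // placeholder repo
            }
        }
        stage('Build & Test') {
            steps {
                sh 'mvn clean verify'  // compiles and runs JUnit/Selenium tests
            }
        }
    }
}
```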

Environment: Java, J2EE, JavaScript, HTML, CSS, JSF, QA, DEV, MySQL, DAO, JDBC, jQuery, SQL, PL/SQL, Eclipse, JavaBeans, UML, JSP, JSON, Shell Scripting, Windows 7, Jenkins, Git, JUnit, Selenium, Maven, SOAP, WebLogic, JDK.
