AWS DevOps Engineer / Site Reliability Engineer Resume
PROFESSIONAL SUMMARY:
- Experienced IT professional with a strong concentration in designing and orchestrating workflows and a solid technical background in deploying and maintaining cloud platforms (AWS, Azure), automation, and microservices.
- Participated on architecture solution teams: contributed design ideas following industry best practices and supported code review, source code control, software build processes, upgrades, and testing/QA.
- Designed, configured, and managed public/private cloud infrastructures utilizing Amazon Web Services (AWS) such as EC2, Elastic Load Balancing, Elastic Container Service, S3, Elastic Beanstalk, CloudFront, Elastic File System, RDS, DynamoDB, DMS, VPC, Direct Connect, Route 53, CloudWatch, CloudTrail, CloudFormation, IAM, EMR, and Elasticsearch.
- Hands-on experience in Azure Cloud Services (PaaS & IaaS), Storage, Web Apps, Active Directory, Application Insights, Logic Apps, Data Factory, Azure Monitoring, Visual Studio Online (VSO), and SQL Azure.
- Knowledge of virtualization technologies such as VMware ESX/ESXi, Citrix XenServer, and OpenStack, and involved in the maintenance of virtual servers.
- Provided support to Production, Staging, QA, Development environments for code deployments, changes and general support.
- Designed and deployed a scalable Linux/Windows infrastructure that seamlessly integrated a network storage server, LDAP servers, and Samba servers, reducing infrastructure complexity by 60% and improving development productivity by 50%.
- Worked with Splunk and Datadog for log monitoring.
- Extensive experience in designing and developing software applications with ASP.NET, VB, C#, JavaScript, jQuery, SQL Server, XML web services, RESTful web services, and MVC.
- Experience in configuring NIS, DNS, DHCP, NFS, Samba, FTP, remote access protocols, security management, security troubleshooting, and SOA-based applications. Expertise in installing SQL Server, MySQL, and PostgreSQL.
- Working knowledge of Nginx and of load balancer configuration using ELB and ALB in AWS.
- Expertise in DevOps tools such as Chef, Puppet, SaltStack, Ansible, Docker, Subversion (SVN), Git, Jenkins, Ant, and Maven.
- Proficient in linting cookbooks with Foodcritic, implementing Chef recipes, and deploying them to Amazon EC2.
- Hands-on experience using Git to synchronize with the chef-repo and manage it as if it were source code.
- Extensively used Ruby scripting on Chef Automation for creating cookbooks comprising all resources, data bags, templates, attributes and used Knife commands to manage Nodes.
- Experience automating infrastructure with Chef and Python to build AWS environments autonomously.
- Experience migrating code bases from SVN to Git, and automating, designing, and implementing continuous integration using Jenkins and Hudson.
- Experienced in writing Bash and Python scripts, including Boto3, to supplement the automation provided by Ansible and Terraform for tasks such as encrypting EBS volumes backing AMIs and scheduling Lambda functions for routine AWS tasks (a minimal sketch follows this summary).
- Experienced with version control tools such as Subversion, Git, ClearCase, and Stash, and with client tools such as VisualSVN, TortoiseSVN, SourceTree, Git Bash, GitHub, Git GUI, and other command-line applications for branching and merging strategies, including migrating projects from Subversion repositories to GitHub Enterprise repositories through Team Foundation Server (TFS).
- Expertise in designing project workflows/pipelines using Jenkins as the CI tool and in building Jenkins jobs that create AWS infrastructure from GitHub repositories containing Terraform code.
- Strong knowledge and experience creating CI pipelines with Git/SVN, Jenkins, Maven, Nexus, Docker, and Kubernetes on AWS Red Hat Enterprise AMIs, and automating deployment pipelines.
- Good scripting knowledge of Perl, Bash, shell, and Python.
- Skilled in Atlassian tools such as Jira, Bitbucket, Bamboo, and Confluence, as well as IBM ClearQuest for bug tracking.
- Deployed application servers such as Tomcat and WebLogic on Linux platforms and wrote Bash, Perl, Python, and Ruby scripts on Linux.
- Experience implementing and administering Nagios for monitoring and alerting on servers, and CloudWatch and Splunk for log files, network monitoring, log-trace monitoring, and hard drive status.
- Experience in Installing Firmware Upgrades, kernel patches, systems configuration, performance tuning on Windows/Unix systems.
- Written Terraform scripts for various projects to create GKE Clusters and AWS EC2 instances along with S3 storage buckets.
- Managed environments DEV, SIT, QA, UAT and PROD for various releases and designed instance strategies.
- Excellent understanding of SDLC methodologies like Agile, Waterfall and SCRUM.
- Excellent knowledge of release scheduling with agile methodologies, agile operations, and the related process and tools areas.
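A minimal Python (Boto3) sketch of the EBS/AMI encryption task referenced above; the region, snapshot ID, and KMS key alias are placeholders, not values from any specific project.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def encrypt_snapshot(snapshot_id: str, kms_key_id: str) -> str:
    """Copy an unencrypted snapshot backing an AMI into an encrypted snapshot."""
    response = ec2.copy_snapshot(
        SourceSnapshotId=snapshot_id,
        SourceRegion="us-east-1",
        Encrypted=True,
        KmsKeyId=kms_key_id,
        Description=f"Encrypted copy of {snapshot_id}",
    )
    return response["SnapshotId"]

if __name__ == "__main__":
    # Placeholder identifiers for illustration only.
    new_id = encrypt_snapshot("snap-0123456789abcdef0", "alias/ebs-backup-key")
    print(f"Created encrypted snapshot {new_id}")
```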
TECHNICAL SKILLS LIST:
Cloud Environments: AWS, Azure
Configuration Management Tools: Chef, Puppet, Ansible, Salt Stack
Databases: Oracle, MySQL, MongoDB, SQL Server, MS SQL, NOSQL, DynamoDB
Monitoring Tools: Tableau, Splunk
Build Tools: ANT, MAVEN, Hudson, Jenkins
Version Control Tools: Subversion (SVN), Git, GitHub, Perforce
Web Servers: Apache, Tomcat 8.0, WebSphere, JBoss, WebLogic, RabbitMQ
Languages/Scripts: C, HTML, Shell, Bash, PHP, Python, Chef
SDLC: Agile, Scrum
Web Technologies: HTML, CSS, Java Script, jQuery, Bootstrap, XML, JSON, XSD, XSL, XPATH, Node.JS
Operating Systems: Red Hat 5/6.1/6.2, Ubuntu, Linux & Windows 10, CentOS, Debian
AWS Services: EC2, EMR, ELB, VPC, RDS, AMI, IAM, CloudFormation, S3, CloudWatch, Lambda, SNS, SQS, EBS, Route 53
Bug Tracking Tools: JIRA, Bugzilla, HP Quality Center
Volume Manager: Logical Volume Manager, Veritas Volume Manager, Solaris Volume Manager
PROFESSIONAL EXPERIENCE:
Confidential
AWS DevOps Engineer/ Site Reliability Engineer
Responsibilities:
- Worked on REST APIs and the API management tool Akana, which provides end-to-end, full-lifecycle API management for designing, implementing, securing, managing, monitoring, and publishing APIs.
- Worked on production support/ troubleshooting issues 24/7.
- Used RESTful APIs (REST APIs) for building microservices applications.
- Experience as a cloud AWS DevOps engineer on project teams involving multiple development teams and simultaneous software releases.
- Used Ansible, AWS Lambda, ElastiCache, and CloudWatch Logs to automate the creation of a log-aggregation pipeline on the Elasticsearch, Logstash, Kibana (ELK) stack, forwarding all team logs arriving in CloudWatch to Elasticsearch for processing (see the Lambda sketch after this list).
- Backed up data on EBS volumes to S3 by taking snapshots; strong customer-facing skills while troubleshooting issues.
- Migrated data from on-premises data centers to AWS and created the supporting migration infrastructure in the AWS cloud.
- Automated deployments on AWS by creating IAM roles, used the CodePipeline plugin to integrate Jenkins with AWS, and created EC2 instances to provide virtual servers.
- Involved in Designing and deploying AWS solutions using EC2, S3, RDS, EBS Volumes, Elastic Load Balancer, Auto Scaling groups, Lambda Functions, Apigee, CloudFormation Templates, IAM Roles, Policies.
- Migrated projects from the company data center to AWS, provisioning CentOS and Red Hat servers for development, QA, UAT, and production environments and providing access to all teams working on those applications.
- Configured a deployment to automatically rollback when a deployment fails or when a monitoring threshold specified is met.
- Managed the development, deployment, and release lifecycles by defining processes and writing the tools needed to automate the pipeline. Worked with key Terraform features such as infrastructure as code, execution plans, resource graphs, and change automation.
- Managed AWS EC2 instances using Auto Scaling, Elastic Load Balancing, and Glacier for the QA and UAT environments, as well as infrastructure servers for Git and Ansible. Automated the regular build and deployment processes for pre-prod and prod environments with tools such as Maven, following the software implementation plan.
- Eliminated manual, redundant infrastructure work by creating CloudFormation templates with the AWS Serverless Application Model, deploying RESTful APIs through API Gateway, and triggering Lambda functions.
- Wrote custom Groovy scripts to automate CI/CD pipelines in Jenkins.
- Used the Maven dependency-management system to deploy snapshot and release artifacts to Nexus and share artifacts across projects. Configured and maintained Jenkins to implement the CI process and integrated it with Ant and Maven to schedule builds. Worked with Hudson/Jenkins continuous-integration servers to write and run sanity checks as well as automated RPM builds.
- Assisted developers with establishing and applying appropriate branching, labeling/naming conventions using GIT source control.
- Actively participated in high-level team activities such as suggesting architecture improvements, recommending process improvements, and conducting tool evaluations. Performed system troubleshooting and problem solving across platform and application domains, including on-call escalations for customer-facing issues.
- Installed, set up, and troubleshot Ansible; created and automated platform environment setup.
- Wrote Ansible playbooks with Python and SSH as the wrapper to manage configurations of AWS nodes, tested the playbooks on AWS instances using Python, and ran Ansible scripts to provision dev servers.
- Responsible for Continuous Integration and Continuous Delivery process implementation using Jenkins along with Python and Shell scripts to automate routine jobs.
- Worked on AWS Elastic Beanstalk for fast deployment of applications developed with Python, Ruby, and Docker on familiar servers such as Apache and IIS.
- Used the Artifactory repository for maintaining Java-based release packages and was responsible for build and deployment automation using AWS, Docker, and Kubernetes containers. Deployed applications in Docker containers in the cloud as a PaaS.
- Experience on Java Multi-Threading, Collection Framework, Interfaces, Synchronization, and Exception Handling.
- Wrote shell scripts to apply the integration label to files that required manual labeling. Configured user accounts for continuous-integration tools (Jenkins, Nexus, and Sonar) and handled Jira tickets for SCM support activities.
- Installed, configured, and managed monitoring tools such as AppDynamics and CloudWatch for resource monitoring.
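A minimal Python sketch of the log-forwarding Lambda described above; the Elasticsearch endpoint and index name are placeholders, and a production version would sign requests and use the bulk API.

```python
import base64
import gzip
import json
import os
import urllib.request

# Hypothetical Elasticsearch endpoint supplied via environment variable.
ES_ENDPOINT = os.environ.get("ES_ENDPOINT", "https://search-example.us-east-1.es.amazonaws.com")

def handler(event, context):
    # CloudWatch Logs delivers subscription payloads base64-encoded and gzip-compressed.
    payload = base64.b64decode(event["awslogs"]["data"])
    data = json.loads(gzip.decompress(payload))

    for log_event in data.get("logEvents", []):
        doc = {
            "logGroup": data["logGroup"],
            "logStream": data["logStream"],
            "timestamp": log_event["timestamp"],
            "message": log_event["message"],
        }
        req = urllib.request.Request(
            f"{ES_ENDPOINT}/team-logs/_doc",
            data=json.dumps(doc).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)

    return {"forwarded": len(data.get("logEvents", []))}
```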
Environment: AWS, Git, GitHub, Jenkins, Ansible, Nexus, Docker, Kubernetes, Terraform, Nagios, Jira, AppDynamics, Cassandra, Shell Scripts, Akana, Groovy, JavaScript, JSON, Python, Swagger 3.0/ OAS 3.0.
Confidential, Mooresville, NC
Sr. DevOps /AWS Engineer
Responsibilities:
- Implemented AWS solutions like EMR, EC2, S3, IAM, EBS, Elastic Load Balancer(ELB), Security Group, Auto Scaling, and RDS in Cloud Formation JSON templates.
- Involved in File Manipulations, File Uploads using Python.
- Integrated user-facing elements developed by front-end developers with server-side logic using JavaScript.
- Used AWS Beanstalk for deploying and scaling web applications and services developed with Java, PHP, Node.js, Python, Ruby, and Docker on familiar servers such as Apache, and IIS.
- Worked on Splunk dashboards, creating and implementing new features.
- Worked with Kubernetes to provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts, and managed containerized applications using its nodes.
- Expertise in creating Pods using Kubernetes and worked with Jenkins pipelines to drive all micro services builds out to the Docker registry and then deployed to Kubernetes.
- Managed local deployments in Kubernetes, creating local cluster and deploying application containers.
- Developed an Ansible playbook for Gerrit and ELK cluster, implementing automated deployment and configuration.
- Created snapshots and Amazon Machine Images (AMIs) of instances for backup, and created Identity and Access Management (IAM) policies for delegated administration within AWS.
- Created Python scripts to fully automate AWS services including ELB, CloudFront distributions, EC2, security groups, and S3; these scripts create stacks and single servers and join web servers to stacks.
- Built and deployed a Java web application to EC2 application servers in a Continuous Integration Agile Environment and automated the complete process
- Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing and Glacier for our QA and UAT environments as well as infrastructure servers for GIT and Chef.
- Used IAM to create new accounts, roles, groups, and policies, and developed critical modules such as generating Amazon Resource Names (ARNs) and integration points with DynamoDB and RDS.
- Created CloudWatch dashboards for monitoring CPU utilization, network in/out, packets in/out, and other instance metrics.
- Experience managing and tuning MySQL and writing SQL scripts.
- Created monitors, alarms, and notifications for EC2 hosts using CloudWatch (a brief Boto3 sketch follows this list).
- Installed Chef Server Enterprise on-premises/workstation, bootstrapped nodes using Knife, and worked with both hosted and on-premises Chef Enterprise.
- Wrote Recipes and Cookbooks and uploaded them to Chef-server, managed On-site OS/Applications/Services/Packages using Chef as well as AWS for EC2/S3/Route53 & ELB with Chef cookbooks.
- Created Docker images using a Docker file, Docker container snapshots, removing images and managing Docker volumes.
- Worked with the serverless service AWS Lambda to host Java web applications.
- Automated various infrastructure activities like Continuous Deployment, Application Server setup, Stack monitoring using Ansible playbooks and has Integrated Ansible with Jenkins.
- Created and maintained continuous integration (CI) using Jenkins/Bamboo across environments to support an automated agile development process that lets teams deploy code safely and repeatedly.
- Integrated JaCoCo with Jenkins for code coverage analysis in Java VM based environments.
- Worked on setting up Kafka for streaming data and monitoring for the Kafka Cluster.
- Involved in the creation and design of data-ingest pipelines using technologies such as Apache Storm and Kafka.
- Built Jenkins jobs to create AWS infrastructure from GitHub repos containing Terraform code and administered/engineered Jenkins for managing weekly Builds.
- Optimized the configuration of Amazon Redshift clusters, data distribution, and data processing.
- Used Jenkins integrated with Ansible to generate builds, ran unit tests with the JUnit plugin and regression tests with Selenium, and used Nexus/Artifactory for storing JAR, WAR, and EAR files, AppDynamics and the ELK stack for monitoring, SonarQube for code-coverage reports, and Jira for documentation.
- Forwarded logs and events (system logs, CloudWatch, CloudTrail, AWS Config) and aggregated them in Sumo Logic.
- Installed, Configured and maintained Nagios for over 300 hosts and 2000 services.
- Worked on branching, labeling, and merging strategies for all applications in Git.
- Created SonarQube reporting dashboard to run analysis for every project.
- Wrote Gradle and Maven scripts to automate build processes, managed the Maven repository using Nexus, and used it to share snapshots and releases.
- Managed Amazon Redshift clusters, including launching clusters and specifying node types.
- Used Ant and Maven as build tools on Java projects to produce build artifacts from source code.
- Managed Maven project dependencies by creating Parent-child relationships between all projects.
- Maintained JIRA for tracking and updating project defects and tasks ensuring the successful completion of tasks in a sprint.
- Managed different infrastructure resources, like physical machines, VMs and even Docker containers using Terraform.
- Experience in Windows and Linux administration.
- Troubleshot the automation of installing and configuring .NET applications in test and production environments and was involved across the complete lifecycle.
- Good experience with DynamoDB, Redshift, and Amazon EMR.
- Experienced in configuring DNS, LDAP, NFS, DHCP, Samba, and TCP/IP, with experience in process automation and system monitoring using shell scripts.
- Planned release schedules with agile methodology and coordinated releases with engineering & SQA for timely delivery.
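A minimal Python (Boto3) sketch of the CloudWatch monitor/alarm work noted above; the region, instance ID, and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def create_cpu_alarm(instance_id: str, sns_topic_arn: str, threshold: float = 80.0) -> None:
    """Alarm when average CPU utilization breaches the threshold for two periods."""
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        AlarmDescription="CPU utilization above threshold",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,              # 5-minute evaluation periods
        EvaluationPeriods=2,     # two consecutive breaches before alarming
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],
    )

if __name__ == "__main__":
    # Placeholder instance ID and SNS topic for illustration only.
    create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:us-east-1:123456789012:ops-alerts")
```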
Environment: AWS, CHEF, Jenkins, Maven, Jira, Linux, Kubernetes, Terraform, Docker, AppDynamics, Nagios, SonarQube, Node.JS, Nexus, uDeploy, JaCoCo, PowerShell, Bash, Ruby and Python, Redis, RedHat 6.1/ 6.2.
Confidential, Boston, MA
AWS Engineer/ Site Reliability Engineer
Responsibilities:
- Created and configured AWS EC2 instances using preconfigured AMIs for RHEL, CentOS, and Ubuntu, as well as corporate VM images containing the complete packages needed to run builds and tests on those EC2 instances.
- Deployed the Jenkins continuous-integration tool on a Linux EC2 instance and installed the Amazon EC2 plugin to add AWS EC2 as a new cloud, allowing new EC2 instances to be used as Jenkins build agents.
- Administered and Engineered Jenkins for managing weekly Build, Test and Deploy chain, SVN/GIT with Dev/Test/Prod Branching Model for weekly releases.
- Rapid-Provisioning and Configuration Management for Linux/Unix, Windows using Chef and Cloud Formation Templates on Amazon Web Services.
- Worked on AWS Lambda for deploying applications with zero downtime through AWS Elastic Beanstalk in the pipeline.
- Worked with Site Reliability Engineer to implement Datadog system metrics, analytics, and dashboards.
- Worked on Lambda functions that aggregate data from incoming events and store the results in Amazon DynamoDB; the functions also publish data to Amazon CloudWatch for simple metrics monitoring (a brief sketch follows this list).
- Used Puppet to deploy ELK for automating continuous deployment (CD) and configured Slave Nodes and deployment failure reporting.
- Worked with PowerShell 3.0 for installing windows features and roles and for automating monthly security patching.
- Extensive use of Elastic Load Balancing mechanism with Auto Scaling feature to scale the capacity of EC2 Instances across multiple availability zones in a region to distribute incoming high traffic for the application.
- Created AWS IAM users with policies such as AWSCodePipelineFullAccess, AmazonEC2FullAccess, AmazonS3FullAccess, and AWSCodeDeployFullAccess, generated security credentials, and provided them to users for AWS access.
- Configured software and services using Ansible playbooks, added users to Identity and Access Management (IAM), and created an S3 bucket to hold deployment files.
- Solved manual, redundant infrastructure issues by creating CloudFormation templates with the AWS Serverless Application Model, deploying RESTful APIs through API Gateway, and triggering Lambda functions.
- Used Kubernetes to deploy, scale, and load-balance applications, and worked with Docker Engine, Docker Hub, Docker images, and Docker Compose to handle images for installations and domain configurations.
- Used Docker to run and deploy the application across multiple containers with Docker Swarm and Weave for auto-discovery.
- Continuous Architectural changes to move software system offerings to a distributed service-based architecture utilizing Docker/Kubernetes Technologies.
- Used Docker to containerize custom web applications and deployed them on DigitalOcean Ubuntu instances through a Swarm cluster; automated application deployment in the cloud using Docker Hub, Docker Swarm, and Vagrant.
- Worked on dynamically adding and removing servers in the AWS production environment and automated backups with shell scripts on Linux/Unix to transfer data into S3 buckets.
- Setting up JIRA as defect tracking system and configured various workflows, customizations and plug-ins for JIRA.
- Set up the code-review tool Gerrit with Git and integrated it with the CI system to support peer code reviews and identify code issues early in the cycle through code analysis.
- Performed and deployed Builds for various Environments like QA, Integration, UAT and Productions Environments.
- Deployed and managed web services with Tomcat and JBoss. Provided end-user training for all TortoiseSVN and Jira users to use the tools effectively.
- Responsible for creating and managing user accounts, security groups, disk space, Process monitoring in Linux/Unix.
- Worked on strengthening security by implementing and maintaining Network Address Translation (NAT) in the company's network.
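A minimal Python sketch of the aggregation Lambda described above; the table name, attribute names, and metric namespace are placeholders.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
cloudwatch = boto3.client("cloudwatch")
table = dynamodb.Table("event-aggregates")  # hypothetical table name

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        source = record.get("eventSource", "unknown")
        # Atomically increment a per-source counter in DynamoDB.
        table.update_item(
            Key={"source": source},
            UpdateExpression="ADD event_count :one",
            ExpressionAttributeValues={":one": 1},
        )

    # Publish the batch size to CloudWatch for simple monitoring.
    cloudwatch.put_metric_data(
        Namespace="Custom/Events",
        MetricData=[{"MetricName": "ProcessedRecords", "Value": len(records), "Unit": "Count"}],
    )
    return {"processed": len(records)}
```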
Environment: AWS EC2, Jenkins CI, Kubernetes, Elastic Load Balancing, Elastic Beanstalk, Elastic Container Service, VPC, RDS, ECS, CloudFront, CloudFormation, ElastiCache, CloudWatch, Route 53, Redshift, Lambda, DynamoDB, Gerrit, Git.
Confidential
Site Reliability/Build & Release Engineer
Responsibilities:
- Initiated planning sessions for development and testing teams to simplify deployment activities.
- Developed various test cases to ensure proper testing is performed across all corners of application post deployment.
- Executed the automation from commit to deployment by implementing a CD pipeline with the help of Jenkins and Chef.
- Expertise with all aspects of Chef, including the Chef server, workstations, nodes, Chef clients, and components such as Test Kitchen.
- Experience using Jenkins pipelines to drive all microservices builds out to the Docker registry and then deploy them to Kubernetes; created and managed pods with Kubernetes and managed a PaaS for deployments using Docker and Kubernetes.
- Installed and configured Kubernetes, the Chef server/workstation, and nodes via CLI tools, and wrote Dockerfiles to create new images based on working environments for testing before deployment.
- Worked on AWS Lambda to run the code in response to events, such as changes to data in an Amazon S3 bucket, Amazon DynamoDB table, HTTP requests using AWS API Gateway and invoked the code using API calls made using AWS SDKs.
- Used SaltStack for continuous code deployment and real-time automation.
- Implemented and deployed UrbanCode Deploy (uDeploy) to dynamically deploy the company's website builds.
- Created end to end automation of Continuous Deployment and Configuration Management in UDeploy.
- Achieved continuous delivery in a highly scalable environment using Docker coupled with the Nginx load balancer.
- Wrote Vagrant scripts to spin up servers on developer workstations and created Vagrant Windows and Linux boxes using Packer.
- Maintained the interfaces and secure connections between Jenkins and CI/CD tools. Configured jobs and pipelines using Jenkins.
- Connected continuous integration to the Git version-control repository so that builds run continuously as developer check-ins arrive. Responsible for providing an end-to-end solution for hosting the web application on AWS with integration to S3 buckets.
- Supervised the DevOps team for infrastructure support on AWS cloud.
- Designed a highly available secure multi zone AWS cloud infrastructure utilizing Chef with AWS Cloud Formation.
- Responsible for managing Ubuntu, Linux and windows virtual servers on AWS EC2 instance by creating Chef Nodes through Open Source Chef Server.
- Maximized throughput between CPUs and drives and improved data-processing performance with the help of Amazon Redshift.
- Launched and configured Amazon EC2 cloud servers using Linux and Ubuntu AMIs and configured the servers for specific applications using Jenkins.
- Implemented automated Nagios alerts and email notifications in the Ops environment using Python scripts executed through Chef (a brief sketch follows this list).
- Enabled Amazon CloudWatch to monitor major metrics such as network packets, CPU utilization, and load balancer health.
- Utilized Amazon Elastic Block Storage which provides persistent block storage volumes for use with Amazon EC2 instances in the AWS cloud.
- Enhanced Amazon Virtual Private Cloud in a scalable environment, using advanced security features such as security groups and network access control lists to enable inbound and outbound filtering at the instance and subnet levels.
- Worked with Amazon Elastic Load Balancing, which automatically distributes traffic across multiple Amazon EC2 instances to achieve fault tolerance in applications.
- Incorporated AWS OpsWorks, a configuration-management service that uses Chef to automate how servers are configured and deployed across Amazon EC2 instances.
- Well versed in Amazon Route 53, which effectively connects user requests to infrastructure running on Amazon EC2 instances and Amazon S3 buckets.
- Automated deployments to WebSphere servers by developing Python scripts.
- Established Ant and Maven scripts for build activities in QA, staging, and production environments.
- Implemented and used Nagios 3.0 as the monitoring tool for services such as CPU, hard drive, memory, users, HTTP, and SSH across the servers.
- Provided 24x7 support for production issues on call support rotation basis.
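A minimal Python sketch of the Nagios email-notification script referenced above; the SMTP host and addresses are placeholders, and Nagios would pass the host, service, state, and plugin output as arguments.

```python
import smtplib
import sys
from email.mime.text import MIMEText

SMTP_HOST = "smtp.example.com"        # hypothetical mail relay
ALERT_FROM = "nagios@example.com"
ALERT_TO = ["ops-team@example.com"]

def send_alert(host: str, service: str, state: str, output: str) -> None:
    """Email a Nagios-style alert for a service state change."""
    body = f"Host: {host}\nService: {service}\nState: {state}\nDetails: {output}\n"
    msg = MIMEText(body)
    msg["Subject"] = f"[Nagios] {service} on {host} is {state}"
    msg["From"] = ALERT_FROM
    msg["To"] = ", ".join(ALERT_TO)

    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.sendmail(ALERT_FROM, ALERT_TO, msg.as_string())

if __name__ == "__main__":
    # Nagios notification commands pass these values as command-line arguments.
    send_alert(*sys.argv[1:5])
```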
Environment: Git, Jenkins, Chef, AWS EC2, S3, Route 53, VPC, Elastic Block Storage, RDS, Python, CloudWatch, Docker, Kubernetes, Linux, AMI, Elastic Load Balancing, Vagrant, Nagios, Auto Scaling groups, Apache Tomcat, JIRA, Ubuntu, Windows Server NT, Oracle server.