
Sr. ELK Stack / DevOps Engineer Resume


Phoenix, AZ

SUMMARY

  • Over 10 years of experience as a Sr. DevOps/ELK Engineer in build/release management, SCM, environment management, and build/release engineering, automating the building, releasing, and promotion of changes from one environment to another.
  • Good experience building Elasticsearch high-availability clusters and Logstash environments.
  • Worked on DevOps/Agile operations processes and tools (environment, service, and unit-test automation; build and release automation; code review; incident and change management).
  • Worked extensively with Hudson, Jenkins, TeamCity, and TeamForge for continuous integration and end-to-end automation of all builds and deployments.
  • Proficient in SQLite, MySQL, and PostgreSQL databases with Python; experienced in developing web services in Python.
  • Development and configuration experience with software provisioning tools such as Chef, Puppet, Docker, and Ansible.
  • Implemented integrated delivery (CI/CD) using Jenkins, Bamboo, Nexus, Yum, and Puppet; experienced in administering version control systems such as Subversion and Perforce.
  • Experienced in Linux/UNIX system administration, system and server builds, installations, migrations, upgrades, patches, and troubleshooting on RHEL 4.x/5.x, with Subversion (SVN), ClearCase, Git, Perforce, and TFS.
  • Integrated Jenkins with various DevOps tools such as Nexus, SonarQube, Puppet, CA Nolio, HP CDA, HP ALM, and HP QTP.
  • Experience building Kibana and Grafana dashboards, used for real-time performance and network-traffic patterns.
  • Experience in server infrastructure development on AWS using Chef, with extensive use of Virtual Private Cloud (VPC), CloudFormation, CloudFront, EC2, RDS, S3, Route 53, SNS, SQS, and CloudTrail.
  • Good experience with code-quality analysis tools such as SonarQube for testing the quality of developed code.
  • Hands-on knowledge of software containerization platforms like Docker and container-orchestration tools like Docker Swarm; working knowledge of Kubernetes.
  • Configured New Relic for application performance monitoring and infrastructure monitoring.

TECHNICAL SKILLS

Operating Systems: Linux (CentOS, Ubuntu), UNIX, Windows, AIX

Version Control Tools: SVN, Git, TFS, CVS and IBM Rational ClearCase

Web/Application Servers: WebLogic, Apache Tomcat, WebSphere and JBoss

Automation Tools: Jenkins/Hudson, BuildForge and Bamboo

Build Tools: Maven, Ant, MSBuild and Docker

Configuration Tools: Chef, Puppet, Ansible, Docker, Kubernetes, OpenShift

Databases: Oracle, MySQL, PostgreSQL

Bug Tracking Tools: JIRA, Remedy, ServiceNow and IBM ClearQuest

Scripting: Shell, Ruby, Python and JavaScript

Virtualization Tools: Docker, Oracle VM VirtualBox and VMware

Monitoring Tools: Nagios, CloudWatch, Splunk

Cloud Platform: AWS (EC2, VPC, EBS, CloudFormation, AWS Config, S3), Terraform

Languages: C/C++, Java, Python and PL/SQL

PROFESSIONAL EXPERIENCE

Sr. ELK Stack/ DevOps Engineer

Confidential

Responsibilities:

  • Work closely with product management and development teams to rapidly translate customer data and requirements into products and solutions.
  • Analyze structured and unstructured data points to design data-architecture solutions for scalability, high availability, fault tolerance, and elasticity.
  • Design, develop, and implement high-volume, low-latency web-based Java applications for mission-critical systems; follow State Street standard life-cycle methodologies, create design documents, and perform program coding and integration testing.
  • Improved Kafka cluster performance by fine-tuning Kafka configurations at the producer, consumer, and broker levels (a producer-tuning sketch follows this list).
  • Used Jenkins to build different kinds of projects: Freestyle, Maven-based, Pipeline, and Multibranch Pipeline.
  • Implemented disaster recovery by creating Elasticsearch clusters in two data centers and configuring Logstash to send the same data from Kafka to both clusters (a pipeline sketch follows this list).
  • Installed and deployed Kafka, ZooKeeper, the ELK stack, and Grafana using Ansible playbooks.
  • Wrote and maintained wiki documents covering planning, installation, and deployment of the ELK stack and Kafka.
  • Wrote custom plugins to enhance/customize open-source code as needed.
  • Wrote and executed MySQL database queries from Python using the MySQL Connector/Python and MySQLdb packages.
  • Wrote Salt automation scripts for managing, expanding, and replacing nodes in large clusters.
  • Synced Elasticsearch data between the data centers using Kafka and Logstash.
  • Managed the Kafka cluster and integrated Kafka with Elasticsearch.
  • Proficient with container systems like Docker and container orchestration such as EC2 Container Service (ECS) and Kubernetes; worked with Terraform.
  • Separated Java URL data from the Elasticsearch cluster and transferred it to another cluster using Logstash.
  • Snapshotted Elasticsearch index data and archived it to the snapshot repository every 12 hours (a snapshot-job sketch follows this list).
  • Strong expertise in implementing Kinesis, Elasticsearch, Logstash, and Kibana plugins.
  • Created Kubernetes nodes and pods and spread them across all availability zones for HA.
  • Used Kubernetes pod anti-affinity to spread data nodes across AZs (a manifest sketch follows this list).
  • Collected Kubernetes metrics using Prometheus and shipped them to Elasticsearch.
  • Analyzed log data, filtered the required fields through Logstash configuration, and sent the results to Elasticsearch.
  • Temporarily enabled cluster logging and search slow logs via REST API calls, then analyzed those logs to troubleshoot Elasticsearch functional and performance issues.
  • Updated cluster settings using both API calls and configuration-file changes.
  • Prepared an Elasticsearch operations guide and trained the operations team on day-to-day operations such as backup, restore, reindexing, and troubleshooting frequently occurring problems.
  • Performed cluster maintenance, migrated data from one server to another, and upgraded the ELK stack.
  • Merged data across shards to avoid data crashes and support load balancing.
  • Used Kibana to visualize data with various dashboard displays such as metrics, graphs, pie charts, and aggregation tables.
  • Used X-Pack security and monitoring tools that provide system metrics, service state, process state, and file-system usage.
  • Strong expertise in object-oriented design and analysis, programming styles and design patterns.
  • Developing distributed complex-event-processing pipelines with an emphasis on simplicity.
  • Experience with code repositories and continuous integration.
  • Experience with Bitbucket, Confluence, and Jira as part of a modern delivery system.
  • Experienced in developing models with contextual data and proficient in machine-learning algorithms.
  • Develop automation for the setup and maintenance of the AMA platform.
  • Build, maintain, and scale infrastructure for Production, QA, and Dev environments.
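
A minimal sketch of the producer-level Kafka tuning, using the confluent-kafka Python client; the brokers, topic, and specific values are illustrative assumptions rather than the actual production settings:

    # Hypothetical producer tuning: trade a little latency for throughput.
    from confluent_kafka import Producer

    producer = Producer({
        "bootstrap.servers": "kafka1:9092,kafka2:9092",  # assumed brokers
        "acks": "all",              # wait for all in-sync replicas (durability)
        "compression.type": "lz4",  # shrink batches on the wire and on disk
        "linger.ms": 20,            # wait briefly so small messages batch up
        "batch.size": 131072,       # larger per-partition batches, in bytes
    })

    producer.produce("app-logs", key="host-1", value='{"msg": "hello"}')
    producer.flush()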
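A sketch of the dual-data-center pipeline: Logstash consumes once from Kafka, and because every output block receives every event, the same data lands in both Elasticsearch clusters. Host names, topic, and index pattern are assumptions:

    # pipeline.conf (illustrative)
    input {
      kafka {
        bootstrap_servers => "kafka1:9092"
        topics            => ["app-logs"]
        group_id          => "logstash-dr"
      }
    }
    output {
      elasticsearch {
        hosts => ["https://es-dc1:9200"]   # primary data center
        index => "app-logs-%{+YYYY.MM.dd}"
      }
      elasticsearch {
        hosts => ["https://es-dc2:9200"]   # DR data center
        index => "app-logs-%{+YYYY.MM.dd}"
      }
    }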
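The 12-hourly snapshot job could look like the following Python sketch, scheduled from cron; it assumes a snapshot repository named "archive" is already registered, and the endpoint is hypothetical:

    # Take a timestamped snapshot of all indices via the snapshot API.
    from datetime import datetime, timezone
    import requests

    ES_URL = "http://es-coordinator:9200"  # assumed endpoint
    name = "snap-" + datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")

    resp = requests.put(
        f"{ES_URL}/_snapshot/archive/{name}",
        params={"wait_for_completion": "false"},  # run in the background
        json={"indices": "*", "include_global_state": False},
        timeout=30,
    )
    resp.raise_for_status()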
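The pod anti-affinity could be expressed roughly as below in the data-node pod spec (the label and app name are assumptions); requiring a distinct topology.kubernetes.io/zone per matching pod makes the scheduler spread data nodes across AZs:

    # Fragment of a pod spec (illustrative labels)
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: es-data
            topologyKey: topology.kubernetes.io/zone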

Environment: Kafka, Ansible 2.7, Jenkins, Elasticsearch ECE, Logstash, Filebeat, Metricbeat, JavaScript.

DevOps / ELK Stack Engineer

Confidential, Phoenix, AZ

Responsibilities:

  • Provided design recommendations and thought leadership to improve review processes and resolve technical problems.
  • Worked with product managers to architect the next generation of Workday search.
  • Benchmarked Elasticsearch 5.6.4 for the required scenarios.
  • Worked on configuring the EFK stack and used it for analyzing the logs from different applications.
  • Created the cluster and implemented cluster backups by taking snapshots with Curator.
  • Spun up the environment with Chef cookbooks and modified them per our requirements.
  • Created users for application teams to view their logs using curl statements, granting them read-only access (an example follows this list).
  • Used the Curator API on Elasticsearch for data backup and restore.
  • Configured X-Pack for cluster security and monitoring, and created watches to check node health and availability.
  • Used AWS Elastic Beanstalk for deploying and scaling web applications and services developed with Java, PHP, Node.js, Python, Ruby, and Docker on familiar servers such as Apache and IIS.
  • Worked on cloud automation using AWS CloudFormation templates.
  • Hands-on experience with EC2, VPC, subnets, route tables, internet gateways, IAM, Route 53, VPC peering, S3, ELB, RDS, security groups, CloudWatch, and SNS on AWS.
  • Implemented a continuous delivery framework using Jenkins, Chef, and Maven in a Linux environment.
  • Responsible for continuous integration (CI) and continuous delivery (CD) process implementation using Jenkins, along with shell scripts to automate routine jobs.
  • Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing, and Glacier for our QA and UAT environments, as well as infrastructure servers for Git and Chef.
  • Installed and deployed Kafka, ZooKeeper, the ELK stack, and Grafana using Ansible playbooks.
  • Skilled in monitoring servers using Nagios, Datadog, and CloudWatch, and using the EFK stack (Elasticsearch, Fluentd, Kibana).
  • Managed servers on OpenStack and Amazon Web Services (AWS) platform instances using Chef configuration management.
  • Used Chef attributes, templates, recipes, and files, written in Ruby, to manage configurations across various nodes (a recipe sketch follows this list).
  • Experience with container-based deployments using Docker; worked with Docker images, Docker Hub, and Docker registries.
  • Managed server configurations using Chef, configured Jenkins builds for continuous integration and delivery, and automated web-server content deployments via shell scripts.
  • Managed containers with Docker by writing Dockerfiles, set up automated builds on Docker Hub, and installed and configured Kubernetes.
  • Implemented a production-ready, load-balanced, highly available, and fault-tolerant Kubernetes infrastructure.
  • Worked with Jira to create projects, mail handlers, and notification schemes.
  • Worked with ServiceNow to create and report tickets, change dashboards, and use the service catalog.
  • Maintained product release wikis on Confluence, administered Jira, and managed raised tickets.
  • Worked in an Agile/Scrum environment with daily stand-up meetings.
  • Managed regular changes in priority driven by shifting customer priorities.
  • Configured Logstash input, filter, and output plugins (database, JMS, and log-file sources with Elasticsearch as output), converting large volumes of search-index data to Elasticsearch.
  • Performed Elasticsearch capacity planning and cluster maintenance; continuously looked for ways to improve and set a very high bar for quality.
  • Wrote custom plugins to enhance/customize open-source code as needed, and wrote Salt automation scripts for managing, expanding, and replacing nodes in large clusters.
  • Synced Elasticsearch data between the data centers using Kafka and Logstash; managed the Kafka cluster and integrated Kafka with Elasticsearch.
  • Installed and configured Curator to delete indices older than 90 days (an action-file sketch follows this list).
  • Responsible for designing and deploying new ELK clusters (Elasticsearch, Logstash, Kibana, Beats, Kafka, ZooKeeper, etc.).
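
The read-only access could be granted with curl statements along these lines, assuming X-Pack security on the 5.6.x cluster (newer releases use /_security/ instead of /_xpack/security/); the role, user, and index pattern are hypothetical:

    # Create a role that can only read the team's indices.
    curl -u elastic -X POST "https://es:9200/_xpack/security/role/app_logs_read" \
      -H 'Content-Type: application/json' \
      -d '{"indices": [{"names": ["app-logs-*"], "privileges": ["read"]}]}'

    # Create a user limited to that role.
    curl -u elastic -X POST "https://es:9200/_xpack/security/user/app_team" \
      -H 'Content-Type: application/json' \
      -d '{"password": "changeme", "roles": ["app_logs_read"]}'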
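A minimal sketch of the Chef pattern described above (a node attribute feeding an ERB template rendered in Ruby); the package, paths, and attribute names are assumptions:

    # Install Logstash and render its pipeline from an ERB template.
    package "logstash"

    template "/etc/logstash/conf.d/pipeline.conf" do
      source "pipeline.conf.erb"
      owner "logstash"
      group "logstash"
      mode "0644"
      variables(es_hosts: node["logstash"]["es_hosts"])  # assumed node attribute
      notifies :restart, "service[logstash]"
    end

    service "logstash" do
      action [:enable, :start]
    end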
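The 90-day cleanup could use a Curator action file like this sketch; the index prefix and timestring are assumptions:

    # actions.yml (run via: curator --config curator.yml actions.yml)
    actions:
      1:
        action: delete_indices
        description: Delete app-logs indices older than 90 days
        options:
          ignore_empty_list: true
        filters:
          - filtertype: pattern
            kind: prefix
            value: app-logs-
          - filtertype: age
            source: name
            direction: older
            timestring: '%Y.%m.%d'
            unit: days
            unit_count: 90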

Environment: ELK stack, ServiceNow, Kafka, Beats, Python, Java, Git, SVN, Maven, Ansible, Puppet, Docker, Jenkins, Apache web server, JIRA, Windows, PowerShell, AWS, Chef, MySQL, Kubernetes, VMware/OpenStack servers.

ELK/DevOps Admin

Confidential

Responsibilities:

  • Responsible for Elasticsearch mapping creation and document indexing, including deploying, managing, and tuning/optimizing large-scale Elasticsearch clusters.
  • Managed individual project priorities and deliverables and communicated progress to internal teams and executives.
  • Worked with partner divisions of NBC Universal to drive new ELK platform capabilities and roadmaps.
  • Worked with Ansible to automate deploying and testing new builds in each environment, setting up new nodes, and configuring machines/servers.
  • Developed complex applications that scale to high-volume, production quality.
  • Provided design recommendations and thought leadership to improve review processes and resolve technical problems.
  • Worked on developing configuration scripts for dev and production servers.
  • Designed, built, deployed, maintained, and enhanced the ELK platform.
  • Worked with product managers to architect the next generation of Workday search.
  • Performed Elasticsearch capacity planning and cluster maintenance; continuously looked for ways to improve and set a very high bar for quality.
  • Configured Logstash input, filter, and output plugins with database, JMS, and log-file sources and Elasticsearch as output.
  • Built and maintained Docker container clusters managed by Kubernetes on GCP (Google Cloud Platform) using Linux, Bash, Git, and Docker; utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.
  • Proficient with container systems like Docker and container orchestration such as EC2 Container Service (ECS) and Kubernetes; worked with Terraform.
  • Separated Java URL data from the Elasticsearch 6.5.0 cluster and transferred it to the Elasticsearch 7.9 cluster using Logstash (a migration-pipeline sketch follows this list).
  • Maintained change control and testing processes for all modifications and deployments.
  • Used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
  • Experience with container-based deployments using Docker; worked with Docker images, Docker Hub, Docker registries, and Kubernetes.
  • Used Jenkins pipelines to drive all microservice builds out to the Docker registry and then deployed them to Kubernetes; created and managed pods with Kubernetes (a Jenkinsfile sketch follows this list).
  • Used Elasticsearch not only to power search but also, with the ELK stack and Beats, for end-to-end logging and monitoring of our systems.
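
The 6.5.0-to-7.9 transfer could be done with a Logstash pipeline along these lines, reading from the old cluster and writing to the new one; the hosts, index names, and query are assumptions:

    # migration.conf (illustrative)
    input {
      elasticsearch {
        hosts => ["http://es-650:9200"]
        index => "app-data"
        query => '{ "query": { "term": { "type": "java-url" } } }'
      }
    }
    output {
      elasticsearch {
        hosts => ["http://es-790:9200"]
        index => "java-urls"
      }
    }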
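A minimal Jenkinsfile sketch of the build-push-deploy flow, assuming the Docker Pipeline plugin and a kubectl-capable agent; the registry, credentials ID, and manifest path are hypothetical:

    // Build the microservice image, push it, then deploy to Kubernetes.
    pipeline {
      agent any
      stages {
        stage('Build & Push') {
          steps {
            script {
              def img = docker.build("registry.example.com/app:${env.BUILD_NUMBER}")
              docker.withRegistry('https://registry.example.com', 'registry-creds') {
                img.push()
              }
            }
          }
        }
        stage('Deploy') {
          steps {
            sh 'kubectl apply -f k8s/deployment.yaml'
          }
        }
      }
    }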

Environment: ELK Stack, GCP, Logstash, Docker, Jenkins, Terraform, Ansible.

ELK Admin

Confidential

Responsibilities:

  • Responsible for Elasticsearch mapping creation and document indexing, including deploying, managing, and tuning/optimizing large-scale Elasticsearch clusters.
  • Developed Logstash pipeline configurations.
  • Wrote custom filter plugins in Ruby to enrich pipeline data before ingesting it into Elasticsearch (a plugin sketch follows this list).
  • Wrote infrastructure code using Terraform 0.12 and Ansible to build all three environments; the code was written for the sandbox first and then promoted to the other environments.
  • Wrote a Jinja2 template to generate the logstash.yml file dynamically with the Ansible template module (a task sketch follows this list).
  • Wrote Watcher functionality to implement monitoring alerts.
  • Implemented AWS solutions using EC2, S3, RDS, EBS, Elastic Load Balancer, and Auto Scaling groups; optimized volumes and EC2 instances.
  • Performed Elasticsearch capacity planning and cluster maintenance; continuously looked for ways to improve and set a very high bar for quality.
  • Configured Logstash input, filter, and output plugins with database, JMS, and log-file sources and Elasticsearch as output.
  • Worked with internal clients to move them from Splunk to this new observability platform.
  • Helped build the Logstash environment on AWS with Terraform and Ansible.
  • Wrote a pipeline configuration file that pulls data from Kafka and enriches it before it gets into Elasticsearch.
  • Came up with ILM strategies.
  • Sank data into Solr through Morphlines and created Solr collections.
  • Came up with a disaster-recovery (and hence high-availability) solution and architecture, presented it to management, and implemented it in lower environments.
  • Wrote Python scripts to automate deployments such as ILM and RBAC policies (an ILM sketch follows this list).
  • Used Elasticsearch not only to power search but also, with the ELK stack and Beats, for end-to-end logging and monitoring of our systems.
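
A custom Logstash filter plugin in Ruby might look like this minimal sketch; the plugin name, option, and enrichment field are hypothetical, not the actual plugins written:

    # lib/logstash/filters/enrich_env.rb (illustrative)
    require "logstash/filters/base"
    require "logstash/namespace"

    class LogStash::Filters::EnrichEnv < LogStash::Filters::Base
      config_name "enrich_env"

      # Environment label stamped onto every event (assumed option).
      config :environment, :validate => :string, :default => "sandbox"

      def register
      end

      def filter(event)
        # Enrich the event before it is shipped to Elasticsearch.
        event.set("[labels][environment]", @environment)
        filter_matched(event)
      end
    end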
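The dynamic logstash.yml generation could be an Ansible 2.7 task plus a Jinja2 template along these lines; the paths and variables are assumptions, and a "restart logstash" handler is presumed to exist:

    # tasks/main.yml (illustrative)
    - name: Generate logstash.yml dynamically
      template:
        src: logstash.yml.j2
        dest: /etc/logstash/logstash.yml
        owner: logstash
        group: logstash
        mode: "0644"
      notify: restart logstash

    # templates/logstash.yml.j2 (illustrative)
    node.name: {{ inventory_hostname }}
    pipeline.workers: {{ ansible_processor_vcpus }}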
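A Python sketch for automating ILM policy deployment, assuming a reachable Elasticsearch endpoint; the credentials, policy name, and thresholds are hypothetical:

    # Push (or update) an ILM policy through the _ilm API.
    import requests

    ES_URL = "https://es.example.com:9200"  # assumed endpoint
    AUTH = ("elastic", "changeme")          # assumed credentials

    policy = {
        "policy": {
            "phases": {
                "hot": {"actions": {"rollover": {"max_size": "50gb", "max_age": "7d"}}},
                "delete": {"min_age": "90d", "actions": {"delete": {}}},
            }
        }
    }

    resp = requests.put(f"{ES_URL}/_ilm/policy/logs-default", auth=AUTH,
                        json=policy, timeout=30)
    resp.raise_for_status()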

Environment: Terraform 0.12, Ansible 2.7, Jenkins, Elasticsearch ECE, Logstash, Filebeat, Metricbeat, JavaScript, Ruby.

Linux & UNIX Admin

Confidential

Responsibilities:

  • Responsible for handling tickets raised by end users, including package installation, login issues, and access issues.
  • Performed user management: adding, modifying, deleting, and grouping users.
  • Responsible for monthly preventive maintenance of the servers and configuration of RAID on the servers.
  • Managed resources using disk quotas.
  • Documented issues daily in the resolution portal.
  • Responsible for change-management releases scheduled by service providers.
  • Generated weekly and monthly reports for the tickets worked on and sent them to management.
  • Managed systems operations with final accountability for smooth installation, networking, operation, and troubleshooting of hardware and software in a Linux environment.
  • Identified operational needs of various departments and developed customized software to enhance the systems' productivity.
  • Ran a Linux Squid proxy server with access restrictions enforced through ACLs and passwords (a configuration sketch follows this list).
  • Established and implemented firewall rules and validated them with vulnerability-scanning tools.
  • Proactively detected computer-security violations, collected evidence, and presented results to management.
  • Implemented system and e-mail authentication using an enterprise LDAP database.
  • Implemented a database-enabled intranet website using Linux and Apache with a MySQL backend.
  • Installed CentOS on multiple servers using PXE (Preboot Execution Environment) boot and the Kickstart method.
  • Monitored system metrics and logs for problems.
  • Ran crontab jobs to back up data (an example follows this list).
  • Applied operating-system updates, patches, and configuration changes.
  • Maintained the MySQL server and granted database access to the required users.
  • Appropriately documented various administrative and technical issues.
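
The Squid restrictions could be expressed roughly as below in squid.conf; the subnet and domain list are assumptions:

    # squid.conf excerpt (illustrative)
    acl localnet src 10.0.0.0/8            # internal clients
    acl blocked_sites dstdomain .example-blocked.com
    http_access deny blocked_sites        # block listed domains
    http_access allow localnet            # allow internal traffic
    http_access deny all                  # default deny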
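A crontab backup entry could look like this sketch, archiving /etc nightly at 02:30; the paths are illustrative (note the escaped % signs, which crontab otherwise treats as line breaks):

    # m h dom mon dow command
    30 2 * * * /bin/tar -czf /backup/etc-$(date +\%F).tar.gz /etc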

Environment: Linux/CentOS 4, 5, 6, Logical Volume Manager, VMware ESX 5.1/5.5, Apache and Tomcat web servers, Oracle 11/12, Oracle RAC 12c, HPSM, HPSA.
