AWS Engineer Resume
SUMMARY
- AWS Certified Solutions Architect with 8+ years of industry experience, including more than 2 years as a Full Stack AWS Engineer covering software configuration, design, development, and data lake implementation/support on AWS
- Proficient in AWS services including VPC, EC2, S3, ELB, Auto Scaling Groups (ASG), EBS, RDS, IAM, EFS, CloudFormation, Redshift, DynamoDB, Glue, Lambda, Step Functions, Kinesis, Route 53, CloudWatch, CloudFront, CloudTrail, SQS, SNS, SES, and AWS Systems Manager
- Highly skilled in deployment, data security, and troubleshooting of applications in AWS.
- Experienced with a variety of DevOps tools in mixed environments of Linux and Windows servers on Amazon Web Services.
- Proficient in writing CloudFormation templates (CFT) in YAML and JSON to build AWS services following the Infrastructure as Code paradigm.
- Experienced with event-driven and scheduled AWS Lambda functions that act on a variety of AWS resources through the boto3 module (a minimal sketch follows this summary)
- Developed data ingestion modules using AWS Step Functions, AWS Glue, and Python
- Experienced with Continuous Integration/Continuous Delivery tools such as GitBucket and Jenkins, and the bug-tracking tool JIRA, to merge development with testing through pipelines.
- Worked with Docker containers and Kubernetes infrastructure to encapsulate applications and their dependencies into portable, automated deployments.
- Experienced in installing and using the AWS CLI to control various AWS services through shell/Bash scripting.
- Experienced in version control and source code management tools such as Git, SVN, and TFS.
- Good knowledge of relational and NoSQL databases, including MySQL, AWS RDS, DynamoDB, and Redshift.
- Results-driven and focused on quality and craftsmanship, with good interpersonal and problem-solving skills; self-motivated, a fast learner, and a good team player.
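A minimal sketch of the scheduled, boto3-driven Lambda pattern mentioned above. The tag key, tag value, and shutdown policy are illustrative assumptions, not a specific production function.

```python
# Hypothetical scheduled Lambda: stops EC2 instances tagged for
# after-hours shutdown. The tag key/value are assumptions.
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Find running instances carrying the (assumed) schedule tag.
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["after-hours-stop"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```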
TECHNICAL SKILLS
OS / Server: Red Hat Linux, Ubuntu
RDBMS: MySQL, Oracle, SQL Server, PostgreSQL
Cloud (AWS): EC2, S3, ELB, Glacier, EBS, EFS, ENI, CloudFormation, RDS, VPC, Route 53, CloudWatch, CloudTrail, IAM, SNS, SQS, ElastiCache, Redshift, EMR, Lambda, Step Functions, EKS, ECS, Glue, Kinesis, Snowflake
CI/CD & IaC: Terraform, Jenkins, Git
Big Data: Hadoop, Hive, Spark
Languages/Scripting: Python, Bash, JSON, HCL
PROFESSIONAL EXPERIENCE
AWS Engineer
Confidential
Tools/Services used: AWS (EC2, VPC, ELB, KMS, S3, EBS, RDS, Route 53, CloudWatch, CloudFormation, ASG, Lambda, AWS CLI, Step Functions, Glue), Git, MySQL, Jira, Python, shell scripting, Jenkins, Terraform
Responsibilities:
- Responsible for design, implementation, and operational support for Cloud-based infrastructure solutions.
- Manage Amazon Web Services (AWS) infrastructure with orchestration tools such as CFT, Sceptre, Terraform, and Jenkins pipelines.
- Create VPCs, private and public subnets, and NAT gateways in a multi-region, multi-AZ infrastructure landscape as required.
- Create CFT and Terraform scripts to automate deployment and configuration of various data lake components on AWS, such as Cassandra clusters, Couchbase on EKS, Airflow, Glue, AWS RDS, Redshift, EMR, Tableau, and SAS on RHEL with striped volumes.
- Developed data ingestion modules (both real-time and batch loads) to load data into various layers in S3, Redshift, and Snowflake using AWS Kinesis, AWS Glue, AWS Lambda, and AWS Step Functions.
- Hands-on experience with security groups, network ACLs, internet gateways, and route tables to secure the organization's assets in the AWS public cloud.
- Configure Route 53, Elastic Load Balancers, and Auto Scaling groups to distribute traffic, optimize cost, tolerate faults, and scale a highly available environment.
- Create S3 buckets for data storage in the AWS cloud and manage bucket policies and lifecycle rules per the organization's guidelines.
- Create parameters and SSM documents using AWS Systems Manager
- Hands-on experience creating EC2 instances from AMIs (Amazon Linux 2, Ubuntu, RHEL, and Windows), bootstrapping them, and securing them with AWS KMS keys and security groups.
- Hands-on experience with IAM, including creating roles, users, and groups, and implementing MFA to provide strong security for the AWS account and its resources.
- Hands-on experience integrating CloudWatch with Splunk, including monitoring and alerting for production services.
- Hands-on experience building application backends on AWS RDS and securing databases with AWS KMS, security groups, and IAM roles.
- Responsible for branching, tagging, and release activities in version control tools such as SVN and Git.
- Create PySpark Glue jobs to implement data transformation logic in AWS and store the output in a Redshift cluster (a minimal job sketch follows this list).
- Use S3 for summarized business data, Athena for SQL queries, and QuickSight for dashboarding (see the Athena sketch after this list)
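A minimal sketch of the kind of PySpark Glue job described above. The catalog database, table, Redshift connection, temp bucket, and column names are illustrative assumptions.

```python
import sys
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw dataset from the Glue Data Catalog (names are placeholders).
source = glue_context.create_dynamic_frame.from_catalog(
    database="datalake_raw", table_name="orders"
)

# Example transformation: drop null fields and rename a column.
cleaned = DropNullFields.apply(frame=source)
renamed = cleaned.rename_field("order_dt", "order_date")

# Write the result to a Redshift table through a Glue connection
# (connection and table names are assumptions).
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=renamed,
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "analytics.orders", "database": "dev"},
    redshift_tmp_dir="s3://my-glue-temp/tmp/",
)
job.commit()
```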
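A hedged sketch of querying summarized S3 data through Athena with boto3, as in the last bullet above; the database, table, query, and output bucket are hypothetical.

```python
import time
import boto3

athena = boto3.client("athena")

# Start a query against summarized data in S3 (names are placeholders).
query_id = athena.start_query_execution(
    QueryString="SELECT region, SUM(revenue) AS revenue "
                "FROM sales_summary GROUP BY region",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then fetch the result set.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```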
Confidential, Houston, TX
Big Data Engineer
Responsibilities:
- Participated in project scoping exercises and in creating requirements documents and source-to-target mappings
- Participated in designing the ingestion framework for history and incremental loads into HDFS and Hive
- Built data ingestion modules with Sqoop and shell scripts
- Performed complex business transformations using both Spark SQL and the Spark APIs, and saved the final datasets in partitioned Hive tables
- Developed ETL data pipelines using the Spark API to fetch data from a legacy system (SQL Server) and third-party APIs (external data); a minimal pipeline sketch follows this section
- Migrated SQL Server packages into Spark transformations using Spark RDDs and DataFrames
- Worked on the data lake's staging, conformed, and reporting layers, building the data pipeline from ingestion to consumption
- Created fact, dimension, and summary tables for reporting consumption
- Designed and developed POCs using PySpark and Hive, deployed them on the YARN cluster, and compared Spark's performance with the SQL Server modules
- Improved runtime performance of Spark applications with YARN queue management and memory tuning
- Performed unit testing and end-to-end application testing with data validation
- Used the PyCharm IDE and the Spark CLI for development and managed the code repository in Git
- Performed Hive query performance tuning and helped end users with reports
- Used the Impala query engine to serve reports in Tableau
Environment: Hadoop, Data Lake, Python, PySpark, Spark SQL, Hive, Impala, PyCharm, Git, Cloudera CDH, Maven, UNIX shell scripting, SQL Server, Sqoop, Autosys, AWS, S3, EMR
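A minimal sketch of the pipeline pattern described above: pull a legacy SQL Server table over JDBC, aggregate it with Spark SQL, and save a partitioned Hive table. The JDBC URL, credentials, and table/column names are placeholders.

```python
from pyspark.sql import SparkSession

# Hypothetical ETL job; names and connection details are assumptions.
spark = (
    SparkSession.builder
    .appName("legacy-ingest")
    .enableHiveSupport()
    .getOrCreate()
)

# Fetch a source table from SQL Server over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://legacy-db:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Apply a business transformation with Spark SQL.
orders.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date, region, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date, region
""")

# Save the result as a partitioned Hive table in the conformed layer.
(daily.write
    .mode("overwrite")
    .partitionBy("order_date")
    .format("parquet")
    .saveAsTable("conformed.daily_orders"))
```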
Confidential, Houston, TX
SQL Server BI (SSIS/SSAS/SSRS) ETL Developer
Responsibilities:
- Part of a six-person DBA team that manages over 200 production and development/test databases in diverse environments
- Troubleshoot and resolve day-to-day database issues (incident and problem management)
- Perform database health check remediation per ISeC requirements
- Implement Transparent Data Encryption (TDE)
- Schedule and perform various forms of RMAN backup via cron or OEM Grid Control (a wrapper sketch follows this list)
- Manage RAC/ASM configurations, including managing disk groups and investigating performance issues
- Apply quarterly database patches and support system administrators with OS patching
- Work on Data Guard operations, including archive-gap resolution and rebuilds
- Perform installs, configurations, and upgrades to 12c in both RAC and single-instance environments, with and without Data Guard
- Control and monitor user access to the database
- Work with members of other teams to support the deployment of agency application changes
- Participate in weekly on-call rotations as well as team meetings
- Allocate storage, maintain tablespaces, and maintain system security compliance
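A minimal sketch of the kind of cron-driven backup wrapper implied above. The RMAN command file, paths, and log layout are assumptions; real jobs may instead run directly through OEM Grid Control.

```python
import datetime
import pathlib
import subprocess

# Hypothetical cron-driven wrapper around RMAN; the command file and
# log directory are placeholders, not a specific production script.
LOG_DIR = pathlib.Path("/var/log/rman")

def run_backup(cmdfile="/opt/dba/scripts/full_backup.rman"):
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    log_path = LOG_DIR / f"backup_{stamp}.log"
    with open(log_path, "w") as log:
        # Connect to the local target database and run the command file.
        result = subprocess.run(
            ["rman", "target", "/", f"cmdfile={cmdfile}"],
            stdout=log,
            stderr=subprocess.STDOUT,
        )
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_backup())
```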