
Azure Cloud Architect Resume


SUMMARY

  • DevOps Engineer experienced in optimization, with a strong understanding of the melding of operations and development to deliver code to customers quickly. Experienced with the cloud and monitoring processes as well as DevOps development on Windows, Mac, and Linux systems, with an emphasis on Terraform-provisioned infrastructure for DevOps implementations.
  • Worked within the cloud on integration processes.
  • Performed DevOps work on Linux, Mac, and Windows platforms.
  • Focused on automation and integration.
  • Monitored the applications developed and fixed bugs.
  • Wrote code and designed continual updates.
  • Completed load and performance testing of extremely complex systems.
  • Cloud Architect responsibilities go hand in hand with Big Data / Hadoop activities, in which I am extremely hands-on as an Administrator (Cloudera / Hortonworks), Data Scientist, Hadoop Architect (CDH 5), Hadoop Security Specialist (Kerberos / Sentry / ACL), Cloud Architect (AWS), Cloud Security Professional (CCSP), and architect on GCP (Confidential Cloud Platform) and the Microsoft Azure platform.
  • With all of the above skills, I have acquired a multi-dimensional skill matrix and the expertise to architect, build, and execute enterprise-level solutions as a single point of contact, with minimal dependency on other technical resources.
  • 25+ years in the IT industry across various verticals, with an emphasis on data science and the certified skills below:
  • Hadoop Architecture related:
  • Cloudera Hadoop / Hortonworks / Data Science Analysis / Data Visualization skills / Amazon Web Services (AWS) Architect / Windows Azure Architect / Cloud Security (CCSP) / Confidential Cloud Platform
  • Working on a POC with the Confidential Cloud Platform, with features including:
  • Stackdriver (Monitoring / Logging / Error Reporting), App Engine, Compute Engine, Containers and Networking, storage (Bigtable / SQL), the Confidential Cloud Platform API Manager, Cloud Launcher, and IAM & Admin activities; BigQuery, Dataproc, Dataflow, and Genomics for Big Data data-migration activities.
  • Familiar with Hortonworks, MapR with the Lambda Architecture, and IBM BigInsights environments.
  • Over 5 years of experience in Big Data, Hadoop, HDFS, HBase, Hive, Pig, and Linux, with hands-on project experience in various vertical applications.
  • Expertise in HDFS Architecture and Cluster concepts.
  • Expertise in Hadoop Security and Hive Security.
  • Expertise in Hive Query Language and debugging Hive issues.
  • Expertise in Sqoop and Flume.
  • Worked with Kafka messaging services; familiar with related messaging tools such as RabbitMQ.
  • Involved in implementation of a Hadoop multi-node cluster: installed Hadoop ecosystem software and configured HDFS.
  • Worked in a multi-clustered environment setting up the Cloudera Hadoop ecosystem, creating jobs to move data from RDBMS to HDFS and from HDFS back to RDBMS (see the Sqoop sketch at the end of this list).
  • Experience in Big Data and Hadoop architecting.
  • Worked on Hadoop environment (HDFS) setup, MapReduce jobs, Hive, HBase, Pig, and NoSQL stores including MongoDB.
  • Software installation and configuration.
  • Built automation and internal tools for Hadoop jobs.
  • Tableau 9.0 / Tableau 8.1, SQL Server 2008R2 & 2012, Excel, Access, Confidential Stack
  • Worked with all kinds of data sources - TDE, TDS, extracts, and live connections - using HBase, Hadoop HDFS, and GPDB (Greenplum), with data blends and joins across both relational-model databases and multi-dimensionally modeled data sources in heterogeneous databases.
  • Experience designing complex dashboards that take advantage of all Tableau functions, including data blending.
  • Strong experience writing complex SQL and troubleshooting and tuning SQL for best performance.
  • Ability to drive insight by designing visualizations with logical and meaningful data flow
  • Experience doing full life cycle development, including business requirements, technical analysis and design, coding, testing, documentation, implementation, and maintenance
  • Experience implementing data visualization solutions using Hadoop as a data source, plus actions and parameters.
  • Big Data (MapReduce, Impala, Hive, etc.); JIRA project management suite.
  • Data Modeling / Data Architecture
  • Multiple Reporting Structures / Dashboards
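
A minimal sketch of the RDBMS-to-HDFS movement mentioned above, driven from Python via the Sqoop CLI; the JDBC URL, table, and HDFS path are illustrative placeholders, not values from any engagement.

    # Sketch: import one RDBMS table into HDFS with Sqoop.
    import subprocess

    def sqoop_import(jdbc_url: str, table: str, target_dir: str,
                     mappers: int = 4) -> None:
        """Run a Sqoop import from an RDBMS table into an HDFS directory."""
        cmd = [
            "sqoop", "import",
            "--connect", jdbc_url,          # e.g. jdbc:mysql://dbhost/sales
            "--table", table,
            "--target-dir", target_dir,     # HDFS destination directory
            "--num-mappers", str(mappers),  # parallelism of the import
        ]
        subprocess.run(cmd, check=True)     # raise if the Sqoop job fails

    if __name__ == "__main__":
        sqoop_import("jdbc:mysql://dbhost/sales", "orders", "/data/raw/orders")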

TECHNICAL SKILLS

  • Apache Sentry / Apache Knox / Apache Argus (Hortonworks)
  • Tableau 9.3 / Tibco Spotfire / Qlikview / Sisense BI / Kibana / Splunk / Watson Analytics / Pentaho / SSRS / OBIEE / Microstrategy
  • Informatica BDE 9.6 / Ab Initio / SSIS / Spark / ODI / Datastage
  • Oracle, Solr, HBase, MongoDB, Cassandra, Greenplum (GPDB)

PROFESSIONAL EXPERIENCE

Confidential

Azure Cloud Architect

Responsibilities:

  • Worked as a Cloud Engineer using Infrastructure as Code (IaC); making IaC part of the deployment process has a number of immediate benefits for the workflow:
  • Speed: automating deployment is far faster than manually navigating through an interface to deploy and connect resources.
  • Reliability across a large infrastructure estate: with IaC, resources are configured exactly as declared, and implicit/explicit dependencies can be used to ensure the creation order.
  • With the ease at which infrastructure can be deployed, experimental changes can be readily investigated with scaled-down resources to minimize cost, then scaled up for production deployments.
  • As a developer, I always look to employ the known best practices of software engineering wherever we can. Writing code to design and deploy infrastructure brings these to cloud provisioning: established techniques such as writing modular, configurable code committed to version control lead us to view our infrastructure as a software application in itself, and shift us in the direction of a DevOps culture (sketched below).
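
A minimal sketch of that IaC workflow, assuming a small Python wrapper around the Terraform CLI; the working directory is a hypothetical placeholder, and the commands are a generic init/plan/apply cycle rather than any specific deployment.

    # Sketch: drive a Terraform plan/apply cycle from Python.
    import subprocess

    def terraform(workdir: str, *args: str) -> None:
        """Run a Terraform subcommand in the given working directory."""
        subprocess.run(["terraform", f"-chdir={workdir}", *args], check=True)

    if __name__ == "__main__":
        workdir = "infra/azure"                     # hypothetical *.tf directory
        terraform(workdir, "init", "-input=false")  # fetch providers/modules
        terraform(workdir, "plan", "-out=tfplan")   # record proposed changes
        terraform(workdir, "apply", "tfplan")       # apply the saved plan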

Environment: Azure Data Factory, DevOps, Databricks, Terraform, Spinnaker, HortonWorks Hadoop, Cassandra, Azure Cloud Platform, PCF, Kafka, Flume, Splunk 6.2, DB2, Teradata, SQL Server, SQL, PL/SQL

Confidential, Boston, MA

Hadoop / Cloud Architect

Responsibilities:

  • Automated administration to manage the infrastructure that runs the code
  • Designed solution for various system components using Microsoft Azure
  • Created Solution Architecture based upon PaaS Services
  • Created Web API methods for three adapters to pull data from various systems such as databases, BizTalk, and SAP (see the sketch after this list)
  • Configured and set up a hybrid cluster to pull data from SAP systems
  • Orchestrated multiple functions
  • Developed custom functions, billed by the requests served and the compute time
  • Worked with the Spinnaker cloud deployment tool to support Confidential Cloud along with AWS and other clouds
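
A hedged sketch of what one such adapter method might look like as an HTTP-triggered Azure Function in Python; the "system" query parameter and the placeholder adapters are hypothetical stand-ins for the database/BizTalk/SAP pulls.

    # Sketch: dispatch an HTTP request to one of several data-pull adapters.
    import json
    import azure.functions as func

    # Hypothetical adapters standing in for the real system integrations.
    ADAPTERS = {
        "database": lambda: {"source": "database", "rows": []},
        "biztalk": lambda: {"source": "biztalk", "messages": []},
        "sap": lambda: {"source": "sap", "idocs": []},
    }

    def main(req: func.HttpRequest) -> func.HttpResponse:
        """Route ?system=<name> to the matching adapter and return JSON."""
        system = (req.params.get("system") or "").lower()
        adapter = ADAPTERS.get(system)
        if adapter is None:
            return func.HttpResponse("unknown system", status_code=400)
        return func.HttpResponse(json.dumps(adapter()),
                                 mimetype="application/json")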

Environment: Spark/Scala, Kafka real-time data processing, HortonWorks Hadoop, Cassandra, Azure Cloud Platform, PCF, Flume, Splunk 6.2, DB2, Teradata, SQL Server, SQL, PL/SQL

Confidential, Sunnyvale, CA

Hadoop Architect (HortonWorks)

Responsibilities:

  • Confidential Cloud Admin / Architect using Confidential Cloud Platform:
  • Understood the various source systems and architected, designed, and developed each component of the architecture.
  • Extensively worked on all Confidential Cloud Platform components - BigQuery, Bigtable, and Confidential Cloud Storage - writing scripts in Cloud Shell as well as working through the GUI.
  • Worked on migration of data from on-premises to the cloud.
  • Developed a pipeline of JSON - Kafka - GCS - BigQuery (BQ) using GCP Pub/Sub and Dataflow (see the sketch after this list).
  • Worked closely with domain experts and other groups' data scientists to identify their requirements and blend them into a common model, to leverage shared data and avoid redundant processes in providing it.
  • Collaborated with the other members of the practice to leverage and share knowledge, which helped the implementation using Unix shell scripting.
  • Worked with Confidential Cloud ML libraries and deployed models for predictive analytics.
  • Worked with TensorFlow and Cloud Machine Learning Engine managed infrastructure.
  • Trained machine learning models at scale.
  • Hosted trained models to make predictions on cloud data.
  • Understood the data and modeled Cloud ML Engine features.
  • Worked with deep learning / machine learning applications.
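
A simplified sketch of the Pub/Sub-to-BigQuery leg of the pipeline above, using the Apache Beam Python SDK (the programming model Dataflow runs); the project, subscription, table, and schema names are placeholders.

    # Sketch: stream JSON events from Pub/Sub into BigQuery with Beam.
    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def run() -> None:
        options = PipelineOptions(streaming=True)  # Pub/Sub reads need streaming
        with beam.Pipeline(options=options) as p:
            (
                p
                | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                    subscription="projects/my-project/subscriptions/events-sub")
                | "ParseJson" >> beam.Map(json.loads)  # raw bytes -> dict rows
                | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                    "my-project:analytics.events",     # placeholder table
                    schema="event_id:STRING,payload:STRING,ts:TIMESTAMP",
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
            )

    if __name__ == "__main__":
        run()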

Environment: Confidential Cloud Platform (GCP), Confidential BigQuery, Pub/Sub (Kafka), Confidential Dataflow (Spark), Bigtable, Cloud SQL

Confidential, Atlanta, GA

Cloud Architect and Security Consultant

Responsibilities:

  • Role as MS Azure Architect / Role as Azure Cloud IaaS Admin / PaaS Lead
  • Designed an Azure based solution using Web APIs, SQL Azure
  • Architected on Windows Azure, designing and implementing solutions.
  • Worked on components of Azure such as Service Bus (topics, queues, notification hubs) and Blobs (see the Service Bus sketch after this list)
  • Administered environments and applied technical design methodologies
  • Table Storage
  • Active directory
  • Web / Worker Roles / Web Sites
  • ACS / Azure Diagnostics and Monitoring
  • Multi-tenancy
  • SaaS / SQL Azure
  • SQL Reporting
  • IaaS Deployment
  • PowerShell, etc.
  • Market trends in DevOps
  • Worked with the delivery pipeline in DevOps and its ecosystem
  • DevOps security options and notification management in Jenkins.
  • Well versed in Git and continuous integration via Jenkins.
  • Worked with containers and VMs
  • Images and containers in Docker / networking
  • Best-practice implementation using Docker volumes
  • Specialized in virtualization using Docker.
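
A small Python sketch of sending to one of the Service Bus queues mentioned above, using the azure-servicebus SDK; the connection string and queue name are hypothetical placeholders.

    # Sketch: send one message to an Azure Service Bus queue.
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    CONN_STR = "Endpoint=sb://example.servicebus.windows.net/;..."  # placeholder
    QUEUE = "orders"                                                # placeholder

    def send_message(body: str) -> None:
        """Send a single message to the queue, closing resources cleanly."""
        with ServiceBusClient.from_connection_string(CONN_STR) as client:
            with client.get_queue_sender(queue_name=QUEUE) as sender:
                sender.send_messages(ServiceBusMessage(body))

    if __name__ == "__main__":
        send_message('{"orderId": 42, "status": "created"}')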

Confidential, Atlanta, GA

Cloud Architect and Security Consultant

Responsibilities:

  • Define and deploy monitoring, metrics, and logging systems on AWS
  • Implement systems that are highly available, scalable, and self-healing on the AWS platform
  • Design, manage, and maintain tools to automate operational processes
  • Deploying, managing, and operating scalable, highly available, and fault tolerant systems on AWS
  • Migrating an existing on-premises application to AWS
  • Implementing and controlling the flow of data to and from AWS
  • Selecting the appropriate engine / service based on compute, data, or security requirements.
  • Identifying appropriate use of AWS operational best practices
  • Estimating AWS usage costs and identifying operational cost control mechanisms
  • Migrated data from DB2 and Teradata to Hadoop
  • Migrated data from Hadoop to AWS cloud storage
  • Wrote automated scripts to deploy compressed files into the cloud (see the sketch after this list)
  • Assigned roles and administered AWS Cloud users from Console
  • Functional, non-functional and performance tuning to flatten tables in AWS
  • Worked on configuration setup via Cloud Shell in AWS Cloud environment
  • As a Big Data Architect, was responsible for the creation, care, and maintenance of high-performance systems
  • Node configuration, cluster management processes.
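
A hedged sketch of the kind of automated script described above for deploying compressed files into cloud storage, here gzipping a local file and uploading it with boto3; the bucket and paths are illustrative.

    # Sketch: compress a file and upload the archive to S3.
    import gzip
    import shutil
    import boto3

    def compress_and_upload(local_path: str, bucket: str, key: str) -> None:
        """Gzip a local file and upload the .gz archive to S3."""
        gz_path = local_path + ".gz"
        with open(local_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
            shutil.copyfileobj(src, dst)          # stream-compress the file
        boto3.client("s3").upload_file(gz_path, bucket, key)

    if __name__ == "__main__":
        compress_and_upload("/data/export/daily.csv",
                            "my-data-bucket",       # placeholder bucket
                            "exports/daily.csv.gz")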

Environment: DevOps, Terraform, Spinnaker, HortonWorks Hadoop, Cassandra, AWS Cloud Platform, PCF, Kafka, Flume, Splunk 6.2, DB2, Teradata, SQL Server, SQL, PL/SQL

Confidential, San Ramon, CA

Hadoop Architect / Cloud Architect and Security Consultant (AWS) / IoT

Responsibilities:

  • Hadoop Admin Responsibilities:
  • Responsible for architecting Hadoop clusters.
  • Installed, configured, and managed a Hadoop cluster spanning multiple racks.
  • Debug, remedy, and automate solutions for operational issues in the production environment.
  • Participate in the research, design, and implementation of new technologies for scaling our large and growing data sets, for performance improvement, and for analyst workload reduction.
  • Defined job flows using the Fair Scheduler.
  • Implemented HA via NameNode replication to avoid a single point of failure.
  • Manage and review Hadoop Log files.
  • Set up automated 24x7x365 monitoring and escalation infrastructure for Hadoop cluster.
  • Load log data into HDFS using Flume.
  • Provided support to data analysts running Pig and Hive queries.
  • Performed infrastructure services (DHCP, PXE, DNS, Kickstart, and NFS).
  • IoT role: Architected IoT components over weather-related data to make the application context-aware for predictive analysis and analytics, catering to solar panel readings of the energy delivered on a day-to-day basis.
  • Created a Technology Strategy to align with the client’s five-year Business Strategy for data lake integration
  • Implement and manage continuous delivery systems and methodologies on AWS
  • Understand, implement, and automate security controls, governance processes, and compliance validation
  • Define and deploy monitoring, metrics, and logging systems on AWS
  • Implement systems that are highly available, scalable, and self-healing on the AWS platform
  • Design, manage, and maintain tools to automate operational processes
  • Deploying, managing, and operating scalable, highly available, and fault tolerant systems on AWS
  • Migrating an existing on-premises application to AWS
  • Implementing and controlling the flow of data to and from AWS
  • Selecting the appropriate AWS service based on compute, data, or security requirements
  • Identifying appropriate use of AWS operational best practices
  • Estimating AWS usage costs and identifying operational cost control mechanisms
  • Migrated key systems from on-prem hosting to Amazon Web Services
  • Functional, non-functional and performance testing of key systems prior to cutover to AWS
  • Configured an auto-scaling website platform handling peak traffic of 14k visitors per minute
  • As a Big Data Architect, was responsible for the creation, care, and maintenance of the high-performance indexing infrastructure.
  • In-depth understanding of the Hadoop ecosystem.
  • Responsible for designing the next-generation data architecture for unstructured data
  • Wrote, debugged, and analyzed the performance of many MapReduce jobs.
  • Devised and led the implementation of the next-generation architecture for more efficient data ingestion and processing.
  • Proficient at mentoring and onboarding new engineers to Hadoop and getting them up to speed quickly.
  • Experience as the technical lead of a team of engineers.
  • Proficiency with modern natural language processing and general machine learning techniques and approaches
  • Extensive experience with Hadoop and HBase, including multiple public presentations about these technologies.
  • Experience with hands on data analysis and performing under pressure.
  • Designed and wrote a layer on top of MapReduce to make writing MapReduce jobs easier and safer for junior engineers (see the sketch after this list).
  • Contributed much of the code in our open-source project.
  • Provide thought leadership and architectural expertise to a cross-functional team charged with deploying a host of customer-related applications and data to the cloud.
  • Conduct systems design, feasibility and cost studies and recommend cost-effective cloud solutions.
  • Administer discovery, user testing and beta programs to garner feedback prior to each major release.
  • Advise software development teams on architecting and designing web interfaces and infrastructures that safely and efficiently power the cloud environment.
  • Selected achievements: Reduced overhead and infrastructure costs by 38 percent by consolidating and deploying 10 legacy applications to the Amazon Web Services cloud platform
  • Delivered major releases to stakeholders on time and under budget.
  • Successfully developed feature packages including use cases, workflows, requirements, and functional specifications for handoff to the development team.
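
The in-house wrapper layer mentioned above is not public, but the open-source mrjob library illustrates the same idea: an engineer supplies just a mapper and a reducer, and the layer handles the MapReduce plumbing. A classic word-count example:

    # Sketch: a word-count job in mrjob, analogous to the wrapper described.
    from mrjob.job import MRJob

    class WordCount(MRJob):
        def mapper(self, _, line):
            # Emit (word, 1) for every word on the input line.
            for word in line.split():
                yield word.lower(), 1

        def reducer(self, word, counts):
            # Sum the per-word counts emitted by all mappers.
            yield word, sum(counts)

    if __name__ == "__main__":
        WordCount.run()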

Environment: Cloudera Hadoop (CDH 4), AWS, MongoDB, Spark, Splunk 6.2, Teradata, SQL Server, SQL, PL/SQL, TOAD

Confidential, San Francisco, CA

Hadoop Admin / Data Warehouse Architect

Responsibilities:

  • Hadoop Administrator responsibilities:
  • Responsible for maintaining and performance-tuning the Hadoop cluster.
  • Maintaining, monitoring and running Hadoop streaming jobs to process terabytes of data.
  • Loading data onto HDFS and transforming large sets of structured, semi-structured, and unstructured data.
  • Responsible for administering Unix/Linux along with Databases.
  • Responsible to manage data coming from different sources.
  • Supported MapReduce programs running on the cluster.
  • Involved in loading data from the UNIX file system to HDFS.
  • Installed and configured Hive, and wrote Hive UDFs (see the sketch after this list).
  • Monitor and maintain cluster availability, scalability, high availability and fault tolerance.
  • This was a Sales/Finance project aimed at implementing a data warehouse as a single repository of sales/finance data, and at enhancing business processes to capture additional sales and finance data enabling reporting - Sales, Purchase Order Taxes, Return Taxes, and Consolidated Taxes - for Confidential .com's Canada business. Worked closely with the business team to gather requirements, designed the data model from the source databases, and routed it to the ETL tool to filter the data set and avoid performance issues.
  • Hands-on work as functional and technical architect; conducted several business workshops globally, developed ETL programs, etc. Developed a customized gap/fit analysis and effort-estimation methodology, helping to consolidate requirements.
  • Built ETL to incorporate multiple data sources, DFFs, Label Security, etc. Drafted the upgrade process and approach. Delivered a phased approach toward an operational and advanced analytical intelligence architecture design, including a highly innovative implementation strategy for one of the biggest source databases (20 terabytes and more). Also served as the Oracle SME for a parallel ongoing implementation of OOTB BI Apps Financial Analytics, responsible for leading and debugging implementation issues, ETL performance, etc.
  • Designed and constructed aggregate tables and implemented aggregate navigation, including the physical construction of facts and dimensions to support aggregates and logical constructs.
  • Debugged report and dashboard visibility with respect to users' responsibilities and web groups in an integrated environment
  • Implemented security by creating roles, web groups, and LDAP Authentication
  • Analyzed and provided the solution for an existing ETL process
  • Worked with PL/SQL and shell scripting
  • Worked with the existing database schema to migrate to OLAP data marts.
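
Hive UDFs proper are written in Java, but Hive's TRANSFORM clause lets a Python script play the same row-level role; a hypothetical normalizer (the script name, columns, and table are illustrative) would be invoked from HiveQL as shown in the comments.

    # Sketch: a row-level transform usable via Hive's TRANSFORM clause.
    # Hypothetical HiveQL caller:
    #   ADD FILE normalize_tax.py;
    #   SELECT TRANSFORM (order_id, tax_amount)
    #          USING 'python normalize_tax.py'
    #          AS (order_id, tax_cents)
    #   FROM sales_orders;
    import sys

    for line in sys.stdin:
        order_id, tax_amount = line.rstrip("\n").split("\t")  # Hive sends TSV
        cents = int(round(float(tax_amount) * 100))           # dollars -> cents
        print(order_id + "\t" + str(cents))                   # TSV back to Hive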

Environment: Hadoop, Sqoop, Hive, Oracle Warehouse Builder 11gR2 (OWB), Oracle 10g

Confidential

Data Warehouse Architect / Big Data Hadoop Analyst / Developer

Responsibilities:

  • Responsible for the technical strategy, Data architecture, and systems administration for database management systems across multiple platforms as follows:
  • Configured and built a multi-node cluster and installed the Hadoop ecosystem, involving HDFS, MapReduce, HBase, Pig, Sqoop, Spark, and Hive.
  • Responsible for maintaining and performance tuning on Hadoop cluster.
  • Experience running Hadoop streaming jobs to process terabytes of XML-format data.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Wrote shell scripts to move data from data nodes to HDFS locations (see the sketch after this list).
  • Responsible for administering Unix/Linux along with Databases.
  • Administered Hadoop and diagnosed databases, storage, and other backend services.
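
A minimal sketch of the kind of data-movement logic described above, rewritten here as a Python wrapper around the hdfs CLI; the local glob and HDFS directory are placeholders.

    # Sketch: ship matching local files from a data node into HDFS.
    import glob
    import subprocess

    def hdfs(*args: str) -> None:
        """Run an 'hdfs dfs' subcommand, raising on failure."""
        subprocess.run(["hdfs", "dfs", *args], check=True)

    def load_to_hdfs(pattern: str, hdfs_dir: str) -> None:
        files = glob.glob(pattern)            # expand the local glob ourselves
        if not files:
            return                            # nothing to ship this cycle
        hdfs("-mkdir", "-p", hdfs_dir)        # ensure the target dir exists
        hdfs("-put", "-f", *files, hdfs_dir)  # copy (overwriting) into HDFS

    if __name__ == "__main__":
        load_to_hdfs("/var/log/app/*.log", "/data/raw/logs")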

Environment: Hadoop, Unix shell scripting, DB2, Teradata, Informatica 8.6, Oracle 10g, SQL Developer / SQL*Plus
