
SAS Programmer Resume


SUMMARY

  • Around 7 years of IT experience, including 3.5 years in Hadoop, along with strong experience in SAS and Informatica PowerCenter Designer.
  • Good understanding of Data warehousing concepts.
  • Expertise in extraction, transformation and loading of data directly from different heterogeneous source systems like flat files, Oracle and Teradata.
  • Extensively worked on Informatica Designer components - Source Analyzer, Transformation Developer, Mapplet Designer and Mapping Designer.
  • Hands-on experience building complex mappings with transformations such as unconnected and connected Lookups, Router, Aggregator, Joiner, Update Strategy and reusable transformations.
  • Strong experience with Workflow Manager tools - Task Developer, Workflow Designer and Worklet Designer.
  • Working experience with Oracle 10g, SQL and PL/SQL.
  • Flexible and adept at working in fast-paced environments, adapting quickly to different business organizations' needs.
  • Analyzed, designed and developed Extraction, Transformation and Load (ETL) processes for Data Warehousing.
  • Expertise in testing complex business rules by creating mappings and various transformations.
  • Experience testing and writing SQL and PL/SQL statements.
  • Solid experience in black-box testing techniques, including integration, functional, system and regression testing.
  • Expertise in creation, execution and maintenance of Test Plans, Test Scenarios, Test Scripts and Test cases.
  • Extensively used ETL methodology to support data extraction, transformation and loading in a corporate-wide ETL solution using Informatica.
  • Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Pig, Hive, Sqoop, Flume).
  • Experience supporting data analysts in running Pig and Hive queries; developed MapReduce programs to perform analysis and imported/exported data into HDFS and Hive using Sqoop (see the sketch at the end of this summary).
  • Experience in writing shell scripts to dump sharded data from MySQL servers to HDFS.
  • Experience in designing both time driven and data driven automated workflows using Oozie.
  • Experience in setting up an InfiniBand network and building a Hadoop cluster to improve MapReduce performance; tuned cluster performance by gathering and analyzing data on the existing infrastructure.
  • Experience in setting up monitoring infrastructure for Hadoop cluster using Nagios and Ganglia.
  • Strong debugging and problem-solving skills with an excellent understanding of system development methodologies, techniques and tools. Good knowledge of NoSQL databases, notably HBase.
  • Worked on analyzing Hadoop clusters and various big data analytics tools, including Pig, HBase, Sqoop, Cassandra, ZooKeeper and AWS.
  • Extensive knowledge of statistical consulting on model evaluation and interpretation, and of SAS/R programming.
  • Strong problem-solving and analytical skills, with a high aptitude for adopting new tools and technologies and integrating them seamlessly into the project implementation lifecycle.
  • Good exposure to Apache Hadoop MapReduce programming, Hive, Pig scripting and HDFS.
  • Experience in managing and reviewing Hadoop log files.
  • Hands-on experience in business requirements gathering and creating high-level and low-level design documents.
  • Good knowledge of the various stages of the SDLC.
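
As an illustration of the Sqoop-based ingestion referenced above, a minimal sketch is shown below; the connection string, credentials file, table and Hive target are hypothetical placeholders rather than actual project values.

    # Hypothetical example: pull one MySQL table into HDFS and register it in Hive.
    sqoop import \
      --connect jdbc:mysql://dbhost:3306/payments \
      --username etl_user \
      --password-file /user/etl/.dbpass \
      --table transactions \
      --hive-import \
      --hive-table analytics.transactions \
      --num-mappers 4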

TECHNICAL SKILLS

ETL Tools: Informatica PowerCenter 8.6/9.x.

RDBMS: Oracle 9i/10g, Teradata.

Hadoop Ecosystem: Pig, Hive, HBase, Sqoop.

Other Frameworks: Spark, Apache Phoenix, R (Machine Learning)

Pivotal Distribution: HAWQ, Spring XD.

Other Tools: Toad, SQL Assistant

Languages: SQL, PL/SQL.

Operating Systems: UNIX, MS Windows 2000/XP

PROFESSIONAL EXPERIENCE

Confidential

Hadoop Developer

Responsibilities:

  • Worked as a developer creating Hive queries that helped analyze payment data, current market trends and historical data.
  • Initially set up an 8-node cluster for the project.
  • Conceptualized the business functional requirements and prepared the solution document.
  • Involved in designing the HBase table and its schema, which was integrated with a Hive table (see the HBase-backed table sketch after this list).
  • Developed Pig scripts for processing XML data.
  • Followed an Agile way of working involving sprint planning, daily stand-ups, tracking burn-downs, sprint reviews and sprint retrospectives; used Extreme Programming (XP) and Test-Driven Development (TDD).
  • Involved in CMMI assessment activities, project planning, configuration management, process improvements, resource management and training.
  • Used Sqoop scripts to import and export data between DB2 tables and Hadoop (HDFS), Hive and Pig.
  • Analyzed how to compress data and store it in Hive.
  • Developed MapReduce code to analyze and parse XML data.
  • Performed statistical analysis using R.
  • Implemented UDFs, UDAFs and UDTFs in Java for Hive to handle processing that cannot be done with Hive's built-in functions (see the UDF sketch after this list).
  • Used the Hadoop MySQL connector to store MapReduce results in an RDBMS.
  • Analyzed large data sets to determine the optimal way to aggregate and report on them.
  • Worked on loading all tables from the reference source database schema through Sqoop.
  • Designed, coded and configured server-side J2EE components such as JSPs using Java and AWS.
  • Collected data from different databases (e.g., Oracle, MySQL) into Hadoop.
  • Used Oozie for workflow scheduling and monitoring.
  • Designed and developed ETL workflows in Java for processing data in HDFS/HBase, orchestrated with Oozie.
  • Experienced in managing and reviewing Hadoop log files.
  • Involved in loading and transforming large sets of structured, semi-structured and unstructured data from relational databases into HDFS using Sqoop imports.
  • Created several Hive tables, loaded them with data and wrote Hive queries, which run internally as MapReduce jobs.
  • Developed simple to complex MapReduce jobs using Hive and Pig.
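
To illustrate the HBase and Hive integration mentioned above, the sketch below maps a Hive external table onto an HBase table through the HBase storage handler; the table, column family and column names are hypothetical placeholders, not the actual project schema.

    -- Hypothetical Hive table backed by an existing HBase table named 'payments'.
    CREATE EXTERNAL TABLE payments_hbase (
      rowkey   STRING,
      payer_id STRING,
      amount   DOUBLE
    )
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES (
      'hbase.columns.mapping' = ':key,txn:payer_id,txn:amount'
    )
    TBLPROPERTIES ('hbase.table.name' = 'payments');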
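
The Hive UDF bullet above is illustrated by the minimal Java sketch below; the package and class names are hypothetical, and the real project UDFs implemented different logic.

    // Hypothetical Hive UDF: masks all but the last four characters of a string.
    package com.example.hive.udf;

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    public class MaskUDF extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;
            }
            String s = input.toString();
            int keep = Math.min(4, s.length());
            StringBuilder masked = new StringBuilder();
            for (int i = 0; i < s.length() - keep; i++) {
                masked.append('*');
            }
            masked.append(s.substring(s.length() - keep));
            return new Text(masked.toString());
        }
    }

After packaging into a jar, such a UDF would be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being used in queries.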

Confidential

Hadoop Developer

Responsibilities:

  • Actively participated with the development team to meet specific customer requirements and proposed effective Hadoop solutions.
  • Collected and aggregated large amounts of log data using Apache Flume and staged the data in HDFS for further analysis.
  • Worked on a proof of concept for adopting the Apache Hadoop framework on Amazon Web Services.
  • Designed, planned and delivered a proof of concept and a business function/division-based implementation of a Big Data roadmap and strategy project (Apache Hadoop stack with Tableau).
  • Responsible for developing a data pipeline using Flume, Sqoop and Pig to extract data from web logs and store it in the S3 file system.
  • Connected Tableau on the client end to AWS to view the end results.
  • Implemented a proof of concept on Impala.
  • Imported data from various database sources into DynamoDB using Sqoop.
  • Developed Sqoop scripts to import and export data from and to relational sources, handling incremental loads of customer transaction data by date.
  • Involved in moving all log files generated from various sources to S3 for further processing through Flume.
  • Used Tableau for visualizing and to generate reports.
  • Responsible for creating complex tables using Hive.
  • Created partitioned tables in Hive for best performance and faster querying (see the sketch after this list).
  • Developed workflow in Oozie to automate the tasks of loading the data into S3 and pre-processing with Pig.
  • Developed Shell scripts to automate and provide Control flow to Pig scripts.
  • Performed extensive data analysis using Hive and Pig.
  • Performed Data scrubbing and processing with Oozie.
  • Responsible for managing data coming from different sources.
  • Worked on setting up Pig, Hive and Redshift on multiple nodes and developed using Pig, Hive and MapReduce.
  • Worked with data serialization formats (Avro, JSON, CSV) for converting complex objects into sequences of bytes.
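
As a sketch of the partitioned Hive tables mentioned above, the DDL below creates a date-partitioned external table over S3 data and registers one partition; the table, columns, bucket and dates are hypothetical, and it assumes an EMR-style setup where Hive reads directly from S3.

    -- Hypothetical date-partitioned external table over data landed in S3.
    CREATE EXTERNAL TABLE IF NOT EXISTS web_logs (
      user_id     STRING,
      url         STRING,
      status_code INT
    )
    PARTITIONED BY (log_date STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION 's3://example-bucket/web_logs/';

    -- Register a new daily partition after the pipeline lands fresh data.
    ALTER TABLE web_logs ADD IF NOT EXISTS PARTITION (log_date = '2015-06-01')
      LOCATION 's3://example-bucket/web_logs/log_date=2015-06-01/';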

Confidential

SAS Programmer

Environment: SAS/Base, SAS/Macros, SAS/SQL, SAS/EG, SAS/Access, SAS/ODS, Oracle, Windows.

Responsibilities:

  • Understood the business concepts and data flow.
  • Extracted data from an Oracle database into SAS using SQL pass-through queries and the LIBNAME facility (see the sketch after this list).
  • Prepared analysis datasets and modified existing datasets using SAS statements and functions.
  • Generated reports per business requirements using SAS procedures and macros.
  • Created reports such as leading-indicator reports, collection productivity reports, cheque bounce ratio reports, deviation reports and decline-reason reports.
  • Used ODS extensively to generate reports in HTML, PDF and Excel using PROC REPORT.
  • Tested these reports against client expectations for data accuracy, formatting and look-and-feel, and execution time.
  • Responsible for preparing test case documents and technical specification documents.
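
The Oracle extraction and ODS reporting described above follow the pattern sketched below; the library name, connection details, table, variables and output file are hypothetical placeholders.

    /* Hypothetical LIBNAME engine connection and SQL pass-through extract. */
    libname ora oracle user=scott password=XXXXXX path=orclprod schema=finance;

    proc sql;
       connect to oracle (user=scott password=XXXXXX path=orclprod);
       create table work.collections as
       select * from connection to oracle
          (select account_id, due_amt, paid_amt, bounce_flag
             from finance.collections
            where period = '2014Q4');
       disconnect from oracle;
    quit;

    /* Hypothetical ODS report rendered to PDF with PROC REPORT. */
    ods pdf file='collection_productivity.pdf';
    proc report data=work.collections nowd;
       column account_id due_amt paid_amt bounce_flag;
    run;
    ods pdf close;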

Confidential

Informatica Developer

Environment: Informatica 9.1, Oracle 10g, and UNIX.

Responsibilities:

  • Understood and analyzed the BRD.
  • Analysis of the ETL conversion requirements provided by the client.
  • Responsibilities included designing the documentation.
  • Created complex mappings using Unconnected Lookup, Sorter, Aggregator, Lookup and Router transformations to populate target tables efficiently.
  • Performance-tuned Informatica mappings using components such as mapplets, parameter files and variables.
  • Tested Informatica mappings, sessions and workflows.
  • Performed unit testing at various levels of the ETL.
  • Fixed invalid mappings and troubleshot related issues.
  • Performed extensive testing and wrote SQL queries to verify that data loaded correctly (see the sketch after this list).
  • Created test cases for various test scenarios to test the functionality of the objects.
  • Developed and implemented Informatica mappings for the different stages of the ETL.
  • Extracted Sales department data to flat files and Excel files and loaded the data into the target database.
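
A minimal sketch of the kind of SQL check used to verify a load is shown below; the staging and warehouse table names and the amount column are hypothetical placeholders.

    -- Hypothetical reconciliation query: compare row counts and totals
    -- between the staging (source) and warehouse (target) tables.
    SELECT 'SOURCE' AS side, COUNT(*) AS row_count, SUM(order_amt) AS total_amt
      FROM stg_sales_orders
    UNION ALL
    SELECT 'TARGET', COUNT(*), SUM(order_amt)
      FROM dw_sales_orders;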
