Sr. Teradata Developer Resume

Cleveland, OH

SUMMARY:

  • 8+ years of technical and functional experience in data warehouse implementations and ETL development.
  • Extensive experience using Teradata, Oracle 11g/10g/9i/8i and MS SQL Server in the Finance and Health Insurance domains.
  • Expertise in Informatica Power Center 7.x/8.x/9.1 Designer tools like Source Analyzer, Warehouse Designer, Mapping Designer, Mapplet Designer, Transformation Developer, Workflow Manager and Workflow Monitor.
  • In-depth knowledge of the Hadoop architecture and its components such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, Resource Manager, Node Manager, MapReduce programs and the YARN paradigm.
  • Designed and developed complex mappings using varied transformation logic such as Unconnected and Connected Lookups, Router, Filter, Expression, Aggregator, Joiner and Update Strategy transformations, both active and passive.
  • Expertise in design and implementation of Slowly Changing Dimensions (SCD) Type 1, Type 2 and Type 3.
  • Expertise in RDBMS, Data Warehouse Architecture and Modeling. Thorough understanding of and experience in data warehouse and data mart design, Star schema, Snowflake schema, and Normalization and Denormalization concepts and principles.
  • Extensively worked on data migration projects to migrate data from Teradata and SQL Server to Vertica using vsql scripts.
  • Experience in working with Mainframe files, COBOL files, XML, and Flat Files.
  • Extensive experience in ETL (Extract Transform Load), Data Integration and Data Warehousing using Informatica Power Center & Oracle PL/SQL technologies.
  • Excellent troubleshooting skills, performance tuning of reports and resolving issues within Tableau Server and reports.
  • Worked extensively on Core Java concepts like polymorphism, inheritance, serialization, synchronization and exception handling.
  • Extensive experience in Installation, Configuration and administration of Tableau Server in a multi - server and multi-tier environment.
  • Experience with software development life cycle (SDLC), Agile, Scrum and Project Management Methodologies.
  • Extensively involved in performance tuning of Vertica queries as well as Informatica mappings.
  • Extensive knowledge in Business Intelligence and Data Warehousing Concepts with emphasis on ETL and System Development Life Cycle (SDLC).
  • Working knowledge of data warehousing concepts like Star Schema and Snowflake Schema, Data Marts, and the Kimball Methodology used in relational, dimensional and multidimensional data modeling.
  • Experience in Teradata Physical implementation and Database Tuning, technical and functional applications of RDBMS, Data Mapping, Data management, Data transportation and Data Staging.
  • Analyzed, Designed and documented requirements for data migration projects between numerous source Legacy/Oracle application feeds and the new Teradata platform.
  • Good experience with Big Data and Hadoop ecosystem, Hadoop Distributed File System (HDFS).
  • Expert in Coding Teradata SQL, Stored Procedures, Views, Macros and Triggers.
  • Expertise in Complex Query Analyzing, performance tuning and testing.
  • Extensive database experience and highly skilled in SQL across Oracle, DB2, Teradata and flat files.
  • Expertise in OLTP/OLAP System Study, Analysis and E-R modeling, developing Database Schemas like Star schema and Snowflake schema (Fact Tables, Dimension Tables) used in relational, dimensional and multidimensional modeling.
  • Technical expertise in ETL methodologies, Informatica 9.1 - Power Center, Client tools - Mapping Designer, Workflow Manager/Monitor and Server tools - Repository Server Manager.
  • Worked on the ETL tool DataStage 11.x/9.x/8.x (Designer, Monitor, Administrator).
  • Strong Experience in Performance optimization of Teradata Queries.
  • Extensive knowledge on Data Profiling using Informatica Developer tool.
  • Implemented Slowly Changing Dimension Types I, II and III for accessing the full history of accounts and transaction information; designed and developed Change Data Capture (CDC) solutions for the project, which capture and analyze changes from daily feeds to maintain history tables (see the SCD sketch after this list).
  • Strong hands-on experience with Teradata tools such as BTEQ, FastLoad, MultiLoad, TPump, FastExport and TPT (Teradata Parallel Transporter).
  • Good knowledge of Teradata Macros, BTEQ scripts and utilities like MultiLoad and FastLoad.
  • Very strong skills in project management, requirement analysis, business analysis, database modeling, design and analysis, issue coordination and development with Teradata/Oracle/SQL Server based relational databases.
  • Proficient in Teradata … database design (conceptual and physical), Query optimization, Performance Tuning.
  • Strong hands on experience using Teradata utilities (FastExport, MultiLoad, FastLoad, Tpump, BTEQ and QueryMan).
  • Familiar with creating Secondary indexes and Join indexes in Teradata.
  • Expertise in different types of loading such as normal and bulk loading. Involved in Initial loads, Incremental loads, Daily loads and Monthly loads.
  • Expert in troubleshooting/debugging and improving performance at different stages such as database, Workflows, Mappings, Repository and Monitor.
  • Experience in handling different data sources ranging from flat files, Excel, Oracle, SQL Server, Teradata, DB2 databases, XML files.
  • Expertise in tuning the performance of Mappings and sessions in Informatica and determining the performance bottlenecks.
  • Proficient in applying Performance tuning concepts to Informatica Mappings, Session Properties and Databases.
  • Experienced with mentoring Teradata Development teams, data modeling, program code development, test plan development, datasets creation, testing and result documentation, analyzing defects, bug fixing.
  • Hands-on experience in handling data from various source systems such as Flat Files, XML sources, Oracle, MS SQL Server, IBM DB2, Teradata and Excel files.
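
Illustration: a minimal Teradata SQL sketch of the SCD Type 2 pattern mentioned above. The table and column names (stg_customer, dim_customer, eff_start_dt, current_flg, etc.) are hypothetical examples, not actual project objects.

    -- Step 1: close out the current version of any dimension row whose attributes changed.
    UPDATE dim_customer
    SET    eff_end_dt  = CURRENT_DATE - 1,
           current_flg = 'N'
    WHERE  current_flg = 'Y'
      AND  EXISTS (SELECT 1
                   FROM   stg_customer s
                   WHERE  s.customer_id   = dim_customer.customer_id
                     AND (s.customer_name <> dim_customer.customer_name
                      OR  s.address       <> dim_customer.address));

    -- Step 2: insert a fresh current version for changed customers and brand-new customers.
    INSERT INTO dim_customer
          (customer_id, customer_name, address, eff_start_dt, eff_end_dt, current_flg)
    SELECT s.customer_id, s.customer_name, s.address,
           CURRENT_DATE, DATE '9999-12-31', 'Y'
    FROM   stg_customer s
    LEFT JOIN dim_customer d
           ON d.customer_id = s.customer_id
          AND d.current_flg = 'Y'
    WHERE  d.customer_id IS NULL;   -- no current row exists: key is new or was just expired in Step 1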

TECHNICAL SKILLS:

Databases: Oracle 10g/9i/8i, Teradata, DB2, MS-SQL Server, MS-Access.

Languages: C, C++, Java, SQL, PL/SQL, UNIX Shell Script, HTML, XML

DB Tools/Utilities: Teradata SQL Assistant, BTEQ, FastLoad, MultiLoad, FastExport, TPump, Teradata Manager, Teradata Query Manager, Teradata Administrator, PMON, SQL*Loader, Teradata's Campaign Management Tool (CIM), TOAD 8.0, SAS 8/9, SAS/BASE, SAS/SQL, SAS/GRAPH, SAS/STAT, SAS/MACRO, SAS/ODS, SAS/ACCESS, SAS/QC, SAS/CONNECT, SAS/INTRNET, SAS/LAB, SAS/IML

ETL Tools: PL/SQL, Informatica Power Center, Informatica Power Exchange, Ab Initio (GDE, EME).

Data Modeling: Erwin 7.3/9, ER Studio, Sybase Power Designer, Logical/Physical/Dimensional, Star/Snowflake/Extended-star schema, OLAP.

Scheduling Tools: Autosys, Tivoli Maestro

Version Control Tools: Clear Case.

Operating Systems: Sun Solaris, Linux, Windows, UNIX

PROFESSIONAL EXPERIENCE:

Confidential, Cleveland, OH

Sr. Teradata Developer

Responsibilities:

  • Interacted with the business team to understand business needs and gather requirements.
  • Prepared requirements documents in order to achieve business goals and meet end-user expectations.
  • Created mapping documents for source-to-stage and stage-to-target mappings.
  • Performed Unit testing and created Unix Shell Scripts and provided on call support.
  • Created and modified several UNIX shell Scripts according to the changing needs of the project and client requirements.
  • Involved in importing the real-time data to Hadoop using Kafka and implemented the Oozie job for daily imports.
  • Combined views and reports into interactive dashboards in Tableau Desktop that were presented to Business Users, Program Managers, and End Users.
  • Wrote Unix Shell Scripts to process the data received from source system on daily basis.
  • Extensively involved in using hints to direct the optimizer to choose an optimum query execution plan.
  • Involved in the SDLC phases of the project to analyze the requirements, design, development and testing of the application based on Java/J2EE technologies and Design Patterns.
  • Used vsql/BTEQ/TPT utilities to import data from Teradata and SQL Server into Vertica using UNIX shell scripts.
  • Extensively used the Teradata Stand-alone utilities Fast load/Multiload/Fast Export/TPT to Load/Export data into/from database objects.
  • Developed web services in Java and Experienced with SOAP, WSDL.
  • Partitioned the fact tables and materialized views to enhance the performance.
  • Worked with different teams to install operating system, Hadoop updates, patches, version upgrades of Hortonworks as required.
  • Created records, tables, collections (nested tables and arrays) for improving Query performance by reducing context switching.
  • Experienced in developing Applications using Java, J2EE, Servlets, JSP, Eclipse, JDBC, Web Services, AWS and AJAX.
  • Experienced using Design patterns in Java.
  • Worked on server-side implementation using Spring Core and Spring annotations, navigation from the presentation layer to other layers using Spring MVC, and integrated Spring with Hibernate using HibernateTemplate to implement the persistence layer.
  • Developed a Java/J2EE-based web application with the complete Spring suite, implementing Spring MVC and other Spring modules.
  • Used Teradata CIM to provide broad functionality for analyzing customer information, targeting or segmenting very specific groups of customers, triggering customer events, and choreographing multi-channel communication plans based on the needs of consumers.
  • Participated in the Design, Development and Support phases of the Software Development Life Cycle (SDLC).
  • Predicted customer responses to strategic offers based on historical data using CIM tool.
  • Worked with TPT wizards to generate the TPT scripts for the Incoming Claims data.
  • Implemented pipeline partitioning concepts like Hash-key, Round-Robin, Key-Range, Pass Through techniques in mapping transformations. Used Control-M for Scheduling.
  • Expertise in writing scripts for Data Extraction, Transformation and Loading of data from legacy systems to the target data warehouse using BTEQ, FastLoad, MultiLoad, and TPump.
  • Created ad-hoc reports using FastExport and BTEQ.
  • Performed tuning and optimization of complex SQL queries using Teradata EXPLAIN plans and collected statistics.
  • Created a BTEQ script for pre-population of the work tables prior to the main load process (a sketch follows this list).
  • Extensively used Derived Tables, Volatile Tables and Global Temporary Tables in many of the ETL scripts.
  • Performed performance tuning in Vertica to ensure performance met the required standards.
  • Sent personalized, relevant and timely messages based on specific attributes and behaviors using the CIM tool.
  • Developed server-side services using Java, Spring and Web Services (RESTful, SOAP, WSDL, JAXB, JAX-RPC) within a Service Oriented Architecture (SOA).
  • Provided testing and production support for a core Java based multithreaded ETL tool for distributed loading of XML data into an Oracle 11g database using JPA/Hibernate.
  • Developed MLOAD scripts to load data from Load Ready Files to Teradata Warehouse.
  • Performance Tuning of sources, Targets, mappings and SQL queries in transformations.
  • Worked on exporting data to flat files using Teradata FastExport.
  • Analyzed the Data Distribution and Reviewed the Index choices.
  • Involved in reviewing business requirements and analyzing data sources form Excel/Oracle SQL Server for design, development, testing, and production rollover of reporting and analysis projects within Tableau Desktop.
  • Developed FastLoad scripts to load data from host files into landing zone tables.
  • Used UNIX scripts to trigger BTEQ, TPT, MultiLoad, FastLoad and FastExport jobs.
  • Created and managed custom offers that align with a holistic brand messaging strategy using the CIM tool.
  • Created views in Tableau Desktop that were published to internal team for review and further data analysis and customization using filters and actions.
  • In-depth expertise in the Teradata cost based query optimizer, identified potential bottlenecks.
  • Worked with PPI Teradata tables and was involved in Teradata specific SQL fine-tuning to increase performance of the overall ETL process.
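
Illustration: a hypothetical BTEQ sketch of the work-table pre-population step referenced above, using a session-scoped volatile table. The TDPID, logon, database and table names (tdprod, edw_stg.stg_claims, wrk_claims) are placeholders, not actual project objects.

    .LOGON tdprod/etl_user,etl_password

    /* Volatile work table lives only for this session and needs no cleanup; */
    /* the main load queries would follow later in the same BTEQ session.    */
    CREATE VOLATILE TABLE wrk_claims AS
        ( SELECT claim_id, member_id, claim_dt, claim_amt
          FROM   edw_stg.stg_claims
          WHERE  claim_dt >= CURRENT_DATE - 7 )
    WITH DATA
    PRIMARY INDEX (claim_id)
    ON COMMIT PRESERVE ROWS;

    /* Abort the batch with a non-zero return code if the pre-load step fails. */
    .IF ERRORCODE <> 0 THEN .QUIT 8

    .LOGOFF
    .QUIT 0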

Environment: Datastage 7.1 Parallel Extender using IBM Information Analyzer and Quality Stage, Oracle 9i/10g, Teradata 12, Flat files, Teradata utilities (MultiLoad, FastLoad, FastExport, BTEQ), Hadoop, Core Java, UNIX-AIX, Windows XP, Vertica, Tableau, CIM tool, MS Word, Excel, Maestro tool for scheduling jobs.

Confidential, Keene, NH

Sr. Teradata Developer

Responsibilities:

  • Interacted with business users for requirement gathering.
  • Involved in dimension tables design using star schema.
  • Very good understanding of Teradata architecture (SMP and MPP).
  • Followed the SDLC process and wore the hats of both business analyst and production support.
  • Worked on Primary indexes (Unique and Non-Unique), Secondary indexes and Partitioned Primary Indexes (see the DDL sketch after this list).
  • Leveraged the full power of customer data by accessing it from different sources, at any time, to improve communication effectiveness and drive customer interactions using the CIM tool.
  • Worked on Teradata join strategies and SET and MULTISET tables. Led the offshore team.
  • Worked on analyzing Hadoop cluster and different big data analytic tools including Pig.
  • All the Business logic in all the modules is written in core Java.
  • Worked on moving tables from test to production using fast export and fast load.
  • Involved in implementing the JMS (Java messaging service) for asynchronous communication.
  • Wrote technical design documents and source-to-target mapping sheets.
  • Worked with the architect on the design.
  • Discussed dimensional modeling with the data modeler.
  • Worked on Teradata Parallel Transporter operators (Export, Load, Update, Stream) as well as the stand-alone utilities BTEQ and FastLoad.
  • Developed Mappings, Mapplets and workflows to load data from Data Warehouse to Datamart using Power Center Designer and Workflow Manager.
  • Created interfaces to load the dimensional and fact tables.
  • Created new Mappings and updated old Mappings according to changes in Business logic.
  • Developed reusable Mappings.
  • Extracted and loaded data from flat file and Oracle sources to Teradata using different transformations.
  • Responsible for building data solutions in Hadoop using Cascading frameworks.
  • Used Debugger to troubleshoot the Mappings.
  • Used Informatica Scheduler to schedule the workflows/worklets.
  • Improved the performance of Mappings and sessions.
  • Developed Event Wait Tasks for the workflow.
  • Combined views and reports into interactive dashboards in Tableau Desktop that were presented to Business Users, Program Managers, and End Users.
  • Reviewed basic SQL queries and edited inner, left, and right joins in Tableau Desktop by connecting live/dynamic and static datasets.
  • Populated data into Teradata tables by using Fast Load utility.
  • Developed storytelling dashboards in Tableau Desktop and published them to Tableau Server.
  • Monitored the sessions in Workflow Monitor and tested the sessions prior to the normal run.
  • Performed unit testing and integration testing; provided end-user training and support.
  • Developed the documentation for ETL Mappings / ETL unit testing/ETL Integration Testing.
  • Extensively involved in performance tuning of Mappings and SQLs.
  • Worked with the Teradata utilities (TPump, MultiLoad, BTEQ, FastLoad, FastExport).
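
Illustration: a hypothetical Teradata DDL sketch of the index work described above (primary index, partitioned primary index and a secondary index on a MULTISET table). The table and columns are examples only.

    -- Transaction table with a date-based partitioned primary index (PPI).
    -- The PI is non-unique (NUPI) here because a unique PI on a PPI table
    -- would have to include the partitioning column.
    CREATE MULTISET TABLE edw.fact_txn
    (
        txn_id   DECIMAL(18,0) NOT NULL,
        acct_id  INTEGER       NOT NULL,
        txn_dt   DATE          NOT NULL,
        txn_amt  DECIMAL(15,2)
    )
    PRIMARY INDEX (txn_id)
    PARTITION BY RANGE_N (txn_dt BETWEEN DATE '2010-01-01'
                                 AND     DATE '2015-12-31'
                                 EACH INTERVAL '1' MONTH);

    -- Non-unique secondary index (NUSI) to support lookups by account.
    CREATE INDEX idx_fact_txn_acct (acct_id) ON edw.fact_txn;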

Environment: Datastage Parallel Extender using IBM Information Analyzer and Quality Stage, Teradata utilities (MultiLoad, FastLoad, FastExport, BTEQ), Oracle 9i/10g, Teradata, Flat files, UNIX-AIX, Core Java, Hadoop, Windows XP, Tableau, MS Word, Excel, CIM tool, Maestro tool for scheduling jobs.

Confidential, Chicago, IL

Teradata Developer

Responsibilities:

  • Designed the jobs in Data stage 7.1 Parallel Extender. Provided hands-on participation, technical guidance and leadership to support existing Data warehouse applications.
  • Used the MultiLoad stage with Teradata utilities (TPump, MultiLoad) to load data into the Teradata database.
  • Worked on Change data capture (CDC) for updating target. Developed reusable jobs using Dynamic Meta data using run column propagation.
  • Wrote transforms using stage variables for jobs and coded batch jobs to run jobs in parallel.
  • Performing export and import of Datastage components, table definitions and routines.
  • Developed Job Sequences and implemented logic for job sequence loops, recovery and restart.
  • Used Datastage Director to clear the job logs, job resources and status files.
  • Experience in the complete life cycle of Design, Development, Maintenance and Documentation of the conversion process. Worked with Datasets, Lookup, Merge, Joiner, Funnel and Transformer stages.
  • Profiled vendor supplied raw data using Information Analyzer.
  • Achieved performance tuning of the complex queries on Vertica by tuning the projections, optimizing the joins from hash join to merge join and enabling parallelism (see the projection sketch after this list).
  • Prepared technical design, tests (Unit/System), and supported user acceptance testing.
  • Involved in setting up the ETL standards and helped the client to use ETL best practices.
  • Extensively used Perl Scripting for Transforming Raw data to the standard format.
  • Developed UNIX shell scripts to run Informatica jobs and SQLs on the Vertica database. Identified performance bottlenecks and tuned ETL code.
  • Identified performance bottle-necks and suggested improvements. Used Maestro tool as third party tool for scheduling ETL jobs and notifications.
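
Illustration: a hypothetical Vertica SQL sketch of the projection tuning mentioned above; sorting a projection on the join key (and segmenting both sides of the join the same way) lets the optimizer choose a merge join over a hash join. Schema and column names are examples only.

    -- Projection sorted and segmented on the join key so joins to the customer
    -- dimension can be resolved as merge joins rather than hash joins.
    CREATE PROJECTION sales.fact_orders_p_cust
    AS SELECT order_id, customer_id, order_date, order_amt
       FROM   sales.fact_orders
       ORDER BY customer_id, order_date
       SEGMENTED BY HASH(customer_id) ALL NODES;

    -- Populate the new projection so the optimizer can start using it.
    SELECT START_REFRESH();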

Environment: Datastage Parallel Extender using IBM Information Analyzer and Quality Stage, Oracle 9i/10g, Teradata, Flat files, UNIX-AIX, Windows XP, MS Word, Excel, Vertica 5.x/6.x, Maestro tool for scheduling jobs.

Confidential, Broadway, NY

Teradata Developer

Responsibilities:

  • Involved in full Software Development Life Cycle (SDLC) - Business Requirements Analysis, preparation of Technical Design documents, Data Analysis, Logical and Physical database design, Coding, Testing, Implementing, and deploying to business users.
  • Interacted with business team to understand business needs and to gather requirements.
  • Prepared requirements document in order to achieve business goals and to meet end user expectations.
  • Involved in creating data models using Erwin.
  • Worked with Designer tools like Source Analyzer, Target designer, Mapping designer, Mapplets designer, Transformation Developer.
  • Designed Mappings by including the logic of restart.
  • Did the Data Profiling and Data Analysis using SQL queries looking for Data issues, Data anomalies.
  • Created Source and Target Definitions, Reusable transformations, Mapplets and Worklets.
  • Created Mappings and used transformations like Source Qualifier, Filter, Update Strategy, Lookup, Router, Joiner, Normalizer, Aggregator, Sequence Generator and Address validator.
  • Involved in tuning the mappings, sessions and the Source Qualifier query.
  • Redesigned the existing legacy/Teradata EDW system as a UNIX/Vertica system.
  • Identified performance issues in existing sources, targets and mappings by analyzing the data flow, evaluating transformations and tuned accordingly for better performance.
  • Changing the existing Data Models using Erwin for Enhancements to the existing Data warehouse projects.
  • Managed all technical aspects of the ETL mapping process with other team members.
  • Developed mappings to load Fact and Dimension tables, SCD Type 1 and SCD Type 2 dimensions and Incremental loading and unit tested the mappings.
  • Worked with the Statisticians, Data Managers to provide SAS programming in analyzing Clinical Trial Data.
  • Created sessions and workflows to run with the logic embedded in the mappings.
  • Extensively used SQL, PL/SQL code to develop custom ETL solutions and load data into data warehouse system.
  • Wrote Unix Shell Scripts to process the data received from source system on daily basis.
  • Developed PL/SQL triggers and master tables for automatic creation of primary keys.
  • Created PL/SQL stored procedures, functions and packages for moving the data from staging area to data mart.
  • Created and reviewed scripts to create new tables, views, queries for new enhancement in the application using TOAD.
  • Created indexes on the tables for faster retrieval of the data to enhance database performance.
  • Involved in data loading using PL/SQL and SQL*Loader calling UNIX scripts to download and manipulate files.
  • Performed SQL and PL/SQL tuning and Application tuning using various tools like EXPLAIN PLAN, SQL*TRACE, TKPROF and AUTOTRACE.
  • Extensively involved in using hints to direct the optimizer to choose an optimum query execution plan.
  • Used Bulk Collections for better performance and easy retrieval of data, by reducing context switching between SQL and PL/SQL engines.
  • Created PL/SQL scripts to extract the data from the operational database into simple flat text files using the UTL_FILE package.
  • Partitioned the fact tables and materialized views to enhance the performance (a sketch follows this list).
  • Created records, tables, collections (nested tables and arrays) for improving Query performance by reducing context switching.
  • Created Informatica Mappings and TPT Scripts to load Medical, Eligibility and Pharmacy claims from flat file to table.
  • Worked with TPT wizards to generate the TPT scripts for the Incoming Claims data.
  • Implemented pipeline partitioning concepts like Hash-key, Round-Robin, Key-Range and Pass-Through techniques in mapping transformations. Used Autosys for scheduling.
  • Expertise in writing scripts for Data Extraction, Transformation and Loading of data from legacy systems to target data warehouse using BTEQ, FastLoad, MultiLoad, and Tpump.
  • Performance Tuning of sources, Targets, mappings and SQL queries in transformations.
  • Worked on exporting data to flat files using Teradata FastExport.
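
Illustration: a hypothetical Oracle SQL sketch of the fact-table partitioning and materialized-view work mentioned above. The table, columns and partition boundaries are examples, not the project's actual DDL.

    -- Range-partition the fact table by date so queries prune to only the partitions they need.
    CREATE TABLE sales_fact
    (
        sale_id    NUMBER       NOT NULL,
        product_id NUMBER       NOT NULL,
        sale_date  DATE         NOT NULL,
        amount     NUMBER(12,2)
    )
    PARTITION BY RANGE (sale_date)
    (
        PARTITION p_2011_q1 VALUES LESS THAN (TO_DATE('2011-04-01', 'YYYY-MM-DD')),
        PARTITION p_2011_q2 VALUES LESS THAN (TO_DATE('2011-07-01', 'YYYY-MM-DD')),
        PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );

    -- Pre-aggregated materialized view, refreshed on demand after each batch load.
    CREATE MATERIALIZED VIEW mv_sales_by_product
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT product_id, TRUNC(sale_date, 'MM') AS sale_month, SUM(amount) AS total_amt
    FROM   sales_fact
    GROUP BY product_id, TRUNC(sale_date, 'MM');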

Environment: Datastage 7.1 Parallel Extender using IBM Information Analyzer and Quality stage, Oracle 9i/10g, Teradata, Flat files, UNIX- AIX, Windows XP, MS Word, Excel, Maestro Tool for scheduling jobs.

Confidential

Teradata Developer

Responsibilities:

  • Analyzed, designed and developed ETL strategies and processes, wrote ETL specifications, performed Informatica development and administration, and mentored other team members.
  • Developed mapping parameters and variables to support SQL override.
  • Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, and SQL, Lookup (File and Database) to develop robust mappings in the Informatica Designer.
  • Worked on Teradata and its utilities - tpump, fastload through Informatica. Also created complex Teradata Macros.
  • Implemented Pushdown Optimization (PDO) to address performance issues in complex mappings where numerous transformations were degrading session performance.
  • Involved in Performance tuning at source, target, mappings, sessions, and system levels.
  • Worked on Change Data Capture (CDC) using CHKSUM to handle any change in the data when there is no flag or date column present to represent the changed row (see the sketch after this list).
  • Worked on reusable code known as tie-outs to maintain data consistency; compared the source and target after the ETL load completed to validate that no data was lost during the ETL process.
  • Worked on PowerExchange bulk data movement using the PowerExchange Change Data Capture (CDC) method, PowerExchange Navigator and PowerExchange bulk data movement; PowerExchange CDC can retrieve updates at user-defined intervals or in near real time.
  • Worked independently on the critical milestone of the project interfaces by designing a completely parameterized code to be used across the interfaces and delivered them on time in spite of several hurdles like requirement changes, business rules changes, source data issues and complex business functionality.
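
Illustration: the resume does not show the exact CHKSUM expression, so this is a hypothetical Teradata SQL sketch of the same checksum-comparison idea using HASHROW: rows whose hashed attribute values differ between staging and target are treated as changed, even without a flag or date column. Table and column names are examples only, and hash collisions are a known (rare) limitation of this approach.

    -- Flag changed rows by comparing a hash of the attribute columns between
    -- the staging feed and the current target table.
    SELECT s.customer_id
    FROM   stg_customer s
    JOIN   tgt_customer t
      ON   t.customer_id = s.customer_id
    WHERE  HASHROW(s.customer_name, s.address, s.phone)
        <> HASHROW(t.customer_name, t.address, t.phone);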

Environment: Datastage 7.1 Parallel Extender using IBM Information Analyzer and Quality stage, Oracle 9i/10g, Teradata, Flat files, UNIX- AIX, Windows XP, MS Word, Excel, Maestro Tool for scheduling jobs.
