Senior Lead, Senior Solution Architect Consultant Resume
Santa Ana, California
SUMMARY
- 18 years of solid working experience, with a PhD from University of Confidential UCLA/Davis
- 15 years of experience as a Solution Architect, Team Lead, Specialist, Developer, Project Manager, and Vice President, building multi-tiered applications, Big Data, Business Intelligence (BI), Data Warehousing, Machine Learning, Deep Learning, and Systems Engineering solutions.
- 7+ years of experience in Big Data: Hadoop, Spark, MapR, Cloudera, Hortonworks, Storm, Kafka, Hive, Impala, Flume, Sqoop, MapReduce, Pig, HBase, NiFi, Oozie, Tableau, Power BI, Cloudera visualization, QlikView; Scala SBT (my preference), Java Maven.
- 7 years of experience with search engines: the ELK Stack (Elasticsearch, Logstash, Kibana, Filebeat), SOLR, Lucene, Rsync, Tika. Also involved with migrations from SOLR to Elasticsearch for at least three companies.
- 7 years of Machine Learning, Deep Learning, and Artificial Intelligence: MLlib, TensorFlow, Keras, Weka, Mahout, the Multilayer Perceptron Classifier (MLPC, a feedforward artificial neural network), scikit-learn, Pandas, Deeplearning4j, H2O, Sparkling Water ML, Caffe2, MxNet, etc. Algorithms include K-Means, Random Forest, and Gradient Boosting (GBM, XGBoost, and CatBoost).
- 6 years of Azure Cloud: extensive full-cycle Azure experience with Big Data, Machine Learning, Deep Learning, Azure Machine Learning Studio, Azure Power BI, Azure Search, and Elasticsearch-on-Azure development and deployment. Comprehensive deployments of massive Azure infrastructures from inside Visual Studio 2017 using ARM Templates. Deployed many Azure projects for different companies; some of the components I have personally used hands-on: Azure Spark/Hive/HBase HDInsight clusters, domain-joined HDInsight clusters, Azure Zeppelin notebooks, Azure Jupyter notebooks, Kafka, Storm, Azure SQL Data Warehouse, Azure Databricks, Azure Data Lake, Azure Data Factory, Azure Data Lake Storage, Azure Data Lake Analytics, Azure Data Links, Azure Integration Runtime (IR), Azure Data Gateway, Azure Kubernetes Service, Azure Storage Blobs, Azure Active Directory, Azure Service Principals, Azure Security Center, Azure Key Vault, Azure Virtual Network (VNet), Azure Network Interfaces, Azure Cosmos DB, Azure Cortana Intelligence Suite, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Confidential R Server, NLB, key-phrase extraction in Azure Search, unstructured text analytics, Event Hubs, Streaming, PolyBase, etc.
- 6 years of AWS Cloud: extensive full-cycle AWS experience with Big Data, Elasticsearch and SOLR, and Machine Learning and Deep Learning development and deployment. AWS compute (EC2, Fargate, Lambda), VMware, AWS Developer Tools, AWS Management Tools, Amazon Machine Learning, AWS DeepLens, AWS Deep Learning AMIs, TensorFlow on AWS, and other components.
- 2 years of IBM Cloud and IBM Cloud Private (ICP): distributed container-based architecture, Docker, Docker CLI, Kubernetes, Kubernetes CLI, Pods, Pod deployments, Service deployments, Ingress, Helm Charts, Helm CLI. Successfully installed and deployed an entire IBM Cloud Private (ICP) cluster, then implemented and deployed the ELK stack (Elasticsearch, Logstash, Kibana, Filebeat), Kafka, Zookeeper, Cassandra, and Curator on ICP Kubernetes Pods using Helm Charts and Scala SBT.
- 16 years of hands-on .NET development, architecture, and management experience in application, real-time, instrumentation, web, front-end, back-end, and full-stack work, with multiple shipped products and multiple awards
- 16 years of hands-on Java development and architecture: front end, back end, full stack
- 6 years of Android mobile development and architecture with multiple apps in the app store
- 16 years of experience in SQL Server 7-2016, MySQL, Oracle, and other databases: T-SQL, SSIS, SSRS, SSAS, OLTP, OLAP, Multidimensional Cubes, MDX, PowerPivot, Tabular Model, SharePoint, PerformancePoint.
- Demonstrated experience and understanding of best practices in all aspects of data warehousing (Inmon/Kimball approaches), with solid hands-on Data Warehouse experience.
- Strong knowledge and proven results in Data Warehouse and Data Mart design, including Dimensional Modeling (Star and Snowflake Schemas), ER Modeling, 3 Normal Forms, Normalization and Denormalization, Logical and Physical Models, and Fact/Dimension/Hierarchy identification.
- From business case to data visualization, I have designed and developed solutions by combining business processes with information technology.
- Embedded firmware programming: ARM, PIC, DSP, FPGA, RTOS, Linux, etc.
- Significant management experience, including 4 years as VP of Engineering
PROFESSIONAL EXPERIENCE
Confidential, Santa Ana, California
Senior Lead, Senior Solution Architect Consultant
Responsibilities:
- I led, architected, and helped develop deep learning and machine learning systems for genetic analysis and the prediction of the occurrence of different types of cancers with more than 99% accuracy. For peak performance, a distributed architecture was designed using 12,000 CPUs, with excellent results: Spark, HDFS, Scala, SBT, MLlib, TensorFlow, Keras. Some of the development was based on the Multilayer Perceptron Classifier (MLPC), a classifier built on the feedforward artificial neural network (a Spark MLlib sketch follows this bullet). Also created other prototypes using Python, R, PySpark, and Scala libraries such as scikit-learn, Pandas, Deeplearning4j, Sparkling Water ML, Caffe2, MxNet, etc. Algorithms used include K-Means, Random Forest, and Gradient Boosting (GBM, XGBoost, and CatBoost).
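A minimal sketch of the Spark MLlib MLPC approach described above, not the production code: the HDFS path, column names, and layer sizes are illustrative assumptions.

```scala
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.sql.SparkSession

object MlpcSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("mlpc-sketch").getOrCreate()
    // Hypothetical pre-assembled training data: a "features" vector column and a binary "label".
    val data = spark.read.parquet("hdfs:///genomics/features")
    val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)

    // Feedforward network: the input layer must match the feature vector size,
    // the output layer the number of classes.
    val mlpc = new MultilayerPerceptronClassifier()
      .setLayers(Array(128, 64, 32, 2))
      .setMaxIter(200)

    val model = mlpc.fit(train)
    val accuracy = new MulticlassClassificationEvaluator()
      .setMetricName("accuracy")
      .evaluate(model.transform(test))
    println(s"Test accuracy: $accuracy")
  }
}
```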
- For hardware processing of the Big Data, Deep Learning, and Machine Learning workloads, different architectures were tested on IBM Netezza, Teradata, and a distributed architecture with 12,000 CPUs and in-memory processing (Spark). The results were absolutely clear: there is no comparison; the distributed architecture is far superior and is the future.
- The Big Data part used Hadoop and Spark, mainly on MapR but also Cloudera and Hortonworks, along with Storm, Kafka, Hive, Pig, Impala, Flume, Sqoop, MapReduce, HBase, NiFi, Oozie, Tableau, Power BI, Cloudera visualization, QlikView, etc.
- I led, architected, and helped develop a hybrid system of Elasticsearch and HDFS using Logstash, Rsync, and Kafka. Elasticsearch was sharded over 25 nodes and later deployed on AWS. Three types of data were indexed and fed into Elasticsearch (Kibana), as listed below (see the indexing sketch after this list):
- Hardware and system logs, for real-time (Kafka) hardware and system monitoring
- Patient and claim data from HDFS, Hive, and HBase, for search and quick BI visualization in Kibana
- Data export files from MarkLogic
- Basic search, consumed by different systems via an API on top of Elasticsearch
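A minimal sketch of indexing one record into Elasticsearch, assuming the 7.x Java High Level REST Client; the host, index name, and document shape are illustrative assumptions, not the production schema.

```scala
import org.apache.http.HttpHost
import org.elasticsearch.action.index.IndexRequest
import org.elasticsearch.client.{RequestOptions, RestClient, RestHighLevelClient}
import org.elasticsearch.common.xcontent.XContentType

object EsIndexSketch {
  def main(args: Array[String]): Unit = {
    val client = new RestHighLevelClient(
      RestClient.builder(new HttpHost("es-node-1", 9200, "http")))

    // One index per data type (system logs, claim data, MarkLogic exports)
    // keeps Kibana dashboards and shard sizing independent.
    val doc =
      """{"host":"node42","level":"WARN","msg":"disk usage 91%","ts":"2019-01-01T00:00:00Z"}"""
    val request = new IndexRequest("system-logs").source(doc, XContentType.JSON)
    client.index(request, RequestOptions.DEFAULT)
    client.close()
  }
}
```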
- I led, architected, and helped develop a massive data-ingest system from different providers with a permanent, real-time pipeline (Kafka) using Delta and Flip methods for ongoing, uninterrupted data ingestion (a Kafka producer sketch follows). The source data included RDBMS, Hive, HBase, MarkLogic, flat data, and log files.
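A minimal sketch of the Kafka leg of that ingest pipeline: each delta batch pulled from a source system is published to a topic for downstream loaders. The broker address, topic name, and record payload are assumptions.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object DeltaIngestProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    val producer = new KafkaProducer[String, String](props)

    // In the real pipeline the deltas come from RDBMS/Hive/HBase extracts;
    // here a single hand-written record stands in for a batch.
    val delta = """{"source":"claims_db","op":"upsert","id":12345}"""
    producer.send(new ProducerRecord[String, String]("ingest-deltas", "claims_db", delta))
    producer.close()
  }
}
```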
- Azure Cloud: I architected, led, and personally implemented a massive Azure Cloud project with extensive full-cycle Azure experience covering Data Ingestion, Data Transformation, and Data Consumption with Machine Learning and Deep Learning. This large-scale project covered a full range of areas including, but not limited to, Big Data, Machine Learning, Deep Learning, Azure Machine Learning Studio, Azure Power BI, Azure Search, and Elasticsearch-on-Azure development and deployment, with comprehensive deployments of massive Azure infrastructures from inside Visual Studio 2017 using ARM Templates. Some of the components I personally used hands-on: Azure Spark/Hive/HBase HDInsight clusters, domain-joined HDInsight clusters, Azure Zeppelin notebooks, Azure Jupyter notebooks, Kafka, Storm, Azure SQL Data Warehouse, Azure Databricks, Azure Data Lake, Azure Data Factory, Azure Data Lake Storage, Azure Data Lake Analytics, Azure Data Links, Azure Integration Runtime (IR), Azure Data Gateway, Azure Kubernetes Service, Azure Storage Blobs, Azure Active Directory, Azure Service Principals, Azure Security Center, Azure Key Vault, Azure Virtual Network (VNet), Azure Network Interfaces, Azure Cosmos DB, Azure Cortana Intelligence Suite, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Confidential R Server, NLB, key-phrase extraction in Azure Search, unstructured text analytics, Event Hubs, Streaming, PolyBase, etc.
- Genome Analysis Toolkit 4 (GATK4) from the Broad Institute; ADAM Genomics from Berkeley AMPLab; BAM, SAM, and VCF genome variant formats. Also worked with mango, gnocchi, deca, avocado, quinine, cannoli, etc.
- Using SOLR/Elasticsearch, created detailed analytical graphical dashboards in Kibana for Patient Data, Claim Data, and Provider Data.
- Development languages: extensive Scala SBT (my preference), Java Maven, Eclipse and IntelliJ (my preference), Python, R, Jupyter, PySpark, Ruby, and some Linux shell scripting.
- Successfully installed and deployed an entire IBM Cloud Private (ICP) cluster, then implemented and deployed the ELK stack (Elasticsearch, Logstash, Kibana, Filebeat), Kafka, Zookeeper, Cassandra, and Curator on ICP Kubernetes Pods using Helm Charts and Scala SBT. IBM Cloud and IBM Cloud Private (ICP) use a distributed, container-based architecture: Docker, Docker CLI, Kubernetes, Kubernetes CLI, Pods, Pod deployments, Service deployments, Ingress, Helm Charts, and the Helm CLI.
Confidential, El Segundo, California
Senior Lead, Senior Solution Architect Consultant
Responsibilities:
- I led, architected, and helped develop a 256-node Big Data system using Hadoop and Spark on Hortonworks, later migrated to Cloudera, along with Storm, Kafka, Hive, Pig, Impala, Flume, Sqoop, MapReduce, HBase, Oozie, Tableau, Power BI, Cloudera visualization, and QlikView (a Spark-on-Hive sketch follows).
- Development languages: extensive Scala SBT (my preference), Java Maven, Eclipse and IntelliJ, Python, R, PySpark, Ruby, C#, and Unix/Linux shell scripts
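A minimal sketch of a Spark-on-Hive job of the kind described above: enable Hive support, aggregate a table, and write the result back to HDFS. The database, table, and column names are illustrative.

```scala
import org.apache.spark.sql.SparkSession

object HiveAggregateSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("hive-aggregate-sketch")
      .enableHiveSupport()  // reads table metadata from the Hive metastore
      .getOrCreate()

    spark.sql(
      """SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
        |FROM sales.orders
        |GROUP BY region""".stripMargin)
      .write.mode("overwrite")
      .parquet("hdfs:///reports/orders_by_region")
  }
}
```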
- I led, architected, and helped develop the SOLR/Lucene search system, which was later migrated to Elasticsearch with 15-node sharding. The system was initially developed and maintained in house, but later deployed to the AWS cloud and prototyped on Azure. The 15-node cluster was developed and tested on VirtualBox and physical machines before deployment to the cloud. The SOLR development was done in two phases. Initially we indexed directly on top of the metadata extracted from various files with Apache Tika, Apache Flume, and Sqoop; I wrote a scheduler in Java that ran delta indexing periodically every few hours (a sketch follows). We had customized faceting, and the API would grab the top N results from the XML. The search worked better than expected: the indexing was slow, but the search was extremely fast, returning in a fraction of a second. In the second phase we stored all the raw documents in HDFS, created the indexes, and used HBase to store the index files in HDFS. An API was also developed in Java, with a .NET wrapper, making search calls into the SOLR engine.
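A minimal sketch of the periodic delta-indexing idea, rendered here in Scala (the original scheduler was Java): a scheduled task pushes changed documents to SOLR via SolrJ. The SOLR URL, 4-hour interval, and the synthetic document are assumptions.

```scala
import java.util.concurrent.{Executors, TimeUnit}
import org.apache.solr.client.solrj.impl.HttpSolrClient
import org.apache.solr.common.SolrInputDocument

object DeltaIndexScheduler {
  def main(args: Array[String]): Unit = {
    val solr = new HttpSolrClient.Builder("http://solr-host:8983/solr/docs").build()
    val scheduler = Executors.newSingleThreadScheduledExecutor()

    scheduler.scheduleAtFixedRate(new Runnable {
      def run(): Unit = {
        // Placeholder: in the real system the deltas came from Tika-extracted
        // metadata; here we index a single synthetic document per run.
        val doc = new SolrInputDocument()
        doc.addField("id", System.currentTimeMillis.toString)
        doc.addField("title", "changed-document")
        solr.add(doc)
        solr.commit()
      }
    }, 0, 4, TimeUnit.HOURS)  // "every few hours" per the description above
  }
}
```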
- I led, architected, and helped develop an advanced machine learning system, initially in Spark MLlib, then Weka, and ultimately a Mahout machine learning and recommendation system using both ItemSimilarity and UserNeighborhood. I personally favored and created prototypes using Spark MLlib, TensorFlow, Keras, and Python libraries such as scikit-learn and Pandas, but in this case Mahout worked very well. The .NET API recorded every product click or purchase; the data was stored in the database, the metadata was created, and Mahout would build a scoring table (0-10) per product and region. The .NET API would then select the top N highest scores and present them as recommendations (a Mahout sketch follows).
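A minimal sketch of the item-based side of that system using Mahout's Taste API from Scala; the ratings file, similarity metric, and user id are assumptions, not the production configuration.

```scala
import java.io.File
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel
import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity

object ItemRecSketch {
  def main(args: Array[String]): Unit = {
    // CSV of userId,itemId,preference built from the click/purchase events.
    val model = new FileDataModel(new File("ratings.csv"))
    val similarity = new PearsonCorrelationSimilarity(model)
    val recommender = new GenericItemBasedRecommender(model, similarity)

    // Top-5 items for one user; the .NET API layer then mapped scores like
    // these into the 0-10 product/region table described above.
    recommender.recommend(1L, 5).forEach { r =>
      println(s"item=${r.getItemID} score=${r.getValue}")
    }
  }
}
```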
- I led, architected, and helped develop a gigantic data extraction, data warehousing, and Big Data effort. The data was gathered in excess of tens of terabytes from more than 40 top-of-the-line brand e-commerce sites partnered with and operated by Confidential, such as FRYE, Juicy Couture, NYDJ, PAIGE, Splendid, Coffee Beans, Jones New York, Hudson, and many more. Used SSIS ETL to port SQL data to the Data Warehouse, then used Sqoop to extract from RDBMS to HDFS, Flume to extract from log files and FTP/NAS files to HDFS, Apache Tika and Java to extract metadata from various files into HDFS (a Tika sketch follows), Nutch for web crawling and metadata extraction into HDFS, and SAPI, CMU Sphinx, and Kaldi for customer-service voice-to-text conversion into HDFS
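A minimal sketch of the Tika metadata-extraction step before the results land in HDFS; the input file name is a placeholder.

```scala
import java.io.FileInputStream
import org.apache.tika.metadata.Metadata
import org.apache.tika.parser.{AutoDetectParser, ParseContext}
import org.apache.tika.sax.BodyContentHandler

object TikaExtractSketch {
  def main(args: Array[String]): Unit = {
    val parser = new AutoDetectParser()
    val handler = new BodyContentHandler(-1)  // -1 disables the content-length limit
    val metadata = new Metadata()
    val stream = new FileInputStream("sample.pdf")  // placeholder input file
    try parser.parse(stream, handler, metadata, new ParseContext())
    finally stream.close()

    // The extracted key/value pairs are what would be written to HDFS for indexing.
    metadata.names.foreach(n => println(s"$n = ${metadata.get(n)}"))
  }
}
```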
- I architected and implemented a real-time, streaming component for the Cloudera visualization using Apache Storm and Apache Kafka.
- I led, architected, and helped develop elaborate real-time visualizations using Tableau and Cloudera visualization for the big data portion.
- The Big Data prototype was deployed on both AWS and Azure. For a number of reasons, the final decision was to deploy to AWS rather than Azure.
- I led and oversaw the conversion of part of the SOLR search project to Elasticsearch and benchmarked the performance. Although I liked working with JSON, for various reasons SOLR was preferred initially.
- I led, architected, and helped develop Kibana 4.5 visualizations on top of both SOLR and Elasticsearch.
- Wrote and oversaw development of a combination of batch files and Python and Ruby scripts for SOLR/Lucene and Big Data deployment and configuration. I should add that I started converting the batch files to Python, but there was simply not enough time.
- Did extensive prototyping and benchmarking and helped evaluate the performance of the Big Data workloads on Massively Parallel Processing (MPP) systems and other Data Warehouse appliances such as IBM Netezza, Teradata, APS (PDW), and Oracle Exadata
Confidential, California
Senior Big Data, DW and BI Lead Solution Architect Consultant
Responsibilities:
- Led multiple large-scale Big Data, Enterprise Data Warehouse (EDW), and Business Intelligence (BI) projects on Teradata utilizing Spark, Hadoop, Hortonworks, Cloudera, Hive, Impala, Flume, Sqoop, MapReduce, Pig, HDInsight, HBase, and Oozie, facilitating real-time data analysis by the data scientists (a Spark-over-JDBC sketch follows).
- Led multiple EDW projects; prototyped and evaluated their performance on Massively Parallel Processing (MPP) systems and other Data Warehouse appliances such as IBM Netezza, Teradata, APS (PDW), and Oracle Exadata
- Development languages: extensive Scala SBT (my preference), Java Maven, Eclipse and IntelliJ, Python, R, PySpark, Ruby, C#, and Unix/Linux shell scripts
- Leading the team, I designed, architected, and implemented the migration from a legacy information warehouse to a modern, high-performance Big Data and Data Warehouse platform running on multiple DW appliances. Drafted a prioritized BI/DW implementation roadmap working with the business and finance departments.
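A minimal sketch of surfacing a Teradata table to Spark for the data scientists via the generic JDBC source; the URL, credentials, and table name are placeholders, and the Teradata JDBC driver jar is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object TeradataToSparkSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("teradata-to-spark").getOrCreate()
    val df = spark.read.format("jdbc")
      .option("url", "jdbc:teradata://td-host/DATABASE=edw")
      .option("driver", "com.teradata.jdbc.TeraDriver")
      .option("dbtable", "edw.sales_fact")
      .option("user", "svc_etl")
      .option("password", sys.env("TD_PASSWORD"))  // assumed environment variable
      .load()

    df.createOrReplaceTempView("sales_fact")  // now queryable from Spark SQL
    spark.sql("SELECT COUNT(*) FROM sales_fact").show()
  }
}
```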
- Leading the team, we migrated and deployed 5 projects to the Azure Cloud. I was personally involved in the full cycle of vendor selection, requirements gathering, design, development, and deployment of these projects. The migration covered different aspects of the projects: front end, back end, and integration. We went through thorough research before selecting the Azure cloud for this project and utilized cutting-edge utilities to perform the migration and deployment.
- Drafted a prioritized BI/DW implementation roadmap, taking input from internal divisional service plans, business and IT strategy documentation, the corporate BI Strategy, and the Financial Planning and Reporting System
- Designed Enterprise Information Management (EIM) solutions for retail operations. Led technical teams and designed various BI solutions, including loyalty programs, card management, POS data management, customer behavioral analysis, store dashboards, finance, e-commerce, and cybersecurity analytics.
- Defined the data governance strategy, designed security patterns, and implemented data standards and procedures across the enterprise; drafted a business-specific methodology to establish business stakeholder-driven data stewardship through MDM
- Conducted a BI maturity assessment of the organization. Architected the DW&BI Program Structure; defined the role of the DW&BI Program Steering Committee, its mission, objectives, roles, and responsibilities; and monitored regular improvements to help manage risks, evaluate trends, and develop the capacity and capability to achieve the Program mission
Confidential, Los Alamitos, California
Senior Big Data, DW and BI Lead Solution Architect Consultant
Responsibilities:
- Led several Big Data projects on massive amounts of data transmitted and logged from the satellite network to the ground station. These projects were developed utilizing Cloudera, Hadoop, Spark, Hive, Impala, Flume, Sqoop, Storm, Pig, HDInsight, HBase, and Oozie. Due to the real-time nature of the project, Apache Storm and Apache Kafka were used to handle the streaming, real-time data feed (a Kafka consumer sketch follows).
- I led the team and designed, architected, and implemented an elaborate Data Warehouse and Data Mart using Dimensional Modeling (Star Schema) for satellite data aggregation, data storage, data logs, real-time operational status data, and other needs.
- Utilized Cloudera Visualizations, Dashboards, and Reports to monitor the operation of the satellites and surface warnings on any errors, malfunctions, or failures. Other visualization tools were also created using Java and Android.
- Development languages: extensive Scala SBT (my preference), Java Maven, Eclipse and IntelliJ, Python, R, PySpark, Ruby, C#, and shell scripts
- Led the team that developed multiple real-time Android apps and middleware using Android Studio and Eclipse, the Android SDK and Java, RESTful APIs, Retrofit, GSON, JSON, Regex, JGroups IP Multicast, Apache Thrift, and Python. Also used the following technologies and systems: Xilinx FPGA, Verilog, TI DSP, and ARM Cortex-A9 cores (i.MX 6 Series multicore processors).
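A minimal sketch of the consumption side of that real-time feed: a Kafka consumer polling a telemetry topic of the kind Storm then processed downstream. The broker, topic, and group id are assumptions.

```scala
import java.time.Duration
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer

object TelemetryConsumerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9092")
    props.put("group.id", "telemetry-monitor")
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(Collections.singletonList("satellite-telemetry"))

    // Poll loop: each record is one telemetry event from the ground station feed.
    while (true) {
      val records = consumer.poll(Duration.ofMillis(500))
      records.forEach(r => println(s"sat=${r.key} payload=${r.value}"))
    }
  }
}
```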
Confidential, Morgan Hill, California
Senior Big Data, DW and BI Lead Solution Architect
Responsibilities:
- Led multiple large-scale Big Data, Enterprise Data Warehouse (EDW), and Business Intelligence (BI) projects utilizing Hadoop, Cloudera, Hive, Impala, Flume, Sqoop, MapReduce, Pig, HDInsight, HBase, and Oozie, facilitating real-time data analysis by data scientists.
- Leading the team, I designed, architected, and implemented the migration from legacy normalized SQL, FoxPro, medical-device manufacturing, ERP, MRP, CRM, sales, finance, and other information warehouses to a consolidated, modern, high-performance Big Data warehouse running on multiple DW appliances.
- Development languages: extensive Scala SBT (my preference), Java Maven, Eclipse and IntelliJ, Python, R, PySpark, Ruby, C#, and shell scripts
- Leading the team, we migrated and deployed multiple projects to the Azure Cloud. I was personally involved in the full cycle of vendor selection, requirements gathering, design, development, and deployment of these projects. The migration covered different aspects of the projects: front end, back end, and integration. We went through thorough research before selecting the Azure cloud for this project and utilized cutting-edge utilities to perform the migration and deployment.
- Using a combination of a WPF C# application GUI and Cloudera Visualizations, Dashboards, and Reports, created advanced data visualization and data-entry tools for ERP, MRP, CRM, sales, finance, and other departments.
- I led, architected, and helped develop a SOLR/Lucene search system for the huge amount of ERP, MRP, and CRM data. The SOLR project was later converted to Elasticsearch. The Elasticsearch/Lucene system was architected with 5-node sharding; it was developed and tested on 5 VirtualBox machines and then deployed to the AWS cloud. Created an API in C# .NET for calls to the search engine, and a GUI was also developed in C# .NET for search calls to Elasticsearch.
- Developed a customized SOLR indexing scheduler in C# that ran periodically to perform delta indexing.
- Drafted a prioritized BI/DW implementation roadmap, taking input from internal divisional service plans, business and IT strategy documentation, the corporate BI Strategy, and the Financial Planning and Reporting System
- Designed Enterprise Information Management (EIM) solutions for the manufacturing process, customer support, and retail operations. Led technical teams and designed various BI solutions, including medical-device manufacturing tracking, component reliability analysis, vendor analysis, customer behavioral analysis, finance, e-commerce, and cybersecurity analytics.
- Conducted a BI maturity assessment of the organization. Architected the DW&BI Program Structure; defined the role of the DW&BI Program Steering Committee, its mission, objectives, roles, and responsibilities; and monitored regular improvements to help manage risks, evaluate trends, and develop the capacity and capability to achieve the Program mission
- Led the team and developed multiple applications, including medical device, ERP, and MRP applications with Big Data architecture. Used .NET 4.5, C#, WPF, WCF, WF, MVVM Light, Telerik, MVC 4 Razor, Entity Framework 6.0, TFS, and SQL 2012.
Confidential, Redmond, WA
Senior Big Data, DW and BI Lead Solution Architect, .NET Architect Consultant
Responsibilities:
- Led multiple Azure Cloud Big Data, NoSQL (Riak, MongoDB), and SIP-trunk VOIP projects performing analysis on massive amounts of voice-to-text-converted data utilizing Hadoop/HDInsight, PDW, MapReduce jobs, Hive, and Sqoop.
- Development languages: extensive C++, Scala SBT (my preference), Java Maven, Eclipse and IntelliJ, Python, R, PySpark, Ruby, C#, and shell scripts
- Created real-time, multithreaded C# code using the C++ Doubango library, SIP, TCP, UDP, and RTP; the VOIP telephony voice was recorded and converted to text using SAPI. The text was then stored into key-value and document tables using Riak and MongoDB. The voice data was gathered from Cisco/IPCC telephone systems. Integrated with Cisco Verint for VOIP call recording, quality monitoring (QM), and speech analytics.
- Confidential SQL Server Parallel Data Warehouse (SQL Server PDW) was chosen as the main appliance for Big Data processing due to its Massively Parallel Processing (MPP) architecture, designed for Big Data workloads.
- Confidential Power BI, in conjunction with a .NET application, was used for data visualization.
- Led the design and development of the Workforce Management (WFM) data warehouse and BI solution to optimize adherence and attendance in the contact center. The predictive analytics component accurately forecasts the number of CSRs needed in the call center to fulfill the services (a toy forecasting sketch follows).
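A toy sketch of the forecasting idea only, using simple exponential smoothing over historical call volumes; the production predictive model is not reproduced here, and the sample history, smoothing factor, and calls-per-CSR capacity are all illustrative assumptions.

```scala
object CsrForecastSketch {
  // Simple exponential smoothing: level = alpha * x + (1 - alpha) * level.
  def smooth(history: Seq[Double], alpha: Double): Double =
    history.tail.foldLeft(history.head)((level, x) => alpha * x + (1 - alpha) * level)

  def main(args: Array[String]): Unit = {
    val hourlyCalls = Seq(120.0, 135.0, 128.0, 150.0, 142.0)  // sample history
    val forecastCalls = smooth(hourlyCalls, alpha = 0.3)
    val callsPerCsrPerHour = 10.0  // assumed handling capacity per CSR
    val csrsNeeded = math.ceil(forecastCalls / callsPerCsrPerHour).toInt
    println(s"Forecast calls: $forecastCalls, CSRs needed: $csrsNeeded")
  }
}
```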
- Led the design and development of an efficient BI auditing framework that collects data from executing packages for use in data flows, row counters, versioning, and error handling. The framework is crucial for monitoring, timing, troubleshooting, and auditing. Also developed Stored Procedures, Views, and Functions for the framework to automate logging and error handling in the packages.
- Led the design and development of ETL processes and data mapping using SQL Server, Master Data Services (MDS), and SSIS to extract data from Lagan ECM and division data sources, including SQL Server and Oracle databases, flat files, and Excel sheets. The data is then transformed and loaded into a data warehouse for reporting.
- Led the design and development of data-quality ETL packages to correct and cleanse the data and enhance the quality of the consolidated data. Wrote hundreds of lines of .NET C# code, embedded in the packages, to create a rules engine that loads business rules and applies them to the data efficiently (a rules-engine sketch follows). In addition, the data-quality issues are mapped for reporting purposes.
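An equivalent sketch of the rules-engine idea, in Scala rather than the original embedded C#: business rules are loaded as named predicates and applied to each record, and failures are collected for the data-quality reporting described above. The record shape and the two sample rules are assumptions.

```scala
final case class CustomerRecord(id: Long, email: String, age: Int)
final case class Rule(name: String, check: CustomerRecord => Boolean)

object DataQualityRulesSketch {
  // In the real system these would be loaded from a rules table, not hard-coded.
  val rules = Seq(
    Rule("email-has-at-sign", r => r.email.contains("@")),
    Rule("age-in-range", r => r.age >= 0 && r.age <= 120)
  )

  def violations(r: CustomerRecord): Seq[String] =
    rules.filterNot(_.check(r)).map(_.name)

  def main(args: Array[String]): Unit = {
    val batch = Seq(CustomerRecord(1, "a@b.com", 34), CustomerRecord(2, "bad-email", 230))
    batch.foreach(r => println(s"record ${r.id} violations: ${violations(r).mkString(", ")}"))
  }
}
```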
- Led the design and development of a SQL Server Analysis Services (SSAS) cube utilizing a star schema with complex MDX calculated measures, named sets, and KPIs to present an analytical view of the data and data quality across multiple dimensions.
- Led the design and development of a map application and report using an ASP.NET/C# web application. The application loads the data from the data warehouse, combines it with geographical information, and displays the data on a map. The application communicates through RESTful mapping services and uses client-side scripts (JavaScript and AJAX) to improve performance and user experience.
Confidential, Austin, TX
Senior Big Data, DW and BI Lead Solution Architect, .NET Architect Consultant
Responsibilities:
- Led a Big Data project on a gigantic amount of taxonomy data and customer portfolio data using Hadoop, Cloudera, Hive, MapReduce, Pig, and HDInsight, facilitating real-time data that was both analyzed and used, in real time, to restructure the Confidential website around the demographic portfolio of the customers.
- I architected, worked on, and helped develop the SOLR/Lucene search deployed to Azure. The indexing was done directly on top of the metadata extracted from various files with customized Java code and Apache Tika. Used customized faceting to override the default search criteria.
- Development languages: extensive Java Maven, Eclipse and IntelliJ, Python, R, Scala SBT, PySpark, Ruby, C#, and Unix shell scripts
- Developed a customized SOLR indexing scheduler in C# that ran periodically to perform delta indexing.
- Wrote a variety of batch files and Python scripts for SOLR/Lucene deployment and configuration
- Leading the team, we designed, architected, and implemented the migration from legacy normalized SQL taxonomy data, customer portfolio data, and other data to modern, high-performance Big Data warehouses running on multiple DW appliances.
- Defined the data governance strategy, designed security patterns, and implemented data standards and procedures across the enterprise; drafted a business-specific methodology to establish business stakeholder-driven data stewardship through MDM
- Led multiple EDW projects; prototyped and evaluated performance on the Azure cloud, the AWS Amazon Cloud, and Massively Parallel Processing (MPP) Data Warehouse appliances
- I wrote a complicated taxonomy algorithm in C# to load and sort the taxonomy data into huge multidimensional in-memory trees, which made the data processing super fast (an equivalent sketch follows).
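An equivalent sketch of the in-memory taxonomy tree idea, in Scala rather than the original C#: flat (id, parentId, name) rows are loaded once into a linked tree so lookups and traversals avoid repeated database queries. The row shape is an assumption.

```scala
import scala.collection.mutable

final case class TaxonomyNode(id: Long, name: String,
                              children: mutable.Buffer[TaxonomyNode] = mutable.Buffer.empty)

object TaxonomyTreeSketch {
  // Build the forest in two passes: create all nodes, then link children to parents.
  def build(rows: Seq[(Long, Option[Long], String)]): Seq[TaxonomyNode] = {
    val byId = rows.map { case (id, _, name) => id -> TaxonomyNode(id, name) }.toMap
    rows.foreach {
      case (id, Some(parent), _) => byId(parent).children += byId(id)
      case _                     => ()  // roots have no parent
    }
    rows.collect { case (id, None, _) => byId(id) }
  }

  def main(args: Array[String]): Unit = {
    val roots = build(Seq(
      (1L, None, "Electronics"), (2L, Some(1L), "Phones"), (3L, Some(1L), "Laptops")))
    roots.foreach(r => println(s"${r.name}: ${r.children.map(_.name).mkString(", ")}"))
  }
}
```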
- Created taxonomy data visualizations using Cloudera Visualizations, Dashboards, and Reports to monitor customer profiles, demographics, and other useful data. Other visualization tools were also created using C#.
- Created data-quality ETL packages to correct and cleanse the taxonomy data and enhance the quality of the consolidated data. The consolidated taxonomy data were then segmented using Hadoop and Cloudera.
- Led the design and development of a SQL Server Analysis Services (SSAS) cube utilizing a star schema with complex MDX calculated measures, named sets, and KPIs to present an analytical view of the data and data quality across multiple dimensions.
- Leading the team, we migrated and deployed multiple projects to the Azure Cloud. I was involved in the full cycle of vendor selection, requirements gathering, design, development, and deployment of these projects. The migration covered different aspects of the projects: front end, back end, and integration.
- In conjunction with the Big Data work, I was involved in multiple projects using a variety of technologies, including MVC 4 Razor, WPF, WF, WCF, TPL, LINQ, SQL 2012, jQuery, Android, Java, J2EE, JRE, Ajax, AngularJS, ExtJS, Entity Framework 5.0, and .NET 4.5