Job Seekers, please send resumes to resumes@hireitpeople.com
Job Details:
Must Have Skills (Top 3 technical skills only):
- 7+ years of experience in Java, Spark, MapReduce, RDBMS, Hive/Pig, Scala, and Linux/Unix technologies.
- Sound knowledge of relational databases (SQL) and experience with large SQL-based systems.
- Strong IT consulting experience in various data warehousing engagements, handling large data volumes and architecting big data environments.
Nice to have skills (Top 2 only):
- Exposure to ETL tools (e.g., Informatica) and NoSQL.
- Ability to assist with both external and internal audit questionnaires and application assessments.
Detailed Job Description:
- Core Java
- Java interfaces and a test-driven development approach
- Multithreading
- Concurrency, including structures like blocking queues (see the producer-consumer sketch after this list)
- Data structures: Collections, Arrays, Maps
- ORM frameworks: at least one of iBatis, Hibernate, etc.
- Simple to medium SQL query implementation
- JMS concepts
- Spark
  - Java Spark APIs for DataFrames, RDDs, and Datasets
  - Spark job execution: Spark sessions, SparkContext, SQLContext
  - Spark API based data loads from text files, Avro, Parquet, Hive, and HDFS (see the Spark sketch after this list)
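To illustrate the concurrency item above, here is a minimal producer-consumer sketch built on java.util.concurrent.BlockingQueue. The class name, queue capacity, and the -1 end-of-stream sentinel are illustrative assumptions, not part of any codebase referenced by this role.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks when full, take() blocks when empty.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i); // blocks if the queue is full
                }
                queue.put(-1);    // hypothetical sentinel marking end of stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int item = queue.take(); // blocks if the queue is empty
                    if (item == -1) break;   // stop on the sentinel
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```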
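For the Spark items, a minimal Java sketch of the SparkSession entry point (which wraps SparkContext and SQLContext) and DataFrame loads. The file paths and the local master setting are hypothetical; loading Avro would additionally require the spark-avro package.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkLoadDemo {
    public static void main(String[] args) {
        // SparkSession is the unified entry point for Spark SQL jobs.
        SparkSession spark = SparkSession.builder()
                .appName("SparkLoadDemo")
                .master("local[*]") // assumption: local run for illustration
                .getOrCreate();

        // Load a DataFrame (Dataset<Row>) from Parquet; path is hypothetical.
        Dataset<Row> parquetDf = spark.read().parquet("/data/events.parquet");

        // Plain text files load as a single string column named "value".
        Dataset<Row> textDf = spark.read().text("/data/events.txt");

        // Run a simple SQL query over a temporary view.
        parquetDf.createOrReplaceTempView("events");
        Dataset<Row> counts = spark.sql("SELECT count(*) AS n FROM events");
        counts.show();

        spark.stop();
    }
}
```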
Minimum years of experience: 10
Certifications Needed: No
Top 3 responsibilities to shoulder and execute:
- Lead and assist with the technical design, architecture, and implementation of the big data cluster in various environments.
- Guide and mentor the development team, for example, in creating custom common utility libraries that can be reused across multiple big data development efforts.
- Liaise with Enterprise Architects to conduct research on emerging technologies, and recommend technologies that will increase operational efficiency, infrastructure flexibility and operational stability.
Interview Process (Is face to face required?): No.