Job Seekers, please send resumes to resumes@hireitpeople.com
Detailed Job Description:
- BS degree in computer science, computer engineering or equivalent
- 5-6 years of experience delivering enterprise software solutions
- Proficient in Spark, Scala, Python, and AWS cloud technologies
- 3+ years of experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, HBase, Hive, Flume, Sqoop, Kafka, and Scala
- A flair for data, schemas, and data modeling, and for bringing efficiency to the big data life cycle
- Must be able to quickly understand technical and business requirements and translate them into technical implementations
- Experience with Agile Development methodologies
- Experience with data ingestion and transformation
- Solid understanding of secure application development methodologies
- Experience developing microservices using the Spring Framework is a plus
- Experience with Airflow and Python is preferred
- Understanding of automated QA needs related to big data
- Strong object-oriented design and analysis skills
- Excellent written and verbal communication skills
Responsibilities:
- Utilize your software engineering skills, including Java, Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services
- Integrate new data sources and tools
- Implement scalable and reliable distributed data replication strategies
- Mentor and provide architecture and design direction to onsite/offshore developers
- Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases
- Perform analysis of large data sets using components from the Hadoop ecosystem
- Own product features from development and testing through to production deployment
- Evaluate big data technologies and prototype solutions to improve our data processing architecture
Experience required: 5-10 years