Job Seekers, please send resumes to resumes@hireitpeople.com
Detailed Job Description:
- Spark Engineer (focus on Spark and Scala)
- Solid Scala programming skills
- Strong grasp of memory management
- Familiarity with data transfer
- AWS experience is preferred
- Big Data: Hadoop, Spark, PySpark
- Hands-on programming: Java, Scala, Python
- AWS Cloud: S3, EFS, MSK, ECS, EMR, Lambda
- Containerization and microservices
- Distributed computing constructs: joins, MapReduce
- RDBMS: MySQL, Aurora; and NoSQL
- Kafka Streaming
- Data Storage Architecture
- Data format experience: Parquet, CSV, etc.
- Data transformation constructs: partitioning, shuffling
- Agile Experience a plus
- Build data pipelines and data stores
- Azure, GCP knowledge is a plus
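To illustrate the distributed-computing constructs named above (MapReduce, partitioning, shuffling), here is a minimal single-process sketch in Python. It is purely illustrative, not part of the job requirements: all function names are hypothetical, and a real Spark job would express the same idea with `flatMap`, hash partitioning, and `reduceByKey`.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) pairs, analogous to flatMap + map in Spark
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs, num_partitions=4):
    # Shuffle: hash-partition by key so each partition holds
    # every value for the keys assigned to it
    partitions = [defaultdict(list) for _ in range(num_partitions)]
    for key, value in pairs:
        partitions[hash(key) % num_partitions][key].append(value)
    return partitions

def reduce_phase(partitions):
    # Reduce: aggregate values per key within each partition,
    # analogous to reduceByKey
    counts = {}
    for part in partitions:
        for key, values in part.items():
            counts[key] = sum(values)
    return counts

lines = ["spark and scala", "spark streaming"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["spark"])  # 2
```

In a real cluster each partition would live on a different executor and the shuffle would move data across the network; the three-phase structure is the same.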