Minimum 4 years of experience in Hadoop.
The primary technologies are Spark, Spark Streaming, Spark SQL, and Kafka.
Our project mainly deals with real-time data processing using Kafka with Spark.
We currently use Vertica and HDFS for data storage and are migrating to AWS S3, so at least 1 year of AWS experience is required.
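To illustrate the kind of pipeline described above, here is a minimal Scala sketch of a Spark Structured Streaming job that reads from Kafka and writes to S3. The broker address, topic name, and bucket paths are hypothetical placeholders, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToS3 {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("KafkaToS3")
      .getOrCreate()

    // Read a stream of events from Kafka (broker and topic are hypothetical)
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Continuously write the stream to S3 as Parquet (paths are hypothetical);
    // the checkpoint location lets the job recover after a restart
    events.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/events/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
      .start()
      .awaitTermination()
  }
}
```

A job like this would be submitted with `spark-submit` against a cluster configured with the Kafka connector and S3A credentials.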
All coding is in Scala, so Scala is the main requirement. Knowledge of the Akka framework (Akka actors) and of Scala traits and classes is required.
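As a quick reference for the Scala skills listed above, this is a minimal sketch combining a trait mixed into a classic Akka actor; the actor, message type, and names are hypothetical examples, not part of the project.

```scala
import akka.actor.{Actor, ActorSystem, Props}

// A Scala trait mixing reusable logging behaviour into classes
trait Logging {
  def log(msg: String): Unit = println(s"[${getClass.getSimpleName}] $msg")
}

// A simple message type (hypothetical) modelled as a case class
final case class Record(id: Long, payload: String)

// A classic Akka actor that mixes in the Logging trait
class RecordProcessor extends Actor with Logging {
  def receive: Receive = {
    case Record(id, payload) =>
      log(s"processing record $id: $payload")
  }
}

object Main extends App {
  val system = ActorSystem("processing")
  val processor = system.actorOf(Props(new RecordProcessor), "processor")
  processor ! Record(1L, "hello")
}
```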
Good to have: knowledge of setting up CI/CD pipelines through Jenkins.
13 freelancers are bidding on average $548 for this job
Hi, it seems the requirement was created just for me. I have worked with Spark and Kafka streaming in Scala, Java, and Python (PySpark, pykafka). Additionally, I have also worked with HP Vertica. Thanks!
Hi, I have about 5 years of working experience in big data, mainly coding in Scala, and hands-on experience with AWS technologies like S3. I would love to work on this project. Thanks!