The Scala object's main method should take 6 arguments, as below.
val awsAccessKeyId: String = args(0)
val awsSecretAccessKey: String = args(1)
val csvFilePath: String = args(2)
val host: String = args(3)
val username: String = args(4)
val password: String = args(5)
* Based on the args, the job should read the CSV file from S3 and load it into Cassandra.
* You can use any CSV file, say with 2 fields; the Cassandra table has 3 columns.
* The complete project code, dependency JARs, tools used, and all versions are required.
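As a starting point, the brief above can be sketched as a small Spark job. This is a hedged sketch, not a deliverable: the keyspace/table names (`demo`, `people`), the CSV header (`id,name`), and the third column (`loaded_at`) are all placeholder assumptions, since the brief leaves them open. It assumes Spark 3.x with `hadoop-aws` (for the `s3a://` scheme) and the DataStax `spark-cassandra-connector` on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.current_timestamp

// Hypothetical entry point matching the six-argument contract in the brief.
object Main {
  def main(args: Array[String]): Unit = {
    require(args.length == 6, "expected 6 arguments")
    val awsAccessKeyId     = args(0)
    val awsSecretAccessKey = args(1)
    val csvFilePath        = args(2) // e.g. an s3a:// path to the CSV
    val host               = args(3) // Cassandra contact point
    val username           = args(4)
    val password           = args(5)

    val spark = SparkSession.builder()
      .appName("S3CsvToCassandra")
      // S3 credentials for the s3a filesystem (hadoop-aws).
      .config("spark.hadoop.fs.s3a.access.key", awsAccessKeyId)
      .config("spark.hadoop.fs.s3a.secret.key", awsSecretAccessKey)
      // Cassandra connection settings for the Spark Cassandra Connector.
      .config("spark.cassandra.connection.host", host)
      .config("spark.cassandra.auth.username", username)
      .config("spark.cassandra.auth.password", password)
      .getOrCreate()

    // Read a two-field CSV; a header row "id,name" is assumed.
    val df = spark.read
      .option("header", "true")
      .csv(csvFilePath)

    // The brief's Cassandra table has three columns, so add a third
    // (a load timestamp) before writing; this choice is an assumption.
    val out = df.withColumn("loaded_at", current_timestamp())

    // Write via the Spark Cassandra Connector data source; keyspace and
    // table names here are placeholders.
    out.write
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "demo", "table" -> "people"))
      .mode("append")
      .save()

    spark.stop()
  }
}
```

A plausible (but assumed) dependency set for sbt would be `spark-sql`, `hadoop-aws`, and `spark-cassandra-connector`, pinned to mutually compatible versions; the exact versions would be part of the deliverable the brief asks for.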
20 freelancers are bidding an average of $186 for this job
Dear Customer, my name is Yuriy Tumakha. I am interested in your project. I am a senior Scala/Java developer with 14 years of experience. You can see my code examples on GitHub: [login to view URL]
Hi, I have more than 5 years of experience with Hadoop-ecosystem tools such as HDFS, MapReduce, Hive, and Spark. I can complete your project. Please contact me for more details.
Hi, I am a data scientist who has worked in machine learning for the past 3 years. I have done many projects, such as recommendation systems, anomaly detection, and fraud detection. I can do your task in Python very efficiently.
Hi, I have over 5 years of experience with Scala, Java, Spark, Storm, and HDFS. I have built many live-streaming and batch-mode projects. I can complete this work in 5 days. Thanks, Devesh Kumar
Hi, I am interested in this project. I have more than 5 years of experience with big-data tools and technologies, and I have already implemented Spark/Scala pipelines on AWS for my company.
I have been working with the Hadoop/big-data stack for more than 3 years. It would be my pleasure to work on your project. Please contact me for further details so I can start working on this.