Hadoop HBase jobs

    2,000 Hadoop HBase jobs found, pricing in INR

    We are searching for an accountable, multitalented data engineer to facilitate the operations of our data scientists. The data engineer will be responsible for employing ...technological advancements that will improve the quality of your outputs. Data Engineer Requirements: Bachelor's degree in data engineering, big data analytics, computer engineering, or a related field; a Master's degree in a relevant field is advantageous. Proven experience as a data engineer, software developer, or similar. Expert proficiency in Python, C++, Java, R, and SQL. Familiarity with Hadoop or a suitable equivalent. Excellent analytical and problem-solving skills. A knack for both independent and group work. A scrupulous approach to duties. Capacity to successfully manage a pipeline of duties with ...

    ₹3606 / hr (Avg Bid)
    18 bids

    ...has experience in writing on topics like AWS Azure GCP DigitalOcean Heroku Alibaba Linux Unix Windows Server (Active Directory) MySQL PostgreSQL SQL Server Oracle MongoDB Apache Cassandra Couchbase Neo4J DynamoDB Amazon Redshift Azure Synapse Google BigQuery Snowflake SQL Data Modelling ETL tools (Informatica, SSIS, Talend, Azure Data Factory, etc.) Data Pipelines Hadoop framework services (e.g. HDFS, Sqoop, Pig, Hive, Impala, Hbase, Flume, Zookeeper, etc.) Spark (EMR, Databricks etc.) Tableau PowerBI Artificial Intelligence Machine Learning Natural Language Processing Python C++ C# Java Ruby Golang Node.js JavaScript .NET Swift Android Shell scripting Powershell HTML5 AngularJS ReactJS VueJS Django Flask Git CI/CD (Jenkins, Bamboo, TeamCity, Octopus Deploy) Puppet/Ansible...

    ₹2836 (Avg Bid)
    23 bids

    We are a leading training center, Ni Analytics India, looking for an experienced Data Engineer to train our students in online live classes on weekdays/weekends. The ideal candidate should have 4 to 8 years of data engineering work experience with Big Data Hadoop, Spark, PySpark, Kafka, Azure, etc. We request interested candidates within our budget to respond, as we get regular enquiries from individuals and corporate firms. This is an urgent requirement; kindly respond quickly. Thank you

    ₹30611 (Avg Bid)
    4 bids

    ...disk volume of a powered-down VM, causing a vdfs missing-file error. We need to figure out how to recover the missing volume, if at all possible. There should also be an old backup of the VM in case we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2-3. Move VM backups/copies from the 4 existing VMs to a new 4 TB HDD (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster, so the VMs can have their disk partitions safely expanded. They currently share HDDs, so they are limited in size. 4. On those 4 existing VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM, for 4x 4 TB HDDs, one mounted to each VM. There should currently be 4 partitions per ...

    ₹6923 (Avg Bid)
    9 bids

    ...volume of a powered-down VM; obviously that did not end well. We need to figure out how to recover the missing volume, if at all possible. There should also be an old backup of the VM in case we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2-3. Move VM backups/copies from the 4 existing VMs to a new 4 TB HDD (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster, so the VMs can have their disk partitions safely expanded. They currently share HDDs, so they are limited in size. 4. On those 4 existing VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM, for 4x 4 TB HDDs, one mounted to each VM. There should currently be 4 partitions pe...

    ₹2669 / hr (Avg Bid)
    5 bids

    ...volume of a powered-down VM; obviously that did not end well. We need to figure out how to recover the missing volume, if at all possible. There should also be an old backup of the VM in case we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2-3. Move VM backups/copies from the 4 existing VMs to a new 4 TB HDD (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster, so the VMs can have their disk partitions safely expanded. They currently share HDDs, so they are limited in size. 4. On those 4 existing VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM, for 4x 4 TB HDDs, one mounted to each VM. There should currently be 4 partitions pe...

    ₹7090 (Avg Bid)
    3 bids

    ...volume of a powered-down VM; obviously that did not end well. We need to figure out how to recover the missing volume, if at all possible. There should also be an old backup of the VM in case we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2-3. Move VM backups/copies from the 4 existing VMs to a new 4 TB HDD (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster, so the VMs can have their disk partitions safely expanded. They currently share HDDs, so they are limited in size. 4. On those 4 existing VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM, for 4x 4 TB HDDs, one mounted to each VM. There should currently be 4 partitions pe...

    ₹1835 (Avg Bid)
    2 bids

    Need a Java expert with experience in Distributed Systems for Information Systems Management. The work will involve the use of MapReduce and Spark, plus Linux and Unix commands. Part 1: Execute a MapReduce job on the cluster of machines; this requires use of Hadoop classes. Part 2: Write a Java program that uses Spark to read The Tempest and perform various calculations. The name of the program is TempestAnalytics.java. I will share full details in chat; make your bids.

    ₹55484 (Avg Bid)
    7 bids
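The Tempest assignment above does not specify which calculations are required, so as an illustration only, here is a pure-Python sketch of the kind of text statistics such a Spark program typically computes (the function and variable names are invented; a real solution would use Spark's textFile/flatMap/reduceByKey from Java):

```python
import re

def tempest_stats(lines):
    # Count lines, words, and distinct words, mirroring the usual
    # first steps of a Spark text-analysis assignment.
    words = [w for line in lines for w in re.findall(r"[a-z']+", line.lower())]
    return {"lines": len(lines),
            "words": len(words),
            "distinct_words": len(set(words))}

sample = ["Full fathom five thy father lies,",
          "Of his bones are coral made;"]
stats = tempest_stats(sample)
# stats == {"lines": 2, "words": 12, "distinct_words": 12}
```

Each of these aggregations maps one-to-one onto a Spark action (`count`, a `flatMap` plus `count`, and a `flatMap` plus `distinct().count()` respectively).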

    Need a Java expert with experience in Distributed Systems for Information Systems Management. The work will involve the use of MapReduce and Spark, plus Linux and Unix commands. Part 1: Execute a MapReduce job on the cluster of machines; this requires use of Hadoop classes. Part 2: Write a Java program that uses Spark to read The Tempest and perform various calculations. The name of the program is TempestAnalytics.java. I will share full details in chat; make your bids.

    ₹73734 (Avg Bid)
    6 bids

    Digital Analyst: Job Responsibilities: The Analyst will work with lead analysts to deliver analytics by a. Building analytics products to deliver automated, scaled insights in a self-serve manner (on the PBI/Tableau platform) b. Assisting with complex data pulls and data manipulation to develop analytics dashboards or conduc...understanding of digital and data analytics • Excellent written, oral, and communication skills • Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy • Keen eye for UI on PBI/Tableau – can recommend designs independently • Can handle complicated data transformations on DBs & Big Data (Hadoop) • Familiar...

    ₹1001 (Avg Bid)
    2 bids

    A mini project, with report and source code, on any topic in Hive and Hadoop.

    ₹5422 (Avg Bid)
    3 bids

    Job Responsibilities: The Analyst will work with lead analysts to deliver analytics by a. Building analytics products to deliver automated, scaled insights in a self-serve manner (on the PBI/Tableau platform) b. Assisting with complex data pulls and data manipulation to develop analytics dashboards or conduct analytics deep di...understanding of digital and data analytics • Excellent written, oral, and communication skills • Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy • Keen eye for UI on PBI/Tableau – can recommend designs independently • Can handle complicated data transformations on DBs & Big Data (Hadoop) • Familiarit...

    ₹2335 (Avg Bid)
    6 bids

    Hadoop EMR setup and data migration from Azure to AWS

    ₹1585 / hr (Avg Bid)
    11 bids

    Looking for a person who can help me install Hadoop

    ₹417 / hr (Avg Bid)
    2 bids

    .../ Define the problem. Create tables with constraints. Design a schema based on the tables and explain the schema. Create primary keys and foreign keys. Create procedures. Create functions. Create views. Create indexes. Use the following clauses, for example: ORDER BY, BETWEEN, GROUP BY, HAVING, AND, OR, WITH. Use aggregate functions. Use nested queries and scalar subqueries. Part 2 has to be done in HBase: Create 4 tables with column families and columns. Column families: 5 per table; make sure they have different parameters (e.g. VERSIONS). Minimum 4 columns in each column family. Insert records. Delete records. Perform basic queries like in assignment 1. Try to extract data using a timestamp. Insert partial data in a row. Describe a table. Check table status – enabled or disable...

    ₹12094 (Avg Bid)
    33 bids
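The HBase part of the assignment above revolves around the row → column family → column → versioned-cell data model. As a hedged illustration of that model only (this is a toy in-memory class, not the HBase client API; real work would use the HBase shell or a client library), here is a sketch of how column families, per-family version limits, and timestamped reads fit together:

```python
import time
from collections import defaultdict

class ToyHBaseTable:
    """In-memory sketch of HBase's row -> column family -> column -> versioned cells."""
    def __init__(self, families):
        # families: {"cf_name": {"max_versions": N}, ...} - fixed at table creation,
        # just as column families are in HBase.
        self.families = families
        # rows[row][cf][col] = list of (timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))

    def put(self, row, cf, col, value, ts=None):
        if cf not in self.families:
            raise KeyError(f"unknown column family {cf!r}")
        cells = self.rows[row][cf][col]
        cells.insert(0, (ts if ts is not None else time.time(), value))
        # Keep only the newest max_versions cells, like HBase's VERSIONS setting.
        del cells[self.families[cf].get("max_versions", 1):]

    def get(self, row, cf, col, ts=None):
        # With ts=None return the newest cell; otherwise the newest cell at or before ts.
        for cell_ts, value in self.rows[row][cf][col]:
            if ts is None or cell_ts <= ts:
                return value
        return None

t = ToyHBaseTable({"info": {"max_versions": 2}})
t.put("r1", "info", "name", "old", ts=1)
t.put("r1", "info", "name", "new", ts=2)
# t.get("r1", "info", "name") == "new"; t.get(..., ts=1) == "old"
```

The timestamped `get` mirrors the "extract data using timestamp" requirement, and `max_versions` mirrors the per-family VERSIONS parameter the assignment asks to vary.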

    .../ Define the problem. Create tables with constraints. Design a schema based on the tables and explain the schema. Create primary keys and foreign keys. Create procedures. Create functions. Create views. Create indexes. Use the following clauses, for example: ORDER BY, BETWEEN, GROUP BY, HAVING, AND, OR, WITH. Use aggregate functions. Use nested queries and scalar subqueries. Part 2 has to be done in HBase: Create 4 tables with column families and columns. Column families: 5 per table; make sure they have different parameters (e.g. VERSIONS). Minimum 4 columns in each column family. Insert records. Delete records. Perform basic queries like in assignment 1. Try to extract data using a timestamp. Insert partial data in a row. Describe a table. Check table status – enabled or disable...

    ₹3753 (Avg Bid)
    10 bids

    Linux + Hadoop cloud migration: Azure data and on-prem data (Cloudera Hadoop) to AWS. Cloudera, Azure, AWS, DevOps. Database migration from on-prem to AWS.

    ₹1585 / hr (Avg Bid)
    10 bids

    ※ Please see the attachment and offer your price quote, with questions [price and time are negotiable]. ※ We will need your help from the end of Dec through Jan 2023. 1) Manual: Create a development and installation manual as an overall service-implementation guideline using the HDFS – Impala API. > All details must be provided: commands/options/setting files/config, etc. > We will use your manual to create our own HDFS-based solution. > Additional two to four weeks of take-over time [we may ask questions when the process does not work as described in the manual]. 2) Consulting: Provide solutions for the heavy-load section (data insert delay) when data is inserted through HDFS. > Data should be processed in 3 minutes, but sometimes it takes more time. > Solutions for how we can remove or de...

    ₹83326 (Avg Bid)
    7 bids

    Hadoop, Linux, Ansible, and cloud skills required, plus good communication skills.

    ₹584 / hr (Avg Bid)
    1 bid

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank you

    ₹8341 (Avg Bid)
    4 bids
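The MapReduce model described in this and the similar postings below can be sketched in a few lines: map each input record to key-value pairs, shuffle the pairs by key, then reduce each key's group of values. A single-process Python simulation for illustration (real Hadoop distributes these phases across machines; the function names here are invented):

```python
from collections import defaultdict

def run_mapreduce(inputs, mapper, reducer):
    # Map phase: each input record yields zero or more (key, value) pairs.
    pairs = [kv for record in inputs for kv in mapper(record)]
    # Shuffle phase: group all values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce phase: one reducer call per distinct key.
    return {key: reducer(key, values) for key, values in groups.items()}

def mapper(line):
    # The canonical word-count mapper: emit (word, 1) per word.
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    return sum(counts)

result = run_mapreduce(["to be or not to be"], mapper, reducer)
# result == {"to": 2, "be": 2, "or": 1, "not": 1}
```

In Hadoop the same mapper and reducer would run as Hadoop classes (or streaming scripts) across many nodes, which is where the speedup over a single computer comes from.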

    Need Big Data and Hadoop tools, some of them like Spark SQL, Hadoop, Hive, Databricks, and data lakes.

    ₹2502 (Avg Bid)
    6 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank you

    ₹8758 (Avg Bid)
    5 bids

    Require a developer who has 2 to 3 years of good experience in DevOps support, which includes Hadoop services, Windows, Linux, and Ansible, with a little cloud exposure.

    ₹667 / hr (Avg Bid)
    7 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank you

    ₹10259 (Avg Bid)
    4 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank you

    ₹11677 (Avg Bid)
    6 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank you

    ₹8091 (Avg Bid)
    3 bids

    The objective of this assignment is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time.

    ₹10009 (Avg Bid)
    16 bids

    1. Implement the straggler solution using the approach below: a) Develop a method to detect slow tasks (stragglers) in the Hadoop MapReduce framework using the Progress Score (PS), Progress Rate (PR), and Remaining Time (RT) metrics. b) Develop a method of selecting idle nodes to replicate detected slow tasks, using the CPU time and Memory Status (MS) of the idle nodes. c) Develop a method for scheduling the slow tasks to appropriate idle nodes, using the CPU time and Memory Status of the idle nodes. 2. A good report on the implementation, with graphics. 3. A recorded execution process. Use any certified data to test the efficiency of the methods.

    ₹15514 (Avg Bid)
    Urgent
    11 bids
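The PS/PR/RT metrics named in point 1a are conventionally defined (as in the speculative-execution literature around the LATE scheduler) as progress rate PR = PS / elapsed time and estimated remaining time RT = (1 - PS) / PR. A minimal sketch under those assumptions; the relative-rate threshold heuristic below is invented for illustration and is not the detection method the posting asks to develop:

```python
def progress_rate(progress_score, elapsed_seconds):
    # PR = PS / T: fraction of the task completed per second.
    return progress_score / elapsed_seconds

def remaining_time(progress_score, elapsed_seconds):
    # RT = (1 - PS) / PR: estimated seconds until the task finishes,
    # assuming it keeps its observed rate.
    return (1.0 - progress_score) / progress_rate(progress_score, elapsed_seconds)

def stragglers(tasks, threshold=0.25):
    # Flag tasks whose progress rate is below `threshold` times the
    # fastest observed rate (illustrative heuristic only).
    rates = {tid: progress_rate(ps, t) for tid, (ps, t) in tasks.items()}
    cutoff = max(rates.values()) * threshold
    return sorted(tid for tid, r in rates.items() if r < cutoff)

tasks = {"t1": (0.9, 60), "t2": (0.1, 60), "t3": (0.8, 60)}
# t2 progresses at 0.1/60 per second, far below 25% of t1's 0.9/60 -> straggler
```

A full solution would additionally weigh CPU time and Memory Status when choosing which idle node receives the speculative copy, as points 1b and 1c require.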

    Stack: Data Engineering. 1. AWS 2. Spark / Hadoop 3. Python 4. Terraform

    ₹1084 / hr (Avg Bid)
    3 bids

    I have an input text file and a mapper and reducer file which output the total count of each word in the text file. I would like the mapper and reducer to output only the top 20 words (and their counts) with the highest counts. I want to be able to run them in Hadoop.

    ₹11528 (Avg Bid)
    12 bids
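For the top-20 requirement above, the usual approach is to keep the existing word-count mapper/reducer and add a final top-N step over their output (in Hadoop Streaming that typically means running the second job with a single reducer, e.g. via `-numReduceTasks 1`, so one process sees all counts). A hedged local sketch of that final step, assuming the standard tab-separated `word<TAB>count` output format:

```python
import heapq

def top_n_words(count_lines, n=20):
    # Take "word<TAB>count" lines (typical streaming word-count output)
    # and keep only the n words with the highest counts.
    counts = {}
    for line in count_lines:
        word, count = line.rsplit("\t", 1)
        counts[word] = counts.get(word, 0) + int(count)
    # nlargest returns (word, count) pairs sorted by count, descending.
    return heapq.nlargest(n, counts.items(), key=lambda kv: kv[1])

lines = ["the\t12", "hadoop\t5", "a\t9", "mapreduce\t3"]
# top_n_words(lines, n=2) == [("the", 12), ("a", 9)]
```

The same logic can be written as a streaming reducer script; the `heapq.nlargest` call keeps memory bounded even for large vocabularies.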

    I need help from a freelancer with strong knowledge of StreamSets Data Collector and/or Flink. Needed: a freelancer with experience in Flink, Hadoop, and StreamSets Data Collector for about 10 hours of consultation. 1. Using StreamSets, I want to extract data from a DB and generate aggregation files every 15 minutes, ensuring there is no missing data between intervals while the query is running. 2. Besides that, I am looking at Flink options to extract data from Kafka using tumbling aggregation windows.

    ₹918 / hr (Avg Bid)
    2 bids
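The 15-minute tumbling windows described above can be modeled by flooring each event timestamp to its window boundary, which is conceptually what both Flink's tumbling windows and a StreamSets interval aggregation do. A minimal sketch under that assumption (event format and function name are invented for illustration):

```python
from collections import defaultdict

WINDOW = 15 * 60  # 15 minutes, in seconds

def tumbling_sums(events):
    # Assign each (epoch_seconds, value) event to its 15-minute tumbling
    # window and sum the values per window. Windows do not overlap, so
    # every event lands in exactly one bucket - no double counting and,
    # given complete input, no missing data between intervals.
    sums = defaultdict(float)
    for ts, value in events:
        window_start = ts - (ts % WINDOW)  # floor to the window boundary
        sums[window_start] += value
    return dict(sums)

events = [(0, 1.0), (899, 2.0), (900, 5.0)]  # 899 s is still in the first window
# tumbling_sums(events) == {0: 3.0, 900: 5.0}
```

In Flink the equivalent would be a keyed stream with a 15-minute tumbling event-time window plus a sum aggregation; watermarks handle the "no missing data among intervals" concern for late-arriving records.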

    I need help from a freelancer with strong knowledge of StreamSets Data Collector and/or Flink. Needed: a freelancer with experience in Flink, Hadoop, and StreamSets Data Collector for about 10 hours of consultation. 1. Using StreamSets, I want to extract data from a DB and generate aggregation files every 15 minutes, ensuring there is no missing data between intervals while the query is running. 2. Besides that, I am looking at Flink options to extract data from Kafka using tumbling aggregation windows.

    ₹1418 / hr (Avg Bid)
    2 bids

    I need help from a freelancer with strong knowledge of StreamSets Data Collector and/or Flink. Needed: a freelancer with experience in Flink, Hadoop, and StreamSets Data Collector for about 10 hours of consultation. 1. Using StreamSets, I want to extract data from a DB and generate aggregation files every 15 minutes, ensuring there is no missing data between intervals while the query is running. 2. Besides that, I am looking at Flink options to extract data from Kafka using tumbling aggregation windows. Please contact me ASAP. Thanks, David

    ₹1501 / hr (Avg Bid)
    21 bids

    I have some problems to be completed using Hadoop

    ₹1001 (Avg Bid)
    1 bid

    Hi, we are looking for a person experienced in Hadoop to give job support by connecting remotely and taking mouse control, for an Indian guy living in the US. USD $300/month, 2 hrs/day, 5 days/week. Timings: any time after 7 P.M. IST works, or any 2 hrs before 10 A.M. IST.

    ₹20852 (Avg Bid)
    1 bid

    Someone who has experience with Spark, Hadoop, Hive, and Kafka processing with Azure

    ₹34115 (Avg Bid)
    15 bids

    Someone who has experience with Spark, Hadoop, Hive, and Kafka processing with Azure

    ₹10760 (Avg Bid)
    8 bids

    ...ORDER BY AVG(d_year) Consider a Hadoop job that processes an input data file of a size equal to 179 disk blocks (179 different blocks, not counting the HDFS replication factor). The mapper in this job requires 1 minute to read and fully process a single block of data. The reducer requires 1 second (not minute) to produce an answer for one key's worth of values, and there are a total of 3000 distinct keys (mappers generate many more key-value pairs, but keys only occur in the 1-3000 range, for a total of 3000 unique entries). Assume that each node has a reducer and that the keys are distributed evenly. The total cost will consist of the time to perform the Map phase plus the cost to perform the Reduce phase. How long will it take to complete the job if you only had one Hadoop worker n...

    ₹16682 (Avg Bid)
    1 bid
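The single-worker case of the exercise above works out directly, assuming the Map and Reduce phases run serially and do not overlap:

```python
BLOCKS = 179             # input file size in disk blocks
MAP_SEC_PER_BLOCK = 60   # mapper: 1 minute per block
KEYS = 3000              # distinct reduce keys
REDUCE_SEC_PER_KEY = 1   # reducer: 1 second per key

# With one worker, every block and then every key is processed serially.
map_seconds = BLOCKS * MAP_SEC_PER_BLOCK    # 179 * 60 = 10740 s (179 min)
reduce_seconds = KEYS * REDUCE_SEC_PER_KEY  # 3000 s (50 min)
total_minutes = (map_seconds + reduce_seconds) / 60
# total_minutes == 229.0
```

With more workers the same arithmetic applies per node: map time becomes ceil(179 / nodes) minutes and reduce time 3000 / nodes seconds, given the evenly-distributed keys the question stipulates.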

    I need someone to solve the attached questions. They're about MapReduce, Hadoop, and Pig, and require Python skills as well. I attached an example of some expected solutions.

    ₹1668 (Avg Bid)
    10 bids

    I can successfully run the MapReduce job on the server. But when I want to submit this job as a YARN remote client in Java (via the YARN REST API), I get the following error. I want to submit this job successfully via the remote client (YARN REST API).

    ₹1001 (Avg Bid)
    3 bids

    Looking for a Python and Scala expert. The candidate should have knowledge of Big Data domains such as Hadoop, Spark, Hive, etc. Knowledge of Azure Cloud is a plus. Share your CV.

    ₹59304 (Avg Bid)
    8 bids

    Block matrix addition should be done using MapReduce.

    ₹5005 (Avg Bid)
    2 bids
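Block matrix addition maps cleanly onto MapReduce because addition is element-wise: the map step emits (block_index, block) pairs from each input matrix, the shuffle groups blocks that share an index, and the reduce step sums each group element-wise. A single-process sketch of that decomposition (the representation of a matrix as a dict of index → block is an assumption made here for illustration):

```python
from collections import defaultdict

def add_block_matrices(*matrices):
    # "Map" + shuffle: emit (block_index, block) from every matrix and
    # group blocks by index.
    groups = defaultdict(list)
    for matrix in matrices:
        for index, block in matrix.items():
            groups[index].append(block)

    def add_blocks(blocks):
        # "Reduce": element-wise sum of the blocks sharing one index.
        return [[sum(vals) for vals in zip(*rows)] for rows in zip(*blocks)]

    return {index: add_blocks(blocks) for index, blocks in groups.items()}

A = {(0, 0): [[1, 2], [3, 4]]}
B = {(0, 0): [[10, 20], [30, 40]]}
# add_block_matrices(A, B)[(0, 0)] == [[11, 22], [33, 44]]
```

Because each block index is an independent reduce key, the work parallelizes over blocks with no communication between reducers, which is exactly why addition is a textbook MapReduce exercise.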

    Block matrix addition should be done using MapReduce.

    ₹15848 (Avg Bid)
    11 bids

    Current technology stack: Apache Hadoop 3.1 cluster in production. Urgent deployment of a resource who has experience in Azure Data Lake migration, Hadoop, Kafka, and NiFi. Set up the minimum required services, set up the data lake on Azure, then migrate the sample data that the customer will provide. We're looking for a Hadoop developer for a three-month contract role. It's purely work from home with flexible timings. Please get back to us if you're interested. The job description is given below. 1. Current technology stack: Apache Hadoop 3.1 cluster in production. 2. Urgent deployment of a resource who has experience in Azure Data Lake migration, Hadoop, Kafka, and NiFi. 3. Set up the minimum r...

    ₹1084 / hr (Avg Bid)
    4 bids

    Job Description: Identify valuable data sources and automate collection processes Undertake preprocessing of structured and unstructured data Analyze large amounts of information to discover trends and patterns Build predictive models a...Data Science or other quantitative field is preferred. 3-5 Years of proven experience as a Data Scientist. Experience in DataRobot or any similar tool Experience in data mining Understanding of machine-learning and operations research Knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop) Analytical mind and business acumen Strong math skills (e.g. statistics, algebra) Problem-solving aptitude Excellent communication and presentat...

    ₹1835 / hr (Avg Bid)
    21 bids

    Programming language: Python. Hands-on experience with Spark. Hands-on experience with the Hadoop ecosystem: Hive, Sqoop, SQL queries, Unix. Cloud experience on Cloudera or AWS. Oozie workflows. Experienced in creating CI/CD pipelines. Unit/JUnit testing, integration or end-to-end testing. Kafka. Tools to be familiar with: Bitbucket, Tectia (edge node), SQL Developer, Oozie, Git, Jenkins.

    ₹644 / hr (Avg Bid)
    6 bids

    Programming languages: Scala is a must, Python as well. Hands-on experience with Spark. Hands-on experience with the Hadoop ecosystem: Hive, Sqoop, SQL queries, Unix. Cloud experience on Cloudera or AWS. Oozie workflows. Experienced in creating CI/CD pipelines. Unit/JUnit testing, integration or end-to-end testing. Kafka. Tools to be familiar with: Bitbucket, Tectia (edge node), SQL Developer, Oozie, Git, Jenkins, SonarQube.

    ₹584 / hr (Avg Bid)
    7 bids

    I want help with a Hadoop-based Big Data project.

    ₹2169 (Avg Bid)
    3 bids

    Programming languages: Scala is a must, Python as well. Hands-on experience with Spark. Hands-on experience with the Hadoop ecosystem: Hive, Sqoop, SQL queries, Unix. Cloud experience on Cloudera or AWS. Oozie workflows. Experienced in creating CI/CD pipelines. Unit/JUnit testing, integration or end-to-end testing. Kafka. Tools to be familiar with: Bitbucket, Tectia (edge node), SQL Developer, Oozie, Git, Jenkins, SonarQube.

    ₹334 / hr (Avg Bid)
    2 bids