How user testing can make your product great
Get your product into the hands of test users and you'll walk away with valuable insights that could make the difference between success and failure.
Data Mining is the process of collecting and analysing data sets to identify previously unknown patterns, correlations and trends. Using techniques drawn from statistics, probability and algorithm design, data miners leverage data collection technology to extract interpretable insights from unstructured data. Data Mining also provides predictive analytics capabilities, which are used across industries such as market research, customer relationship management and risk management.
Data Mining experts with experience in market research, customer relations, analytics and machine learning are in high demand right now. They are able to quickly discover the hidden “gold” in large sets of data and help companies extract richer insights which they can use to make more informed decisions.
Here are some projects that our expert Data Mining Experts have made real:
Data Mining helps clients unlock their full potential by giving them access to powerful insights that would otherwise be inaccessible. By hiring a Data Mining Expert on Freelancer.com, clients can get projects done that suit their individual needs. We invite you to post your project today on Freelancer.com and hire an expert Data Mining Expert to help you with yours.
From 167,126 reviews, clients rate our Data Mining Experts 4.9 out of 5 stars.
I need a complete machine-learning pipeline that can look at medical images—specifically plain-film X-rays—and tell me whether each study is of the chest, abdomen, or an extremity. All input files will be standard hospital exports (mostly DICOM, occasionally PNG/JPEG), so the model must handle typical variations in resolution and contrast. What I’m after is a reproducible, well-documented solution: data preparation, augmentation, model architecture (a CNN in TensorFlow, Keras, or PyTorch is fine), training, and evaluation. Please include class-balanced splits, explain any preprocessing you apply, and show the metrics you achieve on an unseen validation set. Deliverables • Python code with clear comments for preprocessing, training, and inference • Trained ...
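One concrete piece of the brief above is the "class-balanced splits" requirement. A minimal sketch, assuming labels are the three classes named in the post and using invented file names and an assumed 80/20 split ratio:

```python
# Stratified (class-balanced) train/validation split: each class keeps
# the same train/val ratio. File names, labels, and the 80/20 ratio are
# illustrative assumptions, not taken from the original post.
import random
from collections import defaultdict

def stratified_split(items, labels, val_frac=0.2, seed=0):
    """Split (item, label) pairs so every class contributes the same
    fraction of its examples to the validation set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for item, label in zip(items, labels):
        by_class[label].append(item)
    train, val = [], []
    for label, group in by_class.items():
        rng.shuffle(group)
        n_val = max(1, round(len(group) * val_frac))
        val += [(i, label) for i in group[:n_val]]
        train += [(i, label) for i in group[n_val:]]
    return train, val

files = [f"study_{i}.dcm" for i in range(100)]
labels = ["chest"] * 50 + ["abdomen"] * 30 + ["extremity"] * 20
train, val = stratified_split(files, labels)
print(len(train), len(val))  # 80 20, with all three classes in both sets
```

A plain random split on imbalanced data can leave a rare class almost absent from validation; stratifying first avoids that, which is presumably why the post asks for it.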
I’m looking for a data engineer who can take full ownership of a daily web-scraping workflow aimed at ongoing market research. The job centers on extracting selected data points from public web pages, transforming them into a clean, structured format, and making them available for analysis every 24 hours. Here’s what I need you to handle from end to end: • Source acquisition – fetch HTML from the URLs I provide, even when content is hidden behind JavaScript (a headless browser such as Playwright or Selenium is fine). • Parsing & cleansing – pull the specific fields I’ll list (product name, price, SKU, availability, and a time-stamp), remove duplicates, and standardize values. • Storage & delivery – load the daily output into ...
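The "parsing & cleansing" and delivery steps described above can be sketched roughly as follows. The field names (product name, price, SKU, availability, timestamp) come from the brief; the raw-record shape, price format, and dedupe key are assumptions for illustration:

```python
# Dedupe on SKU, normalise prices, stamp each row, and emit CSV.
# The input dicts and their formats are invented sample data.
import csv
import io
from datetime import datetime, timezone

def cleanse(raw_rows):
    """Drop duplicate SKUs and standardise field values."""
    seen, out = set(), []
    for row in raw_rows:
        sku = row["sku"].strip().upper()
        if sku in seen:
            continue  # duplicate listing: keep only the first occurrence
        seen.add(sku)
        out.append({
            "product_name": row["name"].strip(),
            "price": float(row["price"].replace("$", "").replace(",", "")),
            "sku": sku,
            "availability": row["availability"].strip().lower(),
            "scraped_at": datetime.now(timezone.utc).isoformat(),
        })
    return out

raw = [
    {"name": "Widget A", "price": "$1,299.00", "sku": "wa-1", "availability": "In Stock "},
    {"name": "Widget A", "price": "$1,299.00", "sku": "WA-1", "availability": "In Stock"},
]
clean = cleanse(raw)

buf = io.StringIO()  # stand-in for the daily output file
writer = csv.DictWriter(buf, fieldnames=clean[0].keys())
writer.writeheader()
writer.writerows(clean)
print(len(clean))  # 1 row survives after SKU dedupe
```

In a real daily run the fetch step (headless browser or plain HTTP) would feed `raw`, and the buffer would be a dated file or database load; this only illustrates the transform contract.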
I’m preparing a targeted marketing campaign for hair-care products and need a fresh list of real, working braiders from a selection of U.S. cities that I’ll specify once we start. Your task is to dive into Instagram hashtags such as #braids, #boxbraids, #knotlessbraids (and any related tags you know work well), vet each profile for genuine, recent activity—at least one post within the past 30 days—and capture four data points: • Name (or the public display name) • City • State • Direct link to the Instagram profile Please skip anyone who looks inactive, spammy, or clearly headquartered outside the United States. A quick scroll through their feed should confirm they are taking clients and posting new work regularly. Drop everything into...
I need a fresh batch of 100 leads for companies that hire fully-remote talent inside the United States—please do not slip in even one onsite opportunity or the file will be rejected. Every contact must be a senior decision-maker: CEO, CTO, COO, Co-Founder or Founder. Focus sectors and split: • 25% UI/UX design opportunities • 25% WordPress (custom builds or page builders) • 25% pure Front-end development • 25% Shopify development Essential data points for each lead: 1. Work email address 2. Direct phone number 3. Exact job title Acceptance criteria • All addresses must pass a verification tool with an overall bounce rate below 3%. • Numbers must connect to the person or their executive assistant, not a generic switchboard. &b...
I need the entire contents of a specific website captured in a single pass. That means every piece of on-page text, all publicly visible image files, and every internal or external hyperlink. Once scraped, the information should be organised into a clean CSV file—one row per page—with columns for page URL, full body text, image file names, and link destinations. Please download the images themselves as well and bundle them in a separate folder (a simple ZIP is fine); the CSV should reference the exact filenames so everything lines up. I’m happy for you to use Python with BeautifulSoup, Scrapy, Selenium or whichever stack you prefer, as long as the final output meets these acceptance criteria: • Complete CSV containing text, image names, and link URLs for each ...
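The per-page extraction described above (body text, image file names, link destinations, one CSV row per page) can be sketched with the standard library alone. The sample HTML and URLs here are invented; a real crawl would fetch each page and run this per URL:

```python
# Extract visible text, image file names, and link targets from one
# HTML document, then write a single CSV row for that page.
import csv
import io
import os
from html.parser import HTMLParser

class PageExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text, self.images, self.links = [], [], []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and "src" in a:
            self.images.append(os.path.basename(a["src"]))  # bare filename for the CSV
        elif tag == "a" and "href" in a:
            self.links.append(a["href"])

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

html = ('<h1>Hello</h1><p>World '
        '<a href="https://example.com/about">about</a></p>'
        '<img src="/img/logo.png">')
page = PageExtractor()
page.feed(html)

buf = io.StringIO()  # stand-in for the deliverable CSV
writer = csv.writer(buf)
writer.writerow(["page_url", "body_text", "image_files", "link_urls"])
writer.writerow(["https://example.com/", " ".join(page.text),
                 ";".join(page.images), ";".join(page.links)])
print(page.images)  # ['logo.png']
```

Keeping only the image file name in the CSV, while the downloaded images go into the separate ZIP, is what lets the two deliverables line up as the post requires.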
I am currently using Apify at $1.50 per 1,000 leads. I need this at scale (around 50k emails), so it requires a cost-effective solution. Bid on this proposal and I shall DM you; I need to know the cost for: 1. Apollo emails 2. LinkedIn emails
I need a one-time, UK-wide scrape that captures every wedding-related business you can find across England, Wales, Scotland and Northern Ireland—no single directory limitations, so feel free to pull from any public site that meets the brief. Deliverable • A single Excel file containing the following columns: URL, Business Name, Full Address, Post Code, Telephone, and every email address that appears on the site (not just the first one you find). • The sheet should be neatly de-duplicated and ready for filter/sort. Business types to include • Wedding & Bridal Wear • Wedding Planners / Services • Wedding Cars, Horse & Carriages • Wedding Venues • Photographers & Videographers • Florists & Wedding Flowers •...
I need every public phone number that appears on the site gathered into a single, well-structured Excel workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance—name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I’m expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I’m interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and URL • n...
I need a detail-oriented annotator to prepare a new dataset for machine-learning experiments. The raw data and exact labeling guidelines will be shared once we start, but expect a mix of files that may include images, text, or audio. Consistent, high-quality labels are the priority; clear documentation of any edge-case decisions you make is part of the job. You are free to work any time of the day. When you deliver, I will run a random sample quality check—please aim for at least 98 % agreement with the guideline examples. If you have a proven track record in data labeling and can start right away, let’s talk about scope and timelines so we can lock in milestones and begin.
I need a senior-level specialist to harvest product data from several e-commerce sites and deliver it in a single, well-structured CSV file. The task demands production-ready techniques—think Scrapy spiders hardened with rotating proxies, Selenium or Playwright for dynamic content, and solid anti-bot countermeasures. The information I’m after is very specific: product names, prices, pictures, and SKU. Nothing less, nothing more. Your solution must run reliably at scale, cope with frequent layout changes, and leave no trace that could trigger blocks. Python is the preferred stack, but if you have a proven alternative that meets the same bar, I’m open to hearing it. To be considered, include in your proposal: • At least one example of a comparable e-commerce scrapi...
We want to do this in a consulting / facilitators / builders format in which we work with the facilitator / consultant / trainer for 3-6 hours a week for 3-6 months to help us collaboratively create various agents for our private equity business. The only billed time will be the time spent on the video call with our team, unless specifically approved otherwise. We want to be able to create a screen-scrape tool to average certain cost items of specific real estate projects. We also want to compare legal documents against term sheets and Excel spreadsheets. Data sources • Company databases (SQL, flat files, Excel exports) • Dropbox (all our files are in Dropbox) • Extensive web scraping for competitor benchmarks and investment-market signals If you have ideas for safely add...
Key Responsibilities: • Design, develop, and deploy AI/ML solutions end-to-end • Lead AI architecture and solution design for enterprise applications • Build and optimize machine learning and deep learning models • Deploy and monitor models in production environments • Collaborate with cross-functional teams including product and engineering • Mentor junior AI engineers and contribute to technical leadership • Conduct research and implement state-of-the-art AI techniques • Ensure data quality, security, and model performance optimization Required Skills & Qualifications: • 10+ years of experience in AI/ML or Software Engineering roles • Strong proficiency in Python and data processing libraries (NumPy, Pandas) • Hands-on experienc...
I need a web-scraping expert to scrape data from Indiegogo and export it to Excel. Details I need for the projects are: Title: Project title. Category: The category of the project based on the Indiegogo categorization system. Sub-category: The sub-category of the project based on the Indiegogo categorization system. Close Date: Close date of the campaign. Open Date: Open date of the campaign. Currency: Currency used for collected funds. Funds Raised: The amount of funds raised. Funds Raised Percent: The percent of funds raised relative to the funding target. Funding Target: The targeted amount of funds the campaign initiator aims to collect. Country: Country in which the project is based. Publisher: The name of the campaign initiator. Backers: The number of people who decided to fund the campaign. Updates: ...
I need to build a reliable, well-structured lead list and I already know exactly what it should contain. The task is to extract contact information—email addresses, phone numbers and full mailing addresses—from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum. Deliverables • One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected. • No duplicates; every...
I need to obtain hard-to-reach details—specifically the IP address, associated phone number, and any location-related information—linked to one particular Telegram account. Standard OSINT searches have already been exhausted, so I’m explicitly open to advanced, purely technical hacking techniques that dig directly into Telegram traffic or MTProto behaviour. If this is within your skill set, tell me how you would approach the task, which tools or exploits you prefer to leverage, and what minimal input you require from my side (e.g., username, recent message, session file). Deliverables • Verified current or last-seen IP address for the target account • Recovered phone number (or clear statement if technically impossible) • Any additional address or geo...
We are looking to hire an experienced freelancer for B2B contact data scraping using Apollo.io. Project Requirements Scrape contact data using Apollo filters provided by us Data must be extracted only after confirming filters are correct We will start with one state, and if the data quality is good, we will assign more states Data Fields Required Each contact must include: Full Name Job Title (Decision Makers only) Company Name Business Email (Verified) Phone Number / Mobile (where available) Company Revenue Location (City, State, Country) Company Website / LinkedIn Quality Expectations No dummy or generic emails No duplicate records Clean, structured, and fresh data Apollo-sourced data only Process We provide filters Freelancer applies filters and shares sample data ...
I want to turn my existing catalogues and knowledge base into smart, intuitive WhatsApp agents. Whether you prefer OpenAI’s Agent Builder or an n8n flow is up to you—as long as the final bots handle automation for user support and information dissemination flawlessly. Users should be able to ask a question, receive the right document or answer instantly, and feel as if they are speaking with a well-trained human agent. Alongside the chat experience, I need an end-to-end AI pipeline that automatically extracts raw data from the web, aggregates and cleans it, performs analysis, and then publishes clear visualisations—including map views—so insights are always one step away. I’m comfortable with tools such as Python, Pandas, LangChain, Node, SQL, Power BI, Table...
I am looking for a data entry specialist who has experience with hail maps. The project is small and simple. I am providing the sample data inside the attachment; please look into the file. I know that this data is extracted from a hail trace map and it's free, but I don't know the map and don't know how to extract the data. You need to show me this. Deliverables • The ability to extract geo-targeted data selected from the map. • A video showing how to extract the exact data from the hail map. My budget is $20 for showing me this.
For an upcoming market research study, I need a fully-automated workflow that gathers and enriches data from well over 500 LinkedIn profiles. The automation should locate the profiles that match criteria I will provide, pull the key public details, then append reliable off-platform contact information so I can reach those professionals directly. Please design the script or low-code sequence with any reliable stack you prefer—Python, Selenium, PhantomBuster, Sales Navigator API, or comparable tools are fine as long as the method is repeatable and respects rate limits. Deliverables • CSV/Excel file containing one row per person with: – Current job title – Company name – Verified email (and phone, when available) • Source code or workflow fi...
I need ongoing help a few hours a day with collecting, cleaning, and summarising data for several market intelligence projects. You’ll log into shared Google Sheets, pull information from public sources or APIs, verify accuracy, and present concise insights that let me make quick decisions. Experience with Excel or Sheets functions (LOOKUPs, pivot tables, basic charts) and any lightweight statistical tool such as Python/pandas or R is a plus, but solid attention to detail matters most. I’ll set clear weekly objectives up front. Typical deliverables include: • A cleaned spreadsheet ready for analysis • A brief written summary (no more than one page) highlighting key findings and anomalies Before we start, send a short note telling me your favourite dataset you&rs...
Hiring Freelancers – AI / Computer Vision & Edge Systems Confidential Enterprise Project (NDA Required) We’re looking to engage a few experienced freelancers for a confidential, enterprise-grade technology project involving computer vision and edge AI. The work is part of a real production deployment (not a research project or demo). Detailed project information will be shared only after shortlisting and NDA signing. Engagement Details Type: Freelance / Contract Duration: ~3–4 months (extension possible) Commitment: Full-time preferred (role dependent) Work mode: Remote (India preferred) Start: Immediate / near-term Open Roles 1. Computer Vision / Machine Learning Engineer What we’re looking for: Hands-on experience with object detection / visual recognition Exp...
I have an urgent need for a clean, well-structured dataset containing the listing agent’s first name, last name, mailing address, and phone number for well over 500 active Zillow listings. Speed is critical, but accuracy matters just as much; the final file should be ready for immediate import into my CRM. You are free to use whichever stack you prefer—Python with BeautifulSoup or Scrapy, Selenium, residential proxies, even the unofficial Zillow API—so long as rate-limits are respected and the data is complete. I don’t need property details or price history; the focus is strictly on the agent contact fields. Deliverables • CSV or XLSX with a separate column for each required field • A short read-me explaining the script or method so I can rerun it la...
============================================================ ROLE: REAL ESTATE LISTINGS DATA & QA SPECIALIST (REMOTE) ============================================================ THIS IS NOT A GENERAL VA ROLE. This role is focused on structured, rule-based review and upload of real estate property listings. Accuracy and consistency matter more than speed. ------------------------------------------------------------ WHAT YOU WILL DO ------------------------------------------------------------ - Review and upload property listings using provided data - Verify price, location, property type, specs, and images - Identify missing, inconsistent, or suspicious information - Flag duplicate or low-quality listings - Follow written SOPs exactly (no improvising) - Record clear QA notes when is...
I need an AI trainer. The goal is to arrive at production-ready machine-learning models. You will receive full access to the data and a brief describing the business context behind each task. What matters most is a clear, well-documented workflow: data cleaning, feature engineering, model selection, hyper-parameter tuning, cross-validation, and benchmark reporting. I also need concise guidance notes so my in-house team can rerun or extend the work later. Deliverables • Trained model files saved with versioning and clear naming • A short report (PDF or Markdown) summarising methodology, metrics, and next-step recommendations I’ll be available for quick feedback cycles and can spin up extra compute if needed. Let’s turn this diverse data into smart, reliable insights.
I’m interested in buying an existing dataset you’ve already scraped from TruePeopleSearch. What I specifically need is the contact information section—phone numbers for sure, and any email addresses or other direct lines you may have captured at the same time. I don’t need the address history or relatives/associates fields, so feel free to leave those out if they’re present. This is not a fresh scrape request; I only want data that you currently have on hand. Anything compiled within roughly the last twelve months is perfect, but older archives could still be useful if they’re large and well-structured. I’m flexible on format: CSV, Excel, or JSON all work. Just let me know which one your files are in, when the data was pulled, and roughly how man...