Selenium WebDriver jobs
...set target URLs, endpoints, queries, or connection strings myself—either via a simple config file or command-line arguments—without touching the core code each time. • Reliability: It should tolerate pagination, rate limits, and simple anti-bot measures. If a CAPTCHA or login wall appears, the script should at least flag the record and keep moving. • Tech stack: Python (Scrapy, BeautifulSoup, Selenium, or Requests) is my default preference, but I’m open to Node.js, Go, or similar if you can demonstrate equal reliability. Acceptance test 1. I provide a small list of sample targets across the three source types. 2. Your solution runs on my machine (Windows 10) or a Docker container without extra licensing costs. 3. The produced Excel file contains...
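The config-file-or-CLI requirement in this post can be met with the standard library alone. A minimal sketch, assuming a JSON config with a "targets" key and CLI overrides (both names are illustrative, not from the post):

```python
import argparse
import json

def load_targets(config_text, cli_targets):
    """CLI targets override the config file's list; both sources are optional.

    The "targets" key is an assumed config layout, not the poster's."""
    config = json.loads(config_text) if config_text else {}
    return list(cli_targets) or config.get("targets", [])

def parse_args(argv):
    parser = argparse.ArgumentParser(description="config-driven scraper entry point")
    parser.add_argument("--config", help="path to a JSON config file")
    parser.add_argument("targets", nargs="*", help="target URLs that override the config")
    return parser.parse_args(argv)
```

Keeping every target outside the core code means a new endpoint is a config edit, not a code change, which is the maintainability point the post makes.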
...Features: web scraping or API integration for multiple websites. Extracted fields: Full Name, Phone Number, Address (optional: religion tag “Muslim” or source website). Data cleaning (remove duplicates, verify phone number format). Export to Excel/CSV. Scheduled or manual runs. Log file/report of extracted data. 3. Tools & Technology. Web Scraping: Python (Beautiful Soup / Scrapy / Selenium); Data Storage: CSV / Excel / SQLite / MySQL; Scheduling: cron job or simple button trigger; Hosting (optional): AWS EC2 or local machine; Interface (optional): simple web dashboard or CLI. 4. Workflow. Step 1: input target website URLs (e.g., local directories, listings, community sites). Step 2: the tool crawls and identifies name, phone, and address patterns. Step 3: Ext...
I have real-time arbitrage signals coming in through an API and I need a single script that can turn those opportunities into automatically executed bets on and Pinnacle.bet.br. The feed covers both pre-match and live in-play odds, so the solution has to act fast, log in, find the right market, and place the stake without manual input. You are free to use Python with Selenium, Playwright, or a comparable headless browser framework as long as the final tool runs reliably on a VPS, keeps my credentials secure, and gives me clear log output for every action it performs. The script should read the JSON payload from the API, calculate the correct stake for each side of the arb (flat amount for now; I can adjust the formula later), then navigate each bookmaker’s interface and conf...
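The stake step in this post is flat for now, but the "formula I can adjust later" is typically the balanced arb split, where each side's stake is proportional to its inverse odds so the payout is equal either way. A hedged sketch of both (function names and example odds are mine, not the poster's):

```python
def flat_stakes(odds, amount=10.0):
    """Flat staking, as the post specifies for now: the same amount on every side."""
    return [amount for _ in odds]

def balanced_stakes(odds, total=100.0):
    """Split `total` so the payout is equal whichever side wins.

    Profitable only when the sum of inverse odds is below 1."""
    inverse = [1.0 / o for o in odds]
    scale = total / sum(inverse)
    return [round(scale * inv, 2) for inv in inverse]
```

For odds of 2.10 on both sides, balanced_stakes splits 100 into 50/50 and either outcome pays 105, illustrating why the arb is only worth taking when the inverse-odds sum stays under 1.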
I already automate parts of WhatsApp Web with Python + Selenium, but I now need it wrapped into a clean Windows-only desktop application. Your task is to take my existing logic, extend a few predefined features I will share (think scheduled message blasts, dynamic contact selection, and media attachments), and present every parameter in an intuitive GUI. The stack must stay in Python and Selenium; for the interface you may choose Tkinter, PyQt, or Kivy—whichever lets you deliver a responsive, installer-ready EXE with no command-line setup for the user. Core expectations • Integrate my current scripts and add the extra functions I outline. • Build a stable GUI that lets users log in to WhatsApp Web, adjust each feature, and monitor progress in real time. ...
...row in the spreadsheet, with column headers that clearly label every field you have pulled. Accuracy is key: the sheet must mirror the live site, without missing or duplicated entries. If a page is paginated, please make sure all pages are traversed. Dynamic content that loads as you scroll should be handled as well. A lightweight, well-commented script (Python with BeautifulSoup, Scrapy or Selenium—use whichever fits best) accompanying the file would be appreciated (not mandatory) so I can rerun the extraction whenever the site updates. Deliverables • Excel (.xlsx) file containing the full data set • Source script and brief usage notes • Quick walkthrough of any environment setup needed to re-run the scrape Once I verify the counts and spot-check a s...
...community that focuses on electronics sold through a popular Italian e-commerce site. I need a small, always-on tool that watches the product pages I feed it and pings a Discord channel the moment a price changes. Real-time means seconds or, at most, a couple of minutes after the site updates—daily or weekly reports are not enough. You are free to build this in Python (requests / BeautifulSoup, Selenium, Playwright) or Node.js (Puppeteer, Playwright). It just has to run reliably on a Linux VPS, survive Cloudflare or other anti-bot hurdles, and keep lightweight logs so I can see when something fails. Core flow I have in mind: • The script reads a JSON/CSV list of electronics URLs. • On each cycle it grabs the current price, compares it to the last stored value ...
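The compare-against-last-stored-value cycle described here reduces to a small pure function; a sketch, with the URL-keyed price store assumed for illustration (the scraping and Discord webhook call sit around it):

```python
def detect_changes(store, current):
    """Compare freshly scraped prices against the last stored values.

    Returns (url, old, new) for every changed price and updates the
    store in place, so the next cycle compares against today's values."""
    changed = []
    for url, price in current.items():
        old = store.get(url)
        if old is not None and old != price:
            changed.append((url, old, price))
        store[url] = price
    return changed
```

Each cycle, anything in the returned list gets posted to the Discord channel; first-seen URLs are stored silently so the bot doesn't ping on startup.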
...sites use, so smart throttling, proxy rotation, user-agent randomisation, and solid CAPTCHA handling are essential. I will trigger the run manually (CLI or simple dashboard is fine) and receive structured output—CSV and JSON are both acceptable—with clear field names for SKU, title, price, availability, rating, review text, and timestamps. Source code should be clean Python (Scrapy, Playwright, Selenium, or a comparable stack), containerised for easy deployment, and accompanied by a short README that explains setup, proxy configuration, and how to add new domains later. Deliverables • Full source code in a Git repo • Dockerfile or environment file for repeatable builds • README covering installation, usage, and extension • One successfu...
I need to add major updates to my automated order placement system using Python and Selenium. Orders are received via the Amazon API, then passed to a database, from which the automation system picks them up and places them on another site. I have a long list of enhancements and updates, including but not limited to: fixing broken order tracking, categorizing coupons and applying them based on discount (otherwise falling back to the affiliate link), and fixing other minor automation issues. More details to be shared with the selected candidate.
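The coupon rule in this post (apply based on discount, else go to affiliate) can be sketched as a selection function; the coupon field names are assumptions, since the real schema isn't shared:

```python
def pick_coupon(coupons, order_total):
    """Return the best applicable coupon, or None to signal affiliate fallback.

    Field names ("code", "discount", "min_order") are illustrative guesses."""
    applicable = [c for c in coupons if order_total >= c.get("min_order", 0)]
    if not applicable:
        return None
    return max(applicable, key=lambda c: c["discount"])
```

A None result is the signal for the automation to route the order through the affiliate path instead of applying a code.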
...catalogue from captured in the next 6–7 hours. There are about thirty paginated pages; every medicine on them must be pulled into a single-sheet Excel file. Each row should include exactly four columns—Medicine Name, Generic Name, Price and Company Name—nothing more, nothing less. Python is my usual stack, so requests, BeautifulSoup, Scrapy or a quick Selenium pass are all fine as long as they get the job done fast and accurately. Every price has to match what’s on the site, and no product can be skipped. Deliverables • XLSX file with clean headers and all records on one worksheet • Well-commented scraping script (optional but appreciated so I can rerun later) This is an hourly engagement and I’m ready to start immediately. When you r...
...rate limiting, and user-agent spoofing to prevent detection. Integrate request queuing and parallel processing for scalable performance. Ensure that only publicly accessible information is scraped (no login or private data). Create data export functionality (CSV/JSON/API). Optimize for speed and safety while maintaining data accuracy and stability. Requirements: Strong experience in Python, Selenium, and asynchronous scraping frameworks (e.g., Playwright, Asyncio, or Scrapy). Experience with IP rotation and proxy management solutions. Proven track record building high-volume scraping or data automation systems. Knowledge of browser automation and human-behavior simulation. Secure data handling and compliance awareness regarding scraping ethics and platform policies. Ab...
...receive the list of numbers from a simple source (CSV, Excel, or Google Sheets) and start the mailing without extra steps; • correctly attach media files/documents to each message; • respect delays to avoid blocks and keep a minimal log: sent / not sent. The implementation technology is not critical: the official WhatsApp Business API, Twilio, Selenium + Chromium, or another proven approach will all do; the main requirement is that it runs on a Windows or Linux server and does not depend on obscure paid libraries. I will accept the work when: 1. I enter the text and upload the files → the bot delivers them without errors to every number in the test CSV; 2. I see a report with the counts of successful and failed sends; 3. I receive the source code with concise...
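The delay-and-log loop described in this post is independent of the transport; a sketch where the actual sender (WhatsApp Business API, Twilio, or a Selenium action) is injected as a callable so the loop itself stays testable:

```python
import time

def blast(numbers, send, delay_seconds=0.0):
    """Send to each number with a pause in between; return the sent/failed tally.

    `send` is any callable taking one number; failures are counted, not fatal."""
    report = {"sent": 0, "failed": 0}
    for number in numbers:
        try:
            send(number)
            report["sent"] += 1
        except Exception:
            report["failed"] += 1
        time.sleep(delay_seconds)  # throttle to reduce the risk of blocks
    return report
```

The returned tally maps directly onto acceptance point 2 of the post: a report with the counts of successful and failed sends.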
I need a clean, one-off scrape of every stock ticker symbol listed on the site Ticker.finology.in. The final deliverable is a single CSV file that contains, at minimum, the ticker code; including the corresponding company name is welcome if it can be captured in the same pass. Please use any reliable web-scraping stack you prefer—Python with requests/BeautifulSoup or Selenium is fine as long as the script respects the site’s structure and returns accurate, deduplicated data. I’m only looking for a single run, so scheduling or cron logic isn’t necessary; clarity, correctness, and a well-commented script matter more. Acceptance criteria: • CSV opens without errors and lists every active ticker currently shown on Ticker.finology.in. • No blan...
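The deduplicated single-CSV deliverable here is mostly a post-processing step; a stdlib sketch, with the "ticker" header name assumed:

```python
import csv
import io

def dedupe(tickers):
    """Drop blanks and duplicates while preserving the order seen on the site."""
    seen = set()
    ordered = []
    for t in tickers:
        if t and t not in seen:
            seen.add(t)
            ordered.append(t)
    return ordered

def to_csv(tickers):
    """Write a single-column CSV with a header row; returns the CSV text."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["ticker"])
    writer.writerows([t] for t in dedupe(tickers))
    return buffer.getvalue()
```

Adding the optional company-name column is a second element per row; the dedupe key would then be the ticker alone.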
...sites so I can carry out ongoing market research. The scraper must collect the essentials—product title, price, availability, SKU, product URL, category, and image link—then save everything in a structured CSV or JSON file I can drop straight into Excel or a database. Pagination, infinite-scroll pages, and any JavaScript-rendered content have to be handled automatically. Python with Scrapy, Selenium, or BeautifulSoup fits my workflow best, though I am open to Node.js with Puppeteer if that speeds development. Regardless of the stack, the code should be well commented and easy for me to tweak: a simple configuration or clearly marked selectors will let me add new target URLs without rewriting logic. Because I plan to run the scrape regularly, please build in sens...
...description, current price and images. • Deliver the results in a single Excel workbook, neatly structured—one row per product with clear column headers and no duplicates. • Make sure text is UTF-8 clean and numbers stay numeric so I can sort and filter without extra fixes. I will share the target URL and any navigation pointers once we start. Feel free to use Python, Scrapy, BeautifulSoup, Selenium, or another reliable toolset; the only point that matters is an accurate, complete export that matches what users see on the live site. Please allow for basic checks on your side to catch missing fields or pages that load via lazy-scroll. If the site uses pagination or Ajax calls, handle those so nothing is skipped. When you hand off the Excel file, I’ll s...
I need a Python script that...dates and pull every file for the last 2 years. Key requirements I want covered • Format: save each dataset as CSV, keeping the original column structure intact. • Reliability: if a download fails, the script should retry immediately. • Clean output: place files in dated sub-folders or use a clear filename pattern so nothing is overwritten. Feel free to use requests, pandas, BeautifulSoup, Selenium, or any combination that achieves consistent downloads; just keep external dependencies minimal and listed in a requirements.txt. Deliverables 1. Well-commented .py file that does both the daily and bulk historic download (Google Colab preferred). 2. README. 3. A quick demonstration run. AI proposals will be ignored lol
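The two reliability requirements above (immediate retry, dated sub-folders so nothing is overwritten) separate cleanly from the actual download; a sketch with the fetch callable injected so the logic is testable offline:

```python
from datetime import date
from pathlib import Path

def download_with_retry(fetch, url, attempts=3):
    """Call `fetch(url)`, retrying immediately on failure, as the post asks.

    Re-raises the last error once all attempts are exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch(url)
        except Exception:
            if attempt == attempts:
                raise

def dated_path(base, filename, day=None):
    """Build base/YYYY-MM-DD/filename so each run lands in its own folder."""
    day = day or date.today()
    return Path(base) / day.isoformat() / filename
```

In the real script, `fetch` would wrap requests (or Selenium for pages behind JavaScript), and the caller creates the dated directory with `path.parent.mkdir(parents=True, exist_ok=True)` before writing.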
I need a cost-effective developer who can turn small scraping and browser-automation requests into simple, working code at around $2 an hour. Things like logging into a site, collecting fields, exporting them to CSV, or automating a short click-through sequence. I am looking for a long-term collaboration. Write “work” as the first word in your bid. You can use whichever stack you prefer: • Python with Requests, BeautifulSoup, Selenium, or Playwright • VB.NET with HttpClient, WebBrowser, or Selenium If you’re reliable, comfortable working at this rate, and ready to start right away, I’d love to discuss the ...
I need a complete scrape of the public directory at covering every Italian region. The site presents each record in a consistent structure, so a straightforward script should do the job quickly. Final deliverable: • One Excel file containin...for every school in the directory. For each school include: • Name • Email • Website • Contact number • Address • School type • All listed locations (if more than one campus is shown, capture each) I only need the raw extraction; no additional data validation or cleaning is required. Feel free to use the scraping stack you are most comfortable with—Python + BeautifulSoup, Scrapy, Selenium, or similar—so long as the output is complete and well-organized in Excel.
...Remove duplicates and clearly standardize school names. Prioritize official school websites / government education portals for contact info. If email/phone comes from secondary sources (directories), mark confidence and include source link. No automated spam — data must be collected ethically (publicly available info only). Preferred skills Web scraping & data mining (BeautifulSoup/Requests, Selenium, Puppeteer, or Google Maps scraping) Experience with education sector datasets or directories is a plus Excel skills — clean formatting, no merged cells, column headers as specified Good communication and ability to deliver a sample quickly...
... but I’d like a fresh set of eyes to verify the core flows and uncover anything I may have missed. Here’s what I need, kept intentionally lean so it fits the current scope: • Manual exploratory pass over the main user journeys (sign-up, login, key features, and logout). • A concise bug log with clear reproduction steps, screenshots, and severity tags. • 3–5 lightweight automation scripts—Selenium, Playwright, or any comparable tool you’re comfortable with—that cover the most critical paths. • A brief summary of overall quality, plus your top priorities for the next test cycle. You’re free to test on web, mobile, or desktop builds depending on what you feel will surface the most value first; I’ll supply cr...
...preserving any hierarchy or grouping that appears on the page. • Keep the code clean—functions, comments, and a brief README. Provide a dependency list so I can create a virtual environment and run the Python scripts without surprises in my own Docker setup. • Handle edge cases such as missing pop-ups, site timeouts, or updated selectors with graceful error messages rather than silent failures. You may use Selenium, Playwright, BeautifulSoup, or a headless browser—whatever ensures reliable deep scraping. Just avoid solutions that depend on paid APIs. I’ll test by running each script locally, comparing the JSON output against the live site, and checking that all links are captured. If everything matches, we’re done; if not, we’ll iterate once for fixes. Show me past...
...— not from third-party databases or APIs. • Delivering clean, structured data in Excel (.xlsx) with all available public fields (name, address, contact, rating, reviews, hours, website, coordinates, social links, etc.). This job is not about leads — it’s about accuracy, completeness, and data structure. If you use browser automation tools or scraping APIs, specify which (for example: Python Selenium, Maps API, DataForSEO, BrightData, etc.). Important: • You must have completed at least one similar full-country Google Maps scrape before. • Include a sample of your past Google Maps work in your bid (Excel format). • You must deliver both countries separately (Country 1 first for testing). Deliverables: • Excel format (.xlsx), one r...
...third-party web app, extracts its page schema and API calls, and makes the data accessible directly through our own website. Scope of Work Step 1 — One-Click Login Automation • Create a secure backend API endpoint that listens for a button click. • On click, it should: o Fetch the target URL, login ID, and password from our internal database. o Use a headless browser or automation framework (Selenium, Playwright, or Puppeteer) to: Open the target web application. Auto-fill credentials into the HTML login form. Submit the form and confirm successful login (e.g., via page redirect or success indicator). • Include robust error handling for: o Wrong credentials o Time-outs / CAPTCHA o Changed page structure Step 2 — Page Schema Extraction • Onc...
...internal API should automatically pull the target URL, login ID, and password from our database and complete the sign-in on a web application that uses a standard HTML login form. Here is what I need built: • A lightweight API endpoint that receives the button event, queries the database for the correct credentials, and returns them securely. • A headless browser or equivalent automation (e.g., Selenium, Playwright, Puppeteer—use whichever you prefer) that opens the URL, fills in the username and password fields, submits the form, and confirms success. • Error handling for wrong credentials, time-outs, or changed page structure. • Clear setup instructions plus concise documentation of the code so my team can maintain it. A working prototype runni...
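The headless login step described here might look roughly like the following with Selenium. The field names (`username`, `password`), the submit selector, and the redirect-based success check are placeholder assumptions about the target form, not known details of the poster's app:

```python
def login_succeeded(current_url, landing_fragment="/dashboard"):
    """Confirm login by checking the post-submit redirect URL.

    The "/dashboard" fragment is an assumed success indicator."""
    return landing_fragment in current_url

def automated_login(url, username, password):
    # Imported here so the pure helper above stays usable without a browser.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    opts = webdriver.ChromeOptions()
    opts.add_argument("--headless=new")
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        driver.find_element(By.NAME, "username").send_keys(username)
        driver.find_element(By.NAME, "password").send_keys(password)
        driver.find_element(By.CSS_SELECTOR, "form [type=submit]").click()
        return login_succeeded(driver.current_url)
    finally:
        driver.quit()
```

The error handling the post requires (wrong credentials, time-outs, changed page structure) would wrap the `find_element` calls in `try`/`except NoSuchElementException` and set an explicit page-load timeout.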
Automation Script Developer for Android App (Appium / Selenium) ⸻ Job Description: I’m looking for an experienced automation developer to create and set up a fully functional automation script for a specific Android application. The goal is to automate user interactions inside the app — clicking buttons, navigating screens, entering data, etc. The automation must use element recognition (via accessibility IDs, XPath, or resource IDs) rather than image-based detection (no OCR or template matching). You’ll be responsible for: • Setting up the automation environment (Appium / Selenium / or similar framework). • Writing a clean, maintainable script that interacts with the app using element selectors. • Ensuring the script can detect and ha...
...production-grade AI solution, so I’m looking for someone who understands model behaviour as well as classic software QA practices. Key expectations • Hands-on functional and regression testing of the AI tool we are releasing. • Ability to probe model outputs for accuracy, bias and stability, then translate findings into clear defect reports. • Solid grasp of automation; if you have scripts in Selenium, Appium or similar frameworks, all the better. • Performance validation experience—load, stress or scalability testing—will help you stand out. • Clear, concise communication with both data-science and engineering teams. Deliverables 1. On-boarding onsite within one week of selection. 2. Written test strategy covering functi...
I’m looking for an experienced Python developer with strong Selenium skills to build a reliable, well-documented script that can create an X (Twitter) account using a proxy and an email address that I will provide. This project is for legitimate automation/testing purposes only — the final deliverable must respect X’s Terms of Service and must not be used for spam, evasion, or abusive activity. Key requirements (must-have) Python 3.9+ implementation using Selenium (or Selenium + undetected-chromedriver if needed). Support for HTTP(S) & SOCKS proxies (able to accept proxy credentials). Accept an email address (or list of addresses) I provide and use it for registration (assume email confirmation step; script should optionally wait for or poll IMAP/...
I need hands-on guidance to get a basic Selenium Java environment running on my Windows laptop. The focus is exclusively on setting up the local test bed—once it works, I’ll handle the scripting myself. Here’s what I’d like you to walk me through in a quick screen-sharing session: • Install and configure the JDK along with an IDE of your choice (Eclipse or IntelliJ). • Create a simple Maven project and add Selenium WebDriver plus TestNG or JUnit dependencies. • Download ChromeDriver and EdgeDriver, set PATH variables, and verify the browsers launch correctly. • Run a sample smoke test to confirm everything is wired up. No framework architecture or extensive automation is required—just a clean, repeatable setup ...
...robust automated testing and monitoring. Here’s what I expect: • A GitHub Actions pipeline that builds a Docker image on every push, runs unit tests, and deploys the container to my AWS environment (ECR + ECS or a comparable service you recommend). • Infrastructure-as-Code for any AWS resources you create so the setup is reproducible. • An automated test suite: browser tests in either Selenium, Playwright, or Cypress, plus API tests in Postman or Rest Assured, all wired into the pipeline so failures block a deploy. • Basic monitoring hooked up with Grafana, Prometheus, or ELK—enough dashboards and alerts to confirm uptime, latency, and error rates. • A concise hand-off document explaining how to extend or troubleshoot the pipeline...
...Validate UI/UX consistency across browsers and devices. Suggest and implement QA process improvements to enhance product quality. Track defects and testing progress using tools like JIRA, Trello, or Asana. Requirements Proven experience (3+ years) as a QA Engineer or Senior QA Engineer. Strong understanding of QA methodologies, SDLC, and STLC. Hands-on experience with testing tools such as Selenium, Postman, JMeter, or Cypress. Familiarity with API testing and browser developer tools. Excellent communication and documentation skills. Ability to work independently in a remote environment. Bonus Skills (Preferred): Experience in mobile app testing (Android/iOS). Basic knowledge of CI/CD pipelines and automation frameworks. Familiarity with performance testing and cross...
...day one. From there, you will maintain and expand the suite, troubleshoot failures, and keep our pipelines green. Essential skills • Deep, hands-on Python coding—live coding is part of our interview, so please send only engineers who are comfortable writing clean, idiomatic code on the spot. • Pytest mastery for structuring, parametrizing, and reporting tests. Nice to have Familiarity with Selenium, Appium, or Jenkins, as well as experience executing tests in cloud, local, or CI environments. These aren’t mandatory, but they will make onboarding smoother. Engagement details • Duration: 6 months, extendable. • Scope: build the initial framework, automate the highest-risk flows, and provide ongoing fixes and updates. • Communication: da...
I need a script ...live signals—price movements, network difficulty, and pool performance. Here’s what I still need: • API integration to pull those three data points from reliable public sources at set intervals. • Logic that compares the latest values against thresholds I can edit easily (JSON, YAML, or an .env file is fine). • A clean function that logs in to each ASIC’s web UI (the current draft uses Selenium) and updates the primary/backup pool fields without interrupting hashing. • Simple logging so I can see when a decision was made and why. • A short read-me so I can change API keys, thresholds, or add more miners later. I’m comfortable testing, but I’ll rely on you for a working solution; this will be installed on a
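The editable-threshold decision logic requested here separates cleanly from the Selenium pool-update step; a sketch where the metric names and the primary/backup convention are illustrative (in practice the thresholds mapping would be loaded from the JSON/YAML/.env file the post mentions):

```python
def choose_pool(latest, thresholds):
    """Return ('backup', reason) if any metric breaches its threshold,
    else ('primary', reason). The reason string feeds the decision log."""
    for metric, limit in thresholds.items():
        value = latest.get(metric)
        if value is not None and value > limit:
            return "backup", f"{metric}={value} exceeded threshold {limit}"
    return "primary", "all metrics within thresholds"
```

Logging the returned reason alongside a timestamp satisfies the "when a decision was made and why" requirement without touching the miner-facing code.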
...risk) • Generate and send the necessary keyboard events to steer the snake intelligently • Run unattended until the round ends, then report the final score in the console or a log file Technical notes – A headless option is welcome, but the bot must also work in visible mode so behaviour can be observed while testing. – JavaScript with Puppeteer or a lightweight Python solution (Selenium WebDriver, Playwright, or direct JS injection) are all acceptable; choose whichever gives the most stable control. – The code should be self-contained: one command to install dependencies, one command to run. Acceptance criteria 1. From a fresh clone and install, I can launch the script and watch the snake move autonomously. 2. The bot consiste...
...optional add-ons we can discuss later. The script must run reliably in a browser or headless environment, respect queueing logic, and surface results fast enough for me to react manually. A simple dashboard or console read-out showing match, section, row, and price is ideal, but I’m open to whatever interface you find most stable. Deliverables • Source code with clear setup instructions (Python + Selenium, Playwright, or a comparable stack) • Read-me outlining any dependencies and how to tweak search criteria (dates, seating tiers, price caps) • Short demo video or live walk-through proving the bot detects an available ticket Acceptance Criteria ✔ Consistently polls the site without triggering bans or captchas during a 48-hour test window ✔ Displ...
...Includes: Manual Testing: Writing detailed test cases, exploratory and regression testing. Automation: Selenium, Cucumber, JUnit, and TestNG for UI and functional test automation. API Testing: Postman and Rest Assured for functional and contract validation. Performance Testing: JMeter for load and stress tests. Database: SQL and MongoDB for data validation and backend integrity. CI/CD: GitLab & Jenkins pipeline integration for automated execution and reporting. Defect Tracking: Jira for bug management, traceability, and sprint QA reporting. Approach: Review the current build and define a critical test case matrix. Identify automation candidates and build a maintainable Selenium-Cucumber framework. Deliver concise bug reports (steps, logs, priority) and demo-...
I have access to an online business directory containing roughly 30,000 company profiles. I need every publicly visible field on each profile—think phone numbers, email addresses, physical addresses, descriptions, website links and any other details the page exposes—captured and delivered in a single Google Sheets file. Please build or run an automated scraper (Python + BeautifulSoup, Scrapy, Selenium, or a comparable stack) that can: • Crawl every profile, including deeper pages reached via pagination or “load more” buttons. • Respect the site’s structure and timing so we stay under any rate-limit or anti-bot radar. • Deduplicate records and keep data clean (no broken lines, hidden HTML tags, or merged cells). • Push the fin...
I need an experienced Java SDET who can take full ownership of our automated test suite. The current stack is Java 11, Selenium WebDriver, Cucumber with Page Object Model, and Oracle on the back end. All tests are executed through GitLab CI/CD, and the pipeline must post detailed results to a dedicated Slack channel. Your day-to-day work will revolve around expanding and refactoring the existing suite, so solid mastery of JUnit, TestNG, Cucumber, and Maven is essential. I also expect you to fine-tune database validations against Oracle, troubleshoot flaky tests, and keep the Page Objects clean and reusable. Our DevOps flow is already in GitLab; the missing piece is a robust Slack integration that highlights pass/fail status, links to reports, and tags the relevant team m...
My website is almost ready for launch and now needs a usability-focused QA sweep that also covers functionality, responsiveness, and cross-browser consistency. The build must perform flawlessly on Chrome, Edge, Safari, and Firefox, both desktop and mobile. Usabil...perfect. If you spot opportunities to streamline flows, clarify copy, or tidy up the UI, include a short recommendations section. These suggestions are optional but welcome. Deliverables • Structured bug report with severity tags • Visual evidence (screenshots or brief videos) for every issue • Summary of UX/UI improvement ideas Let me know in your message which tools you’d like to use—Selenium, Cypress, BrowserStack, or pure manual—and the timeframe you need for the first full...
...layout. • Include a brief demo run or sample dataset that proves the script successfully captures the fields above from at least one social-media platform. Nice-to-have (but not strictly required): retry logic, proxy support, and lightweight scheduling so I can trigger the job daily. I’m looking to move quickly, so please outline your proposed approach, relevant experience with tools like Selenium, Playwright, BeautifulSoup, Scrapy, or similar, and any past examples of social-media scraping work. If you're an experienced developer, you can use your best approach to automate the process. Here's the full cycle to automate:
NOTE: PLEASE DON'T CONTACT ME ON LINKEDIN, EMAIL, TEXT OR WHATSAPP ===================================================== I’m looking for a detail-minded QA specialist to run functionality checks o...real browser (or an equivalent API tool when UI is not required). 3. Clear, reproducible defect reports with steps, expected vs. actual results, and screenshots or screen recordings. 4. A final summary indicating overall pass/fail status and any blockers. I don’t need automated suites right now—manual end-to-end passes are enough, provided they are thorough. Familiarity with tools such as Selenium, Playwright, or Postman is a plus if they help you work faster, but they’re not mandatory. If you can start quickly and provide reliable feedback on these cor...
...the entire operation to fail. My goal is to restore the script to peak performance — faster execution, higher data accuracy, and a verification process that succeeds every single time. To achieve this, the code needs performance profiling, refactoring, and improvements in how it handles Amazon’s constantly changing front end. If you’re comfortable optimizing Python performance using tools like Selenium, Requests, BeautifulSoup, or similar, you’ll be a perfect fit for this project. All the libraries currently used in the project and the bot that needs optimization are shown in the attached image within the project details. Deliverables: Optimized and well-documented Python code Performance report showing measurable improvements in speed and accuracy R...
I’m leading one of our small, fully-owned feature teams at Gameplay Interactive, where we ship fast-paced 3D game experiences to players on both mobile browsers and the web. To keep that momentum without sacrificing quality, I need a tester who can design and maintain a solid automation suite—preferably in Selenium—that covers every critical user flow our players rely on.
I need our web portal’s core workflows covered by an automated functional test suite. The immediate focus is on two areas: • User login & registration • General navigation and user interface I’d like the scripts written in Playwright so we can run them headless in our existing CI pipeline, but I’m open to Selenium or Cypress if you can justify a clear advantage. Scope of work 1. Review the current portal and outline the test scenarios that matter most to real users. 2. Build maintainable, well-structured tests covering positive and negative paths for the sections listed above. 3. Configure reporting so failures are easy to trace (e.g., screenshots, console logs). 4. Provide a concise setup guide and walkthrough so my in-house team can ext...
...care about—specifically the headings, paragraphs, tables, and lists—without missing a beat. The data flow has to be live; as soon as something changes on the page I want my local store or endpoint to reflect it. Here’s how I see the engagement unfolding: • First, you’ll analyse the page structure and choose the best approach—native DOM parsing, a headless-browser setup (Puppeteer, Playwright, Selenium), or another reliable method you’re comfortable with. Stability and low latency are key. • Next, you’ll build a lightweight extractor that streams the cleaned text in real time. A small proof-of-concept is fine to start, but it must be ready to slot into a broader workflow via REST, WebSocket, or a file drop—whichever is q...
**Project Overview:** I need a custom automation tool that can quickly access data fr...and rate limiting
- User-friendly interface for non-technical users

**Technical Specifications:**
- Must work around anti-bot measures
- Should use residential proxies or similar undetectable methods
- Fast response times even during peak traffic
- Support for JavaScript-heavy websites
- Data export capabilities (CSV, JSON, or database)

**Preferred Technology Stack:**
- Python (BeautifulSoup, Selenium, Playwright, Scrapy)
- Residential proxy integration
- Headless browser automation
- Any other tools you recommend for this purpose

**Deliverables:**
1. Complete source code
2. Installation/setup documentation
3. User guide
4. Technical support for 2 weeks post-completion

**Timeline:** as soon a...
...reuse or extend the code later without hassle. Here’s what I expect the scraper to capture for every item it encounters: product name, full description, all available images, and the specifications shown on the page. Accuracy is critical; the data must mirror exactly what the site displays at the time of scraping. I’m comfortable with Python-based solutions that rely on requests, BeautifulSoup, Selenium, Scrapy, or an equivalent stack, provided the final script is well-commented and easy to schedule. Feel free to suggest a different language or library if it offers a clear advantage for this task.

Deliverables
• A fully functional script (and any helper modules) that logs in or navigates as needed and reliably extracts the fields above
• Structured ou...
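A hedged Selenium sketch of the field extraction described above; every CSS selector is an assumption about the target site’s markup and would need adjusting to the real pages:

```python
# Sketch only: "h1.product-name", ".product-description", ".gallery img",
# and ".specs tr" are placeholder selectors, not the real site's markup.
from dataclasses import dataclass, field


@dataclass
class Product:
    name: str
    description: str
    images: list = field(default_factory=list)
    specs: dict = field(default_factory=dict)


def scrape_product(driver, url):
    """Extract the four required fields from one product page.

    `driver` is a selenium.webdriver instance; Selenium is imported lazily
    so the module can be read without the dependency installed.
    """
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    driver.get(url)
    wait = WebDriverWait(driver, 15)
    name = wait.until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "h1.product-name"))
    ).text
    description = driver.find_element(By.CSS_SELECTOR, ".product-description").text
    images = [
        img.get_attribute("src")
        for img in driver.find_elements(By.CSS_SELECTOR, ".gallery img")
    ]
    specs = {}
    for row in driver.find_elements(By.CSS_SELECTOR, ".specs tr"):
        cells = row.find_elements(By.TAG_NAME, "td")
        if len(cells) == 2:  # label/value rows only
            specs[cells[0].text] = cells[1].text
    return Product(name, description, images, specs)
```

The explicit `WebDriverWait` keeps the "mirror exactly what the site displays" requirement honest on JavaScript-rendered pages, rather than reading the DOM before it settles.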
We’re looking for a Python developer experienced with Selenium (or similar libraries) to build a script that runs on our VPS (Ubuntu/CentOS).

What the script should do:
• Accept a JSON config with multiple search entries (30–50), each containing: category_id, keyword, max_price
• Continuously monitor eBay UK for matching Buy It Now or Best Offer listings
• Only include sellers located in the UK
• When a new match appears, send a webhook notification with: Title, Description, Price (including postage), Listing URL, Image link

Requirements:
• Must run 24/7 on the VPS
• JSON config should be easily editable to add/remove keywords or categories
• Efficient and lightweight handling of multiple search terms

If you’ve built eBay-related scrapers or alert systems before, please s...
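The config and filtering logic in that brief can be sketched in plain Python; the listing field names (`total_price`, `seller_country`, `buying_format`, etc.) are assumptions about whatever scraper or API layer would feed them:

```python
import json

# Illustrative config matching the brief's category_id/keyword/max_price shape.
SAMPLE_CONFIG = """
{
  "searches": [
    {"category_id": "9355", "keyword": "iphone 13", "max_price": 250.0},
    {"category_id": "175672", "keyword": "rtx 3060", "max_price": 180.0}
  ]
}
"""


def load_searches(raw):
    return json.loads(raw)["searches"]


def matches(listing, search):
    """Apply the posting's filters: keyword, price cap, UK seller, BIN/Best Offer."""
    return (
        search["keyword"].lower() in listing["title"].lower()
        and listing["total_price"] <= search["max_price"]
        and listing["seller_country"] == "GB"
        and listing["buying_format"] in ("buy_it_now", "best_offer")
    )


def webhook_payload(listing):
    """Shape the notification with exactly the fields the brief lists."""
    return {
        "title": listing["title"],
        "description": listing["description"],
        "price": listing["total_price"],  # item price including postage
        "url": listing["url"],
        "image": listing["image"],
    }
```

Keeping the filters and payload builder pure like this makes the 24/7 loop easy to test: the long-running part only fetches listings, calls `matches`, and POSTs `webhook_payload` for anything not yet seen.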
I need every publicly visible email address listed on gather...exactly as requested).

Quality: I will quickly spot-check 10–15 random addresses; any bounce rate above 10% or more than five obvious errors will require a revision.

Timeline: Let me know how soon you can finish. A concise status update after the first 30% of pages scraped is appreciated so we can correct course early if needed.

Tools & Method: Whether you use Python, BeautifulSoup, Scrapy, Selenium, or manual collection is up to you—as long as the final list is accurate and complete. If the site uses dynamic loading, be prepared to handle it. If you’ve scraped WLW or similar B2B portals before, mention it; past success there will weigh heavily in my decision.
...validity—all without manual clicks. Nothing has to interact with other platforms right now; everything happens inside the existing portal. Only registered users (no guest or admin-only workflows) should benefit from this automation, so the solution must respect the current login and session model.

What I expect to receive:
• A runnable script, plugin, or lightweight web app—your choice of tech (Selenium, Puppeteer, Python Requests with proper cookie handling, etc.)—that reliably completes the extension process.
• Clear setup instructions so I can deploy and schedule it on my own Windows or Linux box (a simple README is fine).
• A short test report or screen capture proving it extends at least one booking in a staging or live environment...
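One possible shape for the "Requests with proper cookies handling" option, assuming a conventional form login; the portal URL, form field names, and the extension endpoint are all placeholders:

```python
PORTAL = "https://portal.example.com"  # placeholder; point at the real portal


def make_session(username, password):
    """Log in once; the Session then carries the auth cookies on every
    later request, respecting the portal's existing login/session model."""
    import requests  # imported lazily so the sketch reads without the dependency

    session = requests.Session()
    resp = session.post(
        f"{PORTAL}/login",
        data={"username": username, "password": password},  # assumed field names
        timeout=30,
    )
    resp.raise_for_status()
    return session


def extend_booking(session, booking_id):
    """POST to the (assumed) extension endpoint and report success."""
    resp = session.post(f"{PORTAL}/bookings/{booking_id}/extend", timeout=30)
    return resp.ok
```

If the portal renders the extension flow with JavaScript rather than plain form posts, the same two functions would be reimplemented on top of Selenium or Puppeteer instead.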
...list of Reports for that Order, then upload those files into the same Order under the Attachments section.

We already have:
• A structured spreadsheet that contains the Orders and report types needed
• A video walkthrough showing the full process step by step

We need:
• A script, tool, or robotic automation that executes this process end-to-end
• It can be Python, RPA (UiPath, Power Automate, Zapier, Make), Selenium, or equivalent

Goal: fully automate this repetitive task so reports are automatically pulled and attached to the Order record. Please apply if you have experience with ERP automation, web automation, or RPA. The video walkthrough will be provided on award; please watch it, as it shows the exact process. There are 1,178 Orders, each with multiple files to upload.