
Our Data Extraction and Delivery Process

Web scraping at a large scale is not easy, but our battle-tested process makes data extraction as smooth as it can get. Here is a detailed, step-by-step overview of our process. Try us :)

1. Initial interaction

Once you reach out to us - we will send a few questions via email to gather more information about the problem you are trying to solve.

2. Feasibility Analysis

Once we have enough details of your project - we will perform a feasibility analysis and design a solution that works best for you.

3. Pricing and Payment

Our pricing depends on the complexity of the source websites and the volume of the data being scraped.


Once pricing and terms of engagement are set, we will send you an invoice that is payable by credit card, wire transfer or PayPal.


Get data or your money back - that's our promise.

4. Communication channel

Once the payment is complete - we will create an account for you on our customer support portal. We use Freshdesk to deliver a fantastic customer experience.

 

You will be able to communicate directly with our data mining engineers and your customer success manager to ensure the success of the project.

5. Setup and Data sample delivery

Once you are signed in to the portal - our data mining team will configure scrapers for you on our platform and deliver a data sample for verification.

6. Approval of Sample

If any issues are found in the sample - the data mining team will fix them. The most common issues are format changes, parsing changes and other small fixes.

 

The approval of the sample usually happens on the second or third iteration.

7. Full Data Extraction

Once you approve the sample data - we will do a full scrape on our distributed data extraction platform. The extracted data will then be pushed to our quality assurance tool.

8. Quality Assurance

Our two-stage quality assurance (QA) process uses a machine learning based tool and real people to verify the data.


The tool first checks the output for any faulty data. If faulty data is found, it is sent back to the data mining team for correction.


If no faulty data is found - the QA team will check random records to verify the quality once more.
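
As a simple illustration of the automated stage, the sketch below shows the kind of rule-based record check a validation tool might run. The field names and rules here are hypothetical examples only, and our production tool is machine learning based rather than a fixed rule set.

    import re

    # Hypothetical rules -- field names and formats are examples only,
    # not the checks our production tool actually performs.
    REQUIRED_FIELDS = ["product_name", "price", "url"]
    PRICE_PATTERN = re.compile(r"^\d+(\.\d{1,2})?$")

    def find_faulty_records(records):
        """Return records that fail basic completeness and format checks."""
        faulty = []
        for record in records:
            missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
            bad_price = not PRICE_PATTERN.match(str(record.get("price", "")))
            if missing or bad_price:
                faulty.append({"record": record, "missing": missing, "bad_price": bad_price})
        return faulty

    sample = [
        {"product_name": "Widget", "price": "19.99", "url": "https://example.com/widget"},
        {"product_name": "", "price": "N/A", "url": "https://example.com/unknown"},
    ]
    # Faulty records like the second one would go back to the data mining team.
    print(find_faulty_records(sample))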

9. Data Delivery

Once the data passes quality checks - it will be delivered to you via common data sharing methods like Amazon S3, Dropbox, Box, FTP upload or even a custom API.
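
For example, if you choose Amazon S3 as the delivery channel, fetching the delivered files could look like the sketch below. The bucket name and prefix are placeholders for the values agreed on during setup, and the example assumes the AWS boto3 library with credentials already configured.

    import boto3

    # Placeholder bucket and prefix -- the real values are agreed on during setup.
    BUCKET = "your-delivery-bucket"
    PREFIX = "exports/latest/"

    s3 = boto3.client("s3")

    # List every delivered file under the agreed prefix and download it locally.
    response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    for obj in response.get("Contents", []):
        filename = obj["Key"].split("/")[-1]
        if filename:  # skip the "folder" placeholder entry for the prefix itself
            s3.download_file(BUCKET, obj["Key"], filename)
            print(f"Downloaded {filename}")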

10. Support and Maintenance

Our customers get free maintenance of their data scrapers as part of their subscription. 

 

If you need data on a recurring basis - we can schedule it on our platform, and the data will be gathered and shared with you automatically.

11. Customer success Meeting

We will work closely with you to help you succeed. Given our expertise in handling large volumes of data and building highly scalable software - we advise our customers on how to build software that uses large amounts of data.
