The World Wide Web is made up of billions of interlinked documents, widely known as web pages. The source text of web pages is written in Hypertext Markup Language (HTML). HTML source text is a mixture of human-readable information and machine-readable code, the so-called tags. The web browser – e.g. Chrome, Firefox, Safari, or Edge – processes the source text, interprets the tags, and presents the information contained therein to the user.
Special software is used to extract only the information of interest to people from the source text. These programs, known as “web scrapers”, “crawlers”, “spiders”, or simply “bots”, search the source text of websites for given patterns and extract the information they contain. The information obtained through web scraping is summarized, combined, evaluated, or saved for further use.
Below, I explain why the Python language is particularly suitable for creating web scrapers and give an overview of the available tools.
Why Use Python For Web Scraping?
The popular Python programming language works well for creating web scraping software. Websites are constantly being adapted, so web content changes over time: the design is revised, or new page components are added. A web scraper is written for the specific structure of a page; if the structure of the page changes, the scraper must be adapted. This is particularly easy with Python.
Python is also extremely strong in text processing and web resource retrieval; both are technical foundations for web scraping. In addition, Python is an established standard for data analysis and processing. Beyond the general suitability of the language, Python impresses with a flourishing programming ecosystem, which includes libraries, open-source projects, documentation and language references, forum posts, bug reports, and blog articles.
In particular, there are several well-developed tools for web scraping with Python. We present the three well-known tools Scrapy, Selenium, and BeautifulSoup. As a practical exercise, you can follow our web scraping with Python tutorial based on BeautifulSoup, which lets you experience the scraping process first-hand.
An overview of web scraping
The basic scheme of the scraping process is quickly explained. First, the scraper developer analyzes the HTML source text of the page of interest. There are usually clear patterns that can be used to extract the desired information. The scraper is programmed for these patterns; the rest of the work is done automatically (see the sketch after this list):
- Retrieve the website at the given URL
- Automatically extract structured data according to the patterns
- Summarize, save, evaluate, or combine the extracted information
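A minimal sketch of these three steps, using the requests library and BeautifulSoup (introduced above), might look as follows. The URL and the CSS class “title” are hypothetical placeholders, not a real page:

```python
# Minimal scraping sketch: retrieve, extract, save.
# The URL and the "title" CSS class are hypothetical.
import requests
from bs4 import BeautifulSoup

# Step 1: retrieve the website at the given URL.
response = requests.get("https://www.example.com/listings")

# Step 2: extract structured data according to a pattern,
# here all elements carrying the assumed CSS class "title".
soup = BeautifulSoup(response.text, "html.parser")
titles = [tag.get_text(strip=True) for tag in soup.find_all(class_="title")]

# Step 3: save the extracted information for further use.
with open("titles.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(titles))
```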
Use cases for web scraping
Web scraping is very versatile. In addition to search engine indexing, web scraping is used, among other things, for the following purposes:
- Create contact databases
- Monitor and compare prices of online offers
- Merge data from various online sources
- Track online presence and reputation
- Collect financial, weather and other data
- Monitor web content for changes
- Collect data for research purposes
- Perform data mining
A vivid example of web scraping
Imagine a website that offers used cars for sale. If you open the site in the browser, you are shown a list of cars. A web scraper can search this online used-vehicle listing automatically.
Depending on the creator’s intent, the scraper looks for a specific model, say a Volkswagen Beetle. In the source text, the specifications for the car’s make and model are marked with the CSS classes ‘car-make’ and ‘car-model’. Using these class names, the desired information can be scraped easily, as the sketch below shows.
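A hedged sketch of this example with BeautifulSoup might look as follows. The URL is hypothetical; the class names ‘car-make’ and ‘car-model’ are the ones described above, and the sketch assumes they appear pairwise per listing:

```python
# Sketch: find Volkswagen Beetle listings on a hypothetical used-car page.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://www.example.com/used-cars")
soup = BeautifulSoup(response.text, "html.parser")

# Assumption: each listing contributes one "car-make" and one
# "car-model" element, so the two lists line up pairwise.
makes = soup.find_all(class_="car-make")
models = soup.find_all(class_="car-model")

for make, model in zip(makes, models):
    if (make.get_text(strip=True), model.get_text(strip=True)) == ("Volkswagen", "Beetle"):
        print("Found a Volkswagen Beetle listing")
```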
Legal Risks in Web Scraping
As convenient as web scraping is, it also carries legal risks. Since the website operator makes the page available for human use, automated retrieval using a web scraper can violate the terms of use. This is particularly true if requests are made at high volume across several pages simultaneously or in rapid succession; a human would not be able to interact with the site in this way.
Furthermore, the automated retrieval, storage, and evaluation of data published on a website may violate copyright law. If the scraped information is personally identifiable data, storing and evaluating it without the person’s consent violates applicable data protection regulations. For example, it is not allowed to scrape Facebook profiles in order to collect personal data.
Note: Violations of data protection and copyright law carry stiff penalties, so make sure you are not breaking any laws when using web scraping. Existing technical barriers must never be bypassed. You may read this interesting real-world story: Computer Fraud and Abuse Act Against Web Scraping.
Technical Limitations Of Web Scraping
Website operators often have an interest in limiting the automated scraping of their online offers. On the one hand, mass access to the website by scrapers can degrade the site’s performance. On the other hand, there are often internal areas of a website that should not appear in search results.
The robots.txt standard has become established as a means of limiting access by scrapers: the website operator stores a text file called robots.txt in the website’s root directory. Special entries within the file determine which scrapers or bots may access which areas of the website. The entries in robots.txt always apply to an entire domain.
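The following sketch shows what such entries might look like and how a Python scraper can check them with the standard library’s urllib.robotparser. The rules and URLs are hypothetical:

```python
# Check hypothetical robots.txt rules with Python's built-in parser.
from urllib.robotparser import RobotFileParser

sample_robots_txt = """\
User-agent: *
Disallow: /internal/

User-agent: BadBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(sample_robots_txt.splitlines())

# A generic bot may fetch public pages, but not the internal area.
print(parser.can_fetch("MyScraper", "https://www.example.com/cars"))       # True
print(parser.can_fetch("MyScraper", "https://www.example.com/internal/"))  # False

# "BadBot" is banned from the entire domain.
print(parser.can_fetch("BadBot", "https://www.example.com/"))              # False
```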
The robots.txt is a voluntary restriction: bots are expected to adhere to its guidelines, but compliance cannot be technically enforced. To regulate access by web scrapers effectively, website operators therefore also use more aggressive techniques: on the one hand, access by web scrapers can be limited by means of a rate limit; on the other hand, in the event of repeated access contrary to the specifications, the scraper’s IP address can be blocked.
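A scraper can reduce the risk of being rate-limited or blocked by pausing between requests. This is a minimal sketch, assuming hypothetical URLs and a two-second delay chosen purely for illustration:

```python
# Sketch of a "polite" scraper that pauses between requests.
import time
import requests

urls = [
    "https://www.example.com/page-1",  # hypothetical URLs
    "https://www.example.com/page-2",
]

for url in urls:
    response = requests.get(url)
    print(url, response.status_code)
    time.sleep(2)  # wait so the request rate stays human-like
```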
APIs As An Alternative To Web Scraping
While web scraping is useful, it is not the preferred approach to pulling data from websites. There is often a better way: many website operators provide data in a structured, machine-readable format that can be accessed via special programming interfaces, so-called application programming interfaces (APIs).
If an API is available and provides complete data, this is the preferred way of accessing the data. Nevertheless, the following still applies: with scraping, in principle, any text that a website presents in human-readable form can be accessed.
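As a hedged illustration of the API approach, the following sketch queries a hypothetical JSON endpoint with requests; the URL and the response fields are assumptions, not a real service:

```python
# Sketch: fetch structured data from a hypothetical JSON API.
import requests

response = requests.get("https://api.example.com/used-cars", params={"model": "beetle"})
response.raise_for_status()

# APIs typically return machine-readable JSON, so no HTML parsing is needed.
# Assumption: the endpoint returns a list of car objects.
for car in response.json():
    print(car.get("make"), car.get("model"), car.get("price"))
```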