Whether you use Python 2.x or Python 3.x, a solid grasp of crawler-related knowledge makes writing a crawler script much more convenient. In this article we have organized a good number of valuable tutorials and summarize the knowledge related to Python crawlers; the whole of it follows below:

Overview of Python crawler basics


What is a crawler? A crawler, that is, a Web Spider, is a very vivid name: if the Internet is pictured as a spider web, the Spider is a spider crawling across that web. A web spider finds web pages through their link addresses. Starting from one page of a site (usually the home page), it reads the content of that page, finds the other link addresses in it, and then follows those links to find further pages, and the cycle continues until all the pages of the site have been crawled. If the whole Internet is regarded as one site, a web spider can use this principle to capture all the web pages on the Internet. Seen this way, a web crawler is a program that grabs web pages, and grabbing web pages is its basic operation.
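The crawl cycle described above (read a page, extract its links, follow them until every page has been seen) can be sketched with a tiny in-memory "site"; the page names and the `fetch` function here are made-up stand-ins for real HTTP requests:

```python
import re
from collections import deque

# A toy "site": page name -> HTML content (stands in for real web pages)
PAGES = {
    "index": '<a href="a">A</a> <a href="b">B</a>',
    "a": '<a href="b">B</a>',
    "b": '<a href="index">home</a>',
}

def fetch(url):
    """Stand-in for an HTTP GET; returns the page's HTML."""
    return PAGES.get(url, "")

def crawl(start):
    """Breadth-first crawl: read a page, find its links, follow them."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)
        # find the link addresses contained in this page
        for link in re.findall(r'href="([^"]+)"', fetch(url)):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("index"))  # every reachable page is visited exactly once
```

A real crawler replaces `fetch` with an HTTP request and adds politeness (delays, robots.txt), but the loop is the same.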


While browsing the web, users see many beautiful pictures and pieces of text; the essence of the pages users see is HTML code. A crawler crawls this content, and by analyzing and filtering the HTML code we can obtain the picture and text resources we want.

The meaning of URL



URL, in full Uniform Resource Locator, is a concise representation of the location of a resource on the Internet and of the method for accessing it; it is the standard way of addressing resources on the Internet. Every file on the Internet has a unique URL, which contains information that points out the file's location and how the browser should handle it.

The format of a URL consists of three parts:

1. the protocol (or service mode);

2. the host IP address (and sometimes the port number) of the machine that holds the resource;

3. the specific address of the resource on the host, such as the directory and file name.

When a crawler crawls data, it must have a target URL from which to obtain that data. The URL is therefore the basic basis on which a crawler gets data, and an accurate understanding of its meaning is very helpful for learning crawlers.
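The three parts can be seen by taking a URL apart with the standard library's `urllib.parse`; the example URL below is made up for illustration:

```python
from urllib.parse import urlparse

# A hypothetical URL, just for illustration
parts = urlparse("http://www.example.com:8080/docs/index.html")

print(parts.scheme)    # part 1, the protocol (service mode): 'http'
print(parts.hostname)  # part 2, the host: 'www.example.com'
print(parts.port)      # part 2, the port number: 8080
print(parts.path)      # part 3, the resource address on the host: '/docs/index.html'
```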


Python learning environment configuration. Of course, the environment has to be configured first. I started with Notepad++, but found its prompting too weak, so on Windows I used PyCharm, and on Linux I used Eclipse for Python, though there are several other excellent IDEs; you can refer to the article on recommended Python IDEs. A good development tool is a propeller pushing you forward, and I hope everyone can find the IDE that suits them.

Tutorials on building a Python environment that you can refer to:

Simple installation of a Python environment

Graphic tutorial for building a Python development environment on Win10 and Win7 (Python, Pip installation)

Building a Python environment on Linux

Detailed explanation of installing Python 3 on Linux

Python on Linux

virtualenv on Linux

Illustrated Python installation in a Linux environment (setuptools)

Using the Urllib library

Urllib is Python's built-in HTTP request library. It includes the following modules: urllib.request, the request module; urllib.error, the exception handling module; urllib.parse, the URL parsing module; and urllib.robotparser, the robots.txt parsing module. Below are some tutorials on the Urllib library organized for everyone:

Basic usage tutorial of Python's Urllib library

Advanced learning of the urllib library for Python crawlers

Examples of learning to use urllib in Python 3
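As a minimal sketch of the four modules working together (the host name and query below are placeholders, and nothing is sent over the network):

```python
import urllib.request
import urllib.parse
import urllib.error
import urllib.robotparser

# urllib.parse: build a query string and a full URL (placeholder host)
query = urllib.parse.urlencode({"q": "python crawler"})
url = "http://www.example.com/search?" + query

# urllib.request: construct the request; urlopen(req) would send it
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
print(req.get_header("User-agent"))  # header stored on the request

# urllib.error: HTTPError is a subclass of URLError; both are what
# urlopen raises on failure
print(issubclass(urllib.error.HTTPError, urllib.error.URLError))  # True

# urllib.robotparser: parse robots.txt rules fed in as lines
rp = urllib.robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /private/"])
print(rp.can_fetch("*", "http://www.example.com/private/page"))  # False
```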


Exception handling is the third major piece of knowledge in learning Python crawlers; detailed tutorials are below:

Handling URLError exceptions in Python

Errors when a Python crawler's hyperlink URL contains Chinese characters, and the solution
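A small sketch of the exception handling a crawler typically wraps around a request; the host name is deliberately unreachable (`.invalid` is a reserved top-level domain that can never resolve), so the URLError branch is always taken without contacting any real site:

```python
import urllib.request
import urllib.error

def fetch_safely(url):
    """Try to open a URL; return its data or a short error description."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read()
    except urllib.error.HTTPError as e:
        # Server answered, but with an error status (404, 500, ...)
        return "HTTP error: %d" % e.code
    except urllib.error.URLError as e:
        # No answer at all: bad host name, refused connection, no network
        return "URL error: %s" % e.reason

print(fetch_safely("http://no-such-host.invalid/"))
```

HTTPError must be caught before URLError, since it is a subclass of URLError.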

The Cookie module, as the name suggests, is the module for operating on cookies. A cookie (literally, a small cake) is, as anyone who has worked with the Web knows, a piece of information the server uses to keep a session with the client. The HTTP protocol itself is stateless; that is, two requests sent by the same client have no direct relationship from the web server's point of view. Some people then ask: since HTTP is stateless, why can some web pages only be accessed after a user name and password have been verified? That is because, for an authenticated user, the server quietly adds cookie data to what it sends to the client; the cookie generally holds an ID that uniquely identifies the client. In its next request to the server, the client sends this ID back in the form of a cookie; the server extracts the ID from the returned cookie and binds it to the corresponding user, and authentication is thereby achieved. In fact, a cookie is just a string passed back and forth between the server and the client. Below are some learning tutorials on handling cookies in Python organized for everyone:

Explanation of handling cookies in python

Detailed use of cookies in Python programs

Simulated login in python while keeping the cookie
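The cookie round-trip described above can be reproduced with the standard library's `http.cookiejar`: a CookieJar stores what the server sends, and an opener built with HTTPCookieProcessor sends it back automatically on later requests. In this sketch the cookie is constructed by hand instead of coming from a real server, and its name, value, and domain are made up:

```python
import http.cookiejar
import urllib.request

# The jar stores cookies; the opener sends them back on later requests
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar))

# Normally the server's Set-Cookie header fills the jar; here we build
# one by hand (name/value/domain are invented for illustration)
cookie = http.cookiejar.Cookie(
    version=0, name="session_id", value="abc123",
    port=None, port_specified=False,
    domain="www.example.com", domain_specified=True,
    domain_initial_dot=False,
    path="/", path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={})
jar.set_cookie(cookie)

for c in jar:
    print(c.name, "=", c.value)   # session_id = abc123
print(len(jar))                   # 1
```

Requests made through `opener.open(...)` against a matching domain would carry this cookie automatically, which is the basis of keeping a simulated login alive.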

Regular expressions

A regular expression is a logical formula for operating on strings: using certain predefined special characters, and combinations of those characters, it forms a "pattern string" which expresses a filtering logic to be applied to strings.

Regular expressions are a very powerful tool for matching strings. Other programming languages have the same concept of regular expressions, and Python is no exception. With regular expressions it becomes easy to extract the content we want from the page content returned to us. The approximate matching process of a regular expression is:

1. Take out characters from the expression and compare them with characters in the text.

2. If every character matches, the match succeeds; as soon as one character fails to match, the whole match fails.

3. It is a little different if the expression contains quantifiers or boundaries.
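The matching process can be watched in action with Python's `re` module; the pattern below pulls image file names out of a fragment of HTML (the tags and file names are invented for the example, and the quantifier `\w+` and the alternation are cases of step 3):

```python
import re

html = '<img src="pic1.jpg"><p>hello</p><img src="pic2.png">'

# Match src="...", capturing file names ending in .jpg or .png;
# (?:...) groups the alternation without capturing it
pattern = re.compile(r'src="(\w+\.(?:jpg|png))"')

print(pattern.findall(html))                         # ['pic1.jpg', 'pic2.png']
print(bool(pattern.search("<p>no image here</p>")))  # False
```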

Below are tutorials on regular expressions for Python crawlers:

Regular expressions in Python

Python 3 crawlers based on regular expressions

Using regular expressions in Python

Beautiful Soup usage

In simple terms, Beautiful Soup is a Python library whose main function is to grab data from web pages. The official explanation is as follows:

Beautiful Soup provides some simple, Python-style functions for handling navigation, searching, and modification of the parse tree. It is a toolbox that gives users the data they want to grab by parsing documents; because it is simple, a complete application doesn't take much code to write.

Beautiful Soup automatically converts input documents to Unicode and output documents to UTF-8. You don't need to think about encodings, unless the document does not specify one; in that case Beautiful Soup cannot identify the encoding automatically, and you only need to state the original encoding.
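A small sketch of pulling data out of a page with Beautiful Soup, assuming the bs4 package is installed (the HTML fragment and its link targets are made up):

```python
from bs4 import BeautifulSoup  # assumes: pip install beautifulsoup4

# Invented HTML, standing in for a fetched page
html = """
<html><body>
  <h1>Title</h1>
  <a href="page1.html">first</a>
  <a href="page2.html">second</a>
</body></html>
"""

# "html.parser" is the parser from the standard library; lxml or
# html5lib can be swapped in here if they are installed
soup = BeautifulSoup(html, "html.parser")

print(soup.h1.string)                        # Title
print([a["href"] for a in soup.find_all("a")])
```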

Like lxml and html5lib, Beautiful Soup has become an excellent Python parser, providing users with flexible parsing strategies and good speed. Tutorials on using Beautiful Soup:

Super detailed tutorial on using Beautiful Soup in Python

python BeautifulSoup

Python Beautiful Soup

Crawling specified content from a web page with python BeautifulSoup

The above are the five major areas of basic knowledge everyone needs to understand when learning Python crawlers, and for each of the five points we have organized the relevant detailed tutorials. We have also collected related Python crawler video tutorials, which we hope will help you as well:

2017 latest Python 3.6 web crawler full video tutorial series (basics + framework + distributed + practical cases)

This course is well suited for beginners learning Python crawlers systematically. It is very complete, uses Python 3.6 with Anaconda for developing the python programs, and the teacher explains everything in great detail with a well-structured curriculum that proceeds step by step from shallow to deep. It starts with installing the Python crawler environment, then covers using the most basic urllib package, how to parse the content of a request and select the useful data, and explains topics such as Ajax, POST, HTML, and JSON one by one very carefully, before gradually going deeper into skills such as using cookies, solving login authentication, and using an IP proxy pool to avoid being blocked. Finally, through the study of a Python crawler framework and distributed techniques, it builds a highly available crawler system, so the technology system progresses from a small demo to mastery of a complete system. At the same time, the teacher works through multiple cases for hands-on practice.
