Web crawler functions
A Web crawler, sometimes called a spider, is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing (web spidering).
Web search engines and some other sites use web crawling or spidering software to update their web content or indices of other sites' web content. Web crawlers copy pages for processing by a search engine, which indexes the downloaded pages so users can search more efficiently.[2]
In the context of this topic, the terms web crawler, web spider, bot and web robot are synonymous.
How a web crawler functions
Typically, bots perform tasks that are both simple and structurally repetitive, at a much higher rate than would be possible for a human alone. The largest use of bots is in web spidering (web crawler), in which an automated script fetches, analyzes and files information from web servers.
The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.
I am very grateful to Mr. Mischel, whose example I use with permission below. In pseudocode, we might imagine a web crawler working like this[3]:
queue = LoadSeed();
while (queue is not empty)
{
    dequeue url
    request document
    store document for later processing
    parse document for links
    add unseen links to queue
}
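The pseudocode above is deliberately language-neutral. As a rough sketch only (not Mr. Mischel's implementation), the same loop might look like this in Python; the seed list, the ten-second timeout and the regex-based link extraction are simplifying assumptions, and a production crawler would use a proper HTML parser and respect robots.txt:

import re
import urllib.request
from collections import deque
from urllib.parse import urljoin

def crawl(seed_urls, max_pages=50):
    """Breadth-first crawl: dequeue a URL, fetch it, store the document,
    then add any links that have not been seen before to the queue."""
    queue = deque(seed_urls)                  # queue = LoadSeed()
    seen = set(seed_urls)
    stored = {}                               # url -> raw HTML, kept for later processing
    while queue and len(stored) < max_pages:  # while (queue is not empty)
        url = queue.popleft()                 # dequeue url
        try:
            with urllib.request.urlopen(url, timeout=10) as response:   # request document
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue                          # skip documents that fail to download
        stored[url] = html                    # store document for later processing
        for href in re.findall(r'href="([^"#]+)"', html):               # parse document for links
            link = urljoin(url, href)
            if link.startswith("http") and link not in seen:            # add unseen links to queue
                seen.add(link)
                queue.append(link)
    return stored

# pages = crawl(["https://example.com/"])     # hypothetical seed URL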
Here is another, more detailed example in pseudocode[4]:
import List, Queue, Hash from lang.data
import fetch, normalize_url from http.utils
import write_file from lang.io

function crawl(start_url) {
    crawled = new List
    queue = new Queue
    visited = new Hash

    start_url = normalize_url(start_url)
    queue.push(start_url)

    while (not queue.empty?) {
        url = queue.pop()
        page = fetch(url)
        visited[url] = true

        for asset in page.assets {
            data = fetch(asset)
            write_file(data)
        }

        for link in page.links {
            link = normalize_url(link)
            queue.push(link) if not visited[link]
        }

        crawled.append({ url, page.assets })
    }

    return crawled
}
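Both examples lean on a normalize_url helper that neither source defines. The rules a crawler applies here are a design choice; one plausible, minimal version in Python (assuming normalization means lowercasing the scheme and host, dropping the fragment and stripping a default port) could be:

from urllib.parse import urlsplit, urlunsplit

def normalize_url(url):
    """Collapse trivially different spellings of the same address so the
    visited table treats them as one page."""
    parts = urlsplit(url.strip())
    host = parts.hostname or ""                      # the .hostname attribute is already lowercase
    if parts.port and parts.port not in (80, 443):   # keep only non-default ports
        host = f"{host}:{parts.port}"
    path = parts.path or "/"
    return urlunsplit((parts.scheme.lower(), host, path, parts.query, ""))  # final "" drops the fragment

print(normalize_url("HTTP://Example.com:80/docs#intro"))   # http://example.com/docs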
Video
A video to help you understand this can be found at this link: https://www.youtube.com/watch?v=CDXOcvUNBaA
Web crawler and meta-data
Metadata web indexing involves assigning keywords or phrases to web pages or web sites within a metadata tag (or "meta-tag") field, so that the web page or web site can be retrieved with a search engine that is customized to search the keywords field. This may or may not involve using keywords restricted to a controlled vocabulary list. This method is commonly used in search engine indexing.[5]
A list of metadata tags can be seen below[6]:
<meta charset='UTF-8'>
<meta name='keywords' content='your, tags'>
<meta name='description' content='150 words'>
<meta name='subject' content="your website's subject">
<meta name='copyright' content='company name'>
<meta name='language' content='ES'>
<meta name='robots' content='index,follow'>
<meta name='revised' content='Sunday, July 18th, 2010, 5:15 pm'>
<meta name='abstract' content=''>
<meta name='topic' content=''>
<meta name='summary' content=''>
<meta name='Classification' content='Business'>
<meta name='author' content='name, email@hotmail.com'>
<meta name='designer' content=''>
<meta name='reply-to' content='email@hotmail.com'>
<meta name='owner' content=''>
<meta name='url' content='http://www.websiteaddress.com'>
<meta name='identifier-URL' content='http://www.websiteaddress.com'>
<meta name='directory' content='submission'>
<meta name='pagename' content="jQuery Tools, Tutorials and Resources - O'Reilly Media">
<meta name='category' content=''>
<meta name='coverage' content='Worldwide'>
<meta name='distribution' content='Global'>
<meta name='rating' content='General'>
<meta name='revisit-after' content='7 days'>
<meta name='subtitle' content='This is my subtitle'>
<meta name='target' content='all'>
<meta name='HandheldFriendly' content='True'>
<meta name='MobileOptimized' content='320'>
<meta name='date' content='Sep. 27, 2010'>
<meta name='search_date' content='2010-09-27'>
<meta name='DC.title' content='Unstoppable Robot Ninja'>
<meta name='ResourceLoaderDynamicStyles' content=''>
<meta name='medium' content='blog'>
<meta name='syndication-source' content='https://mashable.com/2008/12/24/free-brand-monitoring-tools/'>
<meta name='original-source' content='https://mashable.com/2008/12/24/free-brand-monitoring-tools/'>
<meta name='verify-v1' content='dV1r/ZJJdDEI++fKJ6iDEl6o+TMNtSu0kv18ONeqM0I='>
<meta name='y_key' content='1e39c508e0d87750'>
<meta name='pageKey' content='guest-home'>
<meta itemprop='name' content='jQTouch'>
<meta http-equiv='Expires' content='0'>
<meta http-equiv='Pragma' content='no-cache'>
<meta http-equiv='Cache-Control' content='no-cache'>
<meta http-equiv='imagetoolbar' content='no'>
<meta http-equiv='x-dns-prefetch-control' content='off'>
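To make the link between these tags and a crawler concrete, here is a small Python sketch (using only the standard library's html.parser) of how an indexer might pull name/content pairs out of a downloaded page; the tag names follow the list above, and real search engines honour only a subset of them:

from html.parser import HTMLParser

class MetaTagReader(HTMLParser):
    """Collects <meta name='...' content='...'> pairs from a page so an
    indexer can inspect fields such as keywords, description and robots."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name = attrs.get("name") or attrs.get("http-equiv")
            if name:
                self.meta[name.lower()] = attrs.get("content", "")

reader = MetaTagReader()
reader.feed("<meta name='keywords' content='crawler, spider, indexing'>"
            "<meta name='robots' content='index,follow'>")
print(reader.meta["keywords"])   # crawler, spider, indexing
print(reader.meta["robots"])     # the crawler checks this field before indexing or following links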
Parallel web crawling
A parallel crawler is a crawler that runs multiple processes in parallel. The goal is to maximize the download rate while minimizing the overhead from parallelization and to avoid repeated downloads of the same page. To avoid downloading the same page more than once, the crawling system requires a policy for assigning the new URLs discovered during the crawling process, as the same URL can be found by two different crawling processes.[7]
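One common way to implement that assignment policy is to hash each URL's host name so that every page from a given site is always owned by the same crawl process. The sketch below only illustrates the idea (the choice of four processes and of MD5 as the hash are arbitrary assumptions); a real parallel crawler would also need work queues, synchronization and a termination condition:

import hashlib
from urllib.parse import urlsplit

NUM_CRAWLERS = 4          # assumed number of parallel crawl processes

def assign_crawler(url):
    """URL assignment policy: hash the host so every URL discovered for a
    given site is handed to the same process and no page is fetched twice
    by two different processes."""
    host = urlsplit(url).netloc.lower()
    digest = hashlib.md5(host.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_CRAWLERS

# A process that discovers a link routes it to the link's owner instead of
# fetching it directly:
for link in ["http://example.org/a", "http://example.org/b", "http://example.net/"]:
    print(link, "-> crawler", assign_crawler(link))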
Standards
These standards are taken from the IB Computer Science Subject Guide[8]:
- Describe how a web crawler functions.
- Discuss the relationship between data in a meta-tag and how it is accessed by a web crawler.
- Discuss the use of parallel web crawling.
References
1. http://www.flaticon.com/
2. https://en.wikipedia.org/wiki/Web_crawler
3. http://blog.mischel.com/2011/12/16/writing-a-web-crawler-crawling-models/
4. https://gist.github.com/demetriusnunes/d2cef3cd249167ac94400fc591d31f03
5. https://en.wikipedia.org/wiki/Web_indexing
6. https://gist.github.com/kevinSuttle/1997924
7. https://en.wikipedia.org/wiki/Web_crawler#Parallelization_policy
8. IB Diploma Programme Computer science guide (first examinations 2014). Cardiff, Wales, United Kingdom: International Baccalaureate Organization. January 2012.