Difficulty: Easy
Correct Answer: A program that catalogs websites
Explanation:
Introduction / Context:
The term “spider” (also called a web crawler or robot) is fundamental to how search engines discover, read, and organize pages. Understanding what a spider does explains why pages appear in search results and how changes to websites are detected over time.
Given Data / Assumptions:
The question asks which description best fits a “spider” in web terminology. The answer choices are: a computer virus, a hacker, an application for viewing websites, a firewall, and a program that catalogs websites.
Concept / Approach:
A spider systematically fetches webpages, follows hyperlinks, and sends the retrieved content to an indexer. The indexer extracts words, metadata, links, and signals such as titles and headings to build a searchable catalog. Well-behaved spiders obey the rules in robots.txt and meta directives that limit what they may crawl.
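To make the fetch-follow-catalog loop concrete, below is a minimal, illustrative crawler sketch in Python using only the standard library. The seed URL, the LinkAndTitleParser class, and the crawl function are hypothetical names chosen for this example; real search-engine spiders add politeness delays, robots.txt checks, large-scale deduplication, and distributed work queues.

```python
# Minimal illustrative crawler sketch: fetch a page, extract links, and record
# page titles in a small in-memory "catalog".
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkAndTitleParser(HTMLParser):
    """Collects href values and the <title> text from one HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


def crawl(seed_url, max_pages=5):
    """Breadth-first crawl starting from seed_url; returns {url: title}."""
    catalog = {}                      # the searchable "index" being built
    queue = deque([seed_url])         # frontier of URLs still to visit
    while queue and len(catalog) < max_pages:
        url = queue.popleft()
        if url in catalog:
            continue                  # skip pages already fetched
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue                  # unreachable pages are simply skipped
        parser = LinkAndTitleParser()
        parser.feed(html)
        catalog[url] = parser.title.strip()
        # Follow hyperlinks: resolve relative URLs and enqueue them.
        for link in parser.links:
            queue.append(urljoin(url, link))
    return catalog


if __name__ == "__main__":
    # Hypothetical seed URL, for illustration only.
    for page, title in crawl("https://example.com").items():
        print(page, "->", title)
```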
Step-by-Step Solution:
1. Identify the domain: search/indexing technology.
2. Recall the definition: a spider is an automated crawler that discovers and downloads webpages.
3. Match it to the option describing a “program that catalogs websites.”
Verification / Alternative check:
Search engines publicly document their crawlers (for example, user-agent strings) and how robots.txt controls their behavior. This confirms the spider’s role as a discovery and cataloging tool.
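As a small sketch of that control mechanism, the snippet below uses Python's standard urllib.robotparser to ask whether a given user agent may fetch a URL. The domain example.com, the page path, and the user-agent string "ExampleBot" are placeholders for illustration only.

```python
# Sketch: honoring robots.txt with Python's standard urllib.robotparser.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# can_fetch() reports whether the named user agent may crawl the URL.
allowed = rp.can_fetch("ExampleBot", "https://example.com/private/page.html")
print("Allowed to crawl:", allowed)
```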
Why Other Options Are Wrong:
A computer virus: malware that infects systems, not the same as a crawler.
A hacker: a human attacker, unrelated to automated indexing.
An application for viewing websites: that is a web browser.
A firewall: a network security control, not a discovery bot.
Common Pitfalls:
Confusing “crawler” with “browser.” Browsers render pages for humans; spiders fetch pages for indexing. Also, spiders do not judge truth or quality on their own; ranking happens later in the search pipeline.
Final Answer:
A program that catalogs websites