[Screenshot: Optional Arguments]

About the Site Navigability Analyzer

How navigable is your website? What would a spider see? The Site Navigability Analyzer uses the Datayze SpiderBot to crawl your site and analyze its navigability. The Datayze SpiderBot calculates the shortest path from your splash page to any internal page, measures the overall connectivity of your site, and identifies highly connected hubs and destination leaf nodes. Let the spider find your linking errors for you. The Datayze SpiderBot is fully customizable, crawling only the pages you ask it to, and letting you define 'hub' and 'leaf' thresholds.
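To make those graph terms concrete, here is a minimal sketch (not the analyzer's actual code) of how shortest paths and hub/leaf classification can be computed once a site's internal links have been collected. The pages and threshold values below are made up for illustration:

```python
from collections import deque

# Hypothetical link graph gathered by a crawl: page -> internal pages it links to.
links = {
    "/": {"/about", "/blog"},
    "/about": {"/"},
    "/blog": {"/", "/blog/post-1", "/blog/post-2"},
    "/blog/post-1": {"/blog"},
    "/blog/post-2": set(),          # no outgoing links: a leaf candidate
}

def shortest_paths(graph, start="/"):
    """Breadth-first search: number of clicks from the splash page to each page."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in graph.get(page, ()):
            if target not in depth:
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

def classify(graph, hub_threshold=3, leaf_threshold=0):
    """Illustrative thresholds: a 'hub' links out to many pages, a 'leaf' to few or none."""
    hubs = {p for p, out in graph.items() if len(out) >= hub_threshold}
    leaves = {p for p, out in graph.items() if len(out) <= leaf_threshold}
    return hubs, leaves

depth = shortest_paths(links)
unreachable = set(links) - set(depth)   # pages a visitor can never click through to
hubs, leaves = classify(links)
print(depth, unreachable, hubs, leaves)
```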

The Site Navigability Analyzer can also be used to improve your site's search engine optimization (SEO) ranking. View a list of referrers and their anchor text. Is your linking schema internally consistent? Do your branded links always use the same capitalization and punctuation? Do you provide alternative descriptive links to help lost website visitors find their way?
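That consistency check boils down to grouping every internal link's anchor text by its target page and flagging variants. A rough sketch, using made-up link data rather than the analyzer's real output:

```python
from collections import defaultdict

# Hypothetical (target URL, anchor text) pairs gathered while crawling.
internal_links = [
    ("/pricing", "Pricing"),
    ("/pricing", "pricing"),        # same page, different capitalization
    ("/pricing", "our plans"),      # descriptive alternative
    ("/about", "About Us"),
]

anchors = defaultdict(set)
for target, text in internal_links:
    anchors[target].add(text)

for target, texts in anchors.items():
    # Flag pages whose links differ only in capitalization or trailing punctuation.
    normalized = {t.strip().lower().rstrip(".!") for t in texts}
    if len(normalized) < len(texts):
        print(f"{target}: inconsistent anchor text {sorted(texts)}")
```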

About the Spider
In order for this tool to work, we must crawl the site or page you want analyzed. We do this with DatayzeBot, the Datayze spider.

Our spider crawls at a leisurely rate of 1 page every 1.5 seconds. While the spider doesn't keep track of the contents of the pages it crawls, it does keep track of the number of requests issued by each visitor. Currently the crawler is limited to 1000 pages per user per day. Since DatayzeBot does not index or cache any pages it crawls, rerunning the Site Navigability Analyzer will count against your daily allowance of page crawls. If you need to crawl more pages than the daily cap allows, pause the crawler and resume it another day.
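In code terms, the limits above amount to a polite crawl loop with a fixed delay and a daily quota. The sketch below mirrors those numbers but is only an illustration, not DatayzeBot's actual implementation; fetch() stands in for the real HTTP request:

```python
import time

CRAWL_DELAY_SECONDS = 1.5      # one request every 1.5 seconds
DAILY_PAGE_CAP = 1000          # per user, per day

def crawl(frontier, fetch, pages_used_today=0):
    """Illustrative polite crawl loop: stops at the daily cap so it can be resumed later."""
    visited = set()
    while frontier and pages_used_today < DAILY_PAGE_CAP:
        url = frontier.pop()
        if url in visited:
            continue
        visited.add(url)
        fetch(url)
        pages_used_today += 1
        time.sleep(CRAWL_DELAY_SECONDS)
    # Whatever is left in the frontier can be resumed another day.
    return frontier, pages_used_today
```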

DatayzeBot now respects the robots exclusion standard. To specifically allow (or disallow) the crawler to access a page or directory, create a new set of rules for "DatayzeBot" in your robots.txt file. DatayzeBot will follow the longest matching rule for a specified page, rather than the first matching rule. If no matching rule is found, DatayzeBot assumes it is allowed to crawl the page. Not sure if a page is excluded by your robots.txt file? The Index/No Index app will parse HTTP headers, meta tags and robots.txt and summarize the results for you.
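In robots.txt, such a section starts with a "User-agent: DatayzeBot" line followed by Allow and Disallow lines. The sketch below illustrates the longest-match behavior described above with made-up paths; it is not DatayzeBot's actual parser:

```python
# Hypothetical rules from a "User-agent: DatayzeBot" section of robots.txt.
rules = [
    ("Disallow", "/private/"),
    ("Allow",    "/private/press-kit/"),   # longer rule, so it wins over the Disallow above
]

def allowed(path, rules):
    """Longest matching rule wins; if nothing matches, the page may be crawled."""
    matches = [(len(prefix), verb) for verb, prefix in rules if path.startswith(prefix)]
    if not matches:
        return True
    _, verb = max(matches, key=lambda m: m[0])
    return verb == "Allow"

print(allowed("/private/press-kit/logo.html", rules))  # True:  the Allow rule is longer
print(allowed("/private/reports.html", rules))         # False: only the Disallow matches
```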

Interested in Web Development? Try our other tools, like the Site Validator, which can summarize the types of HTML errors on your site, as well as provide a page by page breakdown. It can analyze your anchor text diversity and find the length of the shortest path to any page. The Thin Content Checker can analyze your site's content, let you know the percentage of unique phrases per page, and generate a histogram of page content lengths. A common need among web developers is to know which of their pages are being indexed, and which are not. That's why we created the Sitemap Index Analyzer.