eWPTXv3 - Notes

2.2 Passive Crawling & Spidering


Crawling and Spidering are techniques used to automatically browse and index web content, typically performed by search engines or web crawlers. Here's a breakdown of each:

Crawling

  • Crawling is the process of systematically browsing the web to discover and index web pages.

  • A crawler, also known as a web spider or web robot, starts with a list of seed URLs and then follows hyperlinks from one page to another, recursively.

  • Crawlers retrieve web pages, extract links from them, and add new URLs to their queue for further crawling.

  • They often adhere to a set of rules specified in a file called robots.txt to determine which parts of a website they are allowed to crawl and index.

  • We can utilize Burp Suite's passive crawler to map out the web app and understand its structure: browse the website through the proxy and retrieve the resulting site map. A minimal crawler sketch follows below.
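
To make the crawl loop described above concrete, here is a minimal breadth-first crawler sketch in Python. It uses only requests plus the standard library, honors robots.txt as real crawlers do, and stays on the seed host. The target URL is a placeholder; on a real engagement, respect scope and rate limits.

```python
import urllib.robotparser
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

import requests

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=50):
    host = urlparse(seed).netloc
    # Adhere to the rules in robots.txt, as described above
    robots = urllib.robotparser.RobotFileParser(urljoin(seed, "/robots.txt"))
    robots.read()
    queue, seen = deque([seed]), {seed}
    while queue and len(seen) <= max_pages:
        url = queue.popleft()          # FIFO queue -> breadth-first traversal
        if not robots.can_fetch("*", url):
            continue
        resp = requests.get(url, timeout=5)
        print(resp.status_code, url)
        # Extract links and add new in-scope URLs to the crawl queue
        parser = LinkExtractor()
        parser.feed(resp.text)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

if __name__ == "__main__":
    crawl("http://target.example")     # placeholder target
```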

Spidering

  • Spidering is a term often used interchangeably with crawling, referring to the automated process of fetching web pages and following links.

  • It derives from the analogy of a spider weaving a web by moving from one location to another and creating connections.

  • Spidering involves systematically traversing the web, exploring web pages and their links to index content for search engines or other purposes.

  • Spidering algorithms may vary based on the specific objectives, such as optimizing for depth-first or breadth-first traversal, or prioritizing certain types of content.

  • We can utilize Burp Suite Pro or OWASP ZAP's Spider to automate spidering a web application, mapping out the app and understanding its structure.
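
The depth-first vs. breadth-first choice mentioned above comes down to which end of the URL queue a crawler like the sketch above consumes next. A hypothetical helper illustrating the toggle:

```python
from collections import deque

def next_url(queue: deque, depth_first: bool = False) -> str:
    # popleft() consumes the oldest entry (FIFO): breadth-first traversal.
    # pop() consumes the newest entry (LIFO): depth-first traversal.
    return queue.pop() if depth_first else queue.popleft()
```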

Spidering with OWASP ZAP

  • Enable FoxyProxy in the browser (Firefox) so traffic is routed through ZAP

  • Refresh the webpage so the site appears in ZAP's Sites tree

  • Click Tools -> Spider

  • Select the target URL and enable the Recurse option

  • Start Scan

This yields additional URIs, which can be saved in .csv format for easier export and manipulation.
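The same workflow can be scripted with ZAP's API using the official Python client (python-owasp-zap-v2.4 on PyPI). A sketch, assuming ZAP is listening on 127.0.0.1:8080; the target URL and API key are placeholders:

```python
import csv
import time

from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

TARGET = "http://target.example"   # placeholder target
API_KEY = "changeme"               # placeholder: set in ZAP under Tools -> Options -> API

zap = ZAPv2(apikey=API_KEY,
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

# Start a recursive spider scan and poll until it reaches 100%
scan_id = zap.spider.scan(TARGET, recurse=True)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Export the discovered URIs to .csv for further manipulation
results = zap.spider.results(scan_id)
with open("spider_results.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["uri"])
    for uri in results:
        writer.writerow([uri])
print(f"Saved {len(results)} URIs to spider_results.csv")
```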
