Importing URLs and HTTP Requests Prior to a Scan

You can import a list of URLs and HTTP requests prior to launching a scan, either manually or from a proxy log, using the Imported Links node shown in the screenshot below.

Screenshot 7 – Import URLs Manually or from Proxy Logs

You can import a list of URLs from the following file formats:

  • Fiddler session archives (*.saz)

  • Burp Saved Items (*.xml)

  • Paros log files (*.txt)

  • Web Services Definition Files (*.wsdl)

  • HTTP archive files (*.har)

  • Swagger Files (*.json)

  • REST WADL (Web Application Description Language) Files (*.wadl)

  • ASP.NET Web Forms Project Files (*.csproj,*.vbproj)

  • Postman Files (*.json)

You should specify or import a list of links when there are pages or areas of the website that cannot be crawled automatically, for example because they are not linked from anywhere else.

Netsparker always follows the scan scope configuration, including URL inclusion and exclusion rules. Therefore, if your imported file contains links that fall outside the configured scan scope, only the in-scope links will be imported.
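The scope behaviour described above can be sketched as a simple host check: imported links are kept only when they fall inside the scan target. This is a hypothetical illustration of the rule, not Netsparker's actual implementation, and the URLs below are placeholders.

```python
from urllib.parse import urlparse

def filter_in_scope(target_url, imported_urls):
    """Keep only the imported links whose host matches the scan target.

    A simplified stand-in for a scan-scope check; real scope
    configurations can also include path and pattern rules.
    """
    target_host = urlparse(target_url).hostname
    return [u for u in imported_urls if urlparse(u).hostname == target_host]

imported = [
    "http://example.com/hidden/page",      # in scope
    "http://other-site.test/out-of-scope", # dropped by the scope check
]
kept = filter_in_scope("http://example.com/", imported)
```

With this rule, only the `example.com` link survives the import; the out-of-scope link is silently discarded.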
