An introduction to skipfish, Google's open-source web security scanner
1. Overview
Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output of a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for a professional web application security assessment.
http://zone.wooyun.org/content/2628
2. Project
https://github.com/spinkham/skipfish
http://code.google.com/p/skipfish/
3. Installation and deployment
a. Install the required development libraries:
yum install pcre-devel openssl-devel libidn-devel libidn2-devel
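On Debian or Ubuntu hosts the same prerequisites can be pulled in with apt; the package names below are an assumption based on the usual Debian naming and may differ between releases:
# assumed Debian/Ubuntu equivalents of the yum packages above
apt-get install build-essential libpcre3-dev libssl-dev libidn11-dev zlib1g-dev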
b. Download the skipfish source code
# Get the source from https://github.com/spinkham/skipfish (or http://code.google.com/p/skipfish/)
# Extract the archive to obtain the source tree
c. Build
make
The build produces the skipfish executable in the source directory.
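As a quick sanity check, the freshly built binary can print its usage summary (the same option listing reproduced in section 5 below):
# confirm the binary runs by printing its help/usage output
./skipfish -h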
d. Installation and deployment complete
4. Usage
# The build leaves the skipfish executable in the source directory
# Copy one of the bundled dictionaries to use as the scan wordlist
cp dictionaries/complete.wl skipfish.wl
# Run a scan; data is the output directory, and -W points at the wordlist copied above
./skipfish -o data -W skipfish.wl http://mall.midea.com/detail/index
# When the scan finishes, open data/index.html to view the report
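A slightly fuller invocation can combine the wordlist with a few of the options documented in section 5; the target URL and cookie value below are placeholders for illustration only:
# sketch: scan with Firefox-like headers and a custom session cookie (placeholder values)
./skipfish -o data -W skipfish.wl -b f -C "SESSION=placeholder" http://example.com/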
5. Command-line options
skipfish web application scanner - version 2.10b
Usage: /home/admin/workspace/skipfish/skipfish [ options ... ] -W wordlist -o output_dir start_url [ start_url2 ... ]
Authentication and access options:
-A user:pass - use specified HTTP authentication credentials
-F host=IP - pretend that 'host' resolves to 'IP'
-C name=val - append a custom cookie to all requests
-H name=val - append a custom HTTP header to all requests
-b (i|f|p) - use headers consistent with MSIE / Firefox / iPhone
-N - do not accept any new cookies
--auth-form url - form authentication URL
--auth-user user - form authentication user
--auth-pass pass - form authentication password
--auth-verify-url - URL for in-session detection
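For a site behind a login form, the form-authentication switches above can be combined roughly as follows; the URLs and credentials are placeholders, and skipfish may need extra hints if it cannot locate the form fields automatically:
# sketch: form-based authenticated scan (all values are placeholders)
./skipfish -o data -W skipfish.wl \
  --auth-form http://example.com/login \
  --auth-user testuser --auth-pass testpass \
  --auth-verify-url http://example.com/account \
  http://example.com/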
Crawl scope options:
-d max_depth - maximum crawl tree depth (16)
-c max_child - maximum children to index per node (512)
-x max_desc - maximum descendants to index per branch (8192)
-r r_limit - max total number of requests to send (100000000)
-p crawl% - node and link crawl probability (100%)
-q hex - repeat probabilistic scan with given seed
-I string - only follow URLs matching 'string'
-X string - exclude URLs matching 'string'
-K string - do not fuzz parameters named 'string'
-D domain - crawl cross-site links to another domain
-B domain - trust, but do not crawl, another domain
-Z - do not descend into 5xx locations
-O - do not submit any forms
-P - do not parse HTML, etc, to find new links
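A scope-restricted crawl might look like the sketch below (placeholder URLs): keep the tree shallow, skip the logout link so the session is not terminated, and trust a static-asset domain without crawling it:
# sketch: shallow crawl, exclude logout, trust but do not crawl a CDN domain
./skipfish -o data -W skipfish.wl -d 5 -X /logout -B static.example.com http://example.com/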
Reporting options:
-o dir - write output to specified directory (required)
-M - log warnings about mixed content / non-SSL passwords
-E - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
-U - log all external URLs and e-mails seen
-Q - completely suppress duplicate nodes in reports
-u - be quiet, disable realtime progress stats
-v - enable runtime logging (to stderr)
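For a more detailed report the logging switches can simply be stacked; a sketch with a placeholder target, sending the runtime log to a file:
# sketch: log mixed-content, caching and external-URL findings, with runtime logging to stderr
./skipfish -o data -W skipfish.wl -M -E -U -v http://example.com/ 2> skipfish.log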
Dictionary management options:
-W wordlist - use a specified read-write wordlist (required)
-S wordlist - load a supplemental read-only wordlist
-L - do not auto-learn new keywords for the site
-Y - do not fuzz extensions in directory brute-force
-R age - purge words hit more than 'age' scans ago
-T name=val - add new form auto-fill rule
-G max_guess - maximum number of keyword guesses to keep (256)
-z sigfile - load signatures from this file
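A common pattern implied by these options is to load the shipped dictionary read-only with -S and let skipfish write newly learned keywords into a separate list given with -W; the file name learned.wl below is arbitrary:
# sketch: read-only base dictionary plus a writable list for learned keywords
touch learned.wl
./skipfish -o data -S dictionaries/complete.wl -W learned.wl http://example.com/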
Performance settings:
-g max_conn - max simultaneous TCP connections, global (40)
-m host_conn - max simultaneous connections, per target IP (10)
-f max_fail - max number of consecutive HTTP errors (100)
-t req_tmout - total request response timeout (20 s)
-w rw_tmout - individual network I/O timeout (10 s)
-i idle_tmout - timeout on idle HTTP connections (10 s)
-s s_limit - response size limit (400000 B)
-e - do not keep binary responses for reporting
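Against a fragile target the defaults shown in parentheses can be dialed down; the values below are illustrative only:
# sketch: gentler scan - fewer connections, longer timeouts, smaller response size cap
./skipfish -o data -W skipfish.wl -g 10 -m 2 -t 30 -i 15 -s 200000 http://example.com/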
Other settings:
-l max_req - max requests per second (0.000000)
-k duration - stop scanning after the given duration h:m:s
--config file - load the specified configuration file
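The rate and duration limits are convenient for scheduled scans; for example (placeholder values), capping the scan at 50 requests per second and stopping it after two hours:
# sketch: cap the request rate and stop automatically after 2 hours
./skipfish -o data -W skipfish.wl -l 50 -k 2:00:00 http://example.com/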