PyHacker: A Guide to Writing a Website Back-End Directory Scanner

Covers how to handle fake 200 pages, smart 404 detection, and more.

If you like writing scripts in Python, feel free to follow along and build it yourself.

Environment: Python 2.x

0x01: Modules

The modules we need are as follows:

import requests
import urllib3

0x02: Basic Request Code

First, write out the basic request code:

import requests

def dir(url):  # note: this name shadows Python's built-in dir()
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
    req = requests.get(url=url, headers=headers)
    print req.status_code

dir('http://www.hackxc.cc')

0x03: Settings

Set a request timeout, and ignore untrusted certificates:

import urllib3
urllib3.disable_warnings()  # suppress the InsecureRequestWarning triggered by verify=False
req = requests.get(url=url, headers=headers, timeout=3, verify=False)

Next, add exception handling so a single failed request doesn't kill the scan.
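The original shows this step only as a screenshot; here is a minimal sketch of what the wrapped function might look like, assuming the same headers dict as above and catching requests' RequestException:

def dir(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
    try:
        req = requests.get(url=url, headers=headers, timeout=3, verify=False)
        print req.status_code
    except requests.exceptions.RequestException:
        pass  # swallow timeouts/connection errors and keep scanning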

Give it a quick test run.

Then improve it further: only print the URL when the status code is 200:

if req.status_code == 200:
    print "[*]", req.url

0x04: Handling Fake 200 Pages

You will inevitably run into fake 200 pages, so let's handle those too.

The approach:

First request /hackxchackxchackxc.php and /xxxxxxxxxxxx (two paths that almost certainly don't exist) and record the content length of each page returned. Then, during the actual scan, any response whose length equals either of those baselines is judged to be a 404.

def dirsearch(u, dir):
    try:
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
        # Baselines for fake 200 pages: request two paths that should not exist
        hackxchackxchackxc = '/hackxchackxchackxc.php'
        hackxchackxchackxc_404 = requests.get(url=u+hackxchackxchackxc, headers=headers, timeout=3, verify=False)
        # print len(hackxchackxchackxc_404.content)
        xxxxxxxxxxxx = '/xxxxxxxxxxxx'
        xxxxxxxxxxxx_404 = requests.get(url=u+xxxxxxxxxxxx, headers=headers, timeout=3, verify=False)
        # print len(xxxxxxxxxxxx_404.content)
        # Normal scan
        req = requests.get(url=u+dir, headers=headers, timeout=3, verify=False)
        # print len(req.content)
        if req.status_code == 200:
            # A real hit must differ in length from both fake-404 baselines
            if len(req.content) != len(hackxchackxchackxc_404.content) and len(req.content) != len(xxxxxxxxxxxx_404.content):
                print "[+]", req.url
            else:
                print u+dir, 404
    except requests.exceptions.RequestException:
        pass  # ignore timeouts and connection errors

Very nice.
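A quick hypothetical call to sanity-check the function (the /admin/ path is just an illustrative example, not from the original):

dirsearch('http://www.hackxc.cc', '/admin/')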

0x05: Saving Results

Next, have the results saved to a file automatically.
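Each hit is appended to success_dir.txt as soon as it is found, as the full code below does:

with open('success_dir.txt', 'a+') as f:
    f.write(req.url + "\n")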

0x06: Complete Code

#!/usr/bin/python
#-*- coding:utf-8 -*-
import requests
import urllib3
urllib3.disable_warnings()  # suppress InsecureRequestWarning from verify=False

def dirsearch(u, dir):
    try:
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
        # Baselines for fake 200 pages: two paths that should not exist
        hackxchackxchackxc = '/hackxchackxchackxc.php'
        hackxchackxchackxc_404 = requests.get(url=u+hackxchackxchackxc, headers=headers, timeout=3, verify=False)
        xxxxxxxxxxxx = '/xxxxxxxxxxxx'
        xxxxxxxxxxxx_404 = requests.get(url=u+xxxxxxxxxxxx, headers=headers, timeout=3, verify=False)
        # Normal scan
        req = requests.get(url=u+dir, headers=headers, timeout=3, verify=False)
        if req.status_code == 200:
            # A real hit must differ in length from both fake-404 baselines
            if len(req.content) != len(hackxchackxchackxc_404.content) and len(req.content) != len(xxxxxxxxxxxx_404.content):
                print "[+]", req.url
                with open('success_dir.txt', 'a+') as f:  # save hits automatically
                    f.write(req.url + "\n")
            else:
                print u+dir, 404
        else:
            print u + dir, 404
    except requests.exceptions.RequestException:
        pass  # ignore timeouts and connection errors

if __name__ == '__main__':
    url = raw_input('\nurl:')
    print ""
    if 'http' not in url:
        url = 'http://' + url
    # rar.txt is the wordlist: one path per line
    with open('rar.txt', 'r') as dirpath:
        for dir in dirpath.readlines():
            dir = dir.strip()
            dirsearch(url, dir)
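To try it out, place a wordlist named rar.txt next to the script, one path per line. The entries below are purely illustrative examples:

/admin/
/login.php
/backup.zip
/phpinfo.php

Then run the script and enter the target URL at the prompt; hits are printed with a [+] prefix and appended to success_dir.txt.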

That concludes this detailed guide to writing a website back-end directory scanner with PyHacker. For more on PyHacker back-end scanners, check out WalkonNet's other related articles!
