Principle
Use mitmproxy to intercept the quiz questions in transit, then simulate a request to an answer-search API to look up the answers. The question-search API comes from the Universal question bank.
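The intercepted quiz response is a JSON body whose shape the script relies on. A minimal sketch of that parsing step is below; the field names (`code`, `data`, `question_content`, `option_list`) come from the implementation, while the question text and options here are made-up placeholders:

```python
import json

# Shape of the intercepted quiz response (field names from the script;
# the question text and options are made-up placeholders).
raw_body = json.dumps({
    "code": 0,
    "data": {
        "question_content": "Which option is correct?",
        "option_list": ["Option A", "Option B", "Option C", "Option D"],
    }
})

response_data = json.loads(raw_body)
if response_data["code"] == 0:
    question = response_data["data"]["question_content"]
    options = response_data["data"]["option_list"]
    print(question)  # -> Which option is correct?
    print(len(options))  # -> 4
```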
Implementation Code
import sys
import json

import requests
from mitmproxy import http
from mitmproxy.tools.main import mitmdump

# Global variable storing the latest question and options
global_question_data = {}


class Addon:
    # Response interception handler
    def response(self, flow: http.HTTPFlow) -> None:
        # Watch https://theory.8531.cn/api/question/begin
        # and https://theory.8531.cn/api/question/next?type=0
        if ("theory.8531.cn/api/question/begin" in flow.request.url or
                "theory.8531.cn/api/question/next" in flow.request.url):
            # Try to parse the response body
            try:
                response_data = json.loads(flow.response.content)
                if response_data["code"] == 0:
                    question_content = response_data["data"]["question_content"]
                    option_list = response_data["data"]["option_list"]
                    # Save the question and options
                    global global_question_data
                    global_question_data = {
                        "question": question_content,
                        "options": option_list
                    }
                    # Print the captured question and options
                    print(f"Question: {question_content}")
                    print(f"Options: {option_list}")
                    # Send the answer lookup request
                    send_mock_request(question_content, option_list)
            except Exception as e:
                print(f"Error parsing response: {e}")


# Answer lookup request -- remember to replace the key
def send_mock_request(question, options):
    url = "https://lyck6.cn/scriptService/api/autoAnswer/REPLACE-WITH-YOUR-UNIVERSAL-KEY?gpt=-1"
    headers = {
        "sec-ch-ua-platform": "\"Windows\"",
        "sec-ch-ua": "\"Microsoft Edge\";v=\"131\", \"Chromium\";v=\"131\", \"Not_A Brand\";v=\"24\"",
        "sec-ch-ua-mobile": "?0",
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0",
        "accept": "application/json, text/plain, */*",
        "dnt": "1",
        "content-type": "application/json;charset=UTF-8",
        "version": "5.0.9.8",
        "origin": "",
        "sec-fetch-site": "cross-site",
        "sec-fetch-mode": "cors",
        "sec-fetch-dest": "empty",
        "referer": "h",
        "accept-encoding": "gzip, deflate, br, zstd",
        "accept-language": "zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6",
        "priority": "u=1, i"
    }
    payload = {
        "plat": None,
        "qid": None,
        "question": question,
        "options": options,
        "options_id": [],
        "type": 0,
        "location": ""
    }
    try:
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code == 200:
            response_data = response.json()
            if response_data["code"] == 0:
                answers = response_data["result"]["answers"]
                if isinstance(answers, list) and all(isinstance(ans, int) for ans in answers):
                    answer_str = ''.join(chr(97 + ans) for ans in answers)  # 0 -> 'a', 1 -> 'b', etc.
                    print(f"Answer: {answer_str}")
                else:
                    print("The lookup request returned answers in an unexpected format.")
            else:
                print(f"Lookup request failed, server returned an error: {response_data['message']}")
        else:
            print(f"Lookup request failed, HTTP status code: {response.status_code}")
    except Exception as e:
        print(f"Error sending lookup request: {e}")


addons = [
    Addon()
]

if __name__ == "__main__":
    sys.argv.append('-s')
    sys.argv.append(__file__)
    sys.argv.append('--quiet')  # reduce unrelated proxy log output
    mitmdump()
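The lookup API returns zero-based option indices, which the script converts to answer letters with `chr(97 + ans)`. A standalone sketch of that conversion, using a mock response that mirrors the `{"code": ..., "result": {"answers": ...}}` shape the script expects (the indices here are made up):

```python
# Mock lookup response; mirrors the shape the script expects.
# The answer indices are made up for illustration.
mock_response = {"code": 0, "result": {"answers": [0, 2]}}

answers = mock_response["result"]["answers"]
if isinstance(answers, list) and all(isinstance(a, int) for a in answers):
    # 0 -> 'a', 1 -> 'b', 2 -> 'c', ...
    answer_str = ''.join(chr(97 + a) for a in answers)
    print(answer_str)  # -> ac
```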
Usage
Replace the Universal key, then run the script directly. You need to install the mitmproxy certificate and configure the proxy on your phone in advance; the proxy listens on port 8080.
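If you prefer to invoke mitmdump yourself instead of running the script directly, the equivalent command line is roughly as follows ("answer_helper.py" is a placeholder for whatever you named the file; 8080 is mitmproxy's default listen port):

```shell
# Install the dependencies, then load the script as a mitmdump addon.
pip install mitmproxy requests
mitmdump -s answer_helper.py --quiet --listen-port 8080
```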
Note
The answers come from the Universal question bank. Not all questions have answers, and the accuracy of the answers is not guaranteed.
Disclaimer
1. Responsibility for Use:
This crawler script is for learning and research purposes only. Users are responsible for all actions and consequences arising from its use. The script must not be used for illegal purposes, including but not limited to unauthorized web scraping, data theft, or malicious attacks on target websites.
2. Legality Statement:
When using this crawler script, users should ensure compliance with relevant laws and regulations, including but not limited to the "Cybersecurity Law of the People's Republic of China" and the "Copyright Law". Users should also comply with the target website's "Terms of Service", "Privacy Policy", and robots.txt rules. Unauthorized scraping or acquisition of data from the target website may infringe the intellectual property, privacy, and other legal rights of others, and users shall bear the resulting legal liability.
3. No Abuse:
This script is limited to legal and compliant usage scenarios. Do not use it to maliciously crawl the target website (for example through excessive scraping, distributed attacks, or overly frequent requests), so as to avoid negatively impacting the site. Users are advised to keep the scraping frequency reasonable so as not to interfere with the website's normal service.
4. Website Ownership:
Any data scraped by this script is the property of the target website and its data providers. Users should respect the intellectual property rights of others and bear full responsibility for any consequences arising from unauthorized republication or misuse of this data.
5. Script Modification and Distribution:
This script may be modified to suit personal needs, but it must not be illegally disseminated or used for malicious purposes in any form. If you publish or share this script with others, be sure to attach this disclaimer and remind other users to comply with the above terms.
6. No Liability:
The author and developer of this script are not responsible for any direct or indirect loss, damage, data loss, service interruption, or any other legal liability caused by the use of this script. By using this script, you agree to assume all risks yourself.
7. Prohibition of Commercial Use:
The use of this script should not involve any unauthorized commercial behavior. Without permission, this script may not be used for commercial purposes or to seek economic benefits for others in any way.
Please read the above disclaimer carefully and ensure that you comply with relevant laws and regulations when using this crawler script. If in doubt, please consult a legal professional.