Principle
Every time you click "Next question", a request like https://kamabeisai.8531.cn/api/questions/14065655622981?result=D is sent, where 14065655622981 is the question ID and result=D is the option the user selected. The response contains an is_right field: a value of 1 means the user answered correctly, while 0 means the answer was wrong. The response also carries a result field with the correct answer, so we can obtain the correct answer for each question from the response to every submission.
Since there is no limit on the number of practice attempts, we can collect a large number of question IDs and their correct answers over repeated runs. Then, by intercepting the request sent each time "Next question" is clicked and replacing it with one of the collected links that already carries the correct answer, we can achieve a perfect score.
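Based on the fields the scripts below read, the response body appears to have roughly the following shape. The exact payload is an assumption reconstructed from the data → question → is_right / result access pattern in the code; any other field names (such as id) are illustrative only:

```python
import json

# Hypothetical response body for a wrong answer, reconstructed from the
# fields the capture script reads: data -> question -> is_right / result.
sample_response = json.loads("""
{
  "data": {
    "question": {
      "id": 14065655622981,
      "is_right": 0,
      "result": "B"
    }
  }
}
""")

question = sample_response["data"]["question"]
print(question["is_right"])  # 0: the submitted option was wrong
print(question["result"])    # "B": the correct option for this question
```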
Specific Code
Getting Correct Answers Part
import sys
import os
import re
from mitmproxy import http
from mitmproxy.tools.main import mitmdump

# Regex matching the target requests; the question ID and answer are the variable parts
TARGET_PATTERN = re.compile(r"https://kamabeisai\.8531\.cn/api/questions/\d+\?result=[A-Za-z]+")
# File used to store the matching request links
ANSWER_FILE = "answer.txt"

# Load previously saved links to avoid duplicates
if os.path.exists(ANSWER_FILE):
    with open(ANSWER_FILE, "r") as f:
        existing_links = set(f.read().splitlines())
else:
    existing_links = set()

def response(flow: http.HTTPFlow) -> None:
    # Check whether the request URL matches the target format
    if TARGET_PATTERN.match(flow.request.url):
        # Try to parse the response body as JSON
        try:
            data = flow.response.json()
            # Inspect the "is_right" field
            if "data" in data and "question" in data["data"]:
                question = data["data"]["question"]
                if "is_right" in question:
                    with open(ANSWER_FILE, "a") as f:
                        if question["is_right"] == 1:
                            # is_right == 1: save the request link as-is (skipping duplicates)
                            if flow.request.url not in existing_links:
                                f.write(flow.request.url + "\n")
                                existing_links.add(flow.request.url)
                        elif question["is_right"] == 0 and "result" in question:
                            # is_right == 0: replace the result in the request link
                            # with the correct result from the response
                            parsed_url = re.match(r"(https://kamabeisai\.8531\.cn/api/questions/\d+\?result=)([A-Za-z]+)", flow.request.url)
                            if parsed_url:
                                new_url = parsed_url.group(1) + question["result"]
                                if new_url not in existing_links:
                                    f.write(new_url + "\n")
                                    existing_links.add(new_url)
        except ValueError:
            # Skip responses that are not valid JSON
            pass

addons = [
    response
]

if __name__ == "__main__":
    sys.argv.append('-s')
    sys.argv.append(__file__)
    mitmdump()
The obtained correct answer request links will be saved to the answer.txt file in the script directory.
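The core transformation in the capture script — keep the request URL when is_right is 1, otherwise rebuild it with the result from the response — can be isolated as a small pure function for testing. The helper name correct_link is mine, not part of the script:

```python
import re

URL_RE = re.compile(r"(https://kamabeisai\.8531\.cn/api/questions/\d+\?result=)([A-Za-z]+)")

def correct_link(request_url, question):
    """Return the request link that encodes the correct answer, or None."""
    if question.get("is_right") == 1:
        # The submitted option was already correct: keep the URL as-is.
        return request_url
    if question.get("is_right") == 0 and "result" in question:
        m = URL_RE.match(request_url)
        if m:
            # Swap the submitted option for the correct one from the response.
            return m.group(1) + question["result"]
    return None

url = "https://kamabeisai.8531.cn/api/questions/14065655622981?result=D"
print(correct_link(url, {"is_right": 0, "result": "A"}))  # ...?result=A
print(correct_link(url, {"is_right": 1}))                 # unchanged URL
```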
Replacement Part
import sys
import os
import re
import random
from mitmproxy import http
from mitmproxy.tools.main import mitmdump

# Regex matching the target requests; the question ID and answer are the variable parts
TARGET_PATTERN = re.compile(r"https://kamabeisai\.8531\.cn/api/questions/\d+\?result=[A-Za-z]+")
# File holding the collected correct-answer request links
ANSWER_FILE = "answer.txt"

# Load the saved links for random, non-repeating selection
if os.path.exists(ANSWER_FILE):
    with open(ANSWER_FILE, "r") as f:
        available_links = f.read().splitlines()
else:
    available_links = []
USED_LINKS = set()

def request(flow: http.HTTPFlow) -> None:
    # Check whether the request URL matches the target format
    if TARGET_PATTERN.match(flow.request.url) and available_links:
        # Replace the outgoing request with a randomly chosen unused link
        unused_links = list(set(available_links) - USED_LINKS)
        if unused_links:
            selected_link = random.choice(unused_links)
            flow.request.url = selected_link
            USED_LINKS.add(selected_link)

addons = [
    request
]

if __name__ == "__main__":
    sys.argv.append('-s')
    sys.argv.append(__file__)
    mitmdump()
This script draws request links from answer.txt at random, without repetition, and substitutes them into the outgoing requests.
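The selection strategy — draw a random link that has not been used yet — boils down to set subtraction and can be checked in isolation. The helper name pick_unused and the example URLs are hypothetical, for illustration only:

```python
import random

def pick_unused(available, used):
    """Pick a random link not yet used, marking it as used; None when exhausted."""
    unused = list(set(available) - used)
    if not unused:
        return None
    choice = random.choice(unused)
    used.add(choice)
    return choice

# Two placeholder links standing in for entries collected in answer.txt.
links = ["https://example.invalid/q/1?result=A",
         "https://example.invalid/q/2?result=B"]
used = set()
first = pick_unused(links, used)
second = pick_unused(links, used)
third = pick_unused(links, used)  # None: both links are now used
```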
Disclaimer
1. Responsibility of Use: This crawler script is for learning and research purposes only. Users are responsible for all actions and consequences arising from the use of this script. This script must not be used for illegal purposes, including but not limited to unauthorized web scraping, data theft, or malicious attacks on target websites.
2. Legality Statement: When using this crawler script, users should ensure compliance with relevant laws and regulations, including but not limited to the "Cybersecurity Law of the People's Republic of China", "Copyright Law", etc. At the same time, users should comply with the target website's "Terms of Service", "Privacy Policy", and Robots.txt regulations. Unauthorized scraping or acquisition of data from the target website may infringe upon the intellectual property rights, privacy rights, and other legal rights of others, and users shall bear the legal liabilities arising therefrom.
3. No Abuse: This script is limited to legal and compliant usage scenarios. Please do not use this script to maliciously crawl the target website, such as excessive scraping, distributed attacks, frequent requests, etc., to avoid negative impacts on the target website. Users are advised to reasonably control the scraping frequency to avoid interfering with the normal service of the website.
4. Website Ownership: Any data scraped by this script is the property of the target website and its data providers, and users should respect the intellectual property rights of others. Users shall bear full responsibility for any consequences arising from unauthorized reproduction or misuse of this data.
5. Script Modification and Distribution: This script can be modified according to personal needs, but it must not be illegally disseminated or used for malicious purposes in any form. If you publish or share this script with others, please be sure to attach the disclaimer and remind other users to comply with the above terms.
6. No Liability: The author and developer of this script are not responsible for any direct or indirect loss, damage, data loss, service interruption, or any other legal liability caused by the use of this script. Use of this script implies that you agree to bear all risks yourself.
7. Prohibition of Commercial Use: The use of this script must not involve any unauthorized commercial activity. Without permission, this script may not be used for commercial purposes or to seek economic benefit in any way.
Please read the above disclaimer carefully and ensure that you comply with relevant laws and regulations when using this crawler script. If you have any questions, please consult a legal professional.