Compare commits

...

No commits in common. "v2.9.1-e3a43d4" and "master" have entirely different histories.

65 changed files with 677 additions and 4471 deletions

.DS_Store (vendored, new file) — binary file not shown.


@@ -7,6 +7,7 @@ on:
paths:
- version.py
- .github/workflows/build-windows.yml
- windows/**
jobs:
Windows-build:
@@ -20,7 +21,7 @@ jobs:
run: |
python -m pip install --upgrade pip
pip install wheel numpy==1.23.5 pyparsing==3.0.9 wxpython==4.2.0 pyinstaller==5.7.0
git clone --depth=1 -b master https://github.com/jxxghp/nas-tools --recurse-submodule
git clone --depth=1 -b master https://github.com/NAStool/nas-tools --recurse-submodule
cd nas-tools
pip install -r requirements.txt
echo ("NASTOOL_CONFIG=D:/a/nas-tools/nas-tools/nas-tools/config/config.yaml") >> $env:GITHUB_ENV
@@ -94,4 +95,4 @@ jobs:
message: |
*v${{ env.app_version }}*
${{ github.event.commits[0].message }}
${{ github.event.commits[0].message }}

README.md

@@ -1,10 +1,10 @@
![logo-blue](https://user-images.githubusercontent.com/51039935/197520391-f35db354-6071-4c12-86ea-fc450f04bc85.png)
# NAS media library resource aggregation and organization automation tool
# NAS media library management tool
[![GitHub stars](https://img.shields.io/github/stars/jxxghp/nas-tools?style=plastic)](https://github.com/jxxghp/nas-tools/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/jxxghp/nas-tools?style=plastic)](https://github.com/jxxghp/nas-tools/network/members)
[![GitHub issues](https://img.shields.io/github/issues/jxxghp/nas-tools?style=plastic)](https://github.com/jxxghp/nas-tools/issues)
[![GitHub license](https://img.shields.io/github/license/jxxghp/nas-tools?style=plastic)](https://github.com/jxxghp/nas-tools/blob/master/LICENSE.md)
[![GitHub stars](https://img.shields.io/github/stars/NAStool/nas-tools?style=plastic)](https://github.com/NAStool/nas-tools/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/NAStool/nas-tools?style=plastic)](https://github.com/NAStool/nas-tools/network/members)
[![GitHub issues](https://img.shields.io/github/issues/NAStool/nas-tools?style=plastic)](https://github.com/NAStool/nas-tools/issues)
[![GitHub license](https://img.shields.io/github/license/NAStool/nas-tools?style=plastic)](https://github.com/NAStool/nas-tools/blob/master/LICENSE.md)
[![Docker pulls](https://img.shields.io/docker/pulls/jxxghp/nas-tools?style=plastic)](https://hub.docker.com/r/jxxghp/nas-tools)
[![Platform](https://img.shields.io/badge/platform-amd64/arm64-pink?style=plastic)](https://hub.docker.com/r/jxxghp/nas-tools)
@@ -13,34 +13,12 @@ Docker: https://hub.docker.com/repository/docker/jxxghp/nas-tools
TG channel: https://t.me/nastool
WIKI: https://github.com/jxxghp/nas-tools/wiki
API: http://localhost:3000/api/v1/
## Features
The goal of this software is to fully automate the management of movie and TV resources, freeing your hands so you can focus on watching. A good network environment and private-tracker sites are needed for the best experience.
### 1. Resource search and subscriptions
* Aggregates site RSS feeds; add what you want to watch to your subscriptions and new resources are fetched automatically in real time.
* Aggregated resource search and download via WeChat, Telegram, Slack, Synology Chat or the web UI; search or subscribe to the latest trending resources with one click.
* Douban integration: mark a title as "want to watch" in Douban and it is searched and downloaded in the background; titles not yet fully released are added to subscriptions automatically.
### 2. Media library organization
* Monitors the download client; when a download finishes, the real title is identified and the files are hard-linked into the media library and renamed.
* Monitors directories; when files change, media info is identified automatically and the files are hard-linked into the media library and renamed.
* Solves the conflict between seeding and library organization. Optimized for Chinese-language environments, with support for Chinese TV series and anime and high renaming accuracy; after renaming, Emby/Jellyfin/Plex scrape poster walls perfectly.
### 3. Site maintenance
* Comprehensive site statistics with real-time monitoring of your site traffic.
* Fully automated site seeding, with remote downloader support (the built-in ratio-boosting feature is intended for day-to-day maintenance only; if you are chasing stats, a more powerful dedicated tool such as <a href="https://github.com/vertex-app/vertex" target="_blank">Vertex</a> is recommended).
* Automatic daily site login to keep accounts active.
### 4. Notification services
* Rich-media notifications through nearly ten channels, including WeChat, Telegram, Slack, Synology Chat, Bark, PushPlus and iyuu (爱语飞飞).
* Remote control of subscriptions and downloads via WeChat, Telegram, Slack or Synology Chat.
* Emby/Jellyfin/Plex playback status notifications.
NAS media library management tool.
## Installation
@@ -55,7 +33,7 @@ docker pull jxxghp/nas-tools:latest
### 2. Running locally
Requires Python 3.10; cython must be pre-installed, and any missing dependency packages need to be installed additionally:
```
git clone -b master https://github.com/jxxghp/nas-tools --recurse-submodule
git clone -b master https://github.com/NAStool/nas-tools --recurse-submodule
python3 -m pip install -r requirements.txt
export NASTOOL_CONFIG="/xxx/config/config.yaml"
nohup python3 run.py &
@@ -64,7 +42,7 @@ nohup python3 run.py &
### 3. Windows
Download the exe file and double-click to run it; the configuration directory is generated automatically:
https://github.com/jxxghp/nas-tools/releases
https://github.com/NAStool/nas-tools/releases
### 4. Synology package
Add the imnks (矿神) Synology SPK package source and install directly:
@@ -72,181 +50,3 @@ https://github.com/jxxghp/nas-tools/releases
https://spk.imnks.com/
https://spk7.imnks.com/
## Configuration
### 1. Apply for the required API keys
* TMDB account: register at https://www.themoviedb.org/ and obtain an API key.
* Notification services:
1) WeChat (recommended): at https://work.weixin.qq.com/ create a WeCom (Enterprise WeChat) self-built app to obtain the corporate ID, the app secret and the agentid. Scanning the app's QR code in WeChat lets you receive messages in WeChat itself, without opening the WeCom client.
2) Telegram (recommended): talk to BotFather to create a bot and obtain its token, then use a userID bot to obtain your chat_id. This channel supports remote control; see "5. Configure WeChat/Telegram/Slack/Synology Chat remote control".
3) Slack: create an app at https://api.slack.com/apps. This channel supports remote control; see the channel notes for details.
4) Synology Chat: install the Synology Chat package on your Synology, then in the Chat UI go to "avatar (top right) -> Integration -> Bots" to create a bot. Set the "outgoing URL" to "<NAStool address>/synology", and copy the "incoming URL" and "token" into NAStool's notification settings. This channel supports remote control.
5) Others: support for more notification channels keeps being added; obtaining the API keys works similarly and is not described one by one.
### 2. Basic configuration
* File transfer modes: six modes are currently supported: copy, hard link, symlink, move, RCLONE and MINIO.
1) Copy: the seeding copy and the library copy are separate files, using extra storage; the download disk's size limits how much you can seed, but the library disk does not have to run 24/7 and can hibernate.
2) Hard link: no extra storage needed (one file, two directory entries), but the download directory and the library directory must be on the same disk partition or storage pool. Symlink mode works like a shortcut; the path inside the container must match the real path for it to work.
3) Move: moves the files, deleting the original files and directories.
4) RCLONE: only for RCLONE cloud-drive scenarios. **Note: in RCLONE mode you must map the rclone configuration directory into the container yourself**; see the setting's question-mark tooltip for details.
5) MINIO: only for S3/cloud-native scenarios. **Note: with MINIO the library path should be set to /<bucket name>/<category name>**; for example, if the bucket is called cloud and the movie category folder is movie, the library movie path is /cloud/movie. Ideally mount the bucket read-only at /cloud/movie via s3fs.
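Hard-link mode only works when the download directory and the library directory live on one filesystem; a quick way to verify this before choosing the mode (a standalone sketch, not part of the tool) is to compare the device IDs of the two paths:

```python
import os

def same_filesystem(path_a, path_b):
    """Hard links require both paths to be on the same device/partition."""
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev
```

If this returns False for your download and library paths, hard-link mode will fail and copy or move is the safer choice.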
* 启动程序并配置Docker默认使用3000端口启动群晖套件默认3003端口默认用户密码admin/passworddocker需要参考教程提前映射好端口、下载目录、媒体库目录。登录管理界面后在设置中根据每个配置项的提示在WEB页面修改好配置并重启生效基础设置中有标红星的是必须要配置的如TMDB APIKEY等每一个配置项后都有小问号点击会有详细的配置说明推荐阅读。
### 3. Set up the media server
Emby (recommended), Jellyfin and Plex are supported. With a media server configured, local resources can be checked for duplicates to avoid repeated downloads, and resources already present locally are flagged:
* In the Emby/Jellyfin/Plex webhook plugin, set the address to http(s)://IP:PORT/emby, /jellyfin or /plex to receive playback notifications (optional).
* Enter the Emby/Jellyfin/Plex details under "Settings -> Media server".
* If the default categories are enabled, set up your libraries using the directory structure below; with custom categories, create the library directories according to your own definitions (see the default-category.yaml category template). Note that with two-level categories enabled, each library must point at the second-level subdirectories (several subdirectories may be added to one library, or one library created per subdirectory); otherwise the media server may fail to scrape correctly.
> 电影 (Movies)
>> 精选 (Featured)
>> 华语电影 (Chinese movies)
>> 外语电影 (Foreign movies)
>> 动画电影 (Animated movies)
>
> 电视剧 (TV series)
>> 国产剧 (Chinese series)
>> 欧美剧 (Western series)
>> 日韩剧 (Japanese/Korean series)
>> 动漫 (Anime)
>> 纪录片 (Documentaries)
>> 综艺 (Variety shows)
>> 儿童 (Kids)
### 4. Configure the downloader and download directories
qbittorrent (recommended), transmission, aria2, 115 cloud drive, pikpak cloud drive and others are supported; set the download directories via the button in the top-right corner.
### 5. Configure synced directories
* Directory sync monitors any number of scattered folders; newly added media files are identified and renamed automatically and transferred to the library directory (or a specified directory) using the configured transfer mode.
* If a download client's download directory is also covered by directory sync, disable the client-monitoring feature, or items will be processed twice.
### 5. Configure WeChat/Telegram/Slack/Synology Chat remote control
With a WeChat, Telegram, Slack or Synology Chat bot configured, you can trigger automatic search and download by sending a title from your phone, and control the program through its menus.
1) **WeChat message push and callbacks**
* Configure a message push proxy:
Due to an official WeChat restriction, WeCom apps created after June 20, 2022 can only receive messages from a fixed public IP address on an IP allowlist; forwarding through a proxy server with a fixed public IP works around this.
If the proxy is built with Nginx, add the following proxy configuration:
```
location /cgi-bin/gettoken {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/message/send {
proxy_pass https://qyapi.weixin.qq.com;
}
```
If the proxy is built with Caddy, add the proxy configuration below (the `{upstream_hostport}` part is not a variable; do not change it, copy it over verbatim):
```
reverse_proxy https://qyapi.weixin.qq.com {
header_up Host {upstream_hostport}
}
```
If the proxy is built with Traefik, the following extra configuration is needed:
```
loadBalancer.passHostHeader=false
```
Note: the proxy server is only used for receiving the tool's pushed messages in WeChat; message callbacks do not go through the proxy server.
* Configure the WeChat message receiving service:
On the WeCom self-built app management page, under "API -> Receive messages", enable the message receiving service:
1) Generate a Token and EncodingAESKey on the WeChat page, enter them under NASTool Settings -> Notifications -> WeChat, and save.
2) **Restart NASTool.**
3) On the WeChat page, set the URL to http(s)://IP:PORT/wechat and click OK to verify.
* Configure WeChat menu control:
The tool can be controlled remotely through menus. On the custom app menu page at https://work.weixin.qq.com/wework_admin/frame#apps, set up the menu as shown below; each menu item sends a message, and the message content is arbitrary.
**The top-level menus and the first few sub-menus under them must be in exactly this order**; after the items matching the screenshot you may add your own second-level entries.
![image](https://user-images.githubusercontent.com/54088512/218261870-ed15b6b6-895f-45e4-913c-4dda75144a9a.png)
2) **Telegram bot**
* In the NASTool settings, configure the tool's external access address, and decide based on your network whether to enable the Telegram webhook switch.
**Note: due to Telegram restrictions, webhooks require the program to run on one of these ports: 443, 80, 88, 8443, with a publicly trusted HTTPS certificate; when not using webhook mode, NAStool's built-in SSL certificate feature cannot be used.**
* In Telegram's BotFather, set up the bot command menu according to the table below. Selecting a menu item or typing a command runs the corresponding service; any other input starts an aggregated search.
3) **Slack**
* See the channel notes for details.
**Command-to-function mapping**

| Command | Function |
|---------|----------|
| /rss | RSS subscriptions |
| /ssa | Subscription search |
| /ptt | Downloaded file transfer |
| /ptr | Automatic torrent cleanup |
| /pts | Site sign-in |
| /udt | System update |
| /tbl | Clear transfer cache |
| /trh | Clear RSS cache |
| /rst | Directory sync |
| /db | Douban wishlist |
| /utf | Re-identify |
4) **Synology Chat**
* No extra setup is needed; note that if NAStool runs on a different server, you also need to adjust the IP restriction policy under Basic settings -> Security.
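The command table above maps one-to-one onto bot menu entries, and any other input falls through to aggregated search. A minimal dispatch sketch of that behavior (the function and the description strings are illustrative, not the tool's internals):

```python
# Hypothetical dispatch table mirroring part of the bot command menu.
COMMANDS = {
    "/rss": "RSS subscriptions",
    "/ssa": "subscription search",
    "/ptt": "downloaded file transfer",
    "/ptr": "automatic torrent cleanup",
    "/pts": "site sign-in",
}

def dispatch(text):
    """Known commands run the mapped service; anything else starts an aggregated search."""
    if text in COMMANDS:
        return f"run: {COMMANDS[text]}"
    return f"search: {text}"
```

For example, `dispatch("/rss")` runs the RSS service, while `dispatch("Dune")` triggers a search for "Dune".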
### 6. Configure indexers
Configure indexers to enable searching site resources:
* The built-in indexer currently supports most mainstream private trackers and some public sites; enabling it is recommended.
* Jackett/Prowlarr are also supported: deploy the corresponding service, then enter its API key, address and other details under Settings -> Indexers -> Jackett/Prowlarr.
### 7. Configure sites
Movie/TV subscriptions, resource search, site statistics, ratio boosting, automatic sign-in and similar features all depend on correctly configured site information: maintain each site's RSS link, Cookie and so on under "Site management -> Site maintenance".
When generating a site's RSS link, prefer movie/TV resource categories and tick the subtitle option.
### 8. Organize existing media resources
If the directory holding your existing resources matches a source or destination path configured in directory sync, you can trigger a full sync from the web UI or via the "directory sync" button in WeChat/Telegram.
Otherwise, follow the instructions below and run the command manually to organize the media under a given directory:
Note: the -d parameter is optional; if omitted, movies/TV series/anime are distinguished automatically and stored in the corresponding library directories; if -d is given, everything is transferred to the -d directory regardless of type.
* Docker: run the following on the host (change nas-tools to your container name, and adjust the source and destination parameters):
```
docker exec -it nas-tools sh
python3 /nas-tools/app/filetransfer.py -m link -s /from/path -d /to/path
```
* 群晖套件版本ssh到后台运行以下命令同样修改配置文件路径以及源目录、目的目录参数。
```
export NASTOOL_CONFIG=/var/packages/NASTool/target/config/config.yaml
/var/packages/py3k/target/usr/local/bin/python3 /var/packages/NASTool/target/app/filetransfer.py -m link -s /from/path -d /to/path
```
* 本地直接运行的cd 到程序根目录,执行以下命令,修改配置文件、源目录和目的目录参数。
```
export NASTOOL_CONFIG=config/config.yaml
python3 app/filetransfer.py -m link -s /from/path -d /to/path
```
## Acknowledgments
* The UI template and icons come from the open-source project <a href="https://github.com/tabler/tabler">tabler</a>; the project also uses the open-source modules <a href="https://github.com/igorcmoura/anitopy" target="_blank">anitopy</a>, <a href="https://github.com/AnthonyBloomer/tmdbv3api" target="_blank">tmdbv3api</a>, <a href="https://github.com/pkkid/python-plexapi" target="_blank">python-plexapi</a>, <a href="https://github.com/rmartin16/qbittorrent-api">qbittorrent-api</a> and <a href="https://github.com/Trim21/transmission-rpc">transmission-rpc</a>.
* Thanks to <a href="https://github.com/devome" target="_blank">nevinee</a> for improving the Docker build.
* Thanks to <a href="https://github.com/tbc0309" target="_blank">tbc0309</a> for adapting the Synology package.
* Thanks to everyone who contributed code, improved the WIKI or published tutorials.


@@ -192,8 +192,10 @@ class BrushTask(object):
else:
log.info("【Brush】%s RSS获取数据%s" % (site_name, len(rss_result)))
# Maximum number of simultaneous downloads
max_dlcount = rss_rule.get("dlcount")
success_count = 0
new_torrent_count = 0
if max_dlcount:
downloading_count = self.__get_downloading_count(downloader_cfg) or 0
new_torrent_count = int(max_dlcount) - int(downloading_count)
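The hunk above caps new additions by the number of torrents already active in the downloader. A standalone sketch of that calculation (the function name is hypothetical; unlike the snippet above, it clamps the result at zero rather than letting it go negative):

```python
def new_torrent_slots(max_dlcount, downloading_count):
    """How many new torrents may be added this run.

    A falsy max_dlcount means no cap is configured, so the caller
    skips the check (returned here as None).
    """
    if not max_dlcount:
        return None
    return max(int(max_dlcount) - int(downloading_count), 0)
```

With a cap of 5 and 2 active downloads this yields 3 free slots; with 7 active downloads it yields 0, so nothing new is added.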


@@ -46,14 +46,11 @@ class ModuleConf(object):
"qbittorrent": DownloaderType.QB,
"transmission": DownloaderType.TR,
"client115": DownloaderType.Client115,
"aria2": DownloaderType.Aria2,
"pikpak": DownloaderType.PikPak
}
# Indexers
INDEXER_DICT = {
"prowlarr": IndexerType.PROWLARR,
"jackett": IndexerType.JACKETT,
"builtin": IndexerType.BUILTIN
}
@@ -624,36 +621,6 @@ class ModuleConf(object):
}
}
},
"aria2": {
"name": "Aria2",
"img_url": "../static/img/aria2.png",
"background": "bg-green",
"test_command": "app.downloader.client.aria2|Aria2",
"config": {
"host": {
"id": "aria2.host",
"required": True,
"title": "IP地址",
"tooltip": "配置IP地址如为https则需要增加https://前缀",
"type": "text",
"placeholder": "127.0.0.1"
},
"port": {
"id": "aria2.port",
"required": True,
"title": "端口",
"type": "text",
"placeholder": "6800"
},
"secret": {
"id": "aria2.secret",
"required": True,
"title": "令牌",
"type": "text",
"placeholder": ""
}
}
},
"pikpak": {
"name": "PikPak",
"img_url": "../static/img/pikpak.png",
@@ -787,64 +754,7 @@ class ModuleConf(object):
}
# Indexers
INDEXER_CONF = {
"jackett": {
"name": "Jackett",
"img_url": "./static/img/jackett.png",
"background": "bg-black",
"test_command": "app.indexer.client.jackett|Jackett",
"config": {
"host": {
"id": "jackett.host",
"required": True,
"title": "Jackett地址",
"tooltip": "Jackett访问地址和端口如为https需加https://前缀。注意需要先在Jackett中添加indexer才能正常测试通过和使用",
"type": "text",
"placeholder": "http://127.0.0.1:9117"
},
"api_key": {
"id": "jackett.api_key",
"required": True,
"title": "Api Key",
"tooltip": "Jackett管理界面右上角复制API Key",
"type": "text",
"placeholder": ""
},
"password": {
"id": "jackett.password",
"required": False,
"title": "密码",
"tooltip": "Jackett管理界面中配置的Admin password如未配置可为空",
"type": "password",
"placeholder": ""
}
}
},
"prowlarr": {
"name": "Prowlarr",
"img_url": "../static/img/prowlarr.png",
"background": "bg-orange",
"test_command": "app.indexer.client.prowlarr|Prowlarr",
"config": {
"host": {
"id": "prowlarr.host",
"required": True,
"title": "Prowlarr地址",
"tooltip": "Prowlarr访问地址和端口如为https需加https://前缀。注意需要先在Prowlarr中添加搜刮器同时勾选所有搜刮器后搜索一次才能正常测试通过和使用",
"type": "text",
"placeholder": "http://127.0.0.1:9696"
},
"api_key": {
"id": "prowlarr.api_key",
"required": True,
"title": "Api Key",
"tooltip": "在Prowlarr->Settings->General->Security-> API Key中获取",
"type": "text",
"placeholder": ""
}
}
}
}
INDEXER_CONF = {}
# Discover filters
DISCOVER_FILTER_CONF = {


@@ -477,72 +477,4 @@ class SiteConf:
}
}
# Public BT sites
PUBLIC_TORRENT_SITES = {
'rarbg.to': {
"parser": "Rarbg",
"proxy": True,
"language": "en"
},
'dmhy.org': {
"proxy": True
},
'eztv.re': {
"proxy": True,
"language": "en"
},
'acg.rip': {
"proxy": False
},
'thepiratebay.org': {
"proxy": True,
"render": True,
"language": "en"
},
'nyaa.si': {
"proxy": True
},
'1337x.to': {
"proxy": True,
"language": "en"
},
'ext.to': {
"proxy": True,
"language": "en",
"parser": "RenderSpider"
},
'torrentgalaxy.to': {
"proxy": True,
"language": "en"
},
'mikanani.me': {
"proxy": False
},
'gaoqing.fm': {
"proxy": False
},
'www.mp4ba.vip': {
"proxy": False,
"referer": True
},
'www.miobt.com': {
"proxy": True
},
'katcr.to': {
"proxy": True,
"language": "en"
},
'btsow.quest': {
"proxy": True
},
'www.hdpianyuan.com': {
"proxy": False
},
'skrbtla.top': {
"proxy": False,
"referer": True,
"parser": "RenderSpider"
},
'www.comicat.org': {
"proxy": False
}
}
PUBLIC_TORRENT_SITES = {}


@@ -41,7 +41,7 @@ class MainDb:
"""
config = Config().get_config()
init_files = Config().get_config("app").get("init_files") or []
config_dir = os.path.join(Config().get_root_path(), "config")
config_dir = Config().get_script_path()
sql_files = PathUtils.get_dir_level1_files(in_path=config_dir, exts=".sql")
config_flag = False
for sql_file in sql_files:


@@ -1,345 +0,0 @@
# -*- coding: utf-8 -*-
import xmlrpc.client
DEFAULT_HOST = 'localhost'
DEFAULT_PORT = 6800
SERVER_URI_FORMAT = '%s:%s/rpc'
class PyAria2(object):
_secret = None
def __init__(self, secret=None, host=DEFAULT_HOST, port=DEFAULT_PORT):
"""
PyAria2 constructor.
secret: aria2 secret token
host: string, aria2 rpc host, default is 'localhost'
port: integer, aria2 rpc port, default is 6800
session: string, aria2 rpc session saving.
"""
server_uri = SERVER_URI_FORMAT % (host, port)
self._secret = "token:%s" % (secret or "")
self.server = xmlrpc.client.ServerProxy(server_uri, allow_none=True)
def addUri(self, uris, options=None, position=None):
"""
This method adds new HTTP(S)/FTP/BitTorrent Magnet URI.
uris: list, list of URIs
options: dict, additional options
position: integer, position in download queue
return: This method returns GID of registered download.
"""
return self.server.aria2.addUri(self._secret, uris, options, position)
def addTorrent(self, torrent, uris=None, options=None, position=None):
"""
This method adds BitTorrent download by uploading ".torrent" file.
torrent: bin, torrent file bin
uris: list, list of webseed URIs
options: dict, additional options
position: integer, position in download queue
return: This method returns GID of registered download.
"""
return self.server.aria2.addTorrent(self._secret, xmlrpc.client.Binary(torrent), uris, options, position)
def addMetalink(self, metalink, options=None, position=None):
"""
This method adds Metalink download by uploading ".metalink" file.
metalink: string, metalink file path
options: dict, additional options
position: integer, position in download queue
return: This method returns list of GID of registered download.
"""
return self.server.aria2.addMetalink(self._secret, xmlrpc.client.Binary(open(metalink, 'rb').read()), options,
position)
def remove(self, gid):
"""
This method removes the download denoted by gid.
gid: string, GID.
return: This method returns GID of removed download.
"""
return self.server.aria2.remove(self._secret, gid)
def forceRemove(self, gid):
"""
This method removes the download denoted by gid.
gid: string, GID.
return: This method returns GID of removed download.
"""
return self.server.aria2.forceRemove(self._secret, gid)
def pause(self, gid):
"""
This method pauses the download denoted by gid.
gid: string, GID.
return: This method returns GID of paused download.
"""
return self.server.aria2.pause(self._secret, gid)
def pauseAll(self):
"""
This method is equal to calling aria2.pause() for every active/waiting download.
return: This method returns OK for success.
"""
return self.server.aria2.pauseAll(self._secret)
def forcePause(self, gid):
"""
This method pauses the download denoted by gid.
gid: string, GID.
return: This method returns GID of paused download.
"""
return self.server.aria2.forcePause(self._secret, gid)
def forcePauseAll(self):
"""
This method is equal to calling aria2.forcePause() for every active/waiting download.
return: This method returns OK for success.
"""
return self.server.aria2.forcePauseAll()
def unpause(self, gid):
"""
This method changes the status of the download denoted by gid from paused to waiting.
gid: string, GID.
return: This method returns GID of unpaused download.
"""
return self.server.aria2.unpause(self._secret, gid)
def unpauseAll(self):
"""
This method is equal to calling aria2.unpause() for every active/waiting download.
return: This method returns OK for success.
"""
return self.server.aria2.unpauseAll()
def tellStatus(self, gid, keys=None):
"""
This method returns download progress of the download denoted by gid.
gid: string, GID.
keys: list, keys for method response.
return: The method response is of type dict and it contains following keys.
"""
return self.server.aria2.tellStatus(self._secret, gid, keys)
def getUris(self, gid):
"""
This method returns URIs used in the download denoted by gid.
gid: string, GID.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.getUris(self._secret, gid)
def getFiles(self, gid):
"""
This method returns file list of the download denoted by gid.
gid: string, GID.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.getFiles(self._secret, gid)
def getPeers(self, gid):
"""
This method returns peer list of the download denoted by gid.
gid: string, GID.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.getPeers(self._secret, gid)
def getServers(self, gid):
"""
This method returns currently connected HTTP(S)/FTP servers of the download denoted by gid.
gid: string, GID.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.getServers(self._secret, gid)
def tellActive(self, keys=None):
"""
This method returns the list of active downloads.
keys: keys for method response.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.tellActive(self._secret, keys)
def tellWaiting(self, offset, num, keys=None):
"""
This method returns the list of waiting download, including paused downloads.
offset: integer, the offset from the download waiting at the front.
num: integer, the number of downloads to be returned.
keys: keys for method response.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.tellWaiting(self._secret, offset, num, keys)
def tellStopped(self, offset, num, keys=None):
"""
This method returns the list of stopped download.
offset: integer, the offset from the download waiting at the front.
num: integer, the number of downloads to be returned.
keys: keys for method response.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.tellStopped(self._secret, offset, num, keys)
def changePosition(self, gid, pos, how):
"""
This method changes the position of the download denoted by gid.
gid: string, GID.
pos: integer, the position relative which to be changed.
how: string.
POS_SET, it moves the download to a position relative to the beginning of the queue.
POS_CUR, it moves the download to a position relative to the current position.
POS_END, it moves the download to a position relative to the end of the queue.
return: The response is of type integer, and it is the destination position.
"""
return self.server.aria2.changePosition(self._secret, gid, pos, how)
def changeUri(self, gid, fileIndex, delUris, addUris, position=None):
"""
This method removes URIs in delUris from and appends URIs in addUris to download denoted by gid.
gid: string, GID.
fileIndex: integer, file to affect (1-based)
delUris: list, URIs to be removed
addUris: list, URIs to be added
position: integer, where URIs are inserted, after URIs have been removed
return: This method returns a list which contains 2 integers. The first integer is the number of URIs deleted. The second integer is the number of URIs added.
"""
return self.server.aria2.changeUri(self._secret, gid, fileIndex, delUris, addUris, position)
def getOption(self, gid):
"""
This method returns options of the download denoted by gid.
gid: string, GID.
return: The response is of type dict.
"""
return self.server.aria2.getOption(self._secret, gid)
def changeOption(self, gid, options):
"""
This method changes options of the download denoted by gid dynamically.
gid: string, GID.
options: dict, the options.
return: This method returns OK for success.
"""
return self.server.aria2.changeOption(self._secret, gid, options)
def getGlobalOption(self):
"""
This method returns global options.
return: The method response is of type dict.
"""
return self.server.aria2.getGlobalOption(self._secret)
def changeGlobalOption(self, options):
"""
This method changes global options dynamically.
options: dict, the options.
return: This method returns OK for success.
"""
return self.server.aria2.changeGlobalOption(self._secret, options)
def getGlobalStat(self):
"""
This method returns global statistics such as overall download and upload speed.
return: The method response is of type struct and contains following keys.
"""
return self.server.aria2.getGlobalStat(self._secret)
def purgeDownloadResult(self):
"""
This method purges completed/error/removed downloads to free memory.
return: This method returns OK for success.
"""
return self.server.aria2.purgeDownloadResult(self._secret)
def removeDownloadResult(self, gid):
"""
This method removes completed/error/removed download denoted by gid from memory.
return: This method returns OK for success.
"""
return self.server.aria2.removeDownloadResult(self._secret, gid)
def getVersion(self):
"""
This method returns version of the program and the list of enabled features.
return: The method response is of type dict and contains following keys.
"""
return self.server.aria2.getVersion(self._secret)
def getSessionInfo(self):
"""
This method returns session information.
return: The response is of type dict.
"""
return self.server.aria2.getSessionInfo(self._secret)
def shutdown(self):
"""
This method shutdowns aria2.
return: This method returns OK for success.
"""
return self.server.aria2.shutdown(self._secret)
def forceShutdown(self):
"""
This method shutdowns aria2.
return: This method returns OK for success.
"""
return self.server.aria2.forceShutdown(self._secret)


@@ -1,167 +0,0 @@
import os
import re
from app.utils import RequestUtils, ExceptionUtils, StringUtils
from app.utils.types import DownloaderType
from config import Config
from app.downloader.client._base import _IDownloadClient
from app.downloader.client._pyaria2 import PyAria2
class Aria2(_IDownloadClient):
schema = "aria2"
client_type = DownloaderType.Aria2.value
_client_config = {}
_client = None
host = None
port = None
secret = None
def __init__(self, config=None):
if config:
self._client_config = config
else:
self._client_config = Config().get_config('aria2')
self.init_config()
self.connect()
def init_config(self):
if self._client_config:
self.host = self._client_config.get("host")
if self.host:
if not self.host.startswith('http'):
self.host = "http://" + self.host
if self.host.endswith('/'):
self.host = self.host[:-1]
self.port = self._client_config.get("port")
self.secret = self._client_config.get("secret")
if self.host and self.port:
self._client = PyAria2(secret=self.secret, host=self.host, port=self.port)
@classmethod
def match(cls, ctype):
return True if ctype in [cls.schema, cls.client_type] else False
def connect(self):
pass
def get_status(self):
if not self._client:
return False
ver = self._client.getVersion()
return True if ver else False
def get_torrents(self, ids=None, status=None, **kwargs):
if not self._client:
return []
ret_torrents = []
if ids:
if isinstance(ids, list):
for gid in ids:
ret_torrents.append(self._client.tellStatus(gid=gid))
else:
ret_torrents = [self._client.tellStatus(gid=ids)]
elif status:
if status == "downloading":
ret_torrents = (self._client.tellActive() or []) + (self._client.tellWaiting(offset=-1, num=100) or [])
else:
ret_torrents = self._client.tellStopped(offset=-1, num=1000)
return ret_torrents
def get_downloading_torrents(self, **kwargs):
return self.get_torrents(status="downloading")
def get_completed_torrents(self, **kwargs):
return self.get_torrents(status="completed")
def set_torrents_status(self, ids, **kwargs):
return self.delete_torrents(ids=ids, delete_file=False)
def get_transfer_task(self, **kwargs):
if not self._client:
return []
torrents = self.get_completed_torrents()
trans_tasks = []
for torrent in torrents:
name = torrent.get('bittorrent', {}).get('info', {}).get("name")
if not name:
continue
path = torrent.get("dir")
if not path:
continue
true_path = self.get_replace_path(path)
trans_tasks.append({'path': os.path.join(true_path, name), 'id': torrent.get("gid")})
return trans_tasks
def get_remove_torrents(self, **kwargs):
return []
def add_torrent(self, content, download_dir=None, **kwargs):
if not self._client:
return None
if isinstance(content, str):
# Convert the URL to a magnet link
if re.match("^https*://", content):
try:
p = RequestUtils().get_res(url=content, allow_redirects=False)
if p and p.headers.get("Location"):
content = p.headers.get("Location")
except Exception as result:
ExceptionUtils.exception_traceback(result)
return self._client.addUri(uris=[content], options=dict(dir=download_dir))
else:
return self._client.addTorrent(torrent=content, uris=[], options=dict(dir=download_dir))
def start_torrents(self, ids):
if not self._client:
return False
return self._client.unpause(gid=ids)
def stop_torrents(self, ids):
if not self._client:
return False
return self._client.pause(gid=ids)
def delete_torrents(self, delete_file, ids):
if not self._client:
return False
return self._client.remove(gid=ids)
def get_download_dirs(self):
return []
def change_torrent(self, **kwargs):
pass
def get_downloading_progress(self, **kwargs):
"""
Get the download progress of active torrents
"""
Torrents = self.get_downloading_torrents()
DispTorrents = []
for torrent in Torrents:
# Progress
try:
progress = round(int(torrent.get('completedLength')) / int(torrent.get("totalLength")), 1) * 100
except ZeroDivisionError:
progress = 0.0
state = "Downloading"
_dlspeed = StringUtils.str_filesize(torrent.get('downloadSpeed'))
_upspeed = StringUtils.str_filesize(torrent.get('uploadSpeed'))
speed = "%s%sB/s %s%sB/s" % (chr(8595), _dlspeed, chr(8593), _upspeed)
DispTorrents.append({
'id': torrent.get('gid'),
'name': torrent.get('bittorrent', {}).get('info', {}).get("name"),
'speed': speed,
'state': state,
'progress': progress
})
return DispTorrents
def set_speed_limit(self, **kwargs):
"""
Set the speed limit
"""
pass


@@ -45,7 +45,7 @@ class Downloader:
'app.downloader.client',
filter_func=lambda _, obj: hasattr(obj, 'schema')
)
log.debug(f"【Downloader】: 已经加载下载器:{self._downloader_schema}")
log.debug(f"【Downloader】加载下载器:{self._downloader_schema}")
self.init_config()
def init_config(self):
@@ -639,7 +639,7 @@ class Downloader:
# Pick a torrent that is a single full season, or a single season containing all the needed episodes
if item.tmdb_id == need_tmdbid \
and (not item.get_episode_list()
or set(item.get_episode_list()).issuperset(set(need_episodes))) \
or set(item.get_episode_list()).intersection(set(need_episodes))) \
and len(item.get_season_list()) == 1 \
and item.get_season_list()[0] == need_season:
# Check the torrent for the needed episodes
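The change from `issuperset` to `intersection` in the hunk above relaxes the season-pack selection: a torrent is now shortlisted if it contains *any* of the needed episodes rather than all of them, with the subsequent per-episode check deciding what it actually provides. The difference in set terms:

```python
torrent_episodes = {1, 2, 3}
need_episodes = {3, 4}

# Old rule: the torrent had to contain every needed episode.
old_match = set(torrent_episodes).issuperset(need_episodes)

# New rule: any overlap is enough to shortlist the torrent.
new_match = bool(set(torrent_episodes).intersection(need_episodes))
```

Here `old_match` is False (episode 4 is missing) while `new_match` is True (episode 3 overlaps), so the torrent is considered under the new rule.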
@@ -1020,8 +1020,6 @@ Downloader:
:return: 集数列表种子路径
"""
site_info = self.sites.get_site_attr(url)
if not site_info.get("cookie"):
return [], None
# Save the torrent file
file_path, _, _, files, retmsg = Torrent().get_torrent_info(
url=url,


@@ -56,137 +56,7 @@ class _IIndexClient(metaclass=ABCMeta):
"""
Multi-threaded search by keyword
"""
if not indexer or not key_word:
return None
if filter_args is None:
filter_args = {}
# Filter out sites outside the configured search scope
if filter_args.get("site") and indexer.name not in filter_args.get("site"):
return []
# Track elapsed time
start_time = datetime.datetime.now()
log.info(f"{self.index_type}】开始检索Indexer{indexer.name} ...")
# Handle special characters
search_word = StringUtils.handler_special_chars(text=key_word,
replace_word=" ",
allow_space=True)
api_url = f"{indexer.domain}?apikey={self.api_key}&t=search&q={search_word}"
result_array = self.__parse_torznabxml(api_url)
if len(result_array) == 0:
log.warn(f"{self.index_type}{indexer.name} 未检索到数据")
self.progress.update(ptype='search', text=f"{indexer.name} 未检索到数据")
return []
else:
log.warn(f"{self.index_type}{indexer.name} 返回数据:{len(result_array)}")
return self.filter_search_results(result_array=result_array,
order_seq=order_seq,
indexer=indexer,
filter_args=filter_args,
match_media=match_media,
start_time=start_time)
@staticmethod
def __parse_torznabxml(url):
"""
Parse torrent info from torznab XML
:param url: the URL to fetch
:return: list of parsed torrent info dicts
"""
if not url:
return []
try:
ret = RequestUtils(timeout=10).get_res(url)
except Exception as e2:
ExceptionUtils.exception_traceback(e2)
return []
if not ret:
return []
xmls = ret.text
if not xmls:
return []
torrents = []
try:
# Parse the XML
dom_tree = xml.dom.minidom.parseString(xmls)
root_node = dom_tree.documentElement
items = root_node.getElementsByTagName("item")
for item in items:
try:
# indexer id
indexer_id = DomUtils.tag_value(item, "jackettindexer", "id",
default=DomUtils.tag_value(item, "prowlarrindexer", "id", ""))
# indexer
indexer = DomUtils.tag_value(item, "jackettindexer",
default=DomUtils.tag_value(item, "prowlarrindexer", default=""))
# Title
title = DomUtils.tag_value(item, "title", default="")
if not title:
continue
# Torrent link
enclosure = DomUtils.tag_value(item, "enclosure", "url", default="")
if not enclosure:
continue
# Description
description = DomUtils.tag_value(item, "description", default="")
# Torrent size
size = DomUtils.tag_value(item, "size", default=0)
# Torrent page
page_url = DomUtils.tag_value(item, "comments", default="")
# Seeders
seeders = 0
# Leechers
peers = 0
# Free leech?
freeleech = False
# Download volume factor
downloadvolumefactor = 1.0
# Upload volume factor
uploadvolumefactor = 1.0
# imdbid
imdbid = ""
torznab_attrs = item.getElementsByTagName("torznab:attr")
for torznab_attr in torznab_attrs:
name = torznab_attr.getAttribute('name')
value = torznab_attr.getAttribute('value')
if name == "seeders":
seeders = value
if name == "peers":
peers = value
if name == "downloadvolumefactor":
downloadvolumefactor = value
if float(downloadvolumefactor) == 0:
freeleech = True
if name == "uploadvolumefactor":
uploadvolumefactor = value
if name == "imdbid":
imdbid = value
tmp_dict = {'indexer_id': indexer_id,
'indexer': indexer,
'title': title,
'enclosure': enclosure,
'description': description,
'size': size,
'seeders': seeders,
'peers': peers,
'freeleech': freeleech,
'downloadvolumefactor': downloadvolumefactor,
'uploadvolumefactor': uploadvolumefactor,
'page_url': page_url,
'imdbid': imdbid}
torrents.append(tmp_dict)
except Exception as e:
ExceptionUtils.exception_traceback(e)
continue
except Exception as e2:
ExceptionUtils.exception_traceback(e2)
pass
return torrents
pass
def filter_search_results(self, result_array: list,
order_seq,


@@ -42,7 +42,7 @@ class BuiltinIndexer(_IIndexClient):
"""
return True
def get_indexers(self, check=True, public=True, indexer_id=None):
def get_indexers(self, check=True, public=False, indexer_id=None):
ret_indexers = []
# Selected site configuration
indexer_sites = Config().get_config("pt").get("indexer_sites") or []


@@ -1,77 +0,0 @@
import requests
from app.utils import ExceptionUtils
from app.utils.types import IndexerType
from config import Config
from app.indexer.client._base import _IIndexClient
from app.utils import RequestUtils
from app.helper import IndexerConf
class Jackett(_IIndexClient):
schema = "jackett"
_client_config = {}
index_type = IndexerType.JACKETT.value
_password = None
def __init__(self, config=None):
super().__init__()
if config:
self._client_config = config
else:
self._client_config = Config().get_config('jackett')
self.init_config()
def init_config(self):
if self._client_config:
self.api_key = self._client_config.get('api_key')
self._password = self._client_config.get('password')
self.host = self._client_config.get('host')
if self.host:
if not self.host.startswith('http'):
self.host = "http://" + self.host
if not self.host.endswith('/'):
self.host = self.host + "/"
def get_status(self):
"""
检查连通性
:return: True/False
"""
if not self.api_key or not self.host:
return False
return True if self.get_indexers() else False
@classmethod
def match(cls, ctype):
return True if ctype in [cls.schema, cls.index_type] else False
def get_indexers(self):
"""
获取配置的jackett indexer
:return: indexer 信息 [(indexerId, indexerName, url)]
"""
# 获取Cookie
cookie = None
session = requests.session()
res = RequestUtils(session=session).post_res(url=f"{self.host}UI/Dashboard",
params={"password": self._password})
if res and session.cookies:
cookie = session.cookies.get_dict()
indexer_query_url = f"{self.host}api/v2.0/indexers?configured=true"
try:
ret = RequestUtils(cookies=cookie).get_res(indexer_query_url)
if not ret or not ret.json():
return []
return [IndexerConf({"id": v["id"],
"name": v["name"],
"domain": f'{self.host}api/v2.0/indexers/{v["id"]}/results/torznab/',
"public": True if v['type'] == 'public' else False,
"builtin": False})
for v in ret.json()]
except Exception as e2:
ExceptionUtils.exception_traceback(e2)
return []
def search(self, *args):
return super().search(*args)

View File

@ -1,66 +0,0 @@
from app.utils import ExceptionUtils
from app.utils.types import IndexerType
from config import Config
from app.indexer.client._base import _IIndexClient
from app.utils import RequestUtils
from app.helper import IndexerConf
class Prowlarr(_IIndexClient):
schema = "prowlarr"
_client_config = {}
index_type = IndexerType.PROWLARR.value
def __init__(self, config=None):
super().__init__()
if config:
self._client_config = config
else:
self._client_config = Config().get_config('prowlarr')
self.init_config()
def init_config(self):
if self._client_config:
self.api_key = self._client_config.get('api_key')
self.host = self._client_config.get('host')
if self.host:
if not self.host.startswith('http'):
self.host = "http://" + self.host
if not self.host.endswith('/'):
self.host = self.host + "/"
@classmethod
def match(cls, ctype):
return True if ctype in [cls.schema, cls.index_type] else False
def get_status(self):
"""
检查连通性
:return: True/False
"""
if not self.api_key or not self.host:
return False
return True if self.get_indexers() else False
def get_indexers(self):
"""
获取配置的prowlarr indexer
:return: indexer 信息 [(indexerId, indexerName, url)]
"""
indexer_query_url = f"{self.host}api/v1/indexerstats?apikey={self.api_key}"
try:
ret = RequestUtils().get_res(indexer_query_url)
except Exception as e2:
ExceptionUtils.exception_traceback(e2)
return []
if not ret:
return []
indexers = ret.json().get("indexers", [])
return [IndexerConf({"id": v["indexerId"],
"name": v["indexerName"],
"domain": f'{self.host}{v["indexerId"]}/api',
"builtin": False})
for v in indexers]
def search(self, *args):
return super().search(*args)

View File

@ -23,14 +23,14 @@ class Indexer(object):
'app.indexer.client',
filter_func=lambda _, obj: hasattr(obj, 'schema')
)
log.debug(f"【Indexer】: 已经加载索引器:{self._indexer_schemas}")
log.debug(f"【Indexer】加载索引器:{self._indexer_schemas}")
self.init_config()
def init_config(self):
self.progress = ProgressHelper()
self._client_type = ModuleConf.INDEXER_DICT.get(
Config().get_config("pt").get('search_indexer') or 'builtin'
)
) or IndexerType.BUILTIN
self._client = self.__get_client(self._client_type)
def __build_class(self, ctype, conf):

View File

@ -1751,20 +1751,6 @@ class Media:
return episode.get("name")
return None
def get_movie_discover(self, page=1):
"""
发现电影
"""
if not self.movie:
return []
try:
movies = self.movie.discover(page)
if movies:
return movies.get("results")
except Exception as e:
print(str(e))
return []
def get_movie_similar(self, tmdbid, page=1):
"""
查询类似电影
@ -2031,10 +2017,20 @@ class Media:
"""
获取TMDB热门电影随机一张背景图
"""
movies = self.get_movie_discover()
if movies:
backdrops = [movie.get("backdrop_path") for movie in movies]
return TMDB_IMAGE_ORIGINAL_URL % backdrops[round(random.uniform(0, len(backdrops) - 1))]
try:
# 随机类型
mtype = MediaType.MOVIE if random.uniform(0, 1) > 0.5 else MediaType.TV
# 热门电影/电视剧
if mtype == MediaType.MOVIE:
medias = self.discover.discover_movies(params={"sort_by": "popularity.desc"})
else:
medias = self.discover.discover_tv_shows(params={"sort_by": "popularity.desc"})
if medias:
backdrops = [media.get("backdrop_path") for media in medias if media.get("backdrop_path")]
# 随机一张
return TMDB_IMAGE_ORIGINAL_URL % backdrops[round(random.uniform(0, len(backdrops) - 1))]
except Exception as err:
print(str(err))
return ""
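The backdrop selection above indexes the list with `round(random.uniform(0, len(backdrops) - 1))`. An equivalent, bounds-safe sketch of that step (the helper name is illustrative, not the project's):

```python
import random

def pick_backdrop(medias):
    """Sketch of get_random_backdrop's selection step: collect backdrop
    paths and pick one at random (TMDB URL formatting omitted)."""
    backdrops = [m.get("backdrop_path") for m in medias if m.get("backdrop_path")]
    if not backdrops:
        return ""
    # Equivalent to indexing with round(random.uniform(0, len - 1)),
    # but random.choice can never produce an out-of-range index.
    return random.choice(backdrops)
```

`random.choice` also makes the uniform-pick intent explicit, whereas `round(uniform(...))` slightly under-weights the first and last elements.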
def save_rename_cache(self, file_name, cache_info):

View File

@ -26,7 +26,7 @@ class MediaServer:
'app.mediaserver.client',
filter_func=lambda _, obj: hasattr(obj, 'schema')
)
log.debug(f"【MediaServer】: 已经加载媒体服务器:{self._mediaserver_schemas}")
log.debug(f"【MediaServer】加载媒体服务器:{self._mediaserver_schemas}")
self.init_config()
def init_config(self):

View File

@ -28,7 +28,7 @@ class Message(object):
'app.message.client',
filter_func=lambda _, obj: hasattr(obj, 'schema')
)
log.debug(f"【Message】: 已经加载消息服务:{self._message_schemas}")
log.debug(f"【Message】加载消息服务:{self._message_schemas}")
self.init_config()
def init_config(self):
@ -93,7 +93,7 @@ class Message(object):
state, ret_msg = self.__build_class(ctype=ctype,
conf=config).send_msg(title="测试",
text="这是一条测试消息",
url="https://github.com/jxxghp/nas-tools")
url="https://github.com/NAStool/nas-tools")
if not state:
log.error(f"【Message】{ctype} 发送测试消息失败:%s" % ret_msg)
return state

View File

@ -12,7 +12,7 @@ from app.downloader import Downloader
from app.helper import MetaHelper
from app.mediaserver import MediaServer
from app.rss import Rss
from app.sites import Sites
from app.sites import Sites, SiteUserInfo, SiteSignin
from app.subscribe import Subscribe
from app.sync import Sync
from app.utils import ExceptionUtils
@ -83,7 +83,7 @@ class Scheduler:
except Exception as e:
log.info("站点自动签到时间 配置格式错误:%s" % str(e))
hour = minute = 0
self.SCHEDULER.add_job(Sites().signin,
self.SCHEDULER.add_job(SiteSignin().signin,
"cron",
hour=hour,
minute=minute)
@ -95,7 +95,7 @@ class Scheduler:
log.info("站点自动签到时间 配置格式错误:%s" % str(e))
hours = 0
if hours:
self.SCHEDULER.add_job(Sites().signin,
self.SCHEDULER.add_job(SiteSignin().signin,
"interval",
hours=hours)
log.info("站点自动签到服务启动")
@ -184,7 +184,7 @@ class Scheduler:
self.SCHEDULER.add_job(Subscribe().subscribe_search, 'interval', seconds=RSS_CHECK_INTERVAL)
# 站点数据刷新
self.SCHEDULER.add_job(Sites().refresh_pt_date_now,
self.SCHEDULER.add_job(SiteUserInfo().refresh_pt_date_now,
'interval',
hours=REFRESH_PT_DATA_INTERVAL,
next_run_time=datetime.datetime.now() + datetime.timedelta(minutes=1))
@ -232,7 +232,7 @@ class Scheduler:
if hour < 0 or minute < 0:
log.warn("站点自动签到时间 配置格式错误:不启动任务")
return
self.SCHEDULER.add_job(Sites().signin,
self.SCHEDULER.add_job(SiteSignin().signin,
"date",
run_date=datetime.datetime(year, month, day, hour, minute, second))

View File

@ -1,3 +1,4 @@
from app.sites.site_user_info_factory import SiteUserInfoFactory
from app.sites.site_userinfo import SiteUserInfo
from .sites import Sites
from .sitecookie import SiteCookie
from .site_cookie import SiteCookie
from .site_signin import SiteSignin

166
app/sites/site_signin.py Normal file
View File

@ -0,0 +1,166 @@
import re
from multiprocessing.dummy import Pool as ThreadPool
from threading import Lock
from lxml import etree
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as es
from selenium.webdriver.support.wait import WebDriverWait
import log
from app.conf import SiteConf
from app.helper import ChromeHelper, SubmoduleHelper, DbHelper, SiteHelper
from app.message import Message
from app.sites.sites import Sites
from app.utils import RequestUtils, ExceptionUtils, StringUtils
from app.utils.commons import singleton
from config import Config
lock = Lock()
@singleton
class SiteSignin(object):
sites = None
dbhelper = None
message = None
_MAX_CONCURRENCY = 10
def __init__(self):
# 加载模块
self._site_schema = SubmoduleHelper.import_submodules('app.sites.sitesignin',
filter_func=lambda _, obj: hasattr(obj, 'match'))
log.debug(f"【Sites】加载站点签到{self._site_schema}")
self.init_config()
def init_config(self):
self.sites = Sites()
self.dbhelper = DbHelper()
self.message = Message()
def __build_class(self, url):
for site_schema in self._site_schema:
try:
if site_schema.match(url):
return site_schema
except Exception as e:
ExceptionUtils.exception_traceback(e)
return None
def signin(self):
"""
站点并发签到
"""
sites = self.sites.get_sites(signin=True)
if not sites:
return
with ThreadPool(min(len(sites), self._MAX_CONCURRENCY)) as p:
status = p.map(self.__signin_site, sites)
if status:
self.message.send_site_signin_message(status)
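`signin` fans out across sites with a thread-backed pool capped at `_MAX_CONCURRENCY`. The fan-out pattern in isolation (the worker here is illustrative):

```python
from multiprocessing.dummy import Pool as ThreadPool  # thread-backed Pool

MAX_CONCURRENCY = 10

def run_concurrently(worker, items):
    """Sketch of the fan-out used by SiteSignin.signin and the data refresh:
    pool size is min(len(items), MAX_CONCURRENCY); map() preserves input order."""
    if not items:
        return []
    with ThreadPool(min(len(items), MAX_CONCURRENCY)) as pool:
        return pool.map(worker, items)
```

Because `multiprocessing.dummy` uses threads rather than processes, the workers share state (hence the module-level `Lock`) and the per-site results come back in the same order as the input list.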
def __signin_site(self, site_info):
"""
签到一个站点
"""
site_module = self.__build_class(site_info.get("signurl"))
if site_module:
return site_module.signin(site_info)
else:
return self.__signin_base(site_info)
@staticmethod
def __signin_base(site_info):
"""
通用签到处理
:param site_info: 站点信息
:return: 签到结果信息
"""
if not site_info:
return ""
site = site_info.get("name")
try:
site_url = site_info.get("signurl")
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
if not site_url or not site_cookie:
log.warn("【Sites】未配置 %s 的站点地址或Cookie无法签到" % str(site))
return ""
chrome = ChromeHelper()
if site_info.get("chrome") and chrome.get_status():
# 首页
log.info("【Sites】开始站点仿真签到%s" % site)
home_url = StringUtils.get_base_url(site_url)
if not chrome.visit(url=home_url, ua=ua, cookie=site_cookie):
log.warn("【Sites】%s 无法打开网站" % site)
return f"【{site}】无法打开网站!"
# 循环检测是否过cf
cloudflare = chrome.pass_cloudflare()
if not cloudflare:
log.warn("【Sites】%s 跳转站点失败" % site)
return f"【{site}】跳转站点失败!"
# 判断是否已签到
html_text = chrome.get_html()
if not html_text:
log.warn("【Sites】%s 获取站点源码失败" % site)
return f"【{site}】获取站点源码失败!"
# 查找签到按钮
html = etree.HTML(html_text)
xpath_str = None
for xpath in SiteConf.SITE_CHECKIN_XPATH:
if html.xpath(xpath):
xpath_str = xpath
break
if re.search(r'已签|签到已得', html_text, re.IGNORECASE) \
and not xpath_str:
log.info("【Sites】%s 今日已签到" % site)
return f"【{site}】今日已签到"
if not xpath_str:
if SiteHelper.is_logged_in(html_text):
log.warn("【Sites】%s 未找到签到按钮,模拟登录成功" % site)
return f"【{site}】模拟登录成功"
else:
log.info("【Sites】%s 未找到签到按钮,且模拟登录失败" % site)
return f"【{site}】模拟登录失败!"
# 开始仿真
try:
checkin_obj = WebDriverWait(driver=chrome.browser, timeout=6).until(
es.element_to_be_clickable((By.XPATH, xpath_str)))
if checkin_obj:
checkin_obj.click()
log.info("【Sites】%s 仿真签到成功" % site)
return f"【{site}】仿真签到成功"
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.warn("【Sites】%s 仿真签到失败:%s" % (site, str(e)))
return f"【{site}】签到失败!"
# 模拟登录
else:
if site_url.find("attendance.php") != -1:
checkin_text = "签到"
else:
checkin_text = "模拟登录"
log.info(f"【Sites】开始站点{checkin_text}{site}")
# 访问链接
res = RequestUtils(cookies=site_cookie,
headers=ua,
proxies=Config().get_proxies() if site_info.get("proxy") else None
).get_res(url=site_url)
if res and res.status_code == 200:
if not SiteHelper.is_logged_in(res.text):
log.warn(f"【Sites】{site} {checkin_text}失败请检查Cookie")
return f"{site}{checkin_text}失败请检查Cookie"
else:
log.info(f"【Sites】{site} {checkin_text}成功")
return f"{site}{checkin_text}成功"
elif res is not None:
log.warn(f"【Sites】{site} {checkin_text}失败,状态码:{res.status_code}")
return f"{site}{checkin_text}失败,状态码:{res.status_code}"
else:
log.warn(f"【Sites】{site} {checkin_text}失败,无法打开网站")
return f"{site}{checkin_text}失败,无法打开网站!"
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.warn("【Sites】%s 签到出错:%s" % (site, str(e)))
return f"{site} 签到出错:{str(e)}"
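`SiteSignin` resolves a handler per site by probing each loaded schema's `match()` and falling back to the generic flow. A minimal sketch of that dispatch (the handler classes here are illustrative, not the project's):

```python
class AttendanceSignin:
    """Hypothetical handler that claims attendance-style sign-in URLs."""
    @classmethod
    def match(cls, url):
        return "attendance.php" in url

    @classmethod
    def signin(cls, site_info):
        return f"[{site_info['name']}] attendance sign-in ok"


class GenericSignin:
    """Fallback, mirroring __signin_base: used when no schema matches."""
    @classmethod
    def signin(cls, site_info):
        return f"[{site_info['name']}] generic sign-in ok"


def build_class(url, schemas):
    # Mirrors SiteSignin.__build_class: first schema whose match() accepts wins.
    for schema in schemas:
        if schema.match(url):
            return schema
    return None


def signin_site(site_info, schemas):
    handler = build_class(site_info.get("signurl", ""), schemas)
    return (handler or GenericSignin).signin(site_info)
```

New site-specific sign-in logic then only needs a class in `app.sites.sitesignin` exposing `match()` and `signin()`; nothing in the dispatcher changes.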

View File

@ -1,110 +0,0 @@
import requests
import log
from app.helper import ChromeHelper, SubmoduleHelper
from app.utils import RequestUtils, ExceptionUtils
from app.utils.commons import singleton
from config import Config
@singleton
class SiteUserInfoFactory(object):
def __init__(self):
self._site_schema = SubmoduleHelper.import_submodules('app.sites.siteuserinfo',
filter_func=lambda _, obj: hasattr(obj, 'schema'))
self._site_schema.sort(key=lambda x: x.order)
log.debug(f"【Sites】: 已经加载的站点解析 {self._site_schema}")
def __build_class(self, html_text):
for site_schema in self._site_schema:
try:
if site_schema.match(html_text):
return site_schema
except Exception as e:
ExceptionUtils.exception_traceback(e)
return None
def build(self, url, site_name, site_cookie=None, ua=None, emulate=None, proxy=False):
if not site_cookie:
return None
log.debug(f"【Sites】站点 {site_name} url={url} site_cookie={site_cookie} ua={ua}")
session = requests.Session()
# 检测环境,有浏览器内核的优先使用仿真签到
chrome = ChromeHelper()
if emulate and chrome.get_status():
if not chrome.visit(url=url, ua=ua, cookie=site_cookie):
log.error("【Sites】%s 无法打开网站" % site_name)
return None
# 循环检测是否过cf
cloudflare = chrome.pass_cloudflare()
if not cloudflare:
log.error("【Sites】%s 跳转站点失败" % site_name)
return None
# 判断是否已签到
html_text = chrome.get_html()
else:
proxies = Config().get_proxies() if proxy else None
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
# 第一次登录反爬
if html_text.find("title") == -1:
i = html_text.find("window.location")
if i == -1:
return None
tmp_url = url + html_text[i:html_text.find(";")] \
.replace("\"", "").replace("+", "").replace(" ", "").replace("window.location=", "")
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=tmp_url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
else:
log.error("【Sites】站点 %s 被反爬限制:%s, 状态码:%s" % (site_name, url, res.status_code))
return None
# 兼容假首页情况,假首页通常没有 <link rel="search" 属性
if '"search"' not in html_text and '"csrf-token"' not in html_text:
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=url + "/index.php")
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
elif res is not None:
log.error(f"【Sites】站点 {site_name} 连接失败,状态码:{res.status_code}")
return None
else:
log.error(f"【Sites】站点 {site_name} 无法访问:{url}")
return None
# 解析站点类型
site_schema = self.__build_class(html_text)
if not site_schema:
log.error("【Sites】站点 %s 无法识别站点类型" % site_name)
return None
return site_schema(site_name, url, site_cookie, html_text, session=session, ua=ua)

366
app/sites/site_userinfo.py Normal file
View File

@ -0,0 +1,366 @@
import json
from datetime import datetime
from multiprocessing.dummy import Pool as ThreadPool
from threading import Lock
import requests
import log
from app.helper import ChromeHelper, SubmoduleHelper, DbHelper
from app.message import Message
from app.sites.sites import Sites
from app.utils import RequestUtils, ExceptionUtils
from app.utils.commons import singleton
from config import Config
lock = Lock()
@singleton
class SiteUserInfo(object):
sites = None
dbhelper = None
message = None
_MAX_CONCURRENCY = 10
_last_update_time = None
_sites_data = {}
def __init__(self):
# 加载模块
self._site_schema = SubmoduleHelper.import_submodules('app.sites.siteuserinfo',
filter_func=lambda _, obj: hasattr(obj, 'schema'))
self._site_schema.sort(key=lambda x: x.order)
log.debug(f"【Sites】加载站点解析{self._site_schema}")
self.init_config()
def init_config(self):
self.sites = Sites()
self.dbhelper = DbHelper()
self.message = Message()
# 站点上一次更新时间
self._last_update_time = None
# 站点数据
self._sites_data = {}
def __build_class(self, html_text):
for site_schema in self._site_schema:
try:
if site_schema.match(html_text):
return site_schema
except Exception as e:
ExceptionUtils.exception_traceback(e)
return None
def build(self, url, site_name, site_cookie=None, ua=None, emulate=None, proxy=False):
if not site_cookie:
return None
session = requests.Session()
log.debug(f"【Sites】站点 {site_name} url={url} site_cookie={site_cookie} ua={ua}")
# 检测环境,有浏览器内核的优先使用仿真签到
chrome = ChromeHelper()
if emulate and chrome.get_status():
if not chrome.visit(url=url, ua=ua, cookie=site_cookie):
log.error("【Sites】%s 无法打开网站" % site_name)
return None
# 循环检测是否过cf
cloudflare = chrome.pass_cloudflare()
if not cloudflare:
log.error("【Sites】%s 跳转站点失败" % site_name)
return None
# 判断是否已签到
html_text = chrome.get_html()
else:
proxies = Config().get_proxies() if proxy else None
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
# 第一次登录反爬
if html_text.find("title") == -1:
i = html_text.find("window.location")
if i == -1:
return None
tmp_url = url + html_text[i:html_text.find(";")] \
.replace("\"", "").replace("+", "").replace(" ", "").replace("window.location=", "")
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=tmp_url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
else:
log.error("【Sites】站点 %s 被反爬限制:%s, 状态码:%s" % (site_name, url, res.status_code if res is not None else "无响应"))
return None
# 兼容假首页情况,假首页通常没有 <link rel="search" 属性
if '"search"' not in html_text and '"csrf-token"' not in html_text:
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=url + "/index.php")
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
elif res is not None:
log.error(f"【Sites】站点 {site_name} 连接失败,状态码:{res.status_code}")
return None
else:
log.error(f"【Sites】站点 {site_name} 无法访问:{url}")
return None
# 解析站点类型
site_schema = self.__build_class(html_text)
if not site_schema:
log.error("【Sites】站点 %s 无法识别站点类型" % site_name)
return None
return site_schema(site_name, url, site_cookie, html_text, session=session, ua=ua)
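The first-login anti-crawl branch above rebuilds the redirect target out of a `window.location=...` snippet by slicing and stripping the statement. Isolated, it looks like this (one assumption: the `';'` search here starts at the match position, whereas the code above scans from the start of the page):

```python
def extract_redirect(base_url, html_text):
    """Sketch of the anti-crawl redirect handling: pull the target path out
    of a window.location=... statement and join it to the base URL."""
    i = html_text.find("window.location")
    if i == -1:
        return None
    fragment = html_text[i:html_text.find(";", i)]
    path = (fragment.replace('"', "")
                    .replace("+", "")
                    .replace(" ", "")
                    .replace("window.location=", ""))
    return base_url + path
```

The stripped string is then fetched with the same cookies and session, so the second response carries the real page the anti-crawl shim was hiding.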
def __refresh_site_data(self, site_info):
"""
更新单个site 数据信息
:param site_info:
:return:
"""
site_name = site_info.get("name")
site_url = site_info.get("strict_url")
if not site_url:
return
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
unread_msg_notify = site_info.get("unread_msg_notify")
chrome = site_info.get("chrome")
proxy = site_info.get("proxy")
try:
site_user_info = self.build(url=site_url,
site_name=site_name,
site_cookie=site_cookie,
ua=ua,
emulate=chrome,
proxy=proxy)
if site_user_info:
log.debug(f"【Sites】站点 {site_name} 开始以 {site_user_info.site_schema()} 模型解析")
# 开始解析
site_user_info.parse()
log.debug(f"【Sites】站点 {site_name} 解析完成")
# 获取不到数据时,仅返回错误信息,不做历史数据更新
if site_user_info.err_msg:
self._sites_data.update({site_name: {"err_msg": site_user_info.err_msg}})
return
# 发送通知,存在未读消息
self.__notify_unread_msg(site_name, site_user_info, unread_msg_notify)
self._sites_data.update(
{
site_name: {
"upload": site_user_info.upload,
"username": site_user_info.username,
"user_level": site_user_info.user_level,
"join_at": site_user_info.join_at,
"download": site_user_info.download,
"ratio": site_user_info.ratio,
"seeding": site_user_info.seeding,
"seeding_size": site_user_info.seeding_size,
"leeching": site_user_info.leeching,
"bonus": site_user_info.bonus,
"url": site_url,
"err_msg": site_user_info.err_msg,
"message_unread": site_user_info.message_unread
}
})
return site_user_info
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.error(f"【Sites】站点 {site_name} 获取流量数据失败:{str(e)}")
def __notify_unread_msg(self, site_name, site_user_info, unread_msg_notify):
if site_user_info.message_unread <= 0:
return
if self._sites_data.get(site_name, {}).get('message_unread') == site_user_info.message_unread:
return
if not unread_msg_notify:
return
# 解析出内容,则发送内容
if len(site_user_info.message_unread_contents) > 0:
for head, date, content in site_user_info.message_unread_contents:
msg_title = f"【站点 {site_user_info.site_name} 消息】"
msg_text = f"时间:{date}\n标题:{head}\n内容:\n{content}"
self.message.send_site_message(title=msg_title, text=msg_text)
else:
self.message.send_site_message(
title=f"站点 {site_user_info.site_name} 收到 {site_user_info.message_unread} 条新消息,请登陆查看")
def refresh_pt_date_now(self):
"""
强制刷新站点数据
"""
self.__refresh_all_site_data(force=True)
def get_pt_date(self, specify_sites=None, force=False):
"""
获取站点上传下载量
"""
self.__refresh_all_site_data(force=force, specify_sites=specify_sites)
return self._sites_data
def __refresh_all_site_data(self, force=False, specify_sites=None):
"""
多线程刷新站点下载上传量默认间隔6小时
"""
if not self.sites.get_sites():
return
with lock:
if not force \
and not specify_sites \
and self._last_update_time \
and (datetime.now() - self._last_update_time).total_seconds() < 6 * 3600:
return
if specify_sites \
and not isinstance(specify_sites, list):
specify_sites = [specify_sites]
# 没有指定站点,默认使用全部站点
if not specify_sites:
refresh_sites = self.sites.get_sites(statistic=True)
else:
refresh_sites = [site for site in self.sites.get_sites(statistic=True) if
site.get("name") in specify_sites]
if not refresh_sites:
return
# 并发刷新
with ThreadPool(min(len(refresh_sites), self._MAX_CONCURRENCY)) as p:
site_user_infos = p.map(self.__refresh_site_data, refresh_sites)
site_user_infos = [info for info in site_user_infos if info]
# 登记历史数据
self.dbhelper.insert_site_statistics_history(site_user_infos)
# 实时用户数据
self.dbhelper.update_site_user_statistics(site_user_infos)
# 更新站点图标
self.dbhelper.update_site_favicon(site_user_infos)
# 实时做种信息
self.dbhelper.update_site_seed_info(site_user_infos)
# 站点图标重新加载
self.sites.init_favicons()
# 更新时间
self._last_update_time = datetime.now()
def get_pt_site_statistics_history(self, days=7):
"""
获取站点上传下载量
"""
site_urls = []
for site in self.sites.get_sites(statistic=True):
site_url = site.get("strict_url")
if site_url:
site_urls.append(site_url)
return self.dbhelper.get_site_statistics_recent_sites(days=days, strict_urls=site_urls)
def get_site_user_statistics(self, sites=None, encoding="RAW"):
"""
获取站点用户数据
:param sites: 站点名称
:param encoding: RAW/DICT
:return:
"""
statistic_sites = self.sites.get_sites(statistic=True)
if not sites:
site_urls = [site.get("strict_url") for site in statistic_sites]
else:
site_urls = [site.get("strict_url") for site in statistic_sites
if site.get("name") in sites]
raw_statistics = self.dbhelper.get_site_user_statistics(strict_urls=site_urls)
if encoding == "RAW":
return raw_statistics
return self.__todict(raw_statistics)
def get_pt_site_activity_history(self, site, days=365 * 2):
"""
查询站点 上传下载做种数据
:param site: 站点名称
:param days: 最大数据量
:return:
"""
site_activities = [["time", "upload", "download", "bonus", "seeding", "seeding_size"]]
sql_site_activities = self.dbhelper.get_site_statistics_history(site=site, days=days)
for sql_site_activity in sql_site_activities:
timestamp = datetime.strptime(sql_site_activity.DATE, '%Y-%m-%d').timestamp() * 1000
site_activities.append(
[timestamp,
sql_site_activity.UPLOAD,
sql_site_activity.DOWNLOAD,
sql_site_activity.BONUS,
sql_site_activity.SEEDING,
sql_site_activity.SEEDING_SIZE])
return site_activities
def get_pt_site_seeding_info(self, site):
"""
查询站点 做种分布信息
:param site: 站点名称
:return: seeding_info:[uploader_num, seeding_size]
"""
site_seeding_info = {"seeding_info": []}
seeding_info = self.dbhelper.get_site_seeding_info(site=site)
if not seeding_info:
return site_seeding_info
site_seeding_info["seeding_info"] = json.loads(seeding_info[0])
return site_seeding_info
@staticmethod
def __todict(raw_statistics):
statistics = []
for site in raw_statistics:
statistics.append({"site": site.SITE,
"username": site.USERNAME,
"user_level": site.USER_LEVEL,
"join_at": site.JOIN_AT,
"update_at": site.UPDATE_AT,
"upload": site.UPLOAD,
"download": site.DOWNLOAD,
"ratio": site.RATIO,
"seeding": site.SEEDING,
"leeching": site.LEECHING,
"seeding_size": site.SEEDING_SIZE,
"bonus": site.BONUS,
"url": site.URL,
"msg_unread": site.MSG_UNREAD
})
return statistics
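`__refresh_all_site_data` gates refreshes to at most one per six hours unless forced. The gate reduces to a timedelta comparison (a sketch; note `total_seconds()`, since `timedelta.seconds` wraps at one day and would treat a 25-hour-old refresh as one hour old):

```python
from datetime import datetime, timedelta

REFRESH_INTERVAL = timedelta(hours=6)

def should_refresh(last_update, force=False, now=None):
    """Sketch of the refresh gate in __refresh_all_site_data."""
    now = now or datetime.now()
    if force or last_update is None:
        return True
    # total_seconds() counts the full elapsed gap, not the within-day remainder.
    return (now - last_update).total_seconds() >= REFRESH_INTERVAL.total_seconds()
```

Forced refreshes (`refresh_pt_date_now`) and the first run after startup bypass the gate; everything else reuses `_sites_data` until the interval elapses.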

View File

@ -1,29 +1,18 @@
import json
import random
import re
import time
import traceback
from datetime import datetime
from functools import lru_cache
from multiprocessing.dummy import Pool as ThreadPool
from threading import Lock
from lxml import etree
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as es
from selenium.webdriver.support.wait import WebDriverWait
import log
from app.conf import SiteConf
from app.helper import ChromeHelper, SiteHelper, DbHelper
from app.message import Message
from app.sites.site_user_info_factory import SiteUserInfoFactory
from app.conf import SiteConf
from app.utils import RequestUtils, StringUtils, ExceptionUtils
from app.utils.commons import singleton
from config import Config
lock = Lock()
@singleton
class Sites:
@ -33,13 +22,11 @@ class Sites:
_sites = []
_siteByIds = {}
_siteByUrls = {}
_sites_data = {}
_site_favicons = {}
_rss_sites = []
_brush_sites = []
_statistic_sites = []
_signin_sites = []
_last_update_time = None
_MAX_CONCURRENCY = 10
@ -51,10 +38,6 @@ class Sites:
self.message = Message()
# 原始站点列表
self._sites = []
# 站点数据
self._sites_data = {}
# 站点数据更新时间
self._last_update_time = None
# ID存储站点
self._siteByIds = {}
# URL存储站点
@ -68,7 +51,7 @@ class Sites:
# 开启签到功能站点:
self._signin_sites = []
# 站点图标
self.__init_favicons()
self.init_favicons()
# 站点数据
self._sites = self.dbhelper.get_config_site()
for site in self._sites:
@ -113,7 +96,8 @@ class Sites:
"unread_msg_notify": True if site_note.get("message") == "Y" else False,
"chrome": True if site_note.get("chrome") == "Y" else False,
"proxy": True if site_note.get("proxy") == "Y" else False,
"subtitle": True if site_note.get("subtitle") == "Y" else False
"subtitle": True if site_note.get("subtitle") == "Y" else False,
"strict_url": StringUtils.get_base_url(site_signurl or site_rssurl)
}
# 以ID存储
self._siteByIds[site.ID] = site_info
@ -122,7 +106,7 @@ class Sites:
if site_strict_url:
self._siteByUrls[site_strict_url] = site_info
def __init_favicons(self):
def init_favicons(self):
"""
加载图标到内存
"""
@ -214,129 +198,6 @@ class Sites:
return site.get("download_setting")
return None
def __refresh_all_site_data(self, force=False, specify_sites=None):
"""
多线程刷新站点下载上传量默认间隔6小时
"""
if not self._sites:
return
with lock:
if not force \
and not specify_sites \
and self._last_update_time \
and (datetime.now() - self._last_update_time).seconds < 6 * 3600:
return
if specify_sites \
and not isinstance(specify_sites, list):
specify_sites = [specify_sites]
# 没有指定站点,默认使用全部站点
if not specify_sites:
refresh_sites = self.get_sites(statistic=True)
else:
refresh_sites = [site for site in self.get_sites(statistic=True) if site.get("name") in specify_sites]
if not refresh_sites:
return
# 并发刷新
with ThreadPool(min(len(refresh_sites), self._MAX_CONCURRENCY)) as p:
site_user_infos = p.map(self.__refresh_site_data, refresh_sites)
site_user_infos = [info for info in site_user_infos if info]
# 登记历史数据
self.dbhelper.insert_site_statistics_history(site_user_infos)
# 实时用户数据
self.dbhelper.update_site_user_statistics(site_user_infos)
# 更新站点图标
self.dbhelper.update_site_favicon(site_user_infos)
# 实时做种信息
self.dbhelper.update_site_seed_info(site_user_infos)
# 站点图标重新加载
self.__init_favicons()
# 更新时间
self._last_update_time = datetime.now()
def __refresh_site_data(self, site_info):
"""
更新单个site 数据信息
:param site_info:
:return:
"""
site_name = site_info.get("name")
site_url = self.__get_site_strict_url(site_info)
if not site_url:
return
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
unread_msg_notify = site_info.get("unread_msg_notify")
chrome = site_info.get("chrome")
proxy = site_info.get("proxy")
try:
site_user_info = SiteUserInfoFactory().build(url=site_url,
site_name=site_name,
site_cookie=site_cookie,
ua=ua,
emulate=chrome,
proxy=proxy)
if site_user_info:
log.debug(f"【Sites】站点 {site_name} 开始以 {site_user_info.site_schema()} 模型解析")
# 开始解析
site_user_info.parse()
log.debug(f"【Sites】站点 {site_name} 解析完成")
# 获取不到数据时,仅返回错误信息,不做历史数据更新
if site_user_info.err_msg:
self._sites_data.update({site_name: {"err_msg": site_user_info.err_msg}})
return
# 发送通知,存在未读消息
self.__notify_unread_msg(site_name, site_user_info, unread_msg_notify)
self._sites_data.update({site_name: {
"upload": site_user_info.upload,
"username": site_user_info.username,
"user_level": site_user_info.user_level,
"join_at": site_user_info.join_at,
"download": site_user_info.download,
"ratio": site_user_info.ratio,
"seeding": site_user_info.seeding,
"seeding_size": site_user_info.seeding_size,
"leeching": site_user_info.leeching,
"bonus": site_user_info.bonus,
"url": site_url,
"err_msg": site_user_info.err_msg,
"message_unread": site_user_info.message_unread}
})
return site_user_info
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.error("【Sites】站点 %s 获取流量数据失败:%s - %s" % (site_name, str(e), traceback.format_exc()))
def __notify_unread_msg(self, site_name, site_user_info, unread_msg_notify):
if site_user_info.message_unread <= 0:
return
if self._sites_data.get(site_name, {}).get('message_unread') == site_user_info.message_unread:
return
if not unread_msg_notify:
return
# 解析出内容,则发送内容
if len(site_user_info.message_unread_contents) > 0:
for head, date, content in site_user_info.message_unread_contents:
msg_title = f"【站点 {site_user_info.site_name} 消息】"
msg_text = f"时间:{date}\n标题:{head}\n内容:\n{content}"
self.message.send_site_message(title=msg_title, text=msg_text)
else:
self.message.send_site_message(
title=f"站点 {site_user_info.site_name} 收到 {site_user_info.message_unread} 条新消息,请登陆查看")
def test_connection(self, site_id):
"""
测试站点连通性
@ -390,220 +251,6 @@ class Sites:
else:
return False, "无法打开网站", seconds
def signin(self):
"""
站点并发签到
"""
sites = self.get_sites(signin=True)
if not sites:
return
with ThreadPool(min(len(sites), self._MAX_CONCURRENCY)) as p:
status = p.map(self.__signin_site, sites)
if status:
self.message.send_site_signin_message(status)
@staticmethod
def __signin_site(site_info):
"""
签到一个站点
"""
if not site_info:
return ""
site = site_info.get("name")
try:
site_url = site_info.get("signurl")
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
if not site_url or not site_cookie:
log.warn("【Sites】未配置 %s 的站点地址或Cookie无法签到" % str(site))
return ""
chrome = ChromeHelper()
if site_info.get("chrome") and chrome.get_status():
# 首页
log.info("【Sites】开始站点仿真签到%s" % site)
home_url = StringUtils.get_base_url(site_url)
if not chrome.visit(url=home_url, ua=ua, cookie=site_cookie):
log.warn("【Sites】%s 无法打开网站" % site)
return f"【{site}】无法打开网站!"
# 循环检测是否过cf
cloudflare = chrome.pass_cloudflare()
if not cloudflare:
log.warn("【Sites】%s 跳转站点失败" % site)
return f"【{site}】跳转站点失败!"
# 判断是否已签到
html_text = chrome.get_html()
if not html_text:
log.warn("【Sites】%s 获取站点源码失败" % site)
return f"【{site}】获取站点源码失败!"
# 查找签到按钮
html = etree.HTML(html_text)
xpath_str = None
for xpath in SiteConf.SITE_CHECKIN_XPATH:
if html.xpath(xpath):
xpath_str = xpath
break
if re.search(r'已签|签到已得', html_text, re.IGNORECASE) \
and not xpath_str:
log.info("【Sites】%s 今日已签到" % site)
return f"{site}】今日已签到"
if not xpath_str:
if SiteHelper.is_logged_in(html_text):
log.warn("【Sites】%s 未找到签到按钮,模拟登录成功" % site)
return f"{site}】模拟登录成功"
else:
log.info("【Sites】%s 未找到签到按钮,且模拟登录失败" % site)
return f"{site}】模拟登录失败!"
# 开始仿真
try:
checkin_obj = WebDriverWait(driver=chrome.browser, timeout=6).until(
es.element_to_be_clickable((By.XPATH, xpath_str)))
if checkin_obj:
checkin_obj.click()
log.info("【Sites】%s 仿真签到成功" % site)
return f"{site}】仿真签到成功"
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.warn("【Sites】%s 仿真签到失败:%s" % (site, str(e)))
return f"{site}】签到失败!"
# 模拟登录
else:
if site_url.find("attendance.php") != -1:
checkin_text = "签到"
else:
checkin_text = "模拟登录"
log.info(f"【Sites】开始站点{checkin_text}{site}")
# 访问链接
res = RequestUtils(cookies=site_cookie,
headers=ua,
proxies=Config().get_proxies() if site_info.get("proxy") else None
).get_res(url=site_url)
if res and res.status_code == 200:
if not SiteHelper.is_logged_in(res.text):
log.warn(f"【Sites】{site} {checkin_text}失败请检查Cookie")
return f"{site}{checkin_text}失败请检查Cookie"
else:
log.info(f"【Sites】{site} {checkin_text}成功")
return f"{site}{checkin_text}成功"
elif res is not None:
log.warn(f"【Sites】{site} {checkin_text}失败,状态码:{res.status_code}")
return f"{site}{checkin_text}失败,状态码:{res.status_code}"
else:
log.warn(f"【Sites】{site} {checkin_text}失败,无法打开网站")
return f"{site}{checkin_text}失败,无法打开网站!"
except Exception as e:
log.error("【Sites】%s 签到出错:%s - %s" % (site, str(e), traceback.format_exc()))
return f"{site} 签到出错:{str(e)}"
def refresh_pt_date_now(self):
"""
强制刷新站点数据
"""
self.__refresh_all_site_data(force=True)
def get_pt_date(self, specify_sites=None, force=False):
"""
获取站点上传下载量
"""
self.__refresh_all_site_data(force=force, specify_sites=specify_sites)
return self._sites_data
def get_pt_site_statistics_history(self, days=7):
"""
获取站点上传下载量
"""
site_urls = []
for site in self.get_sites(statistic=True):
site_url = self.__get_site_strict_url(site)
if site_url:
site_urls.append(site_url)
return self.dbhelper.get_site_statistics_recent_sites(days=days, strict_urls=site_urls)
def get_site_user_statistics(self, sites=None, encoding="RAW"):
"""
获取站点用户数据
:param sites: 站点名称
:param encoding: RAW/DICT
:return:
"""
statistic_sites = self.get_sites(statistic=True)
if not sites:
site_urls = [self.__get_site_strict_url(site) for site in statistic_sites]
else:
site_urls = [self.__get_site_strict_url(site) for site in statistic_sites
if site.get("name") in sites]
raw_statistics = self.dbhelper.get_site_user_statistics(strict_urls=site_urls)
if encoding == "RAW":
return raw_statistics
return self.__todict(raw_statistics)
@staticmethod
def __todict(raw_statistics):
statistics = []
for site in raw_statistics:
statistics.append({"site": site.SITE,
"username": site.USERNAME,
"user_level": site.USER_LEVEL,
"join_at": site.JOIN_AT,
"update_at": site.UPDATE_AT,
"upload": site.UPLOAD,
"download": site.DOWNLOAD,
"ratio": site.RATIO,
"seeding": site.SEEDING,
"leeching": site.LEECHING,
"seeding_size": site.SEEDING_SIZE,
"bonus": site.BONUS,
"url": site.URL,
"msg_unread": site.MSG_UNREAD
})
return statistics
def get_pt_site_activity_history(self, site, days=365 * 2):
"""
查询站点 上传下载做种数据
:param site: 站点名称
:param days: 最大数据量
:return:
"""
site_activities = [["time", "upload", "download", "bonus", "seeding", "seeding_size"]]
sql_site_activities = self.dbhelper.get_site_statistics_history(site=site, days=days)
for sql_site_activity in sql_site_activities:
timestamp = datetime.strptime(sql_site_activity.DATE, '%Y-%m-%d').timestamp() * 1000
site_activities.append(
[timestamp,
sql_site_activity.UPLOAD,
sql_site_activity.DOWNLOAD,
sql_site_activity.BONUS,
sql_site_activity.SEEDING,
sql_site_activity.SEEDING_SIZE])
return site_activities
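The chart dataset above uses epoch milliseconds for its time axis; the conversion from the stored `'YYYY-MM-DD'` strings can be checked in isolation (the helper name is illustrative, the method inlines this expression):

```python
from datetime import datetime

# get_pt_site_activity_history() turns the DATE column ('YYYY-MM-DD')
# into an epoch-millisecond timestamp for the charting frontend.
def date_to_millis(date_str):
    return datetime.strptime(date_str, '%Y-%m-%d').timestamp() * 1000

ts = date_to_millis('2023-01-02')
```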
def get_pt_site_seeding_info(self, site):
"""
查询站点 做种分布信息
:param site: 站点名称
:return: seeding_info:[uploader_num, seeding_size]
"""
site_seeding_info = {"seeding_info": []}
seeding_info = self.dbhelper.get_site_seeding_info(site=site)
if not seeding_info:
return site_seeding_info
site_seeding_info["seeding_info"] = json.loads(seeding_info[0])
return site_seeding_info
@staticmethod
def __get_site_strict_url(site):
if not site:
return
site_url = site.get("signurl") or site.get("rssurl")
if site_url:
return StringUtils.get_base_url(site_url)
return ""
def get_site_attr(self, url):
"""
整合公有站点和私有站点的属性

View File

@@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
from abc import ABCMeta, abstractmethod
from app.utils import StringUtils
class _ISiteSigninHandler(metaclass=ABCMeta):
"""
实现站点签到的基类所有站点签到类都需要继承此类并实现match和signin方法
实现类放置到sitesignin目录下将会自动加载
"""
# 匹配的站点Url每一个实现类都需要设置为自己的站点Url
site_url = ""
@abstractmethod
def match(self, url):
"""
根据站点Url判断是否匹配当前站点签到类大部分情况使用默认实现即可
:param url: 站点Url
:return: 是否匹配如匹配则会调用该类的signin方法
"""
return True if StringUtils.url_equal(url, self.site_url) else False
@abstractmethod
def signin(self, site_info: dict):
"""
执行签到操作
:param site_info: 站点信息含有站点Url站点CookieUA等信息
:return: 签到结果信息
"""
pass
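A hypothetical concrete handler built on this base class (the site URL and return text are illustrative; real implementations live under the sitesignin directory and use `StringUtils.url_equal`):

```python
from abc import ABCMeta, abstractmethod

class _ISiteSigninHandler(metaclass=ABCMeta):
    # Trimmed copy of the base class above, so the sketch runs standalone.
    site_url = ""

    @abstractmethod
    def match(self, url):
        pass

    @abstractmethod
    def signin(self, site_info: dict):
        pass

class ExampleSignin(_ISiteSigninHandler):
    # Hypothetical site; the loader calls match() on every handler it finds.
    site_url = "example.org"

    def match(self, url):
        # Simplified check; the real default uses StringUtils.url_equal
        return self.site_url in (url or "")

    def signin(self, site_info: dict):
        # A real handler would request the site with the cookie/UA in site_info
        return f"【{site_info.get('name')}】签到成功"
```

Because both abstract methods are overridden, `ExampleSignin` can be instantiated and picked up by a loader that iterates handlers and calls `match()`.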

View File

@@ -243,6 +243,8 @@ class StringUtils:
"""
获取URL根地址
"""
if not url:
return ""
scheme, netloc = StringUtils.get_url_netloc(url)
return f"{scheme}://{netloc}"

View File

@@ -12,7 +12,6 @@ class DownloaderType(Enum):
QB = 'Qbittorrent'
TR = 'Transmission'
Client115 = '115网盘'
Aria2 = 'Aria2'
PikPak = 'PikPak'
@@ -59,8 +58,6 @@ class OsType(Enum):
class IndexerType(Enum):
JACKETT = "Jackett"
PROWLARR = "Prowlarr"
BUILTIN = "Indexer"

View File

@@ -321,10 +321,6 @@ def update_config():
_config['client115'].pop('save_path')
if _config.get('client115', {}).get('save_containerpath'):
_config['client115'].pop('save_containerpath')
if _config.get('aria2', {}).get('save_path'):
_config['aria2'].pop('save_path')
if _config.get('aria2', {}).get('save_containerpath'):
_config['aria2'].pop('save_containerpath')
if _config.get('pikpak', {}).get('save_path'):
_config['pikpak'].pop('save_path')
if _config.get('pikpak', {}).get('save_containerpath'):

View File

@@ -179,6 +179,9 @@ class Config(object):
def get_inner_config_path(self):
return os.path.join(self.get_root_path(), "config")
def get_script_path(self):
return os.path.join(self.get_inner_config_path(), "scripts")
def get_domain(self):
domain = (self.get_config('app') or {}).get('domain')
if domain and not domain.startswith('http'):

View File

@@ -179,7 +179,7 @@ sync:
# 【配置站点检索信息】
pt:
# 【下载使用的客户端软件】qbittorrent、transmission、client115、aria2
# 【下载使用的客户端软件】qbittorrent、transmission、client115
pt_client: qbittorrent
# 【下载软件监控开关】是否监控下载软件true、false如为true则下载完成会自动转移和重命名如为false则不会处理
# 下载软件监控与Sync下载目录同步不要同时开启否则功能存在重复
@@ -188,9 +188,7 @@ pt:
pt_monitor_only: true
# 【下载完成后转移到媒体库的转移模式】link、copy、softlink、move、rclone、rclonecopy、minio、miniocopy详情参考顶部说明
rmt_mode: link
#【聚合检索使用的检索器】jackett、prowlarr、builtin需要配置jackett或prowlarr对应的配置区域builtin为内置索引器需要在配置文件目录/sites目录下存入对应的站点配置文件
# 1、通过微信发送关键字实时检索下载发送格式示例电视剧 西部世界、西部世界第1季、西部世界第1季第2集、西部世界 2022只会匹配真实名称命中后会自动下载使用说明参考https://github.com/jxxghp/nas-tools/wiki/
# 2、使用WEB UI中的搜索界面搜索资源会识别显示真实名称并显示媒体图片和评分等信息会同时匹配种子名称跟真实名称
#【聚合检索使用的检索器】builtin
search_indexer: builtin
# 【内建索引器使用的站点】:只有在该站点列表中内建索引器搜索时才会使用
indexer_sites:
@@ -212,22 +210,6 @@ pt:
# 【搜索结果数量限制】:每个站点返回搜索结果的最大数量
site_search_result_num: 100
# 【配置Jackett检索器】
jackett:
# 【Jackett地址】Jackett地址和端口格式http(s)://IP:PORT
host:
# 【Jackett ApiKey】Jackett配置页面右上角复制API Key
api_key:
# 【Jackett管理密码】如未设置可为空
password:
# 【配置prowlarr检索器】
prowlarr:
# 【Prowlarr地址】
host:
# 【Prowlarr ApiKey】Prowlarr设置页面获取API Key
api_key:
# 【配置qBittorrent下载软件】pt区的pt_client如配置为qbittorrent则需要同步配置该项
qbittorrent:
# 【qBittorrent IP地址和端口】注意如果qb启动了HTTPS证书则需要配置为https://IP
@@ -255,15 +237,6 @@ client115:
# 115 Cookie 抓包获取
cookie:
# 配置Aria2下载器
aria2:
# Aria2地址
host:
# Aria2 RPC端口
port:
# 密码令牌
secret:
# 配置 pikpak 网盘下载器
pikpak:
# 用户名
@@ -290,7 +263,7 @@ douban:
interval:
# 【同步数据类型】同步哪些类型的收藏数据do 在看wish 想看collect 看过,用逗号分隔配置
types: "wish"
# 【自动下载开关】:同步到豆瓣的数据后是否自动检索站点并下载需要配置Jackett
# 【自动下载开关】:同步到豆瓣的数据后是否自动检索站点并下载
auto_search: true
# 【自动添加RSS开关】站点检索找不到的记录是否自动添加RSS订阅可实现未搜索到的自动追更
auto_rss: true

View File

@@ -1,6 +1,6 @@
FROM alpine
RUN apk add --no-cache libffi-dev \
&& apk add --no-cache $(echo $(wget --no-check-certificate -qO- https://raw.githubusercontent.com/jxxghp/nas-tools/master/package_list.txt)) \
&& apk add --no-cache $(echo $(wget --no-check-certificate -qO- https://raw.githubusercontent.com/NAStool/nas-tools/master/package_list.txt)) \
&& ln -sf /usr/share/zoneinfo/${TZ} /etc/localtime \
&& echo "${TZ}" > /etc/timezone \
&& ln -sf /usr/bin/python3 /usr/bin/python \
@@ -10,7 +10,7 @@ RUN apk add --no-cache libffi-dev \
&& chmod +x /usr/bin/mc \
&& pip install --upgrade pip setuptools wheel \
&& pip install cython \
&& pip install -r https://raw.githubusercontent.com/jxxghp/nas-tools/master/requirements.txt \
&& pip install -r https://raw.githubusercontent.com/NAStool/nas-tools/master/requirements.txt \
&& apk del libffi-dev \
&& npm install pm2 -g \
&& rm -rf /tmp/* /root/.cache /var/cache/apk/*
@@ -21,7 +21,7 @@ ENV LANG="C.UTF-8" \
NASTOOL_CN_UPDATE=true \
NASTOOL_VERSION=master \
PS1="\u@\h:\w \$ " \
REPO_URL="https://github.com/jxxghp/nas-tools.git" \
REPO_URL="https://github.com/NAStool/nas-tools.git" \
PYPI_MIRROR="https://pypi.tuna.tsinghua.edu.cn/simple" \
ALPINE_MIRROR="mirrors.ustc.edu.cn" \
PUID=0 \

View File

@@ -1,6 +1,6 @@
FROM alpine
RUN apk add --no-cache libffi-dev \
&& apk add --no-cache $(echo $(wget --no-check-certificate -qO- https://raw.githubusercontent.com/jxxghp/nas-tools/dev/package_list.txt)) \
&& apk add --no-cache $(echo $(wget --no-check-certificate -qO- https://raw.githubusercontent.com/NAStool/nas-tools/dev/package_list.txt)) \
&& ln -sf /usr/share/zoneinfo/${TZ} /etc/localtime \
&& echo "${TZ}" > /etc/timezone \
&& ln -sf /usr/bin/python3 /usr/bin/python \
@@ -10,7 +10,7 @@ RUN apk add --no-cache libffi-dev \
&& chmod +x /usr/bin/mc \
&& pip install --upgrade pip setuptools wheel \
&& pip install cython \
&& pip install -r https://raw.githubusercontent.com/jxxghp/nas-tools/dev/requirements.txt \
&& pip install -r https://raw.githubusercontent.com/NAStool/nas-tools/dev/requirements.txt \
&& apk del libffi-dev \
&& npm install pm2 -g \
&& rm -rf /tmp/* /root/.cache /var/cache/apk/*
@@ -21,7 +21,7 @@ ENV LANG="C.UTF-8" \
NASTOOL_CN_UPDATE=true \
NASTOOL_VERSION=dev \
PS1="\u@\h:\w \$ " \
REPO_URL="https://github.com/jxxghp/nas-tools.git" \
REPO_URL="https://github.com/NAStool/nas-tools.git" \
PYPI_MIRROR="https://pypi.tuna.tsinghua.edu.cn/simple" \
ALPINE_MIRROR="mirrors.ustc.edu.cn" \
PUID=0 \

View File

@@ -16,7 +16,7 @@ RUN apk add --no-cache libffi-dev \
&& ln -sf /usr/bin/python3 /usr/bin/python \
&& pip install --upgrade pip setuptools wheel \
&& pip install cython \
&& pip install -r https://raw.githubusercontent.com/jxxghp/nas-tools/master/requirements.txt \
&& pip install -r https://raw.githubusercontent.com/NAStool/nas-tools/master/requirements.txt \
&& npm install pm2 -g \
&& apk del --purge libffi-dev gcc musl-dev libxml2-dev libxslt-dev \
&& pip uninstall -y cython \
@@ -28,7 +28,7 @@ ENV LANG="C.UTF-8" \
NASTOOL_CN_UPDATE=true \
NASTOOL_VERSION=lite \
PS1="\u@\h:\w \$ " \
REPO_URL="https://github.com/jxxghp/nas-tools.git" \
REPO_URL="https://github.com/NAStool/nas-tools.git" \
PYPI_MIRROR="https://pypi.tuna.tsinghua.edu.cn/simple" \
ALPINE_MIRROR="mirrors.ustc.edu.cn" \
PUID=0 \

View File

@@ -12,7 +12,7 @@ services:
- PGID=0 # 想切换为哪个用户来运行程序该用户的gid
- UMASK=000 # 掩码权限默认000可以考虑设置为022
- NASTOOL_AUTO_UPDATE=false # 如需在启动容器时自动升级程序请设置为true
#- REPO_URL=https://ghproxy.com/https://github.com/jxxghp/nas-tools.git # 当你访问github网络很差时可以考虑解除本行注释
#- REPO_URL=https://ghproxy.com/https://github.com/NAStool/nas-tools.git # 当你访问github网络很差时可以考虑解除本行注释
restart: always
network_mode: bridge
hostname: nas-tools

View File

@@ -18,11 +18,11 @@
**注意**
- 媒体目录的设置必须符合 [配置说明](https://github.com/jxxghp/nas-tools#%E9%85%8D%E7%BD%AE) 的要求。
- 媒体目录的设置必须符合 [配置说明](https://github.com/NAStool/nas-tools#%E9%85%8D%E7%BD%AE) 的要求。
- umask含义详见http://www.01happy.com/linux-umask-analyze 。
- 创建后请根据 [配置说明](https://github.com/jxxghp/nas-tools#%E9%85%8D%E7%BD%AE) 及该文件本身的注释,修改`config/config.yaml`,修改好后再重启容器,最后访问`http://<ip>:<web_port>`
- 创建后请根据 [配置说明](https://github.com/NAStool/nas-tools#%E9%85%8D%E7%BD%AE) 及该文件本身的注释,修改`config/config.yaml`,修改好后再重启容器,最后访问`http://<ip>:<web_port>`
**docker cli**
@@ -41,7 +41,7 @@ docker run -d \
jxxghp/nas-tools
```
如果你访问github的网络不太好可以考虑在创建容器时增加设置一个环境变量`-e REPO_URL="https://ghproxy.com/https://github.com/jxxghp/nas-tools.git" \`。
如果你访问github的网络不太好可以考虑在创建容器时增加设置一个环境变量`-e REPO_URL="https://ghproxy.com/https://github.com/NAStool/nas-tools.git" \`。
**docker-compose**
@@ -63,7 +63,7 @@ services:
- UMASK=000 # 掩码权限默认000可以考虑设置为022
- NASTOOL_AUTO_UPDATE=false # 如需在启动容器时自动升级程序请设置为true
- NASTOOL_CN_UPDATE=false # 如果开启了容器启动自动升级程序并且网络不太友好时可以设置为true会使用国内源进行软件更新
#- REPO_URL=https://ghproxy.com/https://github.com/jxxghp/nas-tools.git # 当你访问github网络很差时可以考虑解除本行注释
#- REPO_URL=https://ghproxy.com/https://github.com/NAStool/nas-tools.git # 当你访问github网络很差时可以考虑解除本行注释
restart: always
network_mode: bridge
hostname: nas-tools

6
run.py
View File

@@ -108,6 +108,8 @@ def init_system():
def start_service():
log.console("开始启动服务...")
# 加载索引器配置
IndexerHelper()
# 启动虚拟显示
DisplayHelper()
# 启动定时服务
@@ -122,9 +124,7 @@ def start_service():
TorrentRemover()
# 启动播放限速服务
SpeedLimiter()
# 加载索引器配置
IndexerHelper()
# 初始化浏览器
# 初始化浏览器驱动
if not is_windows_exe:
ChromeHelper().init_driver()

View File

@@ -1 +1 @@
APP_VERSION = 'v2.9.1'
APP_VERSION = 'v2.9.2'

BIN
web/.DS_Store vendored Normal file

Binary file not shown.

View File

@@ -31,8 +31,7 @@ from app.message import Message, MessageCenter
from app.rss import Rss
from app.rsschecker import RssChecker
from app.scheduler import stop_scheduler
from app.sites import Sites
from app.sites.sitecookie import SiteCookie
from app.sites import Sites, SiteUserInfo, SiteSignin, SiteCookie
from app.subscribe import Subscribe
from app.subtitle import Subtitle
from app.sync import Sync, stop_monitor
@@ -273,7 +272,7 @@ class WebAction:
commands = {
"/ptr": {"func": TorrentRemover().auto_remove_torrents, "desp": "删种"},
"/ptt": {"func": Downloader().transfer, "desp": "下载文件转移"},
"/pts": {"func": Sites().signin, "desp": "站点签到"},
"/pts": {"func": SiteSignin().signin, "desp": "站点签到"},
"/rst": {"func": Sync().transfer_all_sync, "desp": "目录同步"},
"/rss": {"func": Rss().rssdownload, "desp": "RSS订阅"},
"/db": {"func": DoubanSync().sync, "desp": "豆瓣同步"},
@@ -326,11 +325,6 @@ class WebAction:
vals = cfg_value.split(",")
cfg['douban']['users'] = vals
return cfg
# 索引器
if cfg_key == "jackett.indexers":
vals = cfg_value.split("\n")
cfg['jackett']['indexers'] = vals
return cfg
# 最大支持三层赋值
keys = cfg_key.split(".")
if keys:
@@ -419,7 +413,7 @@ class WebAction:
commands = {
"autoremovetorrents": TorrentRemover().auto_remove_torrents,
"pttransfer": Downloader().transfer,
"ptsignin": Sites().signin,
"ptsignin": SiteSignin().signin,
"sync": Sync().transfer_all_sync,
"rssdownload": Rss().rssdownload,
"douban": DoubanSync().sync,
@@ -640,23 +634,6 @@ class WebAction:
progress = round(torrent.get('percentDone'), 1)
# 主键
key = torrent.get('info_hash')
elif Client == DownloaderType.Aria2:
if torrent.get('status') != 'active':
state = "Stoped"
speed = "已暂停"
else:
state = "Downloading"
dlspeed = StringUtils.str_filesize(
torrent.get('downloadSpeed'))
upspeed = StringUtils.str_filesize(
torrent.get('uploadSpeed'))
speed = "%s%sB/s %s%sB/s" % (chr(8595),
dlspeed, chr(8593), upspeed)
# 进度
progress = round(int(torrent.get('completedLength')) /
int(torrent.get("totalLength")), 1) * 100
# 主键
key = torrent.get('gid')
elif Client == DownloaderType.PikPak:
key = torrent.get('id')
if torrent.get('finish'):
@@ -2198,7 +2175,7 @@ class WebAction:
resp = {"code": 0}
resp.update(
{"dataset": Sites().get_pt_site_activity_history(data["name"])})
{"dataset": SiteUserInfo().get_pt_site_activity_history(data["name"])})
return resp
@staticmethod
@@ -2212,8 +2189,7 @@ class WebAction:
return {"code": 1, "msg": "查询参数错误"}
resp = {"code": 0}
_, _, site, upload, download = Sites(
).get_pt_site_statistics_history(data["days"] + 1)
_, _, site, upload, download = SiteUserInfo().get_pt_site_statistics_history(data["days"] + 1)
# 调整为dataset组织数据
dataset = [["site", "upload", "download"]]
@@ -2234,7 +2210,7 @@ class WebAction:
resp = {"code": 0}
seeding_info = Sites().get_pt_site_seeding_info(
seeding_info = SiteUserInfo().get_pt_site_seeding_info(
data["name"]).get("seeding_info", [])
# 调整为dataset组织数据
dataset = [["seeders", "size"]]
@@ -3889,8 +3865,7 @@ class WebAction:
查询所有过滤规则
"""
RuleGroups = Filter().get_rule_infos()
sql_file = os.path.join(Config().get_root_path(),
"config", "init_filter.sql")
sql_file = os.path.join(Config().get_script_path(), "init_filter.sql")
with open(sql_file, "r", encoding="utf-8") as f:
sql_list = f.read().split(';\n')
Init_RuleGroups = []
@@ -4384,7 +4359,7 @@ class WebAction:
sort_by = data.get("sort_by")
sort_on = data.get("sort_on")
site_hash = data.get("site_hash")
statistics = Sites().get_site_user_statistics(sites=sites, encoding=encoding)
statistics = SiteUserInfo().get_site_user_statistics(sites=sites, encoding=encoding)
if sort_by and sort_on in ["asc", "desc"]:
if sort_on == "asc":
statistics.sort(key=lambda x: x[sort_by])

View File

@@ -597,7 +597,7 @@ class DownloadConfigUpdate(ClientResource):
parser.add_argument('download_limit', type=int, help='下载速度限制', location='form')
parser.add_argument('ratio_limit', type=int, help='分享率限制', location='form')
parser.add_argument('seeding_time_limit', type=int, help='做种时间限制', location='form')
parser.add_argument('downloader', type=str, help='下载器Qbittorrent/Transmission/115网盘/Aria2', location='form')
parser.add_argument('downloader', type=str, help='下载器Qbittorrent/Transmission', location='form')
@download.doc(parser=parser)
def post(self):

View File

@@ -46,9 +46,9 @@ class WebUtils:
"""
try:
version_res = RequestUtils(proxies=Config().get_proxies()).get_res(
"https://api.github.com/repos/jxxghp/nas-tools/releases/latest")
"https://api.github.com/repos/NAStool/nas-tools/releases/latest")
commit_res = RequestUtils(proxies=Config().get_proxies()).get_res(
"https://api.github.com/repos/jxxghp/nas-tools/commits/master")
"https://api.github.com/repos/NAStool/nas-tools/commits/master")
if version_res and commit_res:
ver_json = version_res.json()
commit_json = commit_res.json()

View File

@@ -29,7 +29,7 @@ from app.media.meta import MetaInfo
from app.mediaserver import WebhookEvent
from app.message import Message
from app.rsschecker import RssChecker
from app.sites import Sites
from app.sites import Sites, SiteUserInfo
from app.speedlimiter import SpeedLimiter
from app.subscribe import Subscribe
from app.sync import Sync
@@ -560,7 +560,7 @@ def statistics():
SiteRatios = []
SiteErrs = {}
# 站点上传下载
SiteData = Sites().get_pt_date(specify_sites=refresh_site, force=refresh_force)
SiteData = SiteUserInfo().get_pt_date(specify_sites=refresh_site, force=refresh_force)
if isinstance(SiteData, dict):
for name, data in SiteData.items():
if not data:
@@ -589,7 +589,7 @@ def statistics():
SiteRatios.append(round(float(ratio), 1))
# 近期上传下载各站点汇总
CurrentUpload, CurrentDownload, _, _, _ = Sites().get_pt_site_statistics_history(
CurrentUpload, CurrentDownload, _, _, _ = SiteUserInfo().get_pt_site_statistics_history(
days=2)
# 站点用户数据

BIN
web/static/.DS_Store vendored Normal file

Binary file not shown.

View File

@@ -526,7 +526,7 @@ export class LayoutNavbar extends CustomElement {
this.layout_userpris = navbar_list.map((item) => (item.name));
this._active_name = "";
this._update_appversion = "";
this._update_url = "https://github.com/jxxghp/nas-tools";
this._update_url = "https://github.com/NAStool/nas-tools";
this._is_update = false;
this.classList.add("navbar","navbar-vertical","navbar-expand-lg","lit-navbar-fixed","lit-navbar","lit-navbar-hide-scrollbar");
}
@@ -571,7 +571,7 @@
url = ret.url;
break;
case 2:
url = "https://github.com/jxxghp/nas-tools/commits/master"
url = "https://github.com/NAStool/nas-tools/commits/master"
break;
}
if (url) {

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Binary file not shown.

File diff suppressed because it is too large Load Diff


View File

@@ -5,16 +5,14 @@
<html lang="en">
<head>
{{ HEAD.meta_link() }}
<title>NAStool - 资源归集、整理自动化工具</title>
<title>NAStool</title>
<!-- CSS files -->
<link href="../static/css/font-awesome.min.css" rel="stylesheet">
<link href="../static/css/tabler.min.css" rel="stylesheet"/>
<link href="../static/css/demo.min.css" rel="stylesheet"/>
<link href="../static/css/fullcalendar.min.css" rel="stylesheet"/>
<link href="../static/css/jquery.filetree.css" rel="stylesheet"/>
<link href="../static/css/dropzone.css" rel="stylesheet"/>
<link href="../static/css/nprogress.css" rel="stylesheet"/>
<link href="../static/css/jsoneditor.min.css" rel="stylesheet"/>
<!-- 附加样式 -->
<link href="../static/css/style.css" rel="stylesheet"/>
<!-- 站点图标 -->
@@ -77,7 +75,7 @@
<div class="card border-0" style="overflow:hidden">
<div class="progress rounded-0">
<div class="progress-bar progress-bar-striped progress-bar-animated" id="modal_process_bar"
style="width: 0%" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100"></div>
style="width: 0" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100"></div>
</div>
<div class="card-body text-center">
<h3 class="card-title strong" id="modal_process_title">
@@ -1871,9 +1869,6 @@
syncmod = '{{ SyncMod }}';
}
let source = CURRENT_PAGE_URI;
if (source === 'mediafile') {
source = `mediafile?dir=${inpath}`;
}
$("#rename_source").val(source);
$("#rename_manual_type").val(manual_type);
if (manual_type === 3) {

View File

@@ -42,7 +42,7 @@
</div>
</div>
</div>
<div class="table-responsive" style="min-height: 300px; overflow: hidden">
<div class="table-responsive" style="min-height: 300px;">
<table class="table table-vcenter card-table table-hover table-striped">
<thead>
<tr>

View File

@@ -107,6 +107,7 @@
{% endif %}
</div>
</div>
{% if PublicCount > 0 %}
<div class="mb-3">
<div class="btn-list">
<label class="form-label">公开站点 <span class="form-help"
@@ -115,24 +116,19 @@
<a href="javascript:void(0)" class="ms-auto" onclick="select_btn_SelectALL(this, 'indexer_sites_public')">全选</a>
</div>
<div class="form-selectgroup">
{% if PublicCount > 0 %}
{% for Indexer in Indexers %}
{% if Indexer.public %}
<label class="form-selectgroup-item">
<input type="checkbox" name="indexer_sites_public" value="{{ Indexer.id }}"
class="form-selectgroup-input"
{% if Config.pt.indexer_sites and Indexer.id in Config.pt.indexer_sites %}checked{% endif %}>
<span class="form-selectgroup-label">{{ Indexer.name }}</span>
</label>
{% endif %}
{% endfor %}
{% else %}
<label class="form-selectgroup-item">
<span class="form-selectgroup-label"></span>
</label>
{% for Indexer in Indexers %}
{% if Indexer.public %}
<label class="form-selectgroup-item">
<input type="checkbox" name="indexer_sites_public" value="{{ Indexer.id }}"
class="form-selectgroup-input"
{% if Config.pt.indexer_sites and Indexer.id in Config.pt.indexer_sites %}checked{% endif %}>
<span class="form-selectgroup-label">{{ Indexer.name }}</span>
</label>
{% endif %}
{% endfor %}
</div>
</div>
{% endif %}
</div>
</div>
</div>

View File

@@ -224,7 +224,7 @@
</select>
</div>
</div>
<div class="col-lg-2">
<div class="col-lg-4">
<div class="mb-3">
<label class="form-label">过滤规则 <span class="form-help"
title="选择该站点使用的过滤规则组,在设置->过滤规则中设置规则选择了过滤规则后该站点只有符合规则的种子才会被命中下载仅作用于RSS、内建索引自动搜索刷流等不受此限制"
@@ -237,7 +237,7 @@
</select>
</div>
</div>
<div class="col-lg-2">
<div class="col-lg-4">
<div class="mb-3">
<label class="form-label">下载设置</label>
<select class="form-select" id="site_download_setting">
@@ -248,8 +248,6 @@
</select>
</div>
</div>
</div>
<div class="row">
<div class="col-lg-4">
<div class="mb-3">
<label class="form-label required">开启浏览器仿真 <span class="form-help"
@@ -282,9 +280,7 @@
</select>
</div>
</div>
</div>
<div class="row">
<div class="col">
<div class="col-lg-8">
<div class="mb-3">
<label class="form-label">User-Agent <span class="form-help"
title="站点签到/数据获取/搜索请求时使用的User-Agent为空则使用基础配置中User-Agent设置" data-bs-toggle="tooltip">?</span></label>
@@ -313,7 +309,7 @@
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="table-responsive" style="min-height: 300px">
<table class="table table-vcenter card-table">
<table class="table table-vcenter card-table table-hover table-striped">
<thead>
<tr>
<th>

View File

@@ -19,14 +19,12 @@
<meta name="MobileOptimized" content="320"/>
<title>组件开发效果预览</title>
<!-- CSS files -->
<link href="../static/css/font-awesome.min.css" rel="stylesheet">
<link href="../static/css/tabler.min.css" rel="stylesheet"/>
<link href="../static/css/demo.min.css" rel="stylesheet"/>
<link href="../static/css/fullcalendar.min.css" rel="stylesheet"/>
<link href="../static/css/jquery.filetree.css" rel="stylesheet"/>
<link href="../static/css/dropzone.css" rel="stylesheet"/>
<link href="../static/css/nprogress.css" rel="stylesheet"/>
<link href="../static/css/jsoneditor.min.css" rel="stylesheet"/>
<!-- 附加样式 -->
<link href="../static/css/style.css" rel="stylesheet"/>
</head>
@@ -47,7 +45,6 @@
<script src="../static/js/dom-to-image.min.js"></script>
<script src="../static/js/FileSaver.min.js"></script>
<script src="../static/js/nprogress.js"></script>
<script src="../static/js/jsoneditor.min.js"></script>
<script src="../static/js/util.js"></script>
<layout-navbar></layout-navbar>
<layout-searchbar

View File

@@ -64,12 +64,14 @@ hiddenimports = ['Crypto.Math',
'app.mediaserver.client',
'app.message.client',
'app.indexer.client',
'app.downloader.client']
'app.downloader.client',
'app.sites.sitesignin']
hiddenimports += collect_local_submodules('app.sites.siteuserinfo')
hiddenimports += collect_local_submodules('app.mediaserver.client')
hiddenimports += collect_local_submodules('app.message.client')
hiddenimports += collect_local_submodules('app.indexer.client')
hiddenimports += collect_local_submodules('app.downloader.client')
hiddenimports += collect_local_submodules('app.sites.sitesignin')
# <<< END HIDDENIMPORTS PART
block_cipher = None