Compare commits

...

No commits in common. "dev" and "v2.9.1-e3a43d4" have entirely different histories.

118 changed files with 6581 additions and 3201 deletions

BIN
.DS_Store vendored

Binary file not shown.


@@ -1,6 +1,7 @@
---
name: Issue template
about: If you find a bug, please submit issues using this template; issues not following the template will be closed directly. Describe the problem clearly and attach logs; issues too unclear to understand and analyze may also be closed directly.
about: If you find a bug, please submit issues using this template; issues not following the template will be closed directly.
Describe the problem clearly and attach logs; issues too unclear to understand and analyze may also be closed directly.
---
## What version of NAStool are you using, and in what environment?


@@ -7,7 +7,6 @@ on:
paths:
- version.py
- .github/workflows/build-windows.yml
- windows/**
jobs:
Windows-build:
@@ -21,7 +20,7 @@ jobs:
run: |
python -m pip install --upgrade pip
pip install wheel numpy==1.23.5 pyparsing==3.0.9 wxpython==4.2.0 pyinstaller==5.7.0
git clone --depth=1 -b master https://github.com/NAStool/nas-tools --recurse-submodule
git clone --depth=1 -b master https://github.com/jxxghp/nas-tools --recurse-submodule
cd nas-tools
pip install -r requirements.txt
echo ("NASTOOL_CONFIG=D:/a/nas-tools/nas-tools/nas-tools/config/config.yaml") >> $env:GITHUB_ENV
@@ -95,4 +94,4 @@ jobs:
message: |
*v${{ env.app_version }}*
${{ github.event.commits[0].message }}
${{ github.event.commits[0].message }}

216
README.md

@@ -1,10 +1,10 @@
![logo-blue](https://user-images.githubusercontent.com/51039935/197520391-f35db354-6071-4c12-86ea-fc450f04bc85.png)
# NAS media library management tool
# NAS media library: automated resource collection and organization tool
[![GitHub stars](https://img.shields.io/github/stars/NAStool/nas-tools?style=plastic)](https://github.com/NAStool/nas-tools/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/NAStool/nas-tools?style=plastic)](https://github.com/NAStool/nas-tools/network/members)
[![GitHub issues](https://img.shields.io/github/issues/NAStool/nas-tools?style=plastic)](https://github.com/NAStool/nas-tools/issues)
[![GitHub license](https://img.shields.io/github/license/NAStool/nas-tools?style=plastic)](https://github.com/NAStool/nas-tools/blob/master/LICENSE.md)
[![GitHub stars](https://img.shields.io/github/stars/jxxghp/nas-tools?style=plastic)](https://github.com/jxxghp/nas-tools/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/jxxghp/nas-tools?style=plastic)](https://github.com/jxxghp/nas-tools/network/members)
[![GitHub issues](https://img.shields.io/github/issues/jxxghp/nas-tools?style=plastic)](https://github.com/jxxghp/nas-tools/issues)
[![GitHub license](https://img.shields.io/github/license/jxxghp/nas-tools?style=plastic)](https://github.com/jxxghp/nas-tools/blob/master/LICENSE.md)
[![Docker pulls](https://img.shields.io/docker/pulls/jxxghp/nas-tools?style=plastic)](https://hub.docker.com/r/jxxghp/nas-tools)
[![Platform](https://img.shields.io/badge/platform-amd64/arm64-pink?style=plastic)](https://hub.docker.com/r/jxxghp/nas-tools)
@@ -13,12 +13,34 @@ Docker: https://hub.docker.com/repository/docker/jxxghp/nas-tools
TG channel: https://t.me/nastool
WIKI: https://github.com/jxxghp/nas-tools/wiki
API: http://localhost:3000/api/v1/
## Features
NAS media library management tool.
The aim of this software is to automate the management of movie and TV resources, freeing your hands so you can focus on watching. A good network environment and private-tracker sites are needed for a good experience.
### 1. Resource search and subscriptions
* Site RSS aggregation: add what you want to watch to your subscriptions, and new resources are picked up automatically in real time.
* Aggregated resource search and download via WeChat, Telegram, Slack, Synology Chat, or the web UI; search for or subscribe to the latest popular resources with one click.
* Douban integration: mark a title as "want to watch" in Douban and it is searched for and downloaded automatically in the background; titles not yet fully released are added to subscriptions.
### 2. Media library organization
* Monitors the download client; when a download completes, the real title is identified automatically and the files are hard-linked into the media library and renamed.
* Monitors directories; when files change, media information is identified automatically and the files are hard-linked into the media library and renamed.
* Resolves the conflict between seeding and media library organization. Optimized for Chinese environments, with high renaming accuracy for Chinese TV series and anime; after renaming, Emby/Jellyfin/Plex scrape the poster wall perfectly.
### 3. Site maintenance
* Comprehensive site statistics to monitor your site traffic in real time.
* Fully automated seeding upkeep, with remote downloader support (the built-in ratio-boosting feature is meant for day-to-day upkeep only; if you are chasing stats, a more powerful ratio tool such as <a href="https://github.com/vertex-app/vertex" target="_blank">Vertex</a> is recommended).
* Daily automatic site login to keep accounts active.
### 4. Notification services
* Rich-media notifications over nearly ten channels, including WeChat, Telegram, Slack, Synology Chat, Bark, PushPlus, and 爱语飞飞.
* Remote control of subscriptions and downloads via WeChat, Telegram, Slack, or Synology Chat.
* Emby/Jellyfin/Plex playback status notifications.
## Installation
@@ -33,7 +55,7 @@ docker pull jxxghp/nas-tools:latest
### 2. Running locally
Python 3.10 requires cython to be installed beforehand; if other dependency packages turn out to be missing, install them as needed.
```
git clone -b master https://github.com/NAStool/nas-tools --recurse-submodule
git clone -b master https://github.com/jxxghp/nas-tools --recurse-submodule
python3 -m pip install -r requirements.txt
export NASTOOL_CONFIG="/xxx/config/config.yaml"
nohup python3 run.py &
@@ -42,7 +64,7 @@ nohup python3 run.py &
### 3. Windows
Download the exe file and double-click to run; the configuration file directory is generated automatically:
https://github.com/NAStool/nas-tools/releases
https://github.com/jxxghp/nas-tools/releases
### 4. Synology package
Install directly after adding the imnks Synology SPK package source:
@@ -50,3 +72,181 @@ https://github.com/NAStool/nas-tools/releases
https://spk.imnks.com/
https://spk7.imnks.com/
## Configuration
### 1. Apply for the relevant API keys
* TMDB account: register at https://www.themoviedb.org/ and obtain an API key.
* Notification services:
1) WeChat (recommended): create a self-built WeChat Work application at https://work.weixin.qq.com/ to obtain the corporate ID, application secret, and agentid. Scanning the application's QR code with WeChat lets you use the message service inside WeChat itself, without opening the WeChat Work app.
2) Telegram (recommended): message BotFather to create a bot and get its token, and message getuserID to get your chat_id. This channel supports remote control; for details see "Configure WeChat/Telegram/Slack/Synology Chat remote control" below.
3) Slack: create an app at https://api.slack.com/apps. This channel supports remote control; see the channel description for details.
4) Synology Chat: install the Synology Chat package on the Synology, then in the Chat UI open "avatar (top right) -> Integration -> Bots" and create a bot. Set the "outgoing URL" to "<NAStool address>/synology", and enter the "incoming URL" and "token" into the NAStool notification settings. This channel supports remote control.
5) Others: support for more notification channels is still being added; obtaining the API keys works similarly and is not described one by one.
### 2. Basic settings
* File transfer modes: six modes are currently supported: copy, hard link, symbolic link, move, RCLONE, and MINIO.
1) Copy mode keeps the seeding copy and the media library copy separate, using extra storage; the size of the download disk decides how many torrents you can keep seeding. The upside is that the media library disk does not need to run 24/7 and can hibernate.
2) Hard-link mode needs no extra storage: one file, two directory entries. The download directory and the media library directory must be on the same disk partition or storage pool. Symbolic-link mode works like a shortcut; the path inside the container must match the real path for it to work.
3) Move mode moves the files and deletes the original files and directories.
4) RCLONE mode is only for RCLONE cloud-drive scenarios. **Note: with RCLONE mode you must map the rclone configuration directory into the container yourself**; see the setting's question-mark tooltip for details.
5) MINIO is only for S3/cloud-native scenarios. **Note: with MINIO the media library path must be set to /<bucket name>/<category name>**. For example, if the bucket is named cloud and the movie category folder is named movie, the movie library path is /cloud/movie; ideally mount the parent set with s3fs at /cloud/movie, read-only is enough.
* Start the program and configure it: Docker uses port 3000 by default and the Synology package uses port 3003; the default username/password is admin/password. For Docker, map the port, download directory, and media library directory beforehand as described in the tutorial. After logging in to the admin UI, adjust the settings in the web UI following the hint on each setting and restart for them to take effect. Settings marked with a red star under basic settings, such as the TMDB API key, are mandatory. Every setting has a small question mark; clicking it shows a detailed description, which is recommended reading.
### 3. Set up the media server
Emby (recommended), Jellyfin, and Plex are supported. With a media server configured, local resources can be checked for duplicates to avoid repeated downloads, and resources that already exist locally are flagged.
* In the Emby/Jellyfin/Plex webhook plugin, set the address to http(s)://IP:PORT/emby, /jellyfin, or /plex to receive playback notifications (optional).
* Enter the Emby/Jellyfin/Plex details under "Settings -> Media server".
* If the default categories are enabled, set up the media libraries following the directory structure below; if you use custom categories, create the library directories according to your own definitions (see the default-category.yaml category configuration template). Note: with second-level categories enabled, each media library must be pointed at the second-level subdirectories (several subdirectories can be added to one library, or one library created per subdirectory); otherwise the media server may fail to scrape and identify titles correctly.
> 电影
>> 精选
>> 华语电影
>> 外语电影
>> 动画电影
>
> 电视剧
>> 国产剧
>> 欧美剧
>> 日韩剧
>> 动漫
>> 纪录片
>> 综艺
>> 儿童
### 4. Configure the downloader and download directories
qbittorrent (recommended), transmission, aria2, 115 cloud drive, PikPak cloud drive, and others are supported; set up the download directories via the button in the top-right corner.
### 5. Configure sync directories
* Directory sync can monitor several scattered folders; when new media files appear in a folder they are identified and renamed automatically, then transferred to the media library directory or a specified directory using the configured transfer mode.
* If the download client's download directory is also covered by directory sync, disable the download client monitoring feature; otherwise files will be processed twice.
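At its core, the sync described above is "watch a folder, hand each new media file to the transfer step". A minimal polling sketch (the real tool reacts to filesystem events; `handle_new_file` and the extension list are illustrative placeholders):

```python
import os
import time

VIDEO_EXTS = {".mkv", ".mp4", ".ts", ".avi"}

def scan_new_media(folder, seen):
    """Return media files under `folder` that have not been seen before."""
    found = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            if os.path.splitext(name)[1].lower() in VIDEO_EXTS and path not in seen:
                seen.add(path)
                found.append(path)
    return found

def watch(folder, handle_new_file, interval=5, max_rounds=None):
    """Poll `folder`, calling `handle_new_file` for each new media file."""
    seen = set()
    rounds = 0
    while max_rounds is None or rounds < max_rounds:
        for path in scan_new_media(folder, seen):
            handle_new_file(path)  # identify, rename, hard-link, ...
        rounds += 1
        if max_rounds is None or rounds < max_rounds:
            time.sleep(interval)
```

This also shows why double processing happens if the download directory is both monitored by the client hook and polled here: both paths end up calling the same transfer step.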
### 6. Configure WeChat/Telegram/Slack/Synology Chat remote control
With a WeChat, Telegram, Slack, or Synology Chat bot configured, you can trigger automatic search and download simply by sending a title from your phone, and control the program through the menus.
1) **WeChat message push and callback**
* Configure a message push proxy
Because of an official WeChat restriction, WeChat Work applications created after June 20, 2022 must have a fixed public IP address on an IP allowlist to receive messages; forwarding through a proxy server with a fixed public IP solves this.
If you use Nginx as the proxy service, add the following proxy configuration:
```
location /cgi-bin/gettoken {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/message/send {
proxy_pass https://qyapi.weixin.qq.com;
}
```
If you use Caddy as the proxy service, add the following proxy configuration (the `{upstream_hostport}` part is not a variable; do not change it, copy it over verbatim):
```
reverse_proxy https://qyapi.weixin.qq.com {
header_up Host {upstream_hostport}
}
```
If you use Traefik as the proxy service, the following extra configuration is required:
```
loadBalancer.passHostHeader=false
```
Note: the proxy server is only used for receiving the tool's pushed messages in WeChat; message callbacks have nothing to do with the proxy server.
* Configure the WeChat message receiving service
On the WeChat Work self-built application admin page, under "API receive messages", enable the message receiving service:
1) Generate the Token and EncodingAESKey on the WeChat page, enter them under NASTool Settings -> Notifications -> WeChat, and save.
2) **Restart NASTool.**
3) On the WeChat page, set the URL to http(s)://IP:PORT/wechat and click OK to verify.
* Configure WeChat menu control
The tool can be controlled remotely through the menus. On the application custom-menu page at https://work.weixin.qq.com/wework_admin/frame#apps, set up the menus as shown in the image below; each menu entry sends a message, and the message content can be anything.
**The top-level menus and the first few sub-menus under them must be in exactly the same order**; after the items matching the screenshot, you can add your own second-level menu items.
![image](https://user-images.githubusercontent.com/54088512/218261870-ed15b6b6-895f-45e4-913c-4dda75144a9a.png)
2) **Telegram bot**
* In the NASTool settings, set the external access address of this program and decide, based on your actual network, whether to enable the Telegram Webhook switch.
**Note: webhooks are restricted by Telegram: the program must run on one of the ports 443, 80, 88, or 8443 and needs a CA-issued HTTPS certificate; NAStool's built-in SSL certificate feature cannot be used in non-webhook mode.**
* In Telegram's BotFather, set up the bot command menu according to the table below. Selecting a menu item or entering a command runs the corresponding service; any other input starts an aggregated search.
3) **Slack**
* See the channel description for details.
**Command-to-feature mapping**
| Command | Feature |
|---------| ---- |
| /rss | RSS subscription |
| /ssa | Subscription search |
| /ptt | Download file transfer |
| /ptr | Auto torrent removal |
| /pts | Site sign-in |
| /udt | System update |
| /tbl | Clear transfer cache |
| /trh | Clear RSS cache |
| /rst | Directory sync |
| /db | Douban want-to-watch |
| /utf | Re-identify |
4) **Synology Chat**
* No extra setup is needed. Note: if it is not running on the same server, you also need to adjust the IP address restriction policy under Basic settings -> Security.
### 7. Configure indexers
Configure an indexer to enable searching for site resources:
* The built-in indexer currently supports most mainstream private-tracker sites and some public sites; enabling the built-in indexer is recommended.
* Jackett/Prowlarr are also supported: set up the corresponding service, then enter its API key, address, and so on under Settings -> Indexer -> Jackett/Prowlarr.
### 8. Configure sites
The tool's movie/TV subscriptions, resource search, site statistics, ratio boosting, automatic sign-in, and other features all depend on correctly configured site information; maintain the site RSS links, cookies, and so on under "Site management -> Site maintenance".
When generating a site's RSS link, prefer movie/TV resource categories and tick the subtitle option.
### 9. Organize existing media resources
If the directory holding your existing resources matches a source path in your directory sync configuration, you can trigger a full sync via the "Directory sync" button in the web UI or in WeChat/Telegram.
Otherwise, follow the instructions below and enter the command manually to organize the media resources in a specific directory.
Note: the -d parameter is optional; without it, movies/TV series/anime are distinguished automatically and stored in the corresponding media library directories. When -d is given, everything is transferred into the -d directory regardless of type.
* Docker: run the following commands on the host (change nas-tools to your docker name, and adjust the source and destination directory parameters):
```
docker exec -it nas-tools sh
python3 /nas-tools/app/filetransfer.py -m link -s /from/path -d /to/path
```
* 群晖套件版本ssh到后台运行以下命令同样修改配置文件路径以及源目录、目的目录参数。
```
export NASTOOL_CONFIG=/var/packages/NASTool/target/config/config.yaml
/var/packages/py3k/target/usr/local/bin/python3 /var/packages/NASTool/target/app/filetransfer.py -m link -s /from/path -d /to/path
```
* 本地直接运行的cd 到程序根目录,执行以下命令,修改配置文件、源目录和目的目录参数。
```
export NASTOOL_CONFIG=config/config.yaml
python3 app/filetransfer.py -m link -s /from/path -d /to/path
```
## Acknowledgements
* The UI template and icons come from the open-source project <a href="https://github.com/tabler/tabler">tabler</a>. The project also uses the open-source modules <a href="https://github.com/igorcmoura/anitopy" target="_blank">anitopy</a>, <a href="https://github.com/AnthonyBloomer/tmdbv3api" target="_blank">tmdbv3api</a>, <a href="https://github.com/pkkid/python-plexapi" target="_blank">python-plexapi</a>, <a href="https://github.com/rmartin16/qbittorrent-api">qbittorrent-api</a>, and <a href="https://github.com/Trim21/transmission-rpc">transmission-rpc</a>.
* Thanks to <a href="https://github.com/devome" target="_blank">nevinee</a> for improving the docker build.
* Thanks to <a href="https://github.com/tbc0309" target="_blank">tbc0309</a> for adapting the Synology package.
* Thanks to everyone who contributed code via PRs, improved the WIKI, and published tutorials.


@@ -41,7 +41,14 @@ class BrushTask(object):
self.sites = Sites()
self.filter = Filter()
# 移除现有任务
self.stop_service()
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
except Exception as e:
ExceptionUtils.exception_traceback(e)
# 读取下载器列表
downloaders = self.dbhelper.get_user_downloaders()
self._downloader_infos = []
@@ -185,10 +192,8 @@
else:
log.info("【Brush】%s RSS获取数据%s" % (site_name, len(rss_result)))
# 同时下载数
max_dlcount = rss_rule.get("dlcount")
success_count = 0
new_torrent_count = 0
if max_dlcount:
downloading_count = self.__get_downloading_count(downloader_cfg) or 0
new_torrent_count = int(max_dlcount) - int(downloading_count)
@@ -390,8 +395,6 @@
else:
# 将查询的torrent_ids转为数字型
torrent_ids = [int(x) for x in torrent_ids if str(x).isdigit()]
if not torrent_ids:
continue
# 检查完成状态
downloader = Transmission(config=downloader_cfg)
torrents, has_err = downloader.get_torrents(ids=torrent_ids, status=["seeding", "seed_pending"])
@@ -854,16 +857,3 @@ class BrushTask(object):
except Exception as err:
ExceptionUtils.exception_traceback(err)
return False, BrushDeleteType.NOTDELETE
def stop_service(self):
"""
停止服务
"""
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
except Exception as e:
print(str(e))


@@ -44,11 +44,16 @@ class ModuleConf(object):
# 下载器
DOWNLOADER_DICT = {
"qbittorrent": DownloaderType.QB,
"transmission": DownloaderType.TR
"transmission": DownloaderType.TR,
"client115": DownloaderType.Client115,
"aria2": DownloaderType.Aria2,
"pikpak": DownloaderType.PikPak
}
# 索引器
INDEXER_DICT = {
"prowlarr": IndexerType.PROWLARR,
"jackett": IndexerType.JACKETT,
"builtin": IndexerType.BUILTIN
}
@@ -157,14 +162,6 @@
"tooltip": "需要交互功能时才需要填写,在微信企业应用管理后台-接收消息设置页面生成,填入完成后重启本应用,然后再在微信页面输入地址确定",
"type": "text",
"placeholder": "API接收消息EncodingAESKey"
},
"adminUser": {
"id": "wechat_adminUser",
"required": False,
"title": "AdminUser",
"tooltip": "需要交互功能时才需要填写,可执行交互菜单命令的用户名,为空则不限制,多个;号分割。可在企业微信后台查看成员的Account ID",
"type": "text",
"placeholder": "可执行交互菜单的用户名"
}
}
},
@@ -610,7 +607,85 @@
"placeholder": ""
}
}
}
},
"client115": {
"name": "115网盘",
"img_url": "../static/img/115.jpg",
"background": "bg-azure",
"test_command": "app.downloader.client.client115|Client115",
"config": {
"cookie": {
"id": "client115.cookie",
"required": True,
"title": "Cookie",
"tooltip": "115网盘Cookie通过115网盘网页端抓取Cookie",
"type": "text",
"placeholder": "USERSESSIONID=xxx;115_lang=zh;UID=xxx;CID=xxx;SEID=xxx"
}
}
},
"aria2": {
"name": "Aria2",
"img_url": "../static/img/aria2.png",
"background": "bg-green",
"test_command": "app.downloader.client.aria2|Aria2",
"config": {
"host": {
"id": "aria2.host",
"required": True,
"title": "IP地址",
"tooltip": "配置IP地址如为https则需要增加https://前缀",
"type": "text",
"placeholder": "127.0.0.1"
},
"port": {
"id": "aria2.port",
"required": True,
"title": "端口",
"type": "text",
"placeholder": "6800"
},
"secret": {
"id": "aria2.secret",
"required": True,
"title": "令牌",
"type": "text",
"placeholder": ""
}
}
},
"pikpak": {
"name": "PikPak",
"img_url": "../static/img/pikpak.png",
"background": "bg-indigo",
"test_command": "app.downloader.client.pikpak|PikPak",
"config": {
"username": {
"id": "pikpak.username",
"required": True,
"title": "用户名",
"tooltip": "用户名",
"type": "text",
"placeholder": ""
},
"password": {
"id": "pikpak.password",
"required": True,
"title": "密码",
"tooltip": "密码",
"type": "password",
"placeholder": ""
},
"proxy": {
"id": "pikpak.proxy",
"required": False,
"title": "代理",
"tooltip": "如果需要代理才能访问pikpak可以在此处填入代理地址",
"type": "text",
"placeholder": "127.0.0.1:7890"
}
}
},
}
# 媒体服务器
@@ -712,7 +787,64 @@
}
# 索引器
INDEXER_CONF = {}
INDEXER_CONF = {
"jackett": {
"name": "Jackett",
"img_url": "./static/img/jackett.png",
"background": "bg-black",
"test_command": "app.indexer.client.jackett|Jackett",
"config": {
"host": {
"id": "jackett.host",
"required": True,
"title": "Jackett地址",
"tooltip": "Jackett访问地址和端口如为https需加https://前缀。注意需要先在Jackett中添加indexer才能正常测试通过和使用",
"type": "text",
"placeholder": "http://127.0.0.1:9117"
},
"api_key": {
"id": "jackett.api_key",
"required": True,
"title": "Api Key",
"tooltip": "Jackett管理界面右上角复制API Key",
"type": "text",
"placeholder": ""
},
"password": {
"id": "jackett.password",
"required": False,
"title": "密码",
"tooltip": "Jackett管理界面中配置的Admin password如未配置可为空",
"type": "password",
"placeholder": ""
}
}
},
"prowlarr": {
"name": "Prowlarr",
"img_url": "../static/img/prowlarr.png",
"background": "bg-orange",
"test_command": "app.indexer.client.prowlarr|Prowlarr",
"config": {
"host": {
"id": "prowlarr.host",
"required": True,
"title": "Prowlarr地址",
"tooltip": "Prowlarr访问地址和端口如为https需加https://前缀。注意需要先在Prowlarr中添加搜刮器同时勾选所有搜刮器后搜索一次才能正常测试通过和使用",
"type": "text",
"placeholder": "http://127.0.0.1:9696"
},
"api_key": {
"id": "prowlarr.api_key",
"required": True,
"title": "Api Key",
"tooltip": "在Prowlarr->Settings->General->Security-> API Key中获取",
"type": "text",
"placeholder": ""
}
}
}
}
# 发现过滤器
DISCOVER_FILTER_CONF = {


@@ -477,4 +477,72 @@ class SiteConf:
}
}
# 公共BT站点
PUBLIC_TORRENT_SITES = {}
PUBLIC_TORRENT_SITES = {
'rarbg.to': {
"parser": "Rarbg",
"proxy": True,
"language": "en"
},
'dmhy.org': {
"proxy": True
},
'eztv.re': {
"proxy": True,
"language": "en"
},
'acg.rip': {
"proxy": False
},
'thepiratebay.org': {
"proxy": True,
"render": True,
"language": "en"
},
'nyaa.si': {
"proxy": True
},
'1337x.to': {
"proxy": True,
"language": "en"
},
'ext.to': {
"proxy": True,
"language": "en",
"parser": "RenderSpider"
},
'torrentgalaxy.to': {
"proxy": True,
"language": "en"
},
'mikanani.me': {
"proxy": False
},
'gaoqing.fm': {
"proxy": False
},
'www.mp4ba.vip': {
"proxy": False,
"referer": True
},
'www.miobt.com': {
"proxy": True
},
'katcr.to': {
"proxy": True,
"language": "en"
},
'btsow.quest': {
"proxy": True
},
'www.hdpianyuan.com': {
"proxy": False
},
'skrbtla.top': {
"proxy": False,
"referer": True,
"parser": "RenderSpider"
},
'www.comicat.org': {
"proxy": False
}
}


@@ -8,44 +8,55 @@ from app.utils.commons import singleton
class SystemConfig:
# 系统设置
systemconfig = {}
systemconfig = {
# 默认下载设置
"DefaultDownloadSetting": None,
# CookieCloud的设置
"CookieCloud": {},
# 自动获取Cookie的用户信息
"CookieUserInfo": {},
# 用户自定义CSS/JavaScript
"CustomScript": {},
# 播放限速设置
"SpeedLimit": {}
}
def __init__(self):
self.dicthelper = DictHelper()
self.init_config()
def init_config(self):
def init_config(self, key=None):
"""
缓存系统设置
"""
for item in self.dicthelper.list("SystemConfig"):
if not item:
continue
if self.__is_obj(item.VALUE):
self.systemconfig[item.KEY] = json.loads(item.VALUE)
def __set_value(_key, _value):
if isinstance(_value, dict) \
or isinstance(_value, list):
dict_value = DictHelper().get("SystemConfig", _key)
if dict_value:
self.systemconfig[_key] = json.loads(dict_value)
else:
self.systemconfig[_key] = {}
else:
self.systemconfig[item.KEY] = item.VALUE
self.systemconfig[_key] = DictHelper().get("SystemConfig", _key)
@staticmethod
def __is_obj(obj):
if isinstance(obj, list) or isinstance(obj, dict):
return True
if key:
__set_value(key, self.systemconfig.get(key))
else:
return str(obj).startswith("{") or str(obj).startswith("[")
for key, value in self.systemconfig.items():
__set_value(key, value)
def set_system_config(self, key, value):
"""
设置系统设置
"""
# 更新内存
self.systemconfig[key] = value
# 写入数据库
if self.__is_obj(value):
if isinstance(value, dict) \
or isinstance(value, list):
if value:
value = json.dumps(value)
else:
value = ''
self.dicthelper.set("SystemConfig", key, value)
value = None
DictHelper().set("SystemConfig", key, value)
self.init_config(key)
def get_system_config(self, key=None):
"""


@@ -41,7 +41,7 @@ class MainDb:
"""
config = Config().get_config()
init_files = Config().get_config("app").get("init_files") or []
config_dir = Config().get_script_path()
config_dir = os.path.join(Config().get_root_path(), "config")
sql_files = PathUtils.get_dir_level1_files(in_path=config_dir, exts=".sql")
config_flag = False
for sql_file in sql_files:


@@ -25,7 +25,6 @@ class DoubanSync:
downloader = None
dbhelper = None
subscribe = None
message = None
_interval = None
_auto_search = None
_auto_rss = None
@@ -34,9 +33,6 @@
_types = None
def __init__(self):
self.init_config()
def init_config(self):
self.douban = DouBan()
self.searcher = Searcher()
self.downloader = Downloader()
@@ -44,6 +40,9 @@
self.message = Message()
self.dbhelper = DbHelper()
self.subscribe = Subscribe()
self.init_config()
def init_config(self):
douban = Config().get_config('douban')
if douban:
# 同步间隔


@@ -0,0 +1,182 @@
import re
import time
from urllib import parse
import requests
from app.utils import RequestUtils, ExceptionUtils
class Py115:
cookie = None
user_agent = None
req = None
uid = None
sign = None
err = None
def __init__(self, cookie):
self.cookie = cookie
self.req = RequestUtils(cookies=self.cookie, session=requests.Session())
# 登录
def login(self):
if not self.getuid():
return False
if not self.getsign():
return False
return True
# 获取目录ID
def getdirid(self, tdir):
try:
url = "https://webapi.115.com/files/getid?path=" + parse.quote(tdir or '/')
p = self.req.get_res(url=url)
if p:
rootobject = p.json()
if not rootobject.get("state"):
self.err = "获取目录 [{}]ID 错误:{}".format(tdir, rootobject["error"])
return False, ''
return True, rootobject.get("id")
except Exception as result:
ExceptionUtils.exception_traceback(result)
self.err = "异常错误:{}".format(result)
return False, ''
# 获取sign
def getsign(self):
try:
self.sign = ''
url = "https://115.com/?ct=offline&ac=space&_=" + str(round(time.time() * 1000))
p = self.req.get_res(url=url)
if p:
rootobject = p.json()
if not rootobject.get("state"):
self.err = "获取 SIGN 错误:{}".format(rootobject.get("error_msg"))
return False
self.sign = rootobject.get("sign")
return True
except Exception as result:
ExceptionUtils.exception_traceback(result)
self.err = "异常错误:{}".format(result)
return False
# 获取UID
def getuid(self):
try:
self.uid = ''
url = "https://webapi.115.com/files?aid=1&cid=0&o=user_ptime&asc=0&offset=0&show_dir=1&limit=30&code=&scid=&snap=0&natsort=1&star=1&source=&format=json"
p = self.req.get_res(url=url)
if p:
rootobject = p.json()
if not rootobject.get("state"):
self.err = "获取 UID 错误:{}".format(rootobject.get("error_msg"))
return False
self.uid = rootobject.get("uid")
return True
except Exception as result:
ExceptionUtils.exception_traceback(result)
self.err = "异常错误:{}".format(result)
return False
# 获取任务列表
def gettasklist(self, page=1):
try:
tasks = []
url = "https://115.com/web/lixian/?ct=lixian&ac=task_lists"
while True:
postdata = "page={}&uid={}&sign={}&time={}".format(page, self.uid, self.sign,
str(round(time.time() * 1000)))
p = self.req.post_res(url=url, params=postdata.encode('utf-8'))
if p:
rootobject = p.json()
if not rootobject.get("state"):
self.err = "获取任务列表错误:{}".format(rootobject["error"])
return False, tasks
if rootobject.get("count") == 0:
break
tasks += rootobject.get("tasks") or []
if page >= rootobject.get("page_count"):
break
return True, tasks
except Exception as result:
ExceptionUtils.exception_traceback(result)
self.err = "异常错误:{}".format(result)
return False, []
# 添加任务
def addtask(self, tdir, content):
try:
ret, dirid = self.getdirid(tdir)
if not ret:
return False, ''
# 转换为磁力
if re.match("^https*://", content):
try:
p = self.req.get_res(url=content)
if p and p.headers.get("Location"):
content = p.headers.get("Location")
except Exception as result:
ExceptionUtils.exception_traceback(result)
content = str(result).replace("No connection adapters were found for '", "").replace("'", "")
url = "https://115.com/web/lixian/?ct=lixian&ac=add_task_url"
postdata = "url={}&savepath=&wp_path_id={}&uid={}&sign={}&time={}".format(parse.quote(content), dirid,
self.uid, self.sign,
str(round(time.time() * 1000)))
p = self.req.post_res(url=url, params=postdata.encode('utf-8'))
if p:
rootobject = p.json()
if not rootobject.get("state"):
self.err = rootobject.get("error_msg")
return False, ''
return True, rootobject.get("info_hash")
except Exception as result:
ExceptionUtils.exception_traceback(result)
self.err = "异常错误:{}".format(result)
return False, ''
# 删除任务
def deltask(self, thash):
try:
url = "https://115.com/web/lixian/?ct=lixian&ac=task_del"
postdata = "hash[0]={}&uid={}&sign={}&time={}".format(thash, self.uid, self.sign,
str(round(time.time() * 1000)))
p = self.req.post_res(url=url, params=postdata.encode('utf-8'))
if p:
rootobject = p.json()
if not rootobject.get("state"):
self.err = rootobject.get("error_msg")
return False
return True
except Exception as result:
ExceptionUtils.exception_traceback(result)
self.err = "异常错误:{}".format(result)
return False
# 根据ID获取文件夹路径
def getiddir(self, tid):
try:
path = '/'
url = "https://aps.115.com/natsort/files.php?aid=1&cid={}&o=file_name&asc=1&offset=0&show_dir=1&limit=40&code=&scid=&snap=0&natsort=1&record_open_time=1&source=&format=json&fc_mix=0&type=&star=&is_share=&suffix=&custom_order=0".format(
tid)
p = self.req.get_res(url=url)
if p:
rootobject = p.json()
if not rootobject.get("state"):
self.err = "获取 ID[{}]路径 错误:{}".format(id, rootobject["error"])
return False, path
patharray = rootobject["path"]
for pathobject in patharray:
if pathobject.get("cid") == 0:
continue
path += pathobject.get("name") + '/'
if path == "/":
self.err = "文件路径不存在"
return False, path
return True, path
except Exception as result:
ExceptionUtils.exception_traceback(result)
self.err = "异常错误:{}".format(result)
return False, '/'
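For illustration, the signed form body that Py115 above sends to the offline-download endpoints follows one pattern throughout; a standalone sketch (uid and sign values are made up):

```python
import time

# Illustrative only: mirrors how Py115 above assembles its signed form bodies.
def build_task_payload(page, uid, sign):
    """Form body for the 115 offline-download task-list endpoint."""
    return "page={}&uid={}&sign={}&time={}".format(
        page, uid, sign, str(round(time.time() * 1000)))

body = build_task_payload(1, "123456", "abcdef")
```

The millisecond timestamp is regenerated per request; uid and sign come from the `getuid`/`getsign` calls made during `login`.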


@@ -0,0 +1,345 @@
# -*- coding: utf-8 -*-
import xmlrpc.client
DEFAULT_HOST = 'localhost'
DEFAULT_PORT = 6800
SERVER_URI_FORMAT = '%s:%s/rpc'
class PyAria2(object):
_secret = None
def __init__(self, secret=None, host=DEFAULT_HOST, port=DEFAULT_PORT):
"""
PyAria2 constructor.
secret: aria2 secret token
host: string, aria2 rpc host, default is 'localhost'
port: integer, aria2 rpc port, default is 6800
session: string, aria2 rpc session saving.
"""
server_uri = SERVER_URI_FORMAT % (host, port)
self._secret = "token:%s" % (secret or "")
self.server = xmlrpc.client.ServerProxy(server_uri, allow_none=True)
def addUri(self, uris, options=None, position=None):
"""
This method adds new HTTP(S)/FTP/BitTorrent Magnet URI.
uris: list, list of URIs
options: dict, additional options
position: integer, position in download queue
return: This method returns GID of registered download.
"""
return self.server.aria2.addUri(self._secret, uris, options, position)
def addTorrent(self, torrent, uris=None, options=None, position=None):
"""
This method adds BitTorrent download by uploading ".torrent" file.
torrent: bin, torrent file bin
uris: list, list of webseed URIs
options: dict, additional options
position: integer, position in download queue
return: This method returns GID of registered download.
"""
return self.server.aria2.addTorrent(self._secret, xmlrpc.client.Binary(torrent), uris, options, position)
def addMetalink(self, metalink, options=None, position=None):
"""
This method adds Metalink download by uploading ".metalink" file.
metalink: string, metalink file path
options: dict, additional options
position: integer, position in download queue
return: This method returns list of GID of registered download.
"""
return self.server.aria2.addMetalink(self._secret, xmlrpc.client.Binary(open(metalink, 'rb').read()), options,
position)
def remove(self, gid):
"""
This method removes the download denoted by gid.
gid: string, GID.
return: This method returns GID of removed download.
"""
return self.server.aria2.remove(self._secret, gid)
def forceRemove(self, gid):
"""
This method removes the download denoted by gid.
gid: string, GID.
return: This method returns GID of removed download.
"""
return self.server.aria2.forceRemove(self._secret, gid)
def pause(self, gid):
"""
This method pauses the download denoted by gid.
gid: string, GID.
return: This method returns GID of paused download.
"""
return self.server.aria2.pause(self._secret, gid)
def pauseAll(self):
"""
This method is equal to calling aria2.pause() for every active/waiting download.
return: This method returns OK for success.
"""
return self.server.aria2.pauseAll(self._secret)
def forcePause(self, gid):
"""
This method pauses the download denoted by gid.
gid: string, GID.
return: This method returns GID of paused download.
"""
return self.server.aria2.forcePause(self._secret, gid)
def forcePauseAll(self):
"""
This method is equal to calling aria2.forcePause() for every active/waiting download.
return: This method returns OK for success.
"""
return self.server.aria2.forcePauseAll(self._secret)
def unpause(self, gid):
"""
This method changes the status of the download denoted by gid from paused to waiting.
gid: string, GID.
return: This method returns GID of unpaused download.
"""
return self.server.aria2.unpause(self._secret, gid)
def unpauseAll(self):
"""
This method is equal to calling aria2.unpause() for every active/waiting download.
return: This method returns OK for success.
"""
return self.server.aria2.unpauseAll(self._secret)
def tellStatus(self, gid, keys=None):
"""
This method returns download progress of the download denoted by gid.
gid: string, GID.
keys: list, keys for method response.
return: The method response is of type dict and it contains following keys.
"""
return self.server.aria2.tellStatus(self._secret, gid, keys)
def getUris(self, gid):
"""
This method returns URIs used in the download denoted by gid.
gid: string, GID.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.getUris(self._secret, gid)
def getFiles(self, gid):
"""
This method returns file list of the download denoted by gid.
gid: string, GID.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.getFiles(self._secret, gid)
def getPeers(self, gid):
"""
This method returns peer list of the download denoted by gid.
gid: string, GID.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.getPeers(self._secret, gid)
def getServers(self, gid):
"""
This method returns currently connected HTTP(S)/FTP servers of the download denoted by gid.
gid: string, GID.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.getServers(self._secret, gid)
def tellActive(self, keys=None):
"""
This method returns the list of active downloads.
keys: keys for method response.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.tellActive(self._secret, keys)
def tellWaiting(self, offset, num, keys=None):
"""
This method returns the list of waiting download, including paused downloads.
offset: integer, the offset from the download waiting at the front.
num: integer, the number of downloads to be returned.
keys: keys for method response.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.tellWaiting(self._secret, offset, num, keys)
def tellStopped(self, offset, num, keys=None):
"""
This method returns the list of stopped download.
offset: integer, the offset from the download waiting at the front.
num: integer, the number of downloads to be returned.
keys: keys for method response.
return: The method response is of type list and its element is of type dict and it contains following keys.
"""
return self.server.aria2.tellStopped(self._secret, offset, num, keys)
def changePosition(self, gid, pos, how):
"""
This method changes the position of the download denoted by gid.
gid: string, GID.
pos: integer, the position relative which to be changed.
how: string.
POS_SET, it moves the download to a position relative to the beginning of the queue.
POS_CUR, it moves the download to a position relative to the current position.
POS_END, it moves the download to a position relative to the end of the queue.
return: The response is of type integer, and it is the destination position.
"""
return self.server.aria2.changePosition(self._secret, gid, pos, how)
def changeUri(self, gid, fileIndex, delUris, addUris, position=None):
"""
This method removes URIs in delUris from and appends URIs in addUris to download denoted by gid.
gid: string, GID.
fileIndex: integer, file to affect (1-based)
delUris: list, URIs to be removed
addUris: list, URIs to be added
position: integer, where URIs are inserted, after URIs have been removed
return: This method returns a list which contains 2 integers. The first integer is the number of URIs deleted. The second integer is the number of URIs added.
"""
return self.server.aria2.changeUri(self._secret, gid, fileIndex, delUris, addUris, position)
def getOption(self, gid):
"""
This method returns options of the download denoted by gid.
gid: string, GID.
return: The response is of type dict.
"""
return self.server.aria2.getOption(self._secret, gid)
def changeOption(self, gid, options):
"""
This method changes options of the download denoted by gid dynamically.
gid: string, GID.
options: dict, the options.
return: This method returns OK for success.
"""
return self.server.aria2.changeOption(self._secret, gid, options)
def getGlobalOption(self):
"""
This method returns global options.
return: The method response is of type dict.
"""
return self.server.aria2.getGlobalOption(self._secret)
def changeGlobalOption(self, options):
"""
This method changes global options dynamically.
options: dict, the options.
return: This method returns OK for success.
"""
return self.server.aria2.changeGlobalOption(self._secret, options)
def getGlobalStat(self):
"""
This method returns global statistics such as overall download and upload speed.
return: The method response is of type struct and contains following keys.
"""
return self.server.aria2.getGlobalStat(self._secret)
def purgeDownloadResult(self):
"""
This method purges completed/error/removed downloads to free memory.
return: This method returns OK for success.
"""
return self.server.aria2.purgeDownloadResult(self._secret)
def removeDownloadResult(self, gid):
"""
This method removes completed/error/removed download denoted by gid from memory.
return: This method returns OK for success.
"""
return self.server.aria2.removeDownloadResult(self._secret, gid)
def getVersion(self):
"""
This method returns version of the program and the list of enabled features.
return: The method response is of type dict and contains the following keys.
"""
return self.server.aria2.getVersion(self._secret)
def getSessionInfo(self):
"""
This method returns session information.
return: The response is of type dict.
"""
return self.server.aria2.getSessionInfo(self._secret)
def shutdown(self):
"""
This method shuts down aria2.
return: This method returns OK for success.
"""
return self.server.aria2.shutdown(self._secret)
def forceShutdown(self):
"""
This method shuts down aria2 without performing any time-consuming actions, such as contacting BitTorrent trackers.
return: This method returns OK for success.
"""
return self.server.aria2.forceShutdown(self._secret)
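All of the wrapper methods above follow the same aria2 RPC convention: when `--rpc-secret` is configured, the literal string `token:<secret>` is passed as the first positional parameter of every call, and the endpoint lives at `/rpc` on the RPC port. A minimal sketch of that convention (`rpc_params` and `make_server` are illustrative names, not part of the wrapper):

```python
import xmlrpc.client

def rpc_params(secret, *args):
    # aria2 expects "token:<secret>" as the first positional parameter
    # of every call when --rpc-secret is set; None values are dropped,
    # mirroring how the wrapper omits unset optional arguments.
    params = ["token:%s" % secret] if secret else []
    params.extend(a for a in args if a is not None)
    return params

def make_server(host="http://localhost", port=6800):
    # The XML-RPC endpoint is served at /rpc on the configured port.
    return xmlrpc.client.ServerProxy("%s:%s/rpc" % (host, port))
```

Constructing the `ServerProxy` is lazy; no connection is made until a method is actually called.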


@ -0,0 +1,167 @@
import os
import re
from app.utils import RequestUtils, ExceptionUtils, StringUtils
from app.utils.types import DownloaderType
from config import Config
from app.downloader.client._base import _IDownloadClient
from app.downloader.client._pyaria2 import PyAria2
class Aria2(_IDownloadClient):
schema = "aria2"
client_type = DownloaderType.Aria2.value
_client_config = {}
_client = None
host = None
port = None
secret = None
def __init__(self, config=None):
if config:
self._client_config = config
else:
self._client_config = Config().get_config('aria2')
self.init_config()
self.connect()
def init_config(self):
if self._client_config:
self.host = self._client_config.get("host")
if self.host:
if not self.host.startswith('http'):
self.host = "http://" + self.host
if self.host.endswith('/'):
self.host = self.host[:-1]
self.port = self._client_config.get("port")
self.secret = self._client_config.get("secret")
if self.host and self.port:
self._client = PyAria2(secret=self.secret, host=self.host, port=self.port)
@classmethod
def match(cls, ctype):
return True if ctype in [cls.schema, cls.client_type] else False
def connect(self):
pass
def get_status(self):
if not self._client:
return False
ver = self._client.getVersion()
return True if ver else False
def get_torrents(self, ids=None, status=None, **kwargs):
if not self._client:
return []
ret_torrents = []
if ids:
if isinstance(ids, list):
for gid in ids:
ret_torrents.append(self._client.tellStatus(gid=gid))
else:
ret_torrents = [self._client.tellStatus(gid=ids)]
elif status:
if status == "downloading":
ret_torrents = (self._client.tellActive() or []) + (self._client.tellWaiting(offset=-1, num=100) or [])
else:
ret_torrents = self._client.tellStopped(offset=-1, num=1000)
return ret_torrents
def get_downloading_torrents(self, **kwargs):
return self.get_torrents(status="downloading")
def get_completed_torrents(self, **kwargs):
return self.get_torrents(status="completed")
def set_torrents_status(self, ids, **kwargs):
return self.delete_torrents(ids=ids, delete_file=False)
def get_transfer_task(self, **kwargs):
if not self._client:
return []
torrents = self.get_completed_torrents()
trans_tasks = []
for torrent in torrents:
name = torrent.get('bittorrent', {}).get('info', {}).get("name")
if not name:
continue
path = torrent.get("dir")
if not path:
continue
true_path = self.get_replace_path(path)
trans_tasks.append({'path': os.path.join(true_path, name), 'id': torrent.get("gid")})
return trans_tasks
def get_remove_torrents(self, **kwargs):
return []
def add_torrent(self, content, download_dir=None, **kwargs):
if not self._client:
return None
if isinstance(content, str):
# Resolve an HTTP(S) link to a magnet link via its redirect
if re.match("^https*://", content):
try:
p = RequestUtils().get_res(url=content, allow_redirects=False)
if p and p.headers.get("Location"):
content = p.headers.get("Location")
except Exception as result:
ExceptionUtils.exception_traceback(result)
return self._client.addUri(uris=[content], options=dict(dir=download_dir))
else:
return self._client.addTorrent(torrent=content, uris=[], options=dict(dir=download_dir))
def start_torrents(self, ids):
if not self._client:
return False
return self._client.unpause(gid=ids)
def stop_torrents(self, ids):
if not self._client:
return False
return self._client.pause(gid=ids)
def delete_torrents(self, delete_file, ids):
if not self._client:
return False
return self._client.remove(gid=ids)
def get_download_dirs(self):
return []
def change_torrent(self, **kwargs):
pass
def get_downloading_progress(self, **kwargs):
"""
Get the progress of currently downloading torrents
"""
Torrents = self.get_downloading_torrents()
DispTorrents = []
for torrent in Torrents:
# Progress
try:
progress = round(int(torrent.get('completedLength')) / int(torrent.get("totalLength")) * 100, 1)
except ZeroDivisionError:
progress = 0.0
state = "Downloading"
_dlspeed = StringUtils.str_filesize(torrent.get('downloadSpeed'))
_upspeed = StringUtils.str_filesize(torrent.get('uploadSpeed'))
speed = "%s%sB/s %s%sB/s" % (chr(8595), _dlspeed, chr(8593), _upspeed)
DispTorrents.append({
'id': torrent.get('gid'),
'name': torrent.get('bittorrent', {}).get('info', {}).get("name"),
'speed': speed,
'state': state,
'progress': progress
})
return DispTorrents
def set_speed_limit(self, **kwargs):
"""
Set the speed limit
"""
pass
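The display logic above can be factored into two small helpers, sketched here with our own names: a zero-safe percentage (computed before rounding, so one decimal place survives) and the arrow-prefixed speed string built from `chr(8595)`/`chr(8593)`:

```python
def calc_progress(completed, total):
    # Percentage with one decimal place; aria2 reports lengths as strings.
    try:
        return round(int(completed) / int(total) * 100, 1)
    except (ZeroDivisionError, TypeError, ValueError):
        return 0.0

def format_speed(dlspeed, upspeed):
    # chr(8595)/chr(8593) are the down/up arrows used in the UI strings.
    return "%s%sB/s %s%sB/s" % (chr(8595), dlspeed, chr(8593), upspeed)
```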


@ -0,0 +1,141 @@
import log
from app.utils import StringUtils
from app.utils.types import DownloaderType
from config import Config
from app.downloader.client._base import _IDownloadClient
from app.downloader.client._py115 import Py115
class Client115(_IDownloadClient):
schema = "client115"
client_type = DownloaderType.Client115.value
_client_config = {}
downclient = None
lasthash = None
def __init__(self, config=None):
if config:
self._client_config = config
else:
self._client_config = Config().get_config('client115')
self.init_config()
self.connect()
def init_config(self):
if self._client_config:
self.downclient = Py115(self._client_config.get("cookie"))
@classmethod
def match(cls, ctype):
return True if ctype in [cls.schema, cls.client_type] else False
def connect(self):
self.downclient.login()
def get_status(self):
if not self.downclient:
return False
ret = self.downclient.login()
if not ret:
log.info(self.downclient.err)
return False
return True
def get_torrents(self, ids=None, status=None, **kwargs):
tlist = []
if not self.downclient:
return tlist
ret, tasks = self.downclient.gettasklist(page=1)
if not ret:
log.info(f"【{self.client_type}】获取任务列表错误:{self.downclient.err}")
return tlist
if tasks:
for task in tasks:
if ids:
if task.get("info_hash") not in ids:
continue
if status:
if task.get("status") not in status:
continue
ret, tdir = self.downclient.getiddir(task.get("file_id"))
task["path"] = tdir
tlist.append(task)
return tlist or []
def get_completed_torrents(self, **kwargs):
return self.get_torrents(status=[2])
def get_downloading_torrents(self, **kwargs):
return self.get_torrents(status=[0, 1])
def remove_torrents_tag(self, **kwargs):
pass
def get_transfer_task(self, **kwargs):
pass
def get_remove_torrents(self, **kwargs):
return []
def add_torrent(self, content, download_dir=None, **kwargs):
if not self.downclient:
return False
if isinstance(content, str):
ret, self.lasthash = self.downclient.addtask(tdir=download_dir, content=content)
if not ret:
log.error(f"【{self.client_type}】添加下载任务失败:{self.downclient.err}")
return None
return self.lasthash
else:
log.info(f"【{self.client_type}】暂时不支持非链接下载")
return None
def delete_torrents(self, delete_file, ids):
if not self.downclient:
return False
return self.downclient.deltask(thash=ids)
def start_torrents(self, ids):
pass
def stop_torrents(self, ids):
pass
def set_torrents_status(self, ids, **kwargs):
return self.delete_torrents(ids=ids, delete_file=False)
def get_download_dirs(self):
return []
def change_torrent(self, **kwargs):
pass
def get_downloading_progress(self, **kwargs):
"""
Get the progress of currently downloading torrents
"""
Torrents = self.get_downloading_torrents()
DispTorrents = []
for torrent in Torrents:
# Progress
progress = round(torrent.get('percentDone'), 1)
state = "Downloading"
_dlspeed = StringUtils.str_filesize(torrent.get('peers'))
_upspeed = StringUtils.str_filesize(torrent.get('rateDownload'))
speed = "%s%sB/s %s%sB/s" % (chr(8595), _dlspeed, chr(8593), _upspeed)
DispTorrents.append({
'id': torrent.get('info_hash'),
'name': torrent.get('name'),
'speed': speed,
'state': state,
'progress': progress
})
return DispTorrents
def set_speed_limit(self, **kwargs):
"""
Set the speed limit
"""
pass


@ -0,0 +1,153 @@
import asyncio
from pikpakapi import PikPakApi, DownloadStatus
import log
from app.downloader.client._base import _IDownloadClient
from app.utils.types import DownloaderType
from config import Config
class PikPak(_IDownloadClient):
schema = "pikpak"
client_type = DownloaderType.PikPak.value
_client_config = {}
downclient = None
lasthash = None
def __init__(self, config=None):
if config:
self._client_config = config
else:
self._client_config = Config().get_config('pikpak')
self.init_config()
self.connect()
def init_config(self):
if self._client_config:
self.downclient = PikPakApi(
username=self._client_config.get("username"),
password=self._client_config.get("password"),
proxy=self._client_config.get("proxy"),
)
@classmethod
def match(cls, ctype):
return True if ctype in [cls.schema, cls.client_type] else False
def connect(self):
try:
asyncio.run(self.downclient.login())
except Exception as err:
print(str(err))
return
def get_status(self):
if not self.downclient:
return False
try:
asyncio.run(self.downclient.login())
if self.downclient.user_id is None:
log.info("PikPak 登录失败")
return False
except Exception as err:
log.error("PikPak 登录出错:%s" % str(err))
return False
return True
def get_torrents(self, ids=None, status=None, **kwargs):
rv = []
if self.downclient.user_id is None:
if not self.get_status():
return [], False
if ids is not None:
for id in ids:
status = asyncio.run(self.downclient.get_task_status(id, ''))
if status == DownloadStatus.downloading:
rv.append({"id": id, "finish": False})
if status == DownloadStatus.done:
rv.append({"id": id, "finish": True})
return rv, True
def get_completed_torrents(self, **kwargs):
return []
def get_downloading_torrents(self, **kwargs):
if self.downclient.user_id is None:
if not self.get_status():
return []
try:
offline_list = asyncio.run(self.downclient.offline_list())
return offline_list['tasks']
except Exception as err:
print(str(err))
return []
def get_transfer_task(self, **kwargs):
pass
def get_remove_torrents(self, **kwargs):
return []
def add_torrent(self, content, download_dir=None, **kwargs):
try:
folder = asyncio.run(
self.downclient.path_to_id(download_dir, True))
count = len(folder)
if count == 0:
print("create parent folder failed")
return None
else:
task = asyncio.run(self.downclient.offline_download(
content, folder[count - 1]["id"]
))
return task["task"]["id"]
except Exception as e:
log.error("PikPak 添加离线下载任务失败: %s" % str(e))
return None
# TODO: not yet implemented
def delete_torrents(self, delete_file, ids):
pass
def start_torrents(self, ids):
pass
def stop_torrents(self, ids):
pass
# TODO: not yet implemented
def set_torrents_status(self, ids, **kwargs):
pass
def get_download_dirs(self):
return []
def change_torrent(self, **kwargs):
pass
# TODO: not yet implemented
def get_downloading_progress(self, **kwargs):
"""
Get the progress of currently downloading torrents
"""
Torrents = self.get_downloading_torrents()
DispTorrents = []
for torrent in Torrents:
DispTorrents.append({
'id': torrent.get('id'),
'file_id': torrent.get('file_id'),
'name': torrent.get('file_name'),
'nomenu': True,
'noprogress': True
})
return DispTorrents
def set_speed_limit(self, **kwargs):
"""
Set the speed limit
"""
pass
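pikpakapi exposes coroutines, so the client above bridges them into this synchronous interface by wrapping each call in `asyncio.run()`, which creates and tears down an event loop per call. A minimal sketch of the pattern (the coroutine below is a stand-in, not the real API):

```python
import asyncio

async def offline_download_stub(url):
    # Stand-in for an async client call such as PikPakApi.offline_download.
    await asyncio.sleep(0)
    return {"task": {"id": "task-1", "url": url}}

def add_task_sync(url):
    # Each synchronous entry point spins up a fresh event loop.
    task = asyncio.run(offline_download_stub(url))
    return task["task"]["id"]
```

One design consequence: because every call pays event-loop setup cost, this pattern suits infrequent operations such as adding a task, not tight polling loops.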


@ -4,6 +4,8 @@ import time
from datetime import datetime
from urllib import parse
from pkg_resources import parse_version as v
import log
import qbittorrentapi
from app.downloader.client._base import _IDownloadClient
@ -94,21 +96,9 @@ class Qbittorrent(_IDownloadClient):
if not self.qbc:
return [], True
try:
torrents = self.qbc.torrents_info(torrent_hashes=ids,
status_filter=status)
if tag:
results = []
if not isinstance(tag, list):
tag = [tag]
for torrent in torrents:
include_flag = True
for t in tag:
if t and t not in torrent.get("tags"):
include_flag = False
break
if include_flag:
results.append(torrent)
return results or [], False
torrents = self.qbc.torrents_info(torrent_hashes=ids, status_filter=status, tag=tag)
if self.is_ver_less_4_4():
torrents = self.filter_torrent_by_tag(torrents, tag=tag)
return torrents or [], False
except Exception as err:
ExceptionUtils.exception_traceback(err)
@ -147,9 +137,6 @@ class Qbittorrent(_IDownloadClient):
return False
def set_torrents_status(self, ids, tags=None):
"""
Mark torrents as organized, and optionally force seeding
"""
if not self.qbc:
return
try:
@ -172,9 +159,6 @@ class Qbittorrent(_IDownloadClient):
ExceptionUtils.exception_traceback(err)
def get_transfer_task(self, tag):
"""
Get torrents pending file-transfer tasks
"""
# Process tasks that have finished downloading
torrents = self.get_completed_torrents(tag=tag)
trans_tasks = []
@ -198,9 +182,6 @@ class Qbittorrent(_IDownloadClient):
return trans_tasks
def get_remove_torrents(self, config=None):
"""
Get torrents matching the auto-removal rules
"""
if not config:
return []
remove_torrents = []
@ -476,6 +457,26 @@ class Qbittorrent(_IDownloadClient):
self.qbc.torrents_set_download_limit(limit=int(limit),
torrent_hashes=ids)
def is_ver_less_4_4(self):
return v(self.ver) < v("v4.4.0")
@staticmethod
def filter_torrent_by_tag(torrents, tag):
if not tag:
return torrents
if not isinstance(tag, list):
tag = [tag]
results = []
for torrent in torrents:
include_flag = True
for t in tag:
if t and t not in torrent.get("tags"):
include_flag = False
break
if include_flag:
results.append(torrent)
return results
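On qBittorrent builds older than 4.4.0 the `tag=` filter is applied client-side by `filter_torrent_by_tag`; the predicate amounts to "every non-empty requested tag is a substring of the torrent's `tags` field". A standalone sketch of the same check (names are ours):

```python
def matches_tags(torrent, tags):
    # A torrent passes when every non-empty requested tag occurs in its
    # comma-separated "tags" string (substring match, as in the client code).
    if not tags:
        return True
    if not isinstance(tags, list):
        tags = [tags]
    return all((not t) or (t in torrent.get("tags", "")) for t in tags)

torrents = [{"name": "a", "tags": "NASTOOL,movie"},
            {"name": "b", "tags": "manual"}]
kept = [t["name"] for t in torrents if matches_tags(t, ["NASTOOL"])]
```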
def change_torrent(self, **kwargs):
"""
Modify torrent state


@ -136,9 +136,6 @@ class Transmission(_IDownloadClient):
return []
def set_torrents_status(self, ids, tags=None):
"""
Mark torrents as organized
"""
if not self.trc:
return
if isinstance(ids, list):
@ -161,9 +158,6 @@ class Transmission(_IDownloadClient):
ExceptionUtils.exception_traceback(err)
def set_torrent_tag(self, tid, tag):
"""
Set torrent tags
"""
if not tid or not tag:
return
try:
@ -238,9 +232,6 @@ class Transmission(_IDownloadClient):
ExceptionUtils.exception_traceback(err)
def get_transfer_task(self, tag):
"""
Get file-transfer tasks for completed downloads
"""
# Process all tasks
torrents = self.get_completed_torrents(tag=tag)
trans_tasks = []
@ -263,17 +254,14 @@ class Transmission(_IDownloadClient):
return trans_tasks
def get_remove_torrents(self, config=None):
"""
Get auto-removal tasks
"""
if not config:
return []
remove_torrents = []
remove_torrents_ids = []
torrents, error_flag = self.get_torrents(tag=config.get("filter_tags"),
status=config.get("tr_state"))
torrents, error_flag = self.get_torrents()
if error_flag:
return []
tags = config.get("filter_tags")
ratio = config.get("ratio")
# Seeding time, in hours
seeding_time = config.get("seeding_time")
@ -285,6 +273,7 @@ class Transmission(_IDownloadClient):
upload_avs = config.get("upload_avs")
savepath_key = config.get("savepath_key")
tracker_key = config.get("tracker_key")
tr_state = config.get("tr_state")
tr_error_key = config.get("tr_error_key")
for torrent in torrents:
date_done = torrent.date_done or torrent.date_added
@ -313,8 +302,13 @@ class Transmission(_IDownloadClient):
break
if not tacker_key_flag:
continue
if tr_state and torrent.status not in tr_state:
continue
if tr_error_key and not re.findall(tr_error_key, torrent.error_string, re.I):
continue
labels = set(torrent.labels)
if tags and (not labels or not set(tags).issubset(labels)):
continue
remove_torrents.append({
"id": torrent.id,
"name": torrent.name,


@ -3,7 +3,6 @@ from threading import Lock
import log
from app.conf import ModuleConf
from app.conf import SystemConfig
from app.filetransfer import FileTransfer
from app.helper import DbHelper, ThreadHelper, SubmoduleHelper
from app.media import Media
@ -11,6 +10,8 @@ from app.media.meta import MetaInfo
from app.mediaserver import MediaServer
from app.message import Message
from app.sites import Sites
from app.subtitle import Subtitle
from app.conf import SystemConfig
from app.utils import Torrent, StringUtils, SystemUtils, ExceptionUtils
from app.utils.commons import singleton
from app.utils.types import MediaType, DownloaderType, SearchType, RmtMode
@ -44,7 +45,7 @@ class Downloader:
'app.downloader.client',
filter_func=lambda _, obj: hasattr(obj, 'schema')
)
log.debug(f"【Downloader】加载下载器:{self._downloader_schema}")
log.debug(f"【Downloader】: 已经加载下载器:{self._downloader_schema}")
self.init_config()
def init_config(self):
@ -302,7 +303,7 @@ class Downloader:
else:
subtitle_dir = visit_dir
ThreadHelper().start_thread(
self.sites.download_subtitle_from_site,
Subtitle().download_subtitle_from_site,
(media_info, site_info.get("cookie"), site_info.get("ua"), subtitle_dir)
)
return ret, ""
@ -356,8 +357,7 @@ class Downloader:
if not downloader or not config:
return []
_client = self.__get_client(downloader)
config["filter_tags"] = []
if config.get("onlynastool"):
if self._pt_monitor_only:
config["filter_tags"] = config["tags"] + [PT_TAG]
else:
config["filter_tags"] = config["tags"]
@ -639,7 +639,7 @@ class Downloader:
# Pick a torrent that is a single full season, or a single season containing all of the needed episodes
if item.tmdb_id == need_tmdbid \
and (not item.get_episode_list()
or set(item.get_episode_list()).intersection(set(need_episodes))) \
or set(item.get_episode_list()).issuperset(set(need_episodes))) \
and len(item.get_season_list()) == 1 \
and item.get_season_list()[0] == need_season:
# Check whether the torrent contains the needed episodes
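The switch from `intersection` to `issuperset` above tightens season-torrent selection from "any overlap with the needed episodes" to "covers every needed episode". A standalone sketch of the two predicates:

```python
need_episodes = {1, 2, 3}
partial = {1, 2}          # covers only part of what is needed
full = {1, 2, 3, 4}       # covers everything needed (and more)

# Old check: any overlap was enough, so a partial torrent could be picked.
old_pick = bool(partial.intersection(need_episodes))
# New check: the torrent's episode set must contain all needed episodes.
new_pick_partial = partial.issuperset(need_episodes)
new_pick_full = full.issuperset(need_episodes)
```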
@ -1020,6 +1020,8 @@ class Downloader:
:return: episode list, torrent path
"""
site_info = self.sites.get_site_attr(url)
if not site_info.get("cookie"):
return [], None
# Save the torrent file
file_path, _, _, files, retmsg = Torrent().get_torrent_info(
url=url,


@ -14,10 +14,11 @@ from app.helper import DbHelper, ProgressHelper
from app.helper import ThreadHelper
from app.media import Media, Category, Scraper
from app.media.meta import MetaInfo
from app.mediaserver import MediaServer
from app.message import Message
from app.plugins import EventManager
from app.subtitle import Subtitle
from app.utils import EpisodeFormat, PathUtils, StringUtils, SystemUtils, ExceptionUtils
from app.utils.types import MediaType, SyncType, RmtMode, EventType
from app.utils.types import MediaType, SyncType, RmtMode
from config import RMT_SUBEXT, RMT_MEDIAEXT, RMT_FAVTYPE, RMT_MIN_FILESIZE, DEFAULT_MOVIE_FORMAT, \
DEFAULT_TV_FORMAT, Config
@ -33,7 +34,6 @@ class FileTransfer:
threadhelper = None
dbhelper = None
progress = None
eventmanager = None
_default_rmt_mode = None
_movie_path = None
@ -61,11 +61,11 @@ class FileTransfer:
self.media = Media()
self.message = Message()
self.category = Category()
self.mediaserver = MediaServer()
self.scraper = Scraper()
self.threadhelper = ThreadHelper()
self.dbhelper = DbHelper()
self.progress = ProgressHelper()
self.eventmanager = EventManager()
self.init_config()
def init_config(self):
@ -567,6 +567,8 @@ class FileTransfer:
message_medias = {}
# Media library items that need refreshing
refresh_library_items = []
# Items that need subtitles downloaded
download_subtitle_items = []
# Process each recognized file or single folder
for file_item, media in Medias.items():
try:
@ -770,20 +772,25 @@ class FileTransfer:
tmdbid=media.tmdb_id,
append_to_response="all"))
# Subtitle download entry
subtitle_item = media.to_dict()
subtitle_item.update({
"file": ret_file_path,
"file_ext": os.path.splitext(file_item)[-1],
"bluray": True if bluray_disk_dir else False
})
# Emit the subtitle download event
self.eventmanager.send_event(EventType.SubtitleDownload, subtitle_item)
subtitle_item = {"type": media.type,
"file": ret_file_path,
"file_ext": os.path.splitext(file_item)[-1],
"name": media.en_name if media.en_name else media.cn_name,
"title": media.title,
"year": media.year,
"season": media.begin_season,
"episode": media.begin_episode,
"bluray": True if bluray_disk_dir else False,
"imdbid": media.imdb_id}
# Register the subtitle download
if subtitle_item not in download_subtitle_items:
download_subtitle_items.append(subtitle_item)
# Record the transfer history
self.dbhelper.insert_transfer_history(
in_from=in_from,
rmt_mode=rmt_mode,
in_path=reg_path,
out_path=new_file if not bluray_disk_dir else ret_dir_path,
out_path=new_file if not bluray_disk_dir else None,
dest=dist_path,
media_info=media)
# Batch mode for manual identification of unrecognized items or re-identification from history records
@ -833,6 +840,9 @@ class FileTransfer:
# Refresh the media library
if refresh_library_items and self._refresh_mediaserver:
self.mediaserver.refresh_library_by_items(refresh_library_items)
# Start a thread to download subtitles
if download_subtitle_items:
self.threadhelper.start_thread(Subtitle().download_subtitle, (download_subtitle_items,))
# Summary
log.info("【Rmt】%s 处理完成,总数:%s,失败:%s" % (in_path, total_count, failed_count))
if alert_count > 0:
@ -1244,6 +1254,42 @@ class FileTransfer:
return file_list, ""
def get_media_exists_flag(self, mtype, title, year, mediaid):
"""
Get the media existence flag: whether it exists and whether it is subscribed
:param: mtype media type
:param: title media title
:param: year media year
:param: mediaid TMDB ID / DB:Douban ID / BG:Bangumi ID
:return: 1-subscribed / 2-downloaded / 0-not present and not subscribed, plus the RSSID
"""
if str(mediaid).isdigit():
tmdbid = mediaid
else:
tmdbid = None
if mtype in ["MOV", "电影", MediaType.MOVIE]:
rssid = self.dbhelper.get_rss_movie_id(title=title, year=year, tmdbid=tmdbid)
else:
if not tmdbid:
meta_info = MetaInfo(title=title)
title = meta_info.get_name()
season = meta_info.get_season_string()
if season:
year = None
else:
season = None
rssid = self.dbhelper.get_rss_tv_id(title=title, year=year, season=season, tmdbid=tmdbid)
if rssid:
# Subscribed
fav = "1"
elif MediaServer().check_item_exists(title=title, year=year, tmdbid=tmdbid):
# Downloaded
fav = "2"
else:
# Not subscribed and not downloaded
fav = "0"
return fav, rssid
if __name__ == "__main__":
"""


@ -1,4 +1,4 @@
from .chrome_helper import ChromeHelper, init_chrome
from .chrome_helper import ChromeHelper
from .indexer_helper import IndexerHelper, IndexerConf
from .meta_helper import MetaHelper
from .progress_helper import ProgressHelper
@ -9,6 +9,7 @@ from .dict_helper import DictHelper
from .display_helper import DisplayHelper
from .site_helper import SiteHelper
from .ocr_helper import OcrHelper
from .opensubtitles import OpenSubtitles
from .words_helper import WordsHelper
from .submodule_helper import SubmoduleHelper
from .cookiecloud_helper import CookieCloudHelper


@ -238,10 +238,3 @@ class ChromeWithPrefs(uc.Chrome):
# pylint: disable=protected-access
# remove the experimental_options to avoid an error
del options._experimental_options["prefs"]
def init_chrome():
"""
初始化chrome驱动
"""
ChromeHelper().init_driver()


@ -1,4 +1,6 @@
from app.utils import RequestUtils
import json
from app.utils import RequestUtils, StringUtils
class CookieCloudHelper(object):


@ -212,19 +212,6 @@ class DbHelper:
TRANSFERHISTORY.DEST_FILENAME == dest_filename).count()
return True if ret > 0 else False
def update_transfer_history_date(self, source_path, source_filename, dest_path, dest_filename, date):
"""
Update the timestamp of an existing transfer-history record
"""
self._db.query(TRANSFERHISTORY).filter(TRANSFERHISTORY.SOURCE_PATH == source_path,
TRANSFERHISTORY.SOURCE_FILENAME == source_filename,
TRANSFERHISTORY.DEST_PATH == dest_path,
TRANSFERHISTORY.DEST_FILENAME == dest_filename).update(
{
"DATE": date
}
)
@DbPersist(_db)
def insert_transfer_history(self, in_from: Enum, rmt_mode: RmtMode, in_path, out_path, dest, media_info):
"""
@ -248,12 +235,10 @@ class DbHelper:
dest_filename = ""
season_episode = media_info.get_season_string()
title = media_info.title
timestr = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))
if self.is_transfer_history_exists(source_path, source_filename, dest_path, dest_filename):
# Update the timestamp of the existing transfer-history record
self.update_transfer_history_date(source_path, source_filename, dest_path, dest_filename, timestr)
return
dest = dest or ""
timestr = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))
self._db.insert(
TRANSFERHISTORY(
MODE=str(rmt_mode.value),
@ -327,27 +312,6 @@ class DbHelper:
"""
return self._db.query(TRANSFERUNKNOWN).filter(TRANSFERUNKNOWN.STATE == 'N').all()
def get_transfer_unknown_paths_by_page(self, search, page, rownum):
"""
Query the unidentified records, paginated
"""
if int(page) == 1:
begin_pos = 0
else:
begin_pos = (int(page) - 1) * int(rownum)
if search:
search = f"%{search}%"
count = self._db.query(TRANSFERUNKNOWN).filter((TRANSFERUNKNOWN.STATE == 'N')
& (TRANSFERUNKNOWN.PATH.like(search))).count()
data = self._db.query(TRANSFERUNKNOWN).filter((TRANSFERUNKNOWN.STATE == 'N')
& (TRANSFERUNKNOWN.PATH.like(search))).order_by(
TRANSFERUNKNOWN.ID.desc()).limit(int(rownum)).offset(begin_pos).all()
return count, data
else:
return self._db.query(TRANSFERUNKNOWN).filter(TRANSFERUNKNOWN.STATE == 'N').count(), self._db.query(
TRANSFERUNKNOWN).filter(TRANSFERUNKNOWN.STATE == 'N').order_by(
TRANSFERUNKNOWN.ID.desc()).limit(int(rownum)).offset(begin_pos).all()
@DbPersist(_db)
def update_transfer_unknown_state(self, path):
"""
@ -490,14 +454,6 @@ class DbHelper:
PATH=os.path.normpath(path)
))
@DbPersist(_db)
def delete_transfer_blacklist(self, path):
"""
Delete a blacklist record
"""
self._db.query(TRANSFERBLACKLIST).filter(TRANSFERBLACKLIST.PATH == str(path)).delete()
self._db.query(SYNCHISTORY).filter(SYNCHISTORY.PATH == str(path)).delete()
@DbPersist(_db)
def truncate_transfer_blacklist(self, ):
"""


@ -77,11 +77,3 @@ class DictHelper:
return True
else:
return False
def list(self, dtype):
"""
Query the dictionary list
"""
if not dtype:
return []
return self._db.query(SYSTEMDICT).filter(SYSTEMDICT.TYPE == dtype).all()


@ -15,7 +15,7 @@ class DisplayHelper(object):
self.init_config()
def init_config(self):
self.stop_service()
self.quit()
if self.can_display():
try:
self._display = Display(visible=False, size=(1024, 768))
@ -27,7 +27,7 @@ class DisplayHelper(object):
def get_display(self):
return self._display
def stop_service(self):
def quit(self):
os.environ["NASTOOL_DISPLAY"] = ""
if self._display:
self._display.stop()
@ -40,4 +40,4 @@ class DisplayHelper(object):
return False
def __del__(self):
self.stop_service()
self.quit()

app/helper/opensubtitles.py Normal file

@ -0,0 +1,103 @@
from functools import lru_cache
from urllib.parse import quote
from pyquery import PyQuery
import log
from app.helper.chrome_helper import ChromeHelper
from config import Config
class OpenSubtitles:
_cookie = ""
_ua = None
_url_imdbid = "https://www.opensubtitles.org/zh/search/imdbid-%s/sublanguageid-chi"
_url_keyword = "https://www.opensubtitles.org/zh/search/moviename-%s/sublanguageid-chi"
def __init__(self):
self._ua = Config().get_ua()
def search_subtitles(self, query):
if query.get("imdbid"):
return self.__search_subtitles_by_imdbid(query.get("imdbid"))
else:
return self.__search_subtitles_by_keyword("%s %s" % (query.get("name"), query.get("year")))
def __search_subtitles_by_imdbid(self, imdbid):
"""
Search OpenSubtitles by IMDB ID
"""
return self.__parse_opensubtitles_results(url=self._url_imdbid % str(imdbid).replace("tt", ""))
def __search_subtitles_by_keyword(self, keyword):
"""
Search OpenSubtitles by keyword
"""
return self.__parse_opensubtitles_results(url=self._url_keyword % quote(keyword))
@classmethod
@lru_cache(maxsize=128)
def __parse_opensubtitles_results(cls, url):
"""
Search and parse the results
"""
chrome = ChromeHelper()
if not chrome.get_status():
log.error("【Subtitle】未找到浏览器内核当前环境无法检索opensubtitles字幕")
return []
# Visit the page
if not chrome.visit(url):
log.error("【Subtitle】无法连接opensubtitles.org")
return []
# Page source
html_text = chrome.get_html()
# Cookie
cls._cookie = chrome.get_cookies()
# Parse the result list
ret_subtitles = []
html_doc = PyQuery(html_text)
global_season = ''
for tr in html_doc('#search_results > tbody > tr:not([style])'):
tr_doc = PyQuery(tr)
# Season
season = tr_doc('span[id^="season-"] > a > b').text()
if season:
global_season = season
continue
# Episode
episode = tr_doc('span[itemprop="episodeNumber"]').text()
# Title
title = tr_doc('strong > a.bnone').text()
# Description and download link
if not global_season:
description = tr_doc('td:nth-child(1)').text()
if description and len(description.split("\n")) > 1:
description = description.split("\n")[1]
link = tr_doc('td:nth-child(5) > a').attr("href")
else:
description = tr_doc('span[itemprop="name"]').text()
link = tr_doc('a[href^="/download/"]').attr("href")
if link:
link = "https://www.opensubtitles.org%s" % link
else:
continue
ret_subtitles.append({
"season": global_season,
"episode": episode,
"title": title,
"description": description,
"link": link
})
return ret_subtitles
def get_cookie(self):
"""
Return the cookie
"""
return self._cookie
def get_ua(self):
"""
Return the User-Agent
"""
return self._ua
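The two URL templates above take either a bare IMDB number (the `tt` prefix stripped) or a percent-encoded keyword. A quick sketch of the construction, reusing the template strings from the class (`build_search_url` is our own name):

```python
from urllib.parse import quote

URL_IMDBID = "https://www.opensubtitles.org/zh/search/imdbid-%s/sublanguageid-chi"
URL_KEYWORD = "https://www.opensubtitles.org/zh/search/moviename-%s/sublanguageid-chi"

def build_search_url(imdbid=None, name=None, year=None):
    # IMDB IDs arrive as "tt0133093"; the site wants only the digits.
    if imdbid:
        return URL_IMDBID % str(imdbid).replace("tt", "")
    # Keyword searches combine name and year, percent-encoded.
    return URL_KEYWORD % quote("%s %s" % (name, year))
```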


@ -1,4 +1,5 @@
import datetime
import xml.dom.minidom
from abc import ABCMeta, abstractmethod
import log
@ -6,6 +7,7 @@ from app.filter import Filter
from app.helper import ProgressHelper
from app.media import Media
from app.media.meta import MetaInfo
from app.utils import DomUtils, RequestUtils, StringUtils, ExceptionUtils
from app.utils.types import MediaType, SearchType
@ -54,7 +56,137 @@ class _IIndexClient(metaclass=ABCMeta):
"""
Multithreaded search by keyword
"""
pass
if not indexer or not key_word:
return None
if filter_args is None:
filter_args = {}
# Filter out sites that are outside the configured search scope
if filter_args.get("site") and indexer.name not in filter_args.get("site"):
return []
# Record the start time
start_time = datetime.datetime.now()
log.info(f"【{self.index_type}】开始检索Indexer:{indexer.name} ...")
# Handle special characters
search_word = StringUtils.handler_special_chars(text=key_word,
replace_word=" ",
allow_space=True)
api_url = f"{indexer.domain}?apikey={self.api_key}&t=search&q={search_word}"
result_array = self.__parse_torznabxml(api_url)
if len(result_array) == 0:
log.warn(f"【{self.index_type}】{indexer.name} 未检索到数据")
self.progress.update(ptype='search', text=f"{indexer.name} 未检索到数据")
return []
else:
log.warn(f"【{self.index_type}】{indexer.name} 返回数据:{len(result_array)}")
return self.filter_search_results(result_array=result_array,
order_seq=order_seq,
indexer=indexer,
filter_args=filter_args,
match_media=match_media,
start_time=start_time)
@staticmethod
def __parse_torznabxml(url):
"""
Parse torrent info from torznab XML
:param url: the URL to fetch
:return: list of parsed torrent info dicts
"""
if not url:
return []
try:
ret = RequestUtils(timeout=10).get_res(url)
except Exception as e2:
ExceptionUtils.exception_traceback(e2)
return []
if not ret:
return []
xmls = ret.text
if not xmls:
return []
torrents = []
try:
# Parse the XML
dom_tree = xml.dom.minidom.parseString(xmls)
root_node = dom_tree.documentElement
items = root_node.getElementsByTagName("item")
for item in items:
try:
# indexer id
indexer_id = DomUtils.tag_value(item, "jackettindexer", "id",
default=DomUtils.tag_value(item, "prowlarrindexer", "id", ""))
# indexer
indexer = DomUtils.tag_value(item, "jackettindexer",
default=DomUtils.tag_value(item, "prowlarrindexer", default=""))
# Title
title = DomUtils.tag_value(item, "title", default="")
if not title:
continue
# Torrent link
enclosure = DomUtils.tag_value(item, "enclosure", "url", default="")
if not enclosure:
continue
# Description
description = DomUtils.tag_value(item, "description", default="")
# Torrent size
size = DomUtils.tag_value(item, "size", default=0)
# Torrent detail page
page_url = DomUtils.tag_value(item, "comments", default="")
# Seeders
seeders = 0
# Leechers
peers = 0
# Free leech
freeleech = False
# Download factor
downloadvolumefactor = 1.0
# Upload factor
uploadvolumefactor = 1.0
# imdbid
imdbid = ""
torznab_attrs = item.getElementsByTagName("torznab:attr")
for torznab_attr in torznab_attrs:
name = torznab_attr.getAttribute('name')
value = torznab_attr.getAttribute('value')
if name == "seeders":
seeders = value
if name == "peers":
peers = value
if name == "downloadvolumefactor":
downloadvolumefactor = value
if float(downloadvolumefactor) == 0:
freeleech = True
if name == "uploadvolumefactor":
uploadvolumefactor = value
if name == "imdbid":
imdbid = value
tmp_dict = {'indexer_id': indexer_id,
'indexer': indexer,
'title': title,
'enclosure': enclosure,
'description': description,
'size': size,
'seeders': seeders,
'peers': peers,
'freeleech': freeleech,
'downloadvolumefactor': downloadvolumefactor,
'uploadvolumefactor': uploadvolumefactor,
'page_url': page_url,
'imdbid': imdbid}
torrents.append(tmp_dict)
except Exception as e:
ExceptionUtils.exception_traceback(e)
continue
except Exception as e2:
ExceptionUtils.exception_traceback(e2)
return torrents
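`__parse_torznabxml` walks the `item` elements and their `torznab:attr` children with `xml.dom.minidom`. Here is a trimmed, runnable sketch of the same walk over an invented feed; `tag_value` is a simplified stand-in for `DomUtils.tag_value`:

```python
import xml.dom.minidom

SAMPLE = """<rss xmlns:torznab="http://torznab.com/schemas/2015/feed">
<channel><item>
<title>Some.Show.S01E01.1080p</title>
<enclosure url="http://example.org/t.torrent" length="123"/>
<torznab:attr name="seeders" value="10"/>
<torznab:attr name="downloadvolumefactor" value="0"/>
</item></channel></rss>"""

def tag_value(node, tag, attr=None, default=None):
    # First matching child tag: return either an attribute or the text content.
    tags = node.getElementsByTagName(tag)
    if not tags:
        return default
    if attr:
        return tags[0].getAttribute(attr) or default
    child = tags[0].firstChild
    return child.data if child else default

def parse_items(xml_text):
    root = xml.dom.minidom.parseString(xml_text).documentElement
    results = []
    for item in root.getElementsByTagName("item"):
        info = {"title": tag_value(item, "title"),
                "enclosure": tag_value(item, "enclosure", "url"),
                "seeders": 0,
                "freeleech": False}
        # torznab extensions live in <torznab:attr name="..." value="..."/>
        for a in item.getElementsByTagName("torznab:attr"):
            name, value = a.getAttribute("name"), a.getAttribute("value")
            if name == "seeders":
                info["seeders"] = int(value)
            elif name == "downloadvolumefactor" and float(value) == 0:
                info["freeleech"] = True
        results.append(info)
    return results
```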
def filter_search_results(self, result_array: list,
order_seq,


@ -42,7 +42,7 @@ class BuiltinIndexer(_IIndexClient):
"""
return True
def get_indexers(self, check=True, public=False, indexer_id=None):
def get_indexers(self, check=True, public=True, indexer_id=None):
ret_indexers = []
# 选中站点配置
indexer_sites = Config().get_config("pt").get("indexer_sites") or []

View File

@@ -0,0 +1,77 @@
import requests
from app.utils import ExceptionUtils
from app.utils.types import IndexerType
from config import Config
from app.indexer.client._base import _IIndexClient
from app.utils import RequestUtils
from app.helper import IndexerConf
class Jackett(_IIndexClient):
schema = "jackett"
_client_config = {}
index_type = IndexerType.JACKETT.value
_password = None
def __init__(self, config=None):
super().__init__()
if config:
self._client_config = config
else:
self._client_config = Config().get_config('jackett')
self.init_config()
def init_config(self):
if self._client_config:
self.api_key = self._client_config.get('api_key')
self._password = self._client_config.get('password')
self.host = self._client_config.get('host')
if self.host:
if not self.host.startswith('http'):
self.host = "http://" + self.host
if not self.host.endswith('/'):
self.host = self.host + "/"
def get_status(self):
"""
检查连通性
:return: TrueFalse
"""
if not self.api_key or not self.host:
return False
return True if self.get_indexers() else False
@classmethod
def match(cls, ctype):
return True if ctype in [cls.schema, cls.index_type] else False
def get_indexers(self):
"""
获取配置的jackett indexer
:return: indexer 信息 [(indexerId, indexerName, url)]
"""
# 获取Cookie
cookie = None
session = requests.session()
res = RequestUtils(session=session).post_res(url=f"{self.host}UI/Dashboard",
params={"password": self._password})
if res and session.cookies:
cookie = session.cookies.get_dict()
indexer_query_url = f"{self.host}api/v2.0/indexers?configured=true"
try:
ret = RequestUtils(cookies=cookie).get_res(indexer_query_url)
if not ret or not ret.json():
return []
return [IndexerConf({"id": v["id"],
"name": v["name"],
"domain": f'{self.host}api/v2.0/indexers/{v["id"]}/results/torznab/',
"public": True if v['type'] == 'public' else False,
"builtin": False})
for v in ret.json()]
except Exception as e2:
ExceptionUtils.exception_traceback(e2)
return []
def search(self, *kwargs):
return super().search(*kwargs)

View File
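Both the Jackett and Prowlarr clients repeat the same host normalization in `init_config`. Extracted into a standalone helper (the function name is hypothetical), the behavior is:

```python
def normalize_host(host):
    """Normalize an indexer host the way init_config does above:
    prepend http:// when no scheme is given, guarantee a trailing slash."""
    if not host:
        return host
    if not host.startswith("http"):
        host = "http://" + host
    if not host.endswith("/"):
        host = host + "/"
    return host
```

Note that, as in the original, `startswith("http")` also accepts `https://` hosts unchanged.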

@@ -0,0 +1,66 @@
from app.utils import ExceptionUtils
from app.utils.types import IndexerType
from config import Config
from app.indexer.client._base import _IIndexClient
from app.utils import RequestUtils
from app.helper import IndexerConf
class Prowlarr(_IIndexClient):
schema = "prowlarr"
_client_config = {}
index_type = IndexerType.PROWLARR.value
def __init__(self, config=None):
super().__init__()
if config:
self._client_config = config
else:
self._client_config = Config().get_config('prowlarr')
self.init_config()
def init_config(self):
if self._client_config:
self.api_key = self._client_config.get('api_key')
self.host = self._client_config.get('host')
if self.host:
if not self.host.startswith('http'):
self.host = "http://" + self.host
if not self.host.endswith('/'):
self.host = self.host + "/"
@classmethod
def match(cls, ctype):
return True if ctype in [cls.schema, cls.index_type] else False
def get_status(self):
"""
检查连通性
:return: TrueFalse
"""
if not self.api_key or not self.host:
return False
return True if self.get_indexers() else False
def get_indexers(self):
"""
获取配置的prowlarr indexer
:return: indexer 信息 [(indexerId, indexerName, url)]
"""
indexer_query_url = f"{self.host}api/v1/indexerstats?apikey={self.api_key}"
try:
ret = RequestUtils().get_res(indexer_query_url)
except Exception as e2:
ExceptionUtils.exception_traceback(e2)
return []
if not ret:
return []
indexers = ret.json().get("indexers", [])
return [IndexerConf({"id": v["indexerId"],
"name": v["indexerName"],
"domain": f'{self.host}{v["indexerId"]}/api',
"builtin": False})
for v in indexers]
def search(self, *kwargs):
return super().search(*kwargs)

View File
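The Prowlarr client above shapes the `/api/v1/indexerstats` payload into plain dicts that back `IndexerConf`. Isolated from the network call, the mapping looks like this (the payload below is sample data, and `IndexerConf` itself is not imported):

```python
def indexers_from_stats(host, stats):
    """Map a Prowlarr indexerstats payload to the minimal indexer dicts
    used by get_indexers above."""
    return [{"id": v["indexerId"],
             "name": v["indexerName"],
             "domain": f'{host}{v["indexerId"]}/api',
             "builtin": False}
            for v in stats.get("indexers", [])]

SAMPLE_STATS = {"indexers": [{"indexerId": 5, "indexerName": "ExampleIndexer"}]}
```

The `domain` is built from the configured host plus the indexer id, matching the f-string in `get_indexers`.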

@@ -23,14 +23,14 @@ class Indexer(object):
'app.indexer.client',
filter_func=lambda _, obj: hasattr(obj, 'schema')
)
log.debug(f"【Indexer】加载索引器:{self._indexer_schemas}")
log.debug(f"【Indexer】: 已经加载索引器:{self._indexer_schemas}")
self.init_config()
def init_config(self):
self.progress = ProgressHelper()
self._client_type = ModuleConf.INDEXER_DICT.get(
Config().get_config("pt").get('search_indexer') or 'builtin'
) or IndexerType.BUILTIN
)
self._client = self.__get_client(self._client_type)
def __build_class(self, ctype, conf):

View File

@@ -1751,6 +1751,20 @@ class Media:
return episode.get("name")
return None
def get_movie_discover(self, page=1):
"""
发现电影
"""
if not self.movie:
return []
try:
movies = self.movie.discover(page)
if movies:
return movies.get("results")
except Exception as e:
print(str(e))
return []
def get_movie_similar(self, tmdbid, page=1):
"""
查询类似电影
@@ -2017,16 +2031,10 @@ class Media:
"""
获取TMDB热门电影随机一张背景图
"""
if not self.discover:
return ""
try:
medias = self.discover.discover_movies(params={"sort_by": "popularity.desc"})
if medias:
backdrops = [media.get("backdrop_path") for media in medias if media.get("backdrop_path")]
# 随机一张
return TMDB_IMAGE_ORIGINAL_URL % backdrops[round(random.uniform(0, len(backdrops) - 1))]
except Exception as err:
print(str(err))
movies = self.get_movie_discover()
if movies:
backdrops = [movie.get("backdrop_path") for movie in movies]
return TMDB_IMAGE_ORIGINAL_URL % backdrops[round(random.uniform(0, len(backdrops) - 1))]
return ""
def save_rename_cache(self, file_name, cache_info):
@@ -2086,14 +2094,12 @@ class Media:
"""
if not self.episode:
return ""
if not tv_id or not season_id or not episode_id:
return ""
res = self.episode.images(tv_id, season_id, episode_id)
if res:
if orginal:
return TMDB_IMAGE_ORIGINAL_URL % res[-1].get("file_path")
return TMDB_IMAGE_ORIGINAL_URL % res[0].get("file_path")
else:
return TMDB_IMAGE_W500_URL % res[-1].get("file_path")
return TMDB_IMAGE_W500_URL % res[0].get("file_path")
else:
return ""

View File
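The rewritten `get_random_movie_backdrop` above picks one backdrop from a discover page via `round(random.uniform(0, len(backdrops) - 1))`; `random.choice` is the equivalent idiom. Note also that the new list comprehension dropped the `if media.get("backdrop_path")` filter, so a `None` path can slip into the URL; the sketch below keeps the filter (the URL template is an assumption, since the real constant lives in config and is not shown in this diff):

```python
import random

# Assumed template; the real TMDB_IMAGE_ORIGINAL_URL is defined in config.
TMDB_IMAGE_ORIGINAL_URL = "https://image.tmdb.org/t/p/original%s"

def random_backdrop(movies):
    """Pick one backdrop URL from a discover result list, filtering out
    entries that have no backdrop_path."""
    backdrops = [m.get("backdrop_path") for m in movies if m.get("backdrop_path")]
    if not backdrops:
        return ""
    return TMDB_IMAGE_ORIGINAL_URL % random.choice(backdrops)
```

`random.choice` raises on an empty sequence, hence the explicit empty-list guard returning `""` as the original does.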

@@ -138,8 +138,8 @@ class MetaBase(object):
_subtitle_flag = False
_subtitle_season_re = r"[第\s]+([0-9一二三四五六七八九十S\-]+)\s*季"
_subtitle_season_all_re = r"\s*([0-9一二三四五六七八九十]+)\s*季|([0-9一二三四五六七八九十]+)\s*季全"
_subtitle_episode_re = r"[第\s]+([0-9一二三四五六七八九十百零EP\-]+)\s*[集话話期]"
_subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*集全|全\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期]"
_subtitle_episode_re = r"[第\s]+([0-9一二三四五六七八九十EP\-]+)\s*[集话話期]"
_subtitle_episode_all_re = r"([0-9一二三四五六七八九十]+)\s*集全|全\s*([0-9一二三四五六七八九十]+)\s*[集话話期]"
def __init__(self, title, subtitle=None, fileflag=False):
self.category_handler = Category()
@@ -706,52 +706,5 @@ class MetaBase(object):
"imdb_id": self.imdb_id,
"tmdb_id": self.tmdb_id,
"overview": str(self.overview).strip() if self.overview else '',
"link": self.get_detail_url(),
"season": self.get_season_list(),
"episode": self.get_episode_list(),
"backdrop": self.get_backdrop_image(),
"poster": self.get_poster_image(),
"org_string": self.org_string,
"subtitle": self.subtitle,
"cn_name": self.cn_name,
"en_name": self.en_name,
"total_seasons": self.total_seasons,
"total_episodes": self.total_episodes,
"part": self.part,
"resource_type": self.resource_type,
"resource_effect": self.resource_effect,
"resource_pix": self.resource_pix,
"resource_team": self.resource_team,
"video_encode": self.video_encode,
"audio_encode": self.audio_encode,
"category": self.category,
"douban_id": self.douban_id,
"keyword": self.keyword,
"original_language": self.original_language,
"original_title": self.original_title,
"release_date": self.release_date,
"runtime": self.runtime,
"fav": self.fav,
"rss_sites": self.rss_sites,
"search_sites": self.search_sites,
"site": self.site,
"site_order": self.site_order,
"user_name": self.user_name,
"enclosure": self.enclosure,
"res_order": self.res_order,
"filter_rule": self.filter_rule,
"over_edition": self.over_edition,
"size": self.size,
"seeders": self.seeders,
"peers": self.peers,
"page_url": self.page_url,
"upload_volume_factor": self.upload_volume_factor,
"download_volume_factor": self.download_volume_factor,
"hit_and_run": self.hit_and_run,
"rssid": self.rssid,
"save_path": self.save_path,
"download_setting": self.download_setting,
"ignored_words": self.ignored_words,
"replaced_words": self.replaced_words,
"offset_words": self.offset_words
"link": self.get_detail_url()
}

View File
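The `MetaBase` change above narrows the episode pattern by dropping 百 and 零 from the character class, so hundred-range Chinese numerals no longer match. The new pattern can be exercised directly:

```python
import re

# The episode pattern after this change (百 and 零 removed from the class)
SUBTITLE_EPISODE_RE = r"[第\s]+([0-9一二三四五六七八九十EP\-]+)\s*[集话話期]"

def parse_episode(subtitle):
    """Return the episode token found in a subtitle string, or None."""
    m = re.search(SUBTITLE_EPISODE_RE, subtitle)
    return m.group(1) if m else None
```

Digits and small Chinese numerals still parse, while e.g. 第一百零三集 fails because 百 breaks the captured run before reaching 集.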

@@ -36,7 +36,7 @@ class MetaVideo(MetaBase):
_name_nostring_re = r"^PTS|^JADE|^AOD|^CHC|^[A-Z]{1,4}TV[\-0-9UVHDK]*" \
r"|HBO$|\s+HBO|\d{1,2}th|\d{1,2}bit|NETFLIX|AMAZON|IMAX|^3D|\s+3D|^BBC\s+|\s+BBC|BBC$|DISNEY\+?|XXX|\s+DC$" \
r"|[第\s共]+[0-9一二三四五六七八九十\-\s]+季" \
r"|[第\s共]+[0-9一二三四五六七八九十百零\-\s]+[集话話]" \
r"|[第\s共]+[0-9一二三四五六七八九十\-\s]+[集话話]" \
r"|连载|日剧|美剧|电视剧|动画片|动漫|欧美|西德|日韩|超高清|高清|蓝光|翡翠台|梦幻天堂·龙网|★?\d*月?新番" \
r"|最终季|合集|[多中国英葡法俄日韩德意西印泰台港粤双文语简繁体特效内封官译外挂]+字幕|版本|出品|台版|港版|\w+字幕组" \
r"|未删减版|UNCUT$|UNRATE$|WITH EXTRAS$|RERIP$|SUBBED$|PROPER$|REPACK$|SEASON$|EPISODE$|Complete$|Extended$|Extended Version$" \

View File

@@ -386,27 +386,27 @@ class Scraper:
if scraper_tv_pic.get("background"):
background_image = media.fanart.get_background(media_type=media.type, queryid=media.tvdb_id)
if background_image:
self.__save_image(background_image, os.path.dirname(dir_path), "show")
self.__save_image(background_image, dir_path, "show")
# logo
if scraper_tv_pic.get("logo"):
logo_image = media.fanart.get_logo(media_type=media.type, queryid=media.tvdb_id)
if logo_image:
self.__save_image(logo_image, os.path.dirname(dir_path), "logo")
self.__save_image(logo_image, dir_path, "logo")
# clearart
if scraper_tv_pic.get("clearart"):
clearart_image = media.fanart.get_disc(media_type=media.type, queryid=media.tvdb_id)
if clearart_image:
self.__save_image(clearart_image, os.path.dirname(dir_path), "clearart")
self.__save_image(clearart_image, dir_path, "clearart")
# banner
if scraper_tv_pic.get("banner"):
banner_image = media.fanart.get_banner(media_type=media.type, queryid=media.tvdb_id)
if banner_image:
self.__save_image(banner_image, os.path.dirname(dir_path), "banner")
self.__save_image(banner_image, dir_path, "banner")
# thumb
if scraper_tv_pic.get("thumb"):
thumb_image = media.fanart.get_thumb(media_type=media.type, queryid=media.tvdb_id)
if thumb_image:
self.__save_image(thumb_image, os.path.dirname(dir_path), "thumb")
self.__save_image(thumb_image, dir_path, "thumb")
# season nfo
if scraper_tv_nfo.get("season_basic"):
if not os.path.exists(os.path.join(dir_path, "season.nfo")):
@@ -475,13 +475,12 @@ class Scraper:
if episode_image:
self.__save_image(episode_image, episode_thumb)
else:
# 开启ffmpeg则从视频文件生成缩略图
if scraper_tv_pic.get("episode_thumb_ffmpeg"):
video_path = os.path.join(dir_path, file_name + file_ext)
log.info(f"【Scraper】正在生成缩略图{video_path} ...")
FfmpegHelper().get_thumb_image_from_video(video_path=video_path,
image_path=episode_thumb)
log.info(f"【Scraper】缩略图生成完成{episode_thumb}")
# 从视频文件生成缩略图
video_path = os.path.join(dir_path, file_name + file_ext)
log.info(f"【Scraper】正在生成缩略图{video_path} ...")
FfmpegHelper().get_thumb_image_from_video(video_path=video_path,
image_path=episode_thumb)
log.info(f"【Scraper】缩略图生成完成{episode_thumb}")
except Exception as e:
ExceptionUtils.exception_traceback(e)

View File

@@ -1 +1,2 @@
from .media_server import MediaServer
from .webhook_event import WebhookEvent

View File

@@ -106,10 +106,3 @@ class _IMediaClient(metaclass=ABCMeta):
获取正在播放的会话
"""
pass
@abstractmethod
def get_webhook_message(self, message):
"""
解析Webhook报文获取消息内容结构
"""
pass

View File

@@ -9,7 +9,6 @@ from app.utils.types import MediaType, MediaServerType
class Emby(_IMediaClient):
schema = "emby"
server_type = MediaServerType.EMBY.value
_client_config = {}
@@ -491,52 +490,3 @@ class Emby(_IMediaClient):
except Exception as e:
ExceptionUtils.exception_traceback(e)
return []
def get_webhook_message(self, message):
"""
解析Emby报文
"""
eventItem = {'event': message.get('Event', '')}
if message.get('Item'):
if message.get('Item', {}).get('Type') == 'Episode':
eventItem['item_type'] = "TV"
eventItem['item_name'] = "%s %s%s %s" % (
message.get('Item', {}).get('SeriesName'),
"S" + str(message.get('Item', {}).get('ParentIndexNumber')),
"E" + str(message.get('Item', {}).get('IndexNumber')),
message.get('Item', {}).get('Name'))
eventItem['item_id'] = message.get('Item', {}).get('SeriesId')
eventItem['season_id'] = message.get('Item', {}).get('ParentIndexNumber')
eventItem['episode_id'] = message.get('Item', {}).get('IndexNumber')
eventItem['tmdb_id'] = message.get('Item', {}).get('ProviderIds', {}).get('Tmdb')
if message.get('Item', {}).get('Overview') and len(message.get('Item', {}).get('Overview')) > 100:
eventItem['overview'] = str(message.get('Item', {}).get('Overview'))[:100] + "..."
else:
eventItem['overview'] = message.get('Item', {}).get('Overview')
eventItem['percentage'] = message.get('TranscodingInfo', {}).get('CompletionPercentage')
if not eventItem['percentage']:
eventItem['percentage'] = message.get('PlaybackInfo', {}).get('PositionTicks') / \
message.get('Item', {}).get('RunTimeTicks') * 100
else:
eventItem['item_type'] = "MOV"
eventItem['item_name'] = "%s %s" % (
message.get('Item', {}).get('Name'), "(" + str(message.get('Item', {}).get('ProductionYear')) + ")")
eventItem['item_path'] = message.get('Item', {}).get('Path')
eventItem['item_id'] = message.get('Item', {}).get('Id')
eventItem['tmdb_id'] = message.get('Item', {}).get('ProviderIds', {}).get('Tmdb')
if len(message.get('Item', {}).get('Overview')) > 100:
eventItem['overview'] = str(message.get('Item', {}).get('Overview'))[:100] + "..."
else:
eventItem['overview'] = message.get('Item', {}).get('Overview')
eventItem['percentage'] = message.get('TranscodingInfo', {}).get('CompletionPercentage')
if not eventItem['percentage']:
eventItem['percentage'] = message.get('PlaybackInfo', {}).get('PositionTicks') / \
message.get('Item', {}).get('RunTimeTicks') * 100
if message.get('Session'):
eventItem['ip'] = message.get('Session').get('RemoteEndPoint')
eventItem['device_name'] = message.get('Session').get('DeviceName')
eventItem['client'] = message.get('Session').get('Client')
if message.get("User"):
eventItem['user_name'] = message.get("User").get('Name')
return eventItem

View File

@@ -8,7 +8,6 @@ from app.utils import RequestUtils, SystemUtils, ExceptionUtils
class Jellyfin(_IMediaClient):
schema = "jellyfin"
server_type = MediaServerType.JELLYFIN.value
_client_config = {}
@@ -422,28 +421,4 @@ class Jellyfin(_IMediaClient):
"""
获取正在播放的会话
"""
if not self._host or not self._apikey:
return []
playing_sessions = []
req_url = "%sSessions?api_key=%s" % (self._host, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
sessions = res.json()
for session in sessions:
if session.get("NowPlayingItem"):
playing_sessions.append(session)
return playing_sessions
except Exception as e:
ExceptionUtils.exception_traceback(e)
return []
def get_webhook_message(self, message):
"""
解析Jellyfin报文
"""
eventItem = {'event': message.get('NotificationType', ''),
'item_name': message.get('Name'),
'user_name': message.get('NotificationUsername')
}
return eventItem
pass

View File

@@ -98,16 +98,15 @@ class Plex(_IMediaClient):
if not self._plex:
return {}
sections = self._plex.library.sections()
MovieCount = SeriesCount = SongCount = EpisodeCount = 0
MovieCount = SeriesCount = SongCount = 0
for sec in sections:
if sec.type == "movie":
MovieCount += sec.totalSize
if sec.type == "show":
SeriesCount += sec.totalSize
EpisodeCount += sec.totalViewSize(libtype='episode')
if sec.type == "artist":
SongCount += sec.totalSize
return {"MovieCount": MovieCount, "SeriesCount": SeriesCount, "SongCount": SongCount, "EpisodeCount": EpisodeCount}
return {"MovieCount": MovieCount, "SeriesCount": SeriesCount, "SongCount": SongCount, "EpisodeCount": 0}
def get_movies(self, title, year=None):
"""
@@ -186,13 +185,6 @@
libraries.append({"id": library.key, "name": library.title})
return libraries
def get_iteminfo(self, itemid):
"""
获取单个项目详情
"""
return None
def get_items(self, parent):
"""
获取媒体服务器所有媒体库列表
@@ -221,55 +213,4 @@
"""
获取正在播放的会话
"""
if not self._plex:
return []
sessions = self._plex.sessions()
ret_sessions = []
for session in sessions:
ret_sessions.append({
"type": session.TAG,
"bitrate": sum([m.bitrate for m in session.media]),
"address": session.player.address
})
return ret_sessions
def get_webhook_message(self, message):
"""
解析Plex报文
eventItem 字段的含义
event 事件类型
item_type 媒体类型 TV,MOV
item_name TV:琅琊榜 S1E6 剖心明志 虎口脱险
MOV:猪猪侠大冒险(2001)
overview 剧情描述
"""
eventItem = {'event': message.get('event', '')}
if message.get('Metadata'):
if message.get('Metadata', {}).get('type') == 'episode':
eventItem['item_type'] = "TV"
eventItem['item_name'] = "%s %s%s %s" % (
message.get('Metadata', {}).get('grandparentTitle'),
"S" + str(message.get('Metadata', {}).get('parentIndex')),
"E" + str(message.get('Metadata', {}).get('index')),
message.get('Metadata', {}).get('title'))
if message.get('Metadata', {}).get('summary') and len(message.get('Metadata', {}).get('summary')) > 100:
eventItem['overview'] = str(message.get('Metadata', {}).get('summary'))[:100] + "..."
else:
eventItem['overview'] = message.get('Metadata', {}).get('summary')
else:
eventItem['item_type'] = "MOV"
eventItem['item_name'] = "%s %s" % (
message.get('Metadata', {}).get('title'), "(" + str(message.get('Metadata', {}).get('year')) + ")")
if len(message.get('Metadata', {}).get('summary')) > 100:
eventItem['overview'] = str(message.get('Metadata', {}).get('summary'))[:100] + "..."
else:
eventItem['overview'] = message.get('Metadata', {}).get('summary')
if message.get('Player'):
eventItem['ip'] = message.get('Player').get('publicAddress')
eventItem['client'] = message.get('Player').get('title')
# 这里给个空,防止拼消息的时候出现None
eventItem['device_name'] = ' '
if message.get('Account'):
eventItem['user_name'] = message.get("Account").get('title')
return eventItem
pass

View File

@@ -4,8 +4,6 @@ import log
from app.conf import ModuleConf
from app.db import MediaDb
from app.helper import ProgressHelper, SubmoduleHelper
from app.media import Media
from app.message import Message
from app.utils import ExceptionUtils
from app.utils.commons import singleton
from app.utils.types import MediaServerType
@@ -22,22 +20,18 @@ class MediaServer:
_server = None
mediadb = None
progress = None
message = None
media = None
def __init__(self):
self._mediaserver_schemas = SubmoduleHelper.import_submodules(
'app.mediaserver.client',
filter_func=lambda _, obj: hasattr(obj, 'schema')
)
log.debug(f"【MediaServer】加载媒体服务器:{self._mediaserver_schemas}")
log.debug(f"【MediaServer】: 已经加载媒体服务器:{self._mediaserver_schemas}")
self.init_config()
def init_config(self):
self.mediadb = MediaDb()
self.message = Message()
self.progress = ProgressHelper()
self.media = Media()
# 当前使用的媒体库服务器
_type = Config().get_config('media').get('media_server') or 'emby'
self._server_type = ModuleConf.MEDIASERVER_DICT.get(_type)
@@ -111,8 +105,6 @@
"""
if not self.server:
return None
if not item_id:
return None
return self.server.get_image_by_id(item_id, image_type)
def get_no_exists_episodes(self, meta_info,
@@ -243,8 +235,6 @@
"""
if not self.server:
return None
if not itemid:
return None
return self.server.get_iteminfo(itemid)
def get_playing_sessions(self):
@@ -254,27 +244,3 @@
if not self.server:
return None
return self.server.get_playing_sessions()
def webhook_message_handler(self, message: str, channel: MediaServerType):
"""
处理Webhook消息
"""
if not self.server:
return
if channel != self._server_type:
return
event_info = self.server.get_webhook_message(message)
if event_info:
# 获取消息图片
image_url = None
if event_info.get("item_type") == "TV":
item_info = self.get_iteminfo(event_info.get('item_id'))
if item_info:
image_url = self.media.get_episode_images(item_info.get('ProviderIds', {}).get('Tmdb'),
event_info.get('season_id'),
event_info.get('episode_id'))
else:
image_url = self.get_image_by_id(event_info.get('item_id'), "Backdrop")
self.message.send_mediaserver_message(event_info=event_info,
channel=channel.value,
image_url=image_url)

View File

@@ -0,0 +1,198 @@
import time
from app.message import Message
from app.mediaserver import MediaServer
from app.media import Media
from web.backend.web_utils import WebUtils
class WebhookEvent:
message = None
mediaserver = None
media = None
def __init__(self):
self.message = Message()
self.mediaserver = MediaServer()
self.media = Media()
@staticmethod
def __parse_plex_msg(message):
"""
解析Plex报文
"""
eventItem = {'event': message.get('event', {}),
'item_name': message.get('Metadata', {}).get('title'),
'user_name': message.get('Account', {}).get('title')
}
return eventItem
@staticmethod
def __parse_jellyfin_msg(message):
"""
解析Jellyfin报文
"""
eventItem = {'event': message.get('NotificationType', {}),
'item_name': message.get('Name'),
'user_name': message.get('NotificationUsername')
}
return eventItem
@staticmethod
def __parse_emby_msg(message):
"""
解析Emby报文
"""
eventItem = {'event': message.get('Event', {})}
if message.get('Item'):
if message.get('Item', {}).get('Type') == 'Episode':
eventItem['item_type'] = "TV"
eventItem['item_name'] = "%s %s%s %s" % (
message.get('Item', {}).get('SeriesName'),
"S" + str(message.get('Item', {}).get('ParentIndexNumber')),
"E" + str(message.get('Item', {}).get('IndexNumber')),
message.get('Item', {}).get('Name'))
eventItem['item_id'] = message.get('Item', {}).get('SeriesId')
eventItem['season_id'] = message.get('Item', {}).get('ParentIndexNumber')
eventItem['episode_id'] = message.get('Item', {}).get('IndexNumber')
eventItem['tmdb_id'] = message.get('Item', {}).get('ProviderIds', {}).get('Tmdb')
if message.get('Item', {}).get('Overview') and len(message.get('Item', {}).get('Overview')) > 100:
eventItem['overview'] = str(message.get('Item', {}).get('Overview'))[:100] + "..."
else:
eventItem['overview'] = message.get('Item', {}).get('Overview')
eventItem['percentage'] = message.get('TranscodingInfo', {}).get('CompletionPercentage')
else:
eventItem['item_type'] = "MOV"
eventItem['item_name'] = "%s %s" % (
message.get('Item', {}).get('Name'), "(" + str(message.get('Item', {}).get('ProductionYear')) + ")")
eventItem['item_path'] = message.get('Item', {}).get('Path')
eventItem['item_id'] = message.get('Item', {}).get('Id')
eventItem['tmdb_id'] = message.get('Item', {}).get('ProviderIds', {}).get('Tmdb')
if len(message.get('Item', {}).get('Overview')) > 100:
eventItem['overview'] = str(message.get('Item', {}).get('Overview'))[:100] + "..."
else:
eventItem['overview'] = message.get('Item', {}).get('Overview')
eventItem['percentage'] = message.get('TranscodingInfo', {}).get('CompletionPercentage')
if message.get('Session'):
eventItem['ip'] = message.get('Session').get('RemoteEndPoint')
eventItem['device_name'] = message.get('Session').get('DeviceName')
eventItem['client'] = message.get('Session').get('Client')
if message.get("User"):
eventItem['user_name'] = message.get("User").get('Name')
return eventItem
def plex_action(self, message):
"""
执行Plex webhook动作
"""
event_info = self.__parse_plex_msg(message)
if event_info.get("event") in ["media.play", "media.stop"]:
self.send_webhook_message(event_info, 'plex')
def jellyfin_action(self, message):
"""
执行Jellyfin webhook动作
"""
event_info = self.__parse_jellyfin_msg(message)
if event_info.get("event") in ["PlaybackStart", "PlaybackStop"]:
self.send_webhook_message(event_info, 'jellyfin')
def emby_action(self, message):
"""
执行Emby webhook动作
"""
event_info = self.__parse_emby_msg(message)
if event_info.get("event") == "system.webhooktest":
return
elif event_info.get("event") in ["playback.start",
"playback.stop",
"user.authenticated",
"user.authenticationfailed"]:
self.send_webhook_message(event_info, 'emby')
def send_webhook_message(self, event_info, channel):
"""
发送消息
"""
_webhook_actions = {
"system.webhooktest": "测试",
"playback.start": "开始播放",
"playback.stop": "停止播放",
"playback.pause": "暂停播放",
"playback.unpause": "开始播放",
"user.authenticated": "登录成功",
"user.authenticationfailed": "登录失败",
"media.play": "开始播放",
"PlaybackStart": "开始播放",
"PlaybackStop": "停止播放",
"media.stop": "停止播放",
"item.rate": "标记了",
}
_webhook_images = {
"emby": "https://emby.media/notificationicon.png",
"plex": "https://www.plex.tv/wp-content/uploads/2022/04/new-logo-process-lines-gray.png",
"jellyfin": "https://play-lh.googleusercontent.com/SCsUK3hCCRqkJbmLDctNYCfehLxsS4ggD1ZPHIFrrAN1Tn9yhjmGMPep2D9lMaaa9eQi"
}
if self.is_ignore_webhook_message(event_info.get('user_name'), event_info.get('device_name')):
return
# 消息标题
if event_info.get('item_type') == "TV":
message_title = f"{_webhook_actions.get(event_info.get('event'))}剧集 {event_info.get('item_name')}"
elif event_info.get('item_type') == "MOV":
message_title = f"{_webhook_actions.get(event_info.get('event'))}电影 {event_info.get('item_name')}"
else:
message_title = f"{_webhook_actions.get(event_info.get('event'))}"
# 消息内容
message_texts = []
if event_info.get('user_name'):
message_texts.append(f"用户:{event_info.get('user_name')}")
if event_info.get('device_name'):
message_texts.append(f"设备:{event_info.get('client')} {event_info.get('device_name')}")
if event_info.get('ip'):
message_texts.append(f"位置:{event_info.get('ip')} {WebUtils.get_location(event_info.get('ip'))}")
if event_info.get('percentage'):
percentage = round(float(event_info.get('percentage')), 2)
message_texts.append(f"进度:{percentage}%")
if event_info.get('overview'):
message_texts.append(f"剧情:{event_info.get('overview')}")
message_texts.append(f"时间:{time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}")
# 消息图片
image_url = ''
if event_info.get('item_id'):
if event_info.get("item_type") == "TV":
iteminfo = self.mediaserver.get_iteminfo(event_info.get('item_id'))
tmdb_id = iteminfo.get('ProviderIds', {}).get('Tmdb')
try:
# 从tmdb获取剧集某季某集图片
image_url = self.media.get_episode_images(tmdb_id,
event_info.get('season_id'),
event_info.get('episode_id'))
except IOError:
pass
if not image_url:
image_url = self.mediaserver.get_image_by_id(event_info.get('item_id'),
"Backdrop") or _webhook_images.get(channel)
else:
image_url = _webhook_images.get(channel)
# 发送消息
self.message.send_mediaserver_message(title=message_title, text="\n".join(message_texts), image=image_url)
def is_ignore_webhook_message(self, user_name, device_name):
"""
判断是否忽略通知
"""
if not user_name and not device_name:
return False
webhook_ignore = self.message.get_webhook_ignore()
if not webhook_ignore:
return False
if user_name in webhook_ignore or \
device_name in webhook_ignore or \
(user_name + ':' + device_name) in webhook_ignore:
return True
return False

View File
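The Emby branch of `__parse_emby_msg` above builds the display name `SeriesName S<season>E<episode> Name` for episode events. A trimmed, self-contained restatement of just that branch (the payload below is sample data):

```python
def parse_emby_episode(message):
    """Trimmed restatement of the Episode branch of __parse_emby_msg,
    keeping only the naming and id fields."""
    event_item = {'event': message.get('Event', '')}
    item = message.get('Item', {})
    if item.get('Type') == 'Episode':
        event_item['item_type'] = "TV"
        event_item['item_name'] = "%s %s%s %s" % (
            item.get('SeriesName'),
            "S" + str(item.get('ParentIndexNumber')),
            "E" + str(item.get('IndexNumber')),
            item.get('Name'))
        event_item['season_id'] = item.get('ParentIndexNumber')
        event_item['episode_id'] = item.get('IndexNumber')
    return event_item

SAMPLE_MSG = {"Event": "playback.start",
              "Item": {"Type": "Episode", "SeriesName": "琅琊榜",
                       "ParentIndexNumber": 1, "IndexNumber": 6,
                       "Name": "剖心明志 虎口脱险"}}
```

This matches the `item_name` format the message sender later prefixes with the action text (开始播放, 停止播放, ...).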

@@ -1,29 +1,26 @@
import json
import re
import time
from enum import Enum
import log
from app.conf import ModuleConf
from app.helper import DbHelper, SubmoduleHelper
from app.message.message_center import MessageCenter
from app.plugins import EventManager
from app.utils import StringUtils, ExceptionUtils
from app.utils.commons import singleton
from app.utils.types import SearchType, MediaType, EventType
from app.utils.types import SearchType, MediaType
from config import Config
from web.backend.web_utils import WebUtils
@singleton
class Message(object):
dbhelper = None
messagecenter = None
eventmanager = None
_message_schemas = []
_active_clients = []
_active_interactive_clients = {}
_client_configs = {}
_webhook_ignore = None
_domain = None
def __init__(self):
@@ -31,14 +28,12 @@ class Message(object):
'app.message.client',
filter_func=lambda _, obj: hasattr(obj, 'schema')
)
log.debug(f"【Message】加载消息服务:{self._message_schemas}")
log.debug(f"【Message】: 已经加载消息服务:{self._message_schemas}")
self.init_config()
def init_config(self):
self.dbhelper = DbHelper()
self.messagecenter = MessageCenter()
self.eventmanager = EventManager()
self._domain = Config().get_domain()
# 停止旧服务
if self._active_clients:
@@ -98,11 +93,17 @@
state, ret_msg = self.__build_class(ctype=ctype,
conf=config).send_msg(title="测试",
text="这是一条测试消息",
url="https://github.com/NAStool/nas-tools")
url="https://github.com/jxxghp/nas-tools")
if not state:
log.error(f"【Message】{ctype} 发送测试消息失败:%s" % ret_msg)
return state
def get_webhook_ignore(self):
"""
获取Emby/Jellyfin不通知的设备清单
"""
return self._webhook_ignore or []
def __sendmsg(self, client, title, text="", image="", url="", user_id=""):
"""
通用消息发送
@@ -233,8 +234,6 @@
msg_text = f"{msg_text}\n描述:{can_item.description}"
# 插入消息中心
self.messagecenter.insert_system_message(level="INFO", title=msg_title, content=msg_text)
# 触发事件
self.eventmanager.send_event(EventType.DownloadAdd, can_item.to_dict())
# 发送消息
for client in self._active_clients:
if "download_start" in client.get("switchs"):
@@ -270,8 +269,6 @@
msg_str = f"{msg_str}{exist_filenum}个文件已存在"
# 插入消息中心
self.messagecenter.insert_system_message(level="INFO", title=msg_title, content=msg_str)
# 触发事件
self.eventmanager.send_event(EventType.TransferFinished, media_info.to_dict())
# 发送消息
for client in self._active_clients:
if "transfer_finished" in client.get("switchs"):
@@ -304,8 +301,6 @@
msg_str = f"{msg_str},总大小:{StringUtils.str_filesize(item_info.size)},来自:{in_from.value}"
# 插入消息中心
self.messagecenter.insert_system_message(level="INFO", title=msg_title, content=msg_str)
# 触发事件
self.eventmanager.send_event(EventType.TransferFinished, item_info.to_dict())
# 发送消息
for client in self._active_clients:
if "transfer_finished" in client.get("switchs"):
@@ -324,8 +319,6 @@
text = f"站点:{item.site}\n种子名称:{item.org_string}\n种子链接:{item.enclosure}\n错误信息:{error_msg}"
# 插入消息中心
self.messagecenter.insert_system_message(level="INFO", title=title, content=text)
# 触发事件
self.eventmanager.send_event(EventType.DownloadFail, item.to_dict())
# 发送消息
for client in self._active_clients:
if "download_fail" in client.get("switchs"):
@@ -352,8 +345,6 @@
msg_str = f"{msg_str},用户:{media_info.user_name}"
# 插入消息中心
self.messagecenter.insert_system_message(level="INFO", title=msg_title, content=msg_str)
# 触发事件
self.eventmanager.send_event(EventType.SubscribeAdd, media_info.to_dict())
# 发送消息
for client in self._active_clients:
if "rss_added" in client.get("switchs"):
@@ -381,8 +372,6 @@
msg_str = f"{msg_str}{media_info.get_vote_string()}"
# 插入消息中心
self.messagecenter.insert_system_message(level="INFO", title=msg_title, content=msg_str)
# 触发事件
self.eventmanager.send_event(EventType.SubscribeFinished, media_info.to_dict())
# 发送消息
for client in self._active_clients:
if "rss_finished" in client.get("switchs"):
@@ -442,9 +431,6 @@
text = f"源路径:{path}\n原因:{text}"
# 插入消息中心
self.messagecenter.insert_system_message(level="INFO", title=title, content=text)
# 触发事件
self.eventmanager.send_event(EventType.TransferFail,
{"path": path, "count": count, "reason": text})
# 发送消息
for client in self._active_clients:
if "transfer_fail" in client.get("switchs"):
@@ -491,75 +477,22 @@
url="brushtask"
)
def send_mediaserver_message(self, event_info: dict, channel, image_url):
def send_mediaserver_message(self, title, text, image):
"""
发送媒体服务器的消息
:param event_info: 事件信息
:param channel: 服务器类型:
:param image_url: 图片
"""
if not event_info or not channel:
if not title or not text or not image:
return
# 拼装消息内容
_webhook_actions = {
"system.webhooktest": "测试",
"playback.start": "开始播放",
"playback.stop": "停止播放",
"user.authenticated": "登录成功",
"user.authenticationfailed": "登录失败",
"media.play": "开始播放",
"media.stop": "停止播放",
"PlaybackStart": "开始播放",
"PlaybackStop": "停止播放",
"item.rate": "标记了",
}
_webhook_images = {
"Emby": "https://emby.media/notificationicon.png",
"Plex": "https://www.plex.tv/wp-content/uploads/2022/04/new-logo-process-lines-gray.png",
"Jellyfin": "https://play-lh.googleusercontent.com/SCsUK3hCCRqkJbmLDctNYCfehLxsS4ggD1ZPHIFrrAN1Tn9yhjmGMPep2D9lMaaa9eQi"
}
if not _webhook_actions.get(event_info.get('event')):
return
# 消息标题
if event_info.get('item_type') == "TV":
message_title = f"{_webhook_actions.get(event_info.get('event'))}剧集 {event_info.get('item_name')}"
elif event_info.get('item_type') == "MOV":
message_title = f"{_webhook_actions.get(event_info.get('event'))}电影 {event_info.get('item_name')}"
else:
message_title = f"{_webhook_actions.get(event_info.get('event'))}"
# 消息内容
message_texts = []
if event_info.get('user_name'):
message_texts.append(f"用户:{event_info.get('user_name')}")
if event_info.get('device_name'):
message_texts.append(f"设备:{event_info.get('client')} {event_info.get('device_name')}")
if event_info.get('ip'):
message_texts.append(f"位置:{event_info.get('ip')} {WebUtils.get_location(event_info.get('ip'))}")
if event_info.get('percentage'):
percentage = round(float(event_info.get('percentage')), 2)
message_texts.append(f"进度:{percentage}%")
if event_info.get('overview'):
message_texts.append(f"剧情:{event_info.get('overview')}")
message_texts.append(f"时间:{time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}")
# 消息图片
if not image_url:
image_url = _webhook_images.get(channel)
# 插入消息中心
message_content = "\n".join(message_texts)
self.messagecenter.insert_system_message(level="INFO", title=message_title, content=message_content)
self.messagecenter.insert_system_message(level="INFO", title=title, content=text)
# 发送消息
for client in self._active_clients:
if "mediaserver_message" in client.get("switchs"):
self.__sendmsg(
client=client,
title=message_title,
text=message_content,
image=image_url
title=title,
text=text,
image=image
)
def send_custom_message(self, title, text="", image=""):

View File

@ -1,2 +0,0 @@
from .event_manager import EventManager, EventHandler, Event
from .plugin_manager import PluginManager

View File

@ -1,105 +0,0 @@
from queue import Queue, Empty
import log
from app.utils.commons import singleton
from app.utils.types import EventType
@singleton
class EventManager:
"""
事件管理器
"""
# 事件队列
_eventQueue = None
# 事件响应函数字典
_handlers = {}
def __init__(self):
self.init_config()
def init_config(self):
# 事件队列
self._eventQueue = Queue()
# 事件响应函数字典
self._handlers = {}
def get_event(self):
"""
获取事件
"""
try:
event = self._eventQueue.get(block=True, timeout=1)
handlerList = self._handlers.get(event.event_type)
return event, handlerList
except Empty:
return None, []
def add_event_listener(self, etype: EventType, handler):
"""
注册事件处理
"""
try:
handlerList = self._handlers[etype.value]
except KeyError:
handlerList = []
self._handlers[etype.value] = handlerList
if handler not in handlerList:
handlerList.append(handler)
log.info(f"已注册事件:{handler}")
def remove_event_listener(self, etype: EventType, handler):
"""
移除监听器的处理函数
"""
try:
handlerList = self._handlers[etype.value]
if handler in handlerList:
handlerList.remove(handler)
if not handlerList:
del self._handlers[etype.value]
except KeyError:
pass
def send_event(self, etype: EventType, data: dict = None):
"""
发送事件
"""
if etype not in EventType:
return
event = Event(etype.value)
event.event_data = data or {}
self._eventQueue.put(event)
def register(self, etype: [EventType, list]):
"""
事件注册
:param etype: 事件类型
"""
def decorator(f):
if isinstance(etype, list):
for et in etype:
self.add_event_listener(et, f)
else:
self.add_event_listener(etype, f)
return f
return decorator
class Event(object):
"""
事件对象
"""
def __init__(self, event_type=None):
# 事件类型
self.event_type = event_type
# 字典用于保存具体的事件数据
self.event_data = {}
# 实例引用,用于注册事件
EventHandler = EventManager()
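The register/send/get flow of the EventManager above can be shown in isolation. The following is a condensed, self-contained sketch; `MiniEventManager` and the single `EventType` member are illustrative stand-ins, not the real module:

```python
from enum import Enum
from queue import Queue, Empty

# Illustrative stand-in for the real app.utils.types.EventType
class EventType(Enum):
    SubtitleDownload = "subtitle.download"

class MiniEventManager:
    def __init__(self):
        self._queue = Queue()
        self._handlers = {}

    def register(self, etype):
        # Decorator form mirrors EventHandler.register in the source
        def decorator(f):
            self._handlers.setdefault(etype.value, []).append(f)
            return f
        return decorator

    def send_event(self, etype, data=None):
        self._queue.put((etype.value, data or {}))

    def get_event(self):
        # Non-blocking variant; the real manager blocks with a 1s timeout
        try:
            etype, data = self._queue.get(block=False)
            return (etype, data), self._handlers.get(etype, [])
        except Empty:
            return None, []

manager = MiniEventManager()
results = []

@manager.register(EventType.SubtitleDownload)
def on_subtitle(event):
    results.append(event)

manager.send_event(EventType.SubtitleDownload, {"file": "movie.mkv"})
event, handlers = manager.get_event()
for handler in handlers:
    handler(event)
```

In the real code this loop runs in the PluginManager's background thread, which pulls events off the queue and invokes each registered handler.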

View File

@ -1,46 +0,0 @@
from abc import ABCMeta, abstractmethod
class _IPluginModule(metaclass=ABCMeta):
"""
插件模块基类
"""
# 插件名称
module_name = ""
# 插件描述
module_desc = ""
# 插件图标
module_icon = ""
# 主题色
module_color = ""
# 插件版本
module_version = "1.0"
# 插件作者
module_author = ""
# 插件配置项ID前缀为了避免各插件配置表单相冲突配置表单元素ID自动在前面加上此前缀
module_config_prefix = "plugin_"
# 显示顺序
module_order = 0
@staticmethod
@abstractmethod
def get_fields():
"""
获取配置字典用于生成表单
"""
pass
@abstractmethod
def init_config(self, config: dict):
"""
生效配置信息
:param config: 配置信息字典
"""
pass
@abstractmethod
def stop_service(self):
"""
停止插件
"""
pass
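A plugin fulfils this base-class contract by declaring its metadata attributes and implementing the three abstract methods. Below is a self-contained copy of the base-class shape plus a hypothetical `HelloPlugin` subclass (illustrative only, not from the repository):

```python
from abc import ABCMeta, abstractmethod

# Condensed copy of the _IPluginModule contract shown above
class _IPluginModule(metaclass=ABCMeta):
    module_name = ""
    module_config_prefix = "plugin_"
    module_order = 0

    @staticmethod
    @abstractmethod
    def get_fields():
        pass

    @abstractmethod
    def init_config(self, config):
        pass

    @abstractmethod
    def stop_service(self):
        pass

# Hypothetical minimal plugin, mirroring what ChineseSubFinder etc. do
class HelloPlugin(_IPluginModule):
    module_name = "Hello"
    module_config_prefix = "hello_"

    def __init__(self):
        self._enabled = False

    @staticmethod
    def get_fields():
        # Same nested form-description shape the real plugins return
        return [{'type': 'div', 'content': [[{'id': 'enable', 'type': 'switch'}]]}]

    def init_config(self, config):
        self._enabled = bool((config or {}).get("enable"))

    def stop_service(self):
        self._enabled = False

plugin = HelloPlugin()
plugin.init_config({"enable": True})
```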

View File

@ -1,177 +0,0 @@
import os.path
import log
from app.plugins import EventHandler
from app.plugins.modules._base import _IPluginModule
from app.utils import RequestUtils
from app.utils.types import MediaType, EventType
from config import Config
class ChineseSubFinder(_IPluginModule):
# 插件名称
module_name = "ChineseSubFinder"
# 插件描述
module_desc = "通知ChineseSubFinder下载字幕。"
# 插件图标
module_icon = "chinesesubfinder.png"
# 主题色
module_color = "bg-green"
# 插件版本
module_version = "1.0"
# 插件作者
module_author = "jxxghp"
# 插件配置项ID前缀
module_config_prefix = "chinesesubfinder_"
# 加载顺序
module_order = 3
# 私有属性
_save_tmp_path = None
_host = None
_api_key = None
_remote_path = None
_local_path = None
def init_config(self, config: dict = None):
self._save_tmp_path = Config().get_temp_path()
if not os.path.exists(self._save_tmp_path):
os.makedirs(self._save_tmp_path)
if config:
self._api_key = config.get("api_key")
self._host = config.get('host')
if self._host:
if not self._host.startswith('http'):
self._host = "http://" + self._host
if not self._host.endswith('/'):
self._host = self._host + "/"
self._local_path = config.get("local_path")
self._remote_path = config.get("remote_path")
@staticmethod
def get_fields():
return [
# 同一板块
{
'type': 'div',
'content': [
# 同一行
[
{
'title': '服务器地址',
'required': "required",
'tooltip': '配置IP地址和端口如为https则需要增加https://前缀',
'type': 'text',
'content': [
{
'id': 'host',
'placeholder': 'http://127.0.0.1:19035'
}
]
},
{
'title': 'Api Key',
'required': "required",
'tooltip': '在ChineseSubFinder->配置中心->实验室->API Key处生成',
'type': 'text',
'content': [
{
'id': 'api_key',
'placeholder': ''
}
]
}
],
[
{
'title': '本地路径',
'required': "required",
'tooltip': 'NAStool访问媒体库的路径如NAStool与ChineseSubFinder的媒体目录路径一致则不用配置',
'type': 'text',
'content': [
{
'id': 'local_path',
'placeholder': '本地映射路径'
}
]
},
{
'title': '远程路径',
'required': "required",
'tooltip': 'ChineseSubFinder的媒体目录访问路径会用此路径替换掉本地路径后传递给ChineseSubFinder下载字幕如NAStool与ChineseSubFinder的媒体目录路径一致则不用配置',
'type': 'text',
'content': [
{
'id': 'remote_path',
'placeholder': '远程映射路径'
}
]
}
]
]
}
]
def stop_service(self):
pass
@EventHandler.register(EventType.SubtitleDownload)
def download_chinesesubfinder(self, event):
"""
调用ChineseSubFinder下载字幕
"""
if not self._host or not self._api_key:
return
item = event.event_data
if not item:
return
req_url = "%sapi/v1/add-job" % self._host
item_type = item.get("type")
item_bluray = item.get("bluray")
item_file = item.get("file")
item_file_ext = item.get("file_ext")
if item_bluray:
file_path = "%s.mp4" % item_file
else:
if os.path.splitext(item_file)[-1] != item_file_ext:
file_path = "%s%s" % (item_file, item_file_ext)
else:
file_path = item_file
# 路径替换
if self._local_path and self._remote_path and file_path.startswith(self._local_path):
file_path = file_path.replace(self._local_path, self._remote_path).replace('\\', '/')
# 一个名称只建一个任务
log.info("【Plugin】通知ChineseSubFinder下载字幕: %s" % file_path)
params = {
"video_type": 0 if item_type == MediaType.MOVIE.value else 1,
"physical_video_file_full_path": file_path,
"task_priority_level": 3,
"media_server_inside_video_id": "",
"is_bluray": item_bluray
}
try:
res = RequestUtils(headers={
"Authorization": "Bearer %s" % self._api_key
}).post(req_url, json=params)
if not res or res.status_code != 200:
log.error("【Plugin】调用ChineseSubFinder API失败")
else:
# 如果文件目录没有识别的nfo元数据此接口会返回控制符推测是ChineseSubFinder的原因
# emby refresh元数据是异步的
if res.text:
job_id = res.json().get("job_id")
message = res.json().get("message")
if not job_id:
log.warn("【Plugin】ChineseSubFinder下载字幕出错%s" % message)
else:
log.info("【Plugin】ChineseSubFinder任务添加成功%s" % job_id)
else:
log.error("【Plugin】%s 目录缺失nfo元数据" % file_path)
except Exception as e:
log.error("【Plugin】连接ChineseSubFinder出错" + str(e))

View File

@ -1,272 +0,0 @@
import datetime
import os
import re
import shutil
from functools import lru_cache
from urllib.parse import quote
from pyquery import PyQuery
import log
from app.helper.chrome_helper import ChromeHelper
from app.plugins import EventHandler
from app.plugins.modules._base import _IPluginModule
from app.utils import RequestUtils, PathUtils, SystemUtils, ExceptionUtils
from app.utils.types import MediaType, EventType
from config import Config, RMT_SUBEXT
class OpenSubtitles(_IPluginModule):
# 插件名称
module_name = "OpenSubtitles"
# 插件描述
module_desc = "从opensubtitles.org下载中文字幕。"
# 插件图标
module_icon = "opensubtitles.png"
# 主题色
module_color = ""
# 插件版本
module_version = "1.0"
# 插件作者
module_author = "jxxghp"
# 插件配置项ID前缀
module_config_prefix = "opensubtitles_"
# 加载顺序
module_order = 2
# 私有属性
_cookie = ""
_ua = None
_url_imdbid = "https://www.opensubtitles.org/zh/search/imdbid-%s/sublanguageid-chi"
_url_keyword = "https://www.opensubtitles.org/zh/search/moviename-%s/sublanguageid-chi"
_save_tmp_path = None
_enable = False
def __init__(self):
self._ua = Config().get_ua()
def init_config(self, config: dict):
self._save_tmp_path = Config().get_temp_path()
if not os.path.exists(self._save_tmp_path):
os.makedirs(self._save_tmp_path)
if config:
self._enable = config.get("enable")
@staticmethod
def get_fields():
return [
# 同一板块
{
'type': 'div',
'content': [
# 同一行
[
{
'title': '开启opensubtitles.org字幕下载',
'required': "",
'tooltip': '需要确保网络能正常连通www.opensubtitles.org',
'type': 'switch',
'id': 'enable',
}
]
]
}
]
def stop_service(self):
pass
@EventHandler.register(EventType.SubtitleDownload)
def download_opensubtitles(self, event):
"""
调用OpenSubtitles Api下载字幕
"""
if not self._enable:
return
item = event.event_data
if not item:
return
if item.get("type") != MediaType.MOVIE.value and not item.get("imdb_id"):
log.warn("【Plugin】电视剧类型需要imdbid才能检索字幕")
return
# 查询名称
item_name = item.get("en_name") or item.get("cn_name")
# 查询IMDBID
imdb_id = item.get("imdb_id")
# 查询年份
item_year = item.get("year")
# 查询季
item_season = item.get("season")
# 查询集
item_episode = item.get("episode")
# 文件路径
item_file = item.get("file")
# 后缀
item_file_ext = item.get("file_ext")
log.info("【Plugin】开始从Opensubtitle.org检索字幕: %simdbid=%s" % (item_name, imdb_id))
subtitles = self.search_subtitles(imdb_id=imdb_id, name=item_name, year=item_year)
if not subtitles:
log.warn("【Plugin】%s 未检索到字幕" % item_name)
else:
log.info("【Plugin】opensubtitles.org返回数据%s" % len(subtitles))
# 成功数
subtitle_count = 0
for subtitle in subtitles:
# 标题
if not imdb_id:
if str(subtitle.get('title')) != "%s (%s)" % (item_name, item_year):
continue
# 季
if item_season \
and str(subtitle.get('season').replace("Season", "").strip()) != str(item_season):
continue
# 集
if item_episode \
and str(subtitle.get('episode')) != str(item_episode):
continue
# 字幕文件名
SubFileName = subtitle.get('description')
# 下载链接
Download_Link = subtitle.get('link')
# 下载后的字幕文件路径
Media_File = "%s.chi.zh-cn%s" % (item_file, item_file_ext)
log.info("【Plugin】正在从opensubtitles.org下载字幕 %s%s " % (SubFileName, Media_File))
# 下载
ret = RequestUtils(cookies=self._cookie,
headers=self._ua).get_res(Download_Link)
if ret and ret.status_code == 200:
# 保存ZIP
file_name = self.__get_url_subtitle_name(ret.headers.get('content-disposition'), Download_Link)
if not file_name:
continue
zip_file = os.path.join(self._save_tmp_path, file_name)
zip_path = os.path.splitext(zip_file)[0]
with open(zip_file, 'wb') as f:
f.write(ret.content)
# 解压文件
shutil.unpack_archive(zip_file, zip_path, format='zip')
# 遍历转移文件
for sub_file in PathUtils.get_dir_files(in_path=zip_path, exts=RMT_SUBEXT):
self.__transfer_subtitle(sub_file, Media_File)
# 删除临时文件
try:
shutil.rmtree(zip_path)
os.remove(zip_file)
except Exception as err:
ExceptionUtils.exception_traceback(err)
else:
log.error("【Plugin】下载字幕文件失败%s" % Download_Link)
continue
# 最多下载3个字幕
subtitle_count += 1
if subtitle_count > 2:
break
if not subtitle_count:
if item_episode:
log.info("【Plugin】%s 第%s季 第%s集 未找到符合条件的字幕" % (
item_name, item_season, item_episode))
else:
log.info("【Plugin】%s 未找到符合条件的字幕" % item_name)
else:
log.info("【Plugin】%s 共下载了 %s 个字幕" % (item_name, subtitle_count))
def search_subtitles(self, imdb_id, name, year):
if imdb_id:
return self.__search_subtitles_by_imdbid(imdb_id)
else:
return self.__search_subtitles_by_keyword("%s %s" % (name, year))
def __search_subtitles_by_imdbid(self, imdbid):
"""
按IMDBID搜索OpenSubtitles
"""
return self.__parse_opensubtitles_results(url=self._url_imdbid % str(imdbid).replace("tt", ""))
def __search_subtitles_by_keyword(self, keyword):
"""
按关键字搜索OpenSubtitles
"""
return self.__parse_opensubtitles_results(url=self._url_keyword % quote(keyword))
@classmethod
@lru_cache(maxsize=128)
def __parse_opensubtitles_results(cls, url):
"""
搜索并解析结果
"""
chrome = ChromeHelper()
if not chrome.get_status():
log.error("【Plugin】未找到浏览器内核当前环境无法检索opensubtitles字幕")
return []
# 访问页面
if not chrome.visit(url):
log.error("【Plugin】无法连接opensubtitles.org")
return []
# 源码
html_text = chrome.get_html()
# Cookie
cls._cookie = chrome.get_cookies()
# 解析列表
ret_subtitles = []
html_doc = PyQuery(html_text)
global_season = ''
for tr in html_doc('#search_results > tbody > tr:not([style])'):
tr_doc = PyQuery(tr)
# 季
season = tr_doc('span[id^="season-"] > a > b').text()
if season:
global_season = season
continue
# 集
episode = tr_doc('span[itemprop="episodeNumber"]').text()
# 标题
title = tr_doc('strong > a.bnone').text()
# 描述 下载链接
if not global_season:
description = tr_doc('td:nth-child(1)').text()
if description and len(description.split("\n")) > 1:
description = description.split("\n")[1]
link = tr_doc('td:nth-child(5) > a').attr("href")
else:
description = tr_doc('span[itemprop="name"]').text()
link = tr_doc('a[href^="/download/"]').attr("href")
if link:
link = "https://www.opensubtitles.org%s" % link
else:
continue
ret_subtitles.append({
"season": global_season,
"episode": episode,
"title": title,
"description": description,
"link": link
})
return ret_subtitles
@staticmethod
def __get_url_subtitle_name(disposition, url):
"""
从下载请求中获取字幕文件名
"""
fname = re.findall(r"filename=\"?(.+)\"?", disposition or "")
if fname:
fname = str(fname[0].encode('ISO-8859-1').decode()).split(";")[0].strip()
if fname.endswith('"'):
fname = fname[:-1]
elif url and os.path.splitext(url)[-1] in (RMT_SUBEXT + ['.zip']):
fname = url.split("/")[-1]
else:
fname = str(datetime.datetime.now())
return fname
@staticmethod
def __transfer_subtitle(source_sub_file, media_file):
"""
转移字幕
"""
new_sub_file = "%s%s" % (os.path.splitext(media_file)[0], os.path.splitext(source_sub_file)[-1])
if os.path.exists(new_sub_file):
return 1
else:
return SystemUtils.copy(source_sub_file, new_sub_file)

View File

@ -1,394 +0,0 @@
from app.downloader import Downloader
from app.mediaserver import MediaServer
from app.plugins import EventHandler
from app.plugins.modules._base import _IPluginModule
from app.utils import ExceptionUtils
from app.utils.types import DownloaderType, MediaServerType, EventType
from app.helper.security_helper import SecurityHelper
from apscheduler.schedulers.background import BackgroundScheduler
from config import Config
import log
class SpeedLimiter(_IPluginModule):
# 插件名称
module_name = "播放限速"
# 插件描述
module_desc = "媒体服务器开始播放时,自动对下载器进行限速。"
# 插件图标
module_icon = "SpeedLimiter.jpg"
# 主题色
module_color = "bg-blue"
# 插件版本
module_version = "1.0"
# 插件作者
module_author = "Shurelol"
# 插件配置项ID前缀
module_config_prefix = "speedlimit_"
# 加载顺序
module_order = 1
# 私有属性
_downloader = None
_mediaserver = None
_scheduler = None
# 限速开关
_limit_enabled = False
_limit_flag = False
# QB
_qb_limit = False
_qb_download_limit = 0
_qb_upload_limit = 0
_qb_upload_ratio = 0
# TR
_tr_limit = False
_tr_download_limit = 0
_tr_upload_limit = 0
_tr_upload_ratio = 0
# 不限速地址
_unlimited_ips = {"ipv4": "0.0.0.0/0", "ipv6": "::/0"}
# 自动限速
_auto_limit = False
# 总带宽
_bandwidth = 0
@staticmethod
def get_fields():
return [
# 同一板块
{
'type': 'div',
'content': [
# 同一行
[
{
'title': 'Qbittorrent',
'required': "",
'tooltip': '媒体服务器播放时对Qbittorrent下载器进行限速不限速地址范围除外0或留空不启用',
'type': 'text',
'content': [
{
'id': 'qb_upload',
'placeholder': '上传限速KB/s'
},
{
'id': 'qb_download',
'placeholder': '下载限速KB/s'
}
]
},
{
'title': 'Transmission',
'required': "",
'tooltip': '媒体服务器播放时对Transmission下载器进行限速不限速地址范围除外0或留空不启用',
'type': 'text',
'content': [
{
'id': 'tr_upload',
'placeholder': '上传限速KB/s'
},
{
'id': 'tr_download',
'placeholder': '下载限速KB/s'
}
]
},
{
'title': '不限速地址范围',
'required': 'required',
'tooltip': '以下地址范围不进行限速处理,一般配置为局域网地址段;多个地址段用,号分隔配置为0.0.0.0/0,::/0则不做限制',
'type': 'text',
'content': [
{
'id': 'ipv4',
'placeholder': '192.168.1.0/24',
},
{
'id': 'ipv6',
'placeholder': 'FE80::/10',
}
]
}
]
]
},
{
'type': 'details',
'summary': '自动限速设置',
'tooltip': '设置后根据上行带宽及剩余比例自动计算限速数值',
'content': [
# 同一行
[
{
'title': '上行带宽',
'required': "",
'type': 'text',
'tooltip': '设置后将根据上行带宽、剩余比例、分配比例自动计算限速数值否则使用Qbittorrent、Transmisson设定的限速数值',
'content': [
{
'id': 'bandwidth',
'placeholder': 'Mbps留空不启用自动限速'
},
]
},
{
'title': '剩余比例',
'required': "",
'tooltip': '上行带宽扣除播放媒体比特率后乘以剩余比例为剩余带宽分配给下载器最大为1',
'type': 'text',
'content': [
{
'id': 'residual_ratio',
'placeholder': '0.5'
}
]
},
{
'title': '分配比例',
'required': "",
'tooltip': 'Qbittorrent与Transmission下载器分配剩余带宽比例如Qbittorrent下载器无需上传限速可设为0:xx可为任意正整数',
'type': 'text',
'content': [
{
'id': 'allocation',
'placeholder': '1:1'
}
]
}
]
]
}
]
def init_config(self, config=None):
self._downloader = Downloader()
self._mediaserver = MediaServer()
# 读取配置
if config:
try:
# 总带宽
self._bandwidth = int(float(config.get("bandwidth") or 0)) * 1000000
# 剩余比例
residual_ratio = float(config.get("residual_ratio") or 1)
if residual_ratio > 1:
residual_ratio = 1
# 分配比例
allocation = (config.get("allocation") or "1:1").split(":")
if len(allocation) != 2 or not str(allocation[0]).isdigit() or not str(allocation[-1]).isdigit():
allocation = ["1", "1"]
# QB上传限速
self._qb_upload_ratio = round(
int(allocation[0]) / (int(allocation[-1]) + int(allocation[0])) * residual_ratio, 2)
# TR上传限速
self._tr_upload_ratio = round(
int(allocation[-1]) / (int(allocation[-1]) + int(allocation[0])) * residual_ratio, 2)
except Exception as e:
ExceptionUtils.exception_traceback(e)
self._bandwidth = 0
self._qb_upload_ratio = 0
self._tr_upload_ratio = 0
# 自动限速开关
self._auto_limit = True if self._bandwidth and (self._qb_upload_ratio or self._tr_upload_ratio) else False
try:
# QB下载限速
self._qb_download_limit = int(float(config.get("qb_download") or 0)) * 1024
# QB上传限速
self._qb_upload_limit = int(float(config.get("qb_upload") or 0)) * 1024
except Exception as e:
ExceptionUtils.exception_traceback(e)
self._qb_download_limit = 0
self._qb_upload_limit = 0
# QB限速开关
self._qb_limit = True if self._qb_download_limit or self._qb_upload_limit or self._auto_limit else False
try:
# TR下载限速
self._tr_download_limit = int(float(config.get("tr_download") or 0))
# TR上传限速
self._tr_upload_limit = int(float(config.get("tr_upload") or 0))
except Exception as e:
self._tr_download_limit = 0
self._tr_upload_limit = 0
ExceptionUtils.exception_traceback(e)
# TR限速开关
self._tr_limit = True if self._tr_download_limit or self._tr_upload_limit or self._auto_limit else False
# 限速服务开关
self._limit_enabled = True if self._qb_limit or self._tr_limit else False
# 不限速地址
self._unlimited_ips["ipv4"] = config.get("ipv4") or "0.0.0.0/0"
self._unlimited_ips["ipv6"] = config.get("ipv6") or "::/0"
else:
# 限速关闭
self._limit_enabled = False
# 移出现有任务
self.stop_service()
# 启动限速任务
if self._limit_enabled:
self._scheduler = BackgroundScheduler(timezone=Config().get_timezone())
self._scheduler.add_job(func=self.__check_playing_sessions,
args=[self._mediaserver.get_type(), True],
trigger='interval',
seconds=300)
self._scheduler.print_jobs()
self._scheduler.start()
log.info("播放限速服务启动")
def __start(self):
"""
开始限速
"""
if self._qb_limit:
self._downloader.set_speed_limit(
downloader=DownloaderType.QB,
download_limit=self._qb_download_limit,
upload_limit=self._qb_upload_limit
)
if not self._limit_flag:
log.info(f"【Plugin】Qbittorrent下载器开始限速")
if self._tr_limit:
self._downloader.set_speed_limit(
downloader=DownloaderType.TR,
download_limit=self._tr_download_limit,
upload_limit=self._tr_upload_limit
)
if not self._limit_flag:
log.info(f"【Plugin】Transmission下载器开始限速")
self._limit_flag = True
def __stop(self):
"""
停止限速
"""
if self._qb_limit:
self._downloader.set_speed_limit(
downloader=DownloaderType.QB,
download_limit=0,
upload_limit=0
)
if self._limit_flag:
log.info(f"【Plugin】Qbittorrent下载器停止限速")
if self._tr_limit:
self._downloader.set_speed_limit(
downloader=DownloaderType.TR,
download_limit=0,
upload_limit=0
)
if self._limit_flag:
log.info(f"【Plugin】Transmission下载器停止限速")
self._limit_flag = False
@EventHandler.register(EventType.EmbyWebhook)
def emby_action(self, event):
"""
检查emby Webhook消息
"""
if self._limit_enabled and event.event_data.get("Event") in ["playback.start", "playback.stop"]:
self.__check_playing_sessions(_mediaserver_type=MediaServerType.EMBY, time_check=False)
@EventHandler.register(EventType.JellyfinWebhook)
def jellyfin_action(self, event):
"""
检查jellyfin Webhook消息
"""
if self._limit_enabled and event.event_data.get("NotificationType") in ["PlaybackStart", "PlaybackStop"]:
self.__check_playing_sessions(_mediaserver_type=MediaServerType.JELLYFIN, time_check=False)
@EventHandler.register(EventType.PlexWebhook)
def plex_action(self, event):
"""
检查plex Webhook消息
"""
if self._limit_enabled and event.event_data.get("event") in ["media.play", "media.stop"]:
self.__check_playing_sessions(_mediaserver_type=MediaServerType.PLEX, time_check=False)
def __check_playing_sessions(self, _mediaserver_type, time_check=False):
"""
检查是否限速
"""
def __calc_limit(_total_bit_rate):
"""
计算限速
"""
if not _total_bit_rate:
return False
if self._auto_limit:
residual_bandwidth = (self._bandwidth - _total_bit_rate)
if residual_bandwidth < 0:
self._qb_upload_limit = 10 * 1024
self._tr_upload_limit = 10
else:
_qb_upload_limit = residual_bandwidth / 8 / 1024 * self._qb_upload_ratio
_tr_upload_limit = residual_bandwidth / 8 / 1024 * self._tr_upload_ratio
self._qb_upload_limit = _qb_upload_limit * 1024 if _qb_upload_limit > 10 else 10 * 1024
self._tr_upload_limit = _tr_upload_limit if _tr_upload_limit > 10 else 10
return True
if _mediaserver_type != self._mediaserver.get_type():
return
# 当前播放的会话
playing_sessions = self._mediaserver.get_playing_sessions()
# 本次是否限速
_limit_flag = False
# 当前播放的总比特率
total_bit_rate = 0
if _mediaserver_type == MediaServerType.EMBY:
for session in playing_sessions:
if not SecurityHelper.allow_access(self._unlimited_ips, session.get("RemoteEndPoint")) \
and session.get("NowPlayingItem", {}).get("MediaType") == "Video":
total_bit_rate += int(session.get("NowPlayingItem", {}).get("Bitrate") or 0)
elif _mediaserver_type == MediaServerType.JELLYFIN:
for session in playing_sessions:
if not SecurityHelper.allow_access(self._unlimited_ips, session.get("RemoteEndPoint")) \
and session.get("NowPlayingItem", {}).get("MediaType") == "Video":
media_streams = session.get("NowPlayingItem", {}).get("MediaStreams") or []
for media_stream in media_streams:
total_bit_rate += int(media_stream.get("BitRate") or 0)
elif _mediaserver_type == MediaServerType.PLEX:
for session in playing_sessions:
if not SecurityHelper.allow_access(self._unlimited_ips, session.get("address")) \
and session.get("type") == "Video":
total_bit_rate += int(session.get("bitrate") or 0)
else:
return
# 计算限速标志及速率
_limit_flag = __calc_limit(total_bit_rate)
# 启动限速
if time_check or self._auto_limit:
if _limit_flag:
self.__start()
else:
self.__stop()
else:
if not self._limit_flag and _limit_flag:
self.__start()
elif self._limit_flag and not _limit_flag:
self.__stop()
else:
pass
def stop_service(self):
"""
退出插件
"""
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
except Exception as e:
print(str(e))
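The allocation arithmetic in `init_config` above (cap the residual ratio at 1, split it between QB and TR per the `a:b` allocation string, fall back to `1:1` on bad input) can be isolated into a standalone helper. `split_upload_ratios` is a hypothetical name; the logic mirrors the source:

```python
# Hypothetical standalone helper mirroring SpeedLimiter.init_config's
# upload-ratio arithmetic (name and signature are illustrative).
def split_upload_ratios(residual_ratio, allocation="1:1"):
    # Cap the residual ratio at 1, as the plugin does
    ratio = float(residual_ratio or 1)
    if ratio > 1:
        ratio = 1
    parts = (allocation or "1:1").split(":")
    if len(parts) != 2 or not parts[0].isdigit() or not parts[-1].isdigit():
        parts = ["1", "1"]
    qb, tr = int(parts[0]), int(parts[-1])
    # Each downloader gets its share of the residual-bandwidth ratio
    qb_ratio = round(qb / (qb + tr) * ratio, 2)
    tr_ratio = round(tr / (qb + tr) * ratio, 2)
    return qb_ratio, tr_ratio
```

For example, with a residual ratio of 0.5 and a `1:1` allocation, each downloader is later limited to a quarter of the bandwidth left after subtracting the playback bitrate.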

View File

@ -1,163 +0,0 @@
from threading import Thread
import log
from app.conf import SystemConfig
from app.helper import SubmoduleHelper
from app.plugins.event_manager import EventManager
from app.utils.commons import singleton
@singleton
class PluginManager:
"""
插件管理器
"""
systemconfig = None
eventmanager = None
# 插件列表
_plugins = {}
# 运行态插件列表
_running_plugins = {}
# 配置Key
_config_key = "plugin.%s"
# 事件处理线程
_thread = None
# 开关
_active = False
def __init__(self):
self.init_config()
def init_config(self):
self.systemconfig = SystemConfig()
self.eventmanager = EventManager()
# 启动事件处理进程
self.start_service()
def __run(self):
"""
事件处理线程
"""
while self._active:
event, handlers = self.eventmanager.get_event()
if event:
log.info(f"处理事件:{event.event_type} - {handlers}")
for handler in handlers:
try:
names = handler.__qualname__.split(".")
self.run_plugin(names[0], names[1], event)
except Exception as e:
log.error(f"事件处理出错:{str(e)}")
def start_service(self):
"""
启动
"""
# 加载插件
self.__load_plugins()
# 将事件管理器设为启动
self._active = True
self._thread = Thread(target=self.__run)
# 启动事件处理线程
self._thread.start()
def stop_service(self):
"""
停止
"""
# 将事件管理器设为停止
self._active = False
# 等待事件处理线程退出
self._thread.join()
# 停止所有插件
self.__stop_plugins()
def __load_plugins(self):
"""
加载所有插件
"""
plugins = SubmoduleHelper.import_submodules(
"app.plugins.modules",
filter_func=lambda _, obj: hasattr(obj, 'init_config')
)
plugins.sort(key=lambda x: x.module_order if hasattr(x, "module_order") else 0)
for plugin in plugins:
module_id = plugin.__name__
self._plugins[module_id] = plugin
self._running_plugins[module_id] = plugin()
self.reload_plugin(module_id)
log.info(f"加载插件:{plugin}")
def run_plugin(self, pid, method, *args, **kwargs):
"""
运行插件
"""
if not self._running_plugins.get(pid):
return None
if not hasattr(self._running_plugins[pid], method):
return
return getattr(self._running_plugins[pid], method)(*args, **kwargs)
def reload_plugin(self, pid):
"""
生效插件配置
"""
if not self._running_plugins.get(pid):
return
if hasattr(self._running_plugins[pid], "init_config"):
self._running_plugins[pid].init_config(self.get_plugin_config(pid))
def __stop_plugins(self):
"""
停止所有插件
"""
for plugin in self._running_plugins.values():
if hasattr(plugin, "stop_service"):
plugin.stop_service()
def get_plugin_config(self, pid):
"""
获取插件配置
"""
if not self._plugins.get(pid):
return {}
return self.systemconfig.get_system_config(self._config_key % pid) or {}
def save_plugin_config(self, pid, conf):
"""
保存插件配置
"""
if not self._plugins.get(pid):
return False
return self.systemconfig.set_system_config(self._config_key % pid, conf)
def get_plugins_conf(self):
"""
获取所有插件配置
"""
all_confs = {}
for pid, plugin in self._plugins.items():
# 基本属性
conf = {}
if hasattr(plugin, "module_name"):
conf.update({"name": plugin.module_name})
if hasattr(plugin, "module_desc"):
conf.update({"desc": plugin.module_desc})
if hasattr(plugin, "module_version"):
conf.update({"version": plugin.module_version})
if hasattr(plugin, "module_icon"):
conf.update({"icon": plugin.module_icon})
if hasattr(plugin, "module_color"):
conf.update({"color": plugin.module_color})
if hasattr(plugin, "module_author"):
conf.update({"author": plugin.module_author})
if hasattr(plugin, "module_config_prefix"):
conf.update({"prefix": plugin.module_config_prefix})
# 配置项
conf.update({"fields": plugin.get_fields() or {}})
# 配置值
conf.update({"config": self.get_plugin_config(pid)})
# 汇总
all_confs[pid] = conf
return all_confs
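The dispatch trick in `__run` above relies on a handler's `__qualname__` ("ClassName.method") being split to locate the running plugin instance and call the method on it. A minimal sketch of that mechanism, with `DemoPlugin` as an illustrative stand-in for a loaded plugin:

```python
# DemoPlugin stands in for a loaded plugin class; names are illustrative.
class DemoPlugin:
    def handle(self, event):
        return f"handled:{event}"

# Mirrors PluginManager._running_plugins: module id -> instance
_running_plugins = {"DemoPlugin": DemoPlugin()}

def run_plugin(pid, method, *args, **kwargs):
    # Same guard chain as PluginManager.run_plugin
    plugin = _running_plugins.get(pid)
    if not plugin or not hasattr(plugin, method):
        return None
    return getattr(plugin, method)(*args, **kwargs)

# __qualname__ of an unbound handler is "ClassName.method"
names = DemoPlugin.handle.__qualname__.split(".")
result = run_plugin(names[0], names[1], "ping")
```

This is why handlers registered via `@EventHandler.register` must be methods of a loaded plugin class: the class name embedded in `__qualname__` is the key into the running-plugins map.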

View File

@ -20,22 +20,21 @@ class Rss:
_sites = []
filter = None
media = None
sites = None
downloader = None
searcher = None
dbhelper = None
subscribe = None
def __init__(self):
self.init_config()
def init_config(self):
self.media = Media()
self.downloader = Downloader()
self.sites = Sites()
self.filter = Filter()
self.dbhelper = DbHelper()
self.subscribe = Subscribe()
self.init_config()
def init_config(self):
self._sites = self.sites.get_sites(rss=True)
def rssdownload(self):

View File

@ -53,7 +53,14 @@ class RssChecker(object):
self.downloader = Downloader()
self.subscribe = Subscribe()
# 移除现有任务
self.stop_service()
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
except Exception as e:
ExceptionUtils.exception_traceback(e)
# 读取解析器列表
rss_parsers = self.dbhelper.get_userrss_parser()
self._rss_parsers = []
@ -653,16 +660,3 @@ class RssChecker(object):
if mediainfos:
mediainfos_all += mediainfos
return mediainfos_all
def stop_service(self):
"""
停止服务
"""
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
except Exception as e:
print(str(e))

View File

@ -12,7 +12,7 @@ from app.downloader import Downloader
from app.helper import MetaHelper
from app.mediaserver import MediaServer
from app.rss import Rss
from app.sites import SiteUserInfo, SiteSignin
from app.sites import Sites
from app.subscribe import Subscribe
from app.sync import Sync
from app.utils import ExceptionUtils
@ -83,7 +83,7 @@ class Scheduler:
except Exception as e:
log.info("站点自动签到时间 配置格式错误:%s" % str(e))
hour = minute = 0
self.SCHEDULER.add_job(SiteSignin().signin,
self.SCHEDULER.add_job(Sites().signin,
"cron",
hour=hour,
minute=minute)
@ -95,7 +95,7 @@ class Scheduler:
log.info("站点自动签到时间 配置格式错误:%s" % str(e))
hours = 0
if hours:
self.SCHEDULER.add_job(SiteSignin().signin,
self.SCHEDULER.add_job(Sites().signin,
"interval",
hours=hours)
log.info("站点自动签到服务启动")
@ -184,7 +184,7 @@ class Scheduler:
self.SCHEDULER.add_job(Subscribe().subscribe_search, 'interval', seconds=RSS_CHECK_INTERVAL)
# 站点数据刷新
self.SCHEDULER.add_job(SiteUserInfo().refresh_pt_date_now,
self.SCHEDULER.add_job(Sites().refresh_pt_date_now,
'interval',
hours=REFRESH_PT_DATA_INTERVAL,
next_run_time=datetime.datetime.now() + datetime.timedelta(minutes=1))
@ -232,7 +232,7 @@ class Scheduler:
if hour < 0 or minute < 0:
log.warn("站点自动签到时间 配置格式错误:不启动任务")
return
self.SCHEDULER.add_job(SiteSignin().signin,
self.SCHEDULER.add_job(Sites().signin,
"date",
run_date=datetime.datetime(year, month, day, hour, minute, second))

View File

@ -20,15 +20,15 @@ class Searcher:
_search_auto = True
def __init__(self):
self.init_config()
def init_config(self):
self.downloader = Downloader()
self.media = Media()
self.message = Message()
self.progress = ProgressHelper()
self.dbhelper = DbHelper()
self.indexer = Indexer()
self.init_config()
def init_config(self):
self._search_auto = Config().get_config("pt").get('search_auto', True)
def search_medias(self,

View File

@ -1,4 +1,3 @@
from app.sites.site_userinfo import SiteUserInfo
from app.sites.site_user_info_factory import SiteUserInfoFactory
from .sites import Sites
from .site_cookie import SiteCookie
from .site_signin import SiteSignin
from .sitecookie import SiteCookie

View File

@ -1,166 +0,0 @@
import re
from multiprocessing.dummy import Pool as ThreadPool
from threading import Lock
from lxml import etree
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as es
from selenium.webdriver.support.wait import WebDriverWait
import log
from app.conf import SiteConf
from app.helper import ChromeHelper, SubmoduleHelper, DbHelper, SiteHelper
from app.message import Message
from app.sites.sites import Sites
from app.utils import RequestUtils, ExceptionUtils, StringUtils
from app.utils.commons import singleton
from config import Config
lock = Lock()
@singleton
class SiteSignin(object):
sites = None
dbhelper = None
message = None
_MAX_CONCURRENCY = 10
def __init__(self):
# 加载模块
self._site_schema = SubmoduleHelper.import_submodules('app.sites.sitesignin',
filter_func=lambda _, obj: hasattr(obj, 'match'))
log.debug(f"【Sites】加载站点签到{self._site_schema}")
self.init_config()
def init_config(self):
self.sites = Sites()
self.dbhelper = DbHelper()
self.message = Message()
def __build_class(self, url):
for site_schema in self._site_schema:
try:
if site_schema.match(url):
return site_schema
except Exception as e:
ExceptionUtils.exception_traceback(e)
return None
def signin(self):
"""
站点并发签到
"""
sites = self.sites.get_sites(signin=True)
if not sites:
return
with ThreadPool(min(len(sites), self._MAX_CONCURRENCY)) as p:
status = p.map(self.__signin_site, sites)
if status:
self.message.send_site_signin_message(status)
def __signin_site(self, site_info):
"""
签到一个站点
"""
site_module = self.__build_class(site_info.get("signurl"))
if site_module:
return site_module.signin(site_info)
else:
return self.__signin_base(site_info)
@staticmethod
def __signin_base(site_info):
"""
通用签到处理
:param site_info: 站点信息
:return: 签到结果信息
"""
if not site_info:
return ""
site = site_info.get("name")
try:
site_url = site_info.get("signurl")
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
if not site_url or not site_cookie:
log.warn("【Sites】未配置 %s 的站点地址或Cookie无法签到" % str(site))
return ""
chrome = ChromeHelper()
if site_info.get("chrome") and chrome.get_status():
# 首页
log.info("【Sites】开始站点仿真签到%s" % site)
home_url = StringUtils.get_base_url(site_url)
if not chrome.visit(url=home_url, ua=ua, cookie=site_cookie):
log.warn("【Sites】%s 无法打开网站" % site)
return f"{site}】无法打开网站!"
# 循环检测是否过cf
cloudflare = chrome.pass_cloudflare()
if not cloudflare:
log.warn("【Sites】%s 跳转站点失败" % site)
return f"{site}】跳转站点失败!"
# 判断是否已签到
html_text = chrome.get_html()
if not html_text:
log.warn("【Sites】%s 获取站点源码失败" % site)
return f"{site}】获取站点源码失败!"
# 查找签到按钮
html = etree.HTML(html_text)
xpath_str = None
for xpath in SiteConf.SITE_CHECKIN_XPATH:
if html.xpath(xpath):
xpath_str = xpath
break
if re.search(r'已签|签到已得', html_text, re.IGNORECASE) \
and not xpath_str:
log.info("【Sites】%s 今日已签到" % site)
return f"{site}】今日已签到"
if not xpath_str:
if SiteHelper.is_logged_in(html_text):
log.warn("【Sites】%s 未找到签到按钮,模拟登录成功" % site)
return f"{site}】模拟登录成功"
else:
log.info("【Sites】%s 未找到签到按钮,且模拟登录失败" % site)
return f"{site}】模拟登录失败!"
# 开始仿真
try:
checkin_obj = WebDriverWait(driver=chrome.browser, timeout=6).until(
es.element_to_be_clickable((By.XPATH, xpath_str)))
if checkin_obj:
checkin_obj.click()
log.info("【Sites】%s 仿真签到成功" % site)
return f"{site}】仿真签到成功"
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.warn("【Sites】%s 仿真签到失败:%s" % (site, str(e)))
return f"{site}】签到失败!"
# 模拟登录
else:
if site_url.find("attendance.php") != -1:
checkin_text = "签到"
else:
checkin_text = "模拟登录"
log.info(f"【Sites】开始站点{checkin_text}{site}")
# 访问链接
res = RequestUtils(cookies=site_cookie,
headers=ua,
proxies=Config().get_proxies() if site_info.get("proxy") else None
).get_res(url=site_url)
if res and res.status_code == 200:
if not SiteHelper.is_logged_in(res.text):
log.warn(f"【Sites】{site} {checkin_text}失败请检查Cookie")
return f"{site}{checkin_text}失败请检查Cookie"
else:
log.info(f"【Sites】{site} {checkin_text}成功")
return f"{site}{checkin_text}成功"
elif res is not None:
log.warn(f"【Sites】{site} {checkin_text}失败,状态码:{res.status_code}")
return f"{site}{checkin_text}失败,状态码:{res.status_code}"
else:
log.warn(f"【Sites】{site} {checkin_text}失败,无法打开网站")
return f"{site}{checkin_text}失败,无法打开网站!"
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.warn("【Sites】%s 签到出错:%s" % (site, str(e)))
return f"{site} 签到出错:{str(e)}"

View File

@ -0,0 +1,110 @@
import requests
import log
from app.helper import ChromeHelper, SubmoduleHelper
from app.utils import RequestUtils, ExceptionUtils
from app.utils.commons import singleton
from config import Config
@singleton
class SiteUserInfoFactory(object):
def __init__(self):
self._site_schema = SubmoduleHelper.import_submodules('app.sites.siteuserinfo',
filter_func=lambda _, obj: hasattr(obj, 'schema'))
self._site_schema.sort(key=lambda x: x.order)
log.debug(f"【Sites】: 已经加载的站点解析 {self._site_schema}")
def __build_class(self, html_text):
for site_schema in self._site_schema:
try:
if site_schema.match(html_text):
return site_schema
except Exception as e:
ExceptionUtils.exception_traceback(e)
return None
def build(self, url, site_name, site_cookie=None, ua=None, emulate=None, proxy=False):
if not site_cookie:
return None
log.debug(f"【Sites】站点 {site_name} url={url} site_cookie={site_cookie} ua={ua}")
session = requests.Session()
# 检测环境,有浏览器内核的优先使用仿真签到
chrome = ChromeHelper()
if emulate and chrome.get_status():
if not chrome.visit(url=url, ua=ua, cookie=site_cookie):
log.error("【Sites】%s 无法打开网站" % site_name)
return None
# 循环检测是否过cf
cloudflare = chrome.pass_cloudflare()
if not cloudflare:
log.error("【Sites】%s 跳转站点失败" % site_name)
return None
# 判断是否已签到
html_text = chrome.get_html()
else:
proxies = Config().get_proxies() if proxy else None
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
# 第一次登录反爬
if html_text.find("title") == -1:
i = html_text.find("window.location")
if i == -1:
return None
tmp_url = url + html_text[i:html_text.find(";")] \
.replace("\"", "").replace("+", "").replace(" ", "").replace("window.location=", "")
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=tmp_url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
else:
log.error("【Sites】站点 %s 被反爬限制:%s, 状态码:%s" % (site_name, url, res.status_code))
return None
# 兼容假首页情况,假首页通常没有 <link rel="search" 属性
if '"search"' not in html_text and '"csrf-token"' not in html_text:
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=url + "/index.php")
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
elif res is not None:
log.error(f"【Sites】站点 {site_name} 连接失败,状态码:{res.status_code}")
return None
else:
log.error(f"【Sites】站点 {site_name} 无法访问:{url}")
return None
# 解析站点类型
site_schema = self.__build_class(html_text)
if not site_schema:
log.error("【Sites】站点 %s 无法识别站点类型" % site_name)
return None
return site_schema(site_name, url, site_cookie, html_text, session=session, ua=ua)
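The first-login anti-crawl branch above rebuilds a redirect target from a bare `window.location` snippet when the response carries no `<title>`. A self-contained sketch of just that string surgery (the helper name `extract_redirect` is made up for illustration; it mirrors the replace chain in the factory):

```python
def extract_redirect(url, html_text):
    # Pages without a <title> but with a window.location snippet are JS
    # redirects; rebuild the target URL from the snippet, as the factory does.
    if html_text.find("title") != -1:
        return None  # normal page, no redirect needed
    i = html_text.find("window.location")
    if i == -1:
        return None  # no redirect snippet either: give up
    snippet = html_text[i:html_text.find(";")]
    path = (snippet.replace('"', '')
                   .replace('+', '')
                   .replace(' ', '')
                   .replace('window.location=', ''))
    return url + path

print(extract_redirect("https://example.org",
                       'window.location = "/index.php?from=js";'))
# → https://example.org/index.php?from=js
```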

View File

@ -1,366 +0,0 @@
import json
from datetime import datetime
from multiprocessing.dummy import Pool as ThreadPool
from threading import Lock
import requests
import log
from app.helper import ChromeHelper, SubmoduleHelper, DbHelper
from app.message import Message
from app.sites.sites import Sites
from app.utils import RequestUtils, ExceptionUtils
from app.utils.commons import singleton
from config import Config
lock = Lock()
@singleton
class SiteUserInfo(object):
sites = None
dbhelper = None
message = None
_MAX_CONCURRENCY = 10
_last_update_time = None
_sites_data = {}
def __init__(self):
# 加载模块
self._site_schema = SubmoduleHelper.import_submodules('app.sites.siteuserinfo',
filter_func=lambda _, obj: hasattr(obj, 'schema'))
self._site_schema.sort(key=lambda x: x.order)
log.debug(f"【Sites】加载站点解析{self._site_schema}")
self.init_config()
def init_config(self):
self.sites = Sites()
self.dbhelper = DbHelper()
self.message = Message()
# 站点上一次更新时间
self._last_update_time = None
# 站点数据
self._sites_data = {}
def __build_class(self, html_text):
for site_schema in self._site_schema:
try:
if site_schema.match(html_text):
return site_schema
except Exception as e:
ExceptionUtils.exception_traceback(e)
return None
def build(self, url, site_name, site_cookie=None, ua=None, emulate=None, proxy=False):
if not site_cookie:
return None
session = requests.Session()
log.debug(f"【Sites】站点 {site_name} url={url} site_cookie={site_cookie} ua={ua}")
# 检测环境,有浏览器内核的优先使用仿真签到
chrome = ChromeHelper()
if emulate and chrome.get_status():
if not chrome.visit(url=url, ua=ua, cookie=site_cookie):
log.error("【Sites】%s 无法打开网站" % site_name)
return None
# 循环检测是否过cf
cloudflare = chrome.pass_cloudflare()
if not cloudflare:
log.error("【Sites】%s 跳转站点失败" % site_name)
return None
# 判断是否已签到
html_text = chrome.get_html()
else:
proxies = Config().get_proxies() if proxy else None
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
# 第一次登录反爬
if html_text.find("title") == -1:
i = html_text.find("window.location")
if i == -1:
return None
tmp_url = url + html_text[i:html_text.find(";")] \
.replace("\"", "").replace("+", "").replace(" ", "").replace("window.location=", "")
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=tmp_url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
else:
log.error("【Sites】站点 %s 被反爬限制:%s, 状态码:%s" % (site_name, url, res.status_code))
return None
# 兼容假首页情况,假首页通常没有 <link rel="search" 属性
if '"search"' not in html_text and '"csrf-token"' not in html_text:
res = RequestUtils(cookies=site_cookie,
session=session,
headers=ua,
proxies=proxies
).get_res(url=url + "/index.php")
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
elif res is not None:
log.error(f"【Sites】站点 {site_name} 连接失败,状态码:{res.status_code}")
return None
else:
log.error(f"【Sites】站点 {site_name} 无法访问:{url}")
return None
# 解析站点类型
site_schema = self.__build_class(html_text)
if not site_schema:
log.error("【Sites】站点 %s 无法识别站点类型" % site_name)
return None
return site_schema(site_name, url, site_cookie, html_text, session=session, ua=ua)
def __refresh_site_data(self, site_info):
"""
更新单个site 数据信息
:param site_info:
:return:
"""
site_name = site_info.get("name")
site_url = site_info.get("strict_url")
if not site_url:
return
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
unread_msg_notify = site_info.get("unread_msg_notify")
chrome = site_info.get("chrome")
proxy = site_info.get("proxy")
try:
site_user_info = self.build(url=site_url,
site_name=site_name,
site_cookie=site_cookie,
ua=ua,
emulate=chrome,
proxy=proxy)
if site_user_info:
log.debug(f"【Sites】站点 {site_name} 开始以 {site_user_info.site_schema()} 模型解析")
# 开始解析
site_user_info.parse()
log.debug(f"【Sites】站点 {site_name} 解析完成")
# 获取不到数据时,仅返回错误信息,不做历史数据更新
if site_user_info.err_msg:
self._sites_data.update({site_name: {"err_msg": site_user_info.err_msg}})
return
# 发送通知,存在未读消息
self.__notify_unread_msg(site_name, site_user_info, unread_msg_notify)
self._sites_data.update(
{
site_name: {
"upload": site_user_info.upload,
"username": site_user_info.username,
"user_level": site_user_info.user_level,
"join_at": site_user_info.join_at,
"download": site_user_info.download,
"ratio": site_user_info.ratio,
"seeding": site_user_info.seeding,
"seeding_size": site_user_info.seeding_size,
"leeching": site_user_info.leeching,
"bonus": site_user_info.bonus,
"url": site_url,
"err_msg": site_user_info.err_msg,
"message_unread": site_user_info.message_unread
}
})
return site_user_info
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.error(f"【Sites】站点 {site_name} 获取流量数据失败:{str(e)}")
def __notify_unread_msg(self, site_name, site_user_info, unread_msg_notify):
if site_user_info.message_unread <= 0:
return
if self._sites_data.get(site_name, {}).get('message_unread') == site_user_info.message_unread:
return
if not unread_msg_notify:
return
# 解析出内容,则发送内容
if len(site_user_info.message_unread_contents) > 0:
for head, date, content in site_user_info.message_unread_contents:
msg_title = f"【站点 {site_user_info.site_name} 消息】"
msg_text = f"时间:{date}\n标题:{head}\n内容:\n{content}"
self.message.send_site_message(title=msg_title, text=msg_text)
else:
self.message.send_site_message(
title=f"站点 {site_user_info.site_name} 收到 {site_user_info.message_unread} 条新消息,请登陆查看")
def refresh_pt_date_now(self):
"""
强制刷新站点数据
"""
self.__refresh_all_site_data(force=True)
def get_pt_date(self, specify_sites=None, force=False):
"""
获取站点上传下载量
"""
self.__refresh_all_site_data(force=force, specify_sites=specify_sites)
return self._sites_data
def __refresh_all_site_data(self, force=False, specify_sites=None):
"""
多线程刷新站点下载上传量默认间隔6小时
"""
if not self.sites.get_sites():
return
with lock:
if not force \
and not specify_sites \
and self._last_update_time \
and (datetime.now() - self._last_update_time).seconds < 6 * 3600:
return
if specify_sites \
and not isinstance(specify_sites, list):
specify_sites = [specify_sites]
# 没有指定站点,默认使用全部站点
if not specify_sites:
refresh_sites = self.sites.get_sites(statistic=True)
else:
refresh_sites = [site for site in self.sites.get_sites(statistic=True) if
site.get("name") in specify_sites]
if not refresh_sites:
return
# 并发刷新
with ThreadPool(min(len(refresh_sites), self._MAX_CONCURRENCY)) as p:
site_user_infos = p.map(self.__refresh_site_data, refresh_sites)
site_user_infos = [info for info in site_user_infos if info]
# 登记历史数据
self.dbhelper.insert_site_statistics_history(site_user_infos)
# 实时用户数据
self.dbhelper.update_site_user_statistics(site_user_infos)
# 更新站点图标
self.dbhelper.update_site_favicon(site_user_infos)
# 实时做种信息
self.dbhelper.update_site_seed_info(site_user_infos)
# 站点图标重新加载
self.sites.init_favicons()
# 更新时间
self._last_update_time = datetime.now()
def get_pt_site_statistics_history(self, days=7):
"""
获取站点上传下载量
"""
site_urls = []
for site in self.sites.get_sites(statistic=True):
site_url = site.get("strict_url")
if site_url:
site_urls.append(site_url)
return self.dbhelper.get_site_statistics_recent_sites(days=days, strict_urls=site_urls)
def get_site_user_statistics(self, sites=None, encoding="RAW"):
"""
获取站点用户数据
:param sites: 站点名称
:param encoding: RAW/DICT
:return:
"""
statistic_sites = self.sites.get_sites(statistic=True)
if not sites:
site_urls = [site.get("strict_url") for site in statistic_sites]
else:
site_urls = [site.get("strict_url") for site in statistic_sites
if site.get("name") in sites]
raw_statistics = self.dbhelper.get_site_user_statistics(strict_urls=site_urls)
if encoding == "RAW":
return raw_statistics
return self.__todict(raw_statistics)
def get_pt_site_activity_history(self, site, days=365 * 2):
"""
查询站点 上传下载做种数据
:param site: 站点名称
:param days: 最大数据量
:return:
"""
site_activities = [["time", "upload", "download", "bonus", "seeding", "seeding_size"]]
sql_site_activities = self.dbhelper.get_site_statistics_history(site=site, days=days)
for sql_site_activity in sql_site_activities:
timestamp = datetime.strptime(sql_site_activity.DATE, '%Y-%m-%d').timestamp() * 1000
site_activities.append(
[timestamp,
sql_site_activity.UPLOAD,
sql_site_activity.DOWNLOAD,
sql_site_activity.BONUS,
sql_site_activity.SEEDING,
sql_site_activity.SEEDING_SIZE])
return site_activities
def get_pt_site_seeding_info(self, site):
"""
查询站点 做种分布信息
:param site: 站点名称
:return: seeding_info:[uploader_num, seeding_size]
"""
site_seeding_info = {"seeding_info": []}
seeding_info = self.dbhelper.get_site_seeding_info(site=site)
if not seeding_info:
return site_seeding_info
site_seeding_info["seeding_info"] = json.loads(seeding_info[0])
return site_seeding_info
@staticmethod
def __todict(raw_statistics):
statistics = []
for site in raw_statistics:
statistics.append({"site": site.SITE,
"username": site.USERNAME,
"user_level": site.USER_LEVEL,
"join_at": site.JOIN_AT,
"update_at": site.UPDATE_AT,
"upload": site.UPLOAD,
"download": site.DOWNLOAD,
"ratio": site.RATIO,
"seeding": site.SEEDING,
"leeching": site.LEECHING,
"seeding_size": site.SEEDING_SIZE,
"bonus": site.BONUS,
"url": site.URL,
"msg_unread": site.MSG_UNREAD
})
return statistics
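The `__refresh_all_site_data` method above guards the scrape with a lock plus a timestamp: unless forced (or specific sites are requested), data younger than six hours is reused. That throttling guard in isolation, as a small sketch:

```python
import threading
from datetime import datetime, timedelta

class ThrottledRefresher:
    # Sketch of the lock-plus-timestamp guard: a forced call always
    # refreshes; otherwise refreshes are skipped within `interval`
    # of the last successful run.
    def __init__(self, interval=timedelta(hours=6)):
        self._lock = threading.Lock()
        self._last = None
        self._interval = interval
        self.refresh_count = 0

    def refresh(self, force=False):
        with self._lock:
            if not force and self._last and datetime.now() - self._last < self._interval:
                return False          # still fresh, skip the scrape
            self.refresh_count += 1   # stand-in for the real site scraping
            self._last = datetime.now()
            return True

r = ThrottledRefresher()
print(r.refresh())            # True  (first run)
print(r.refresh())            # False (throttled)
print(r.refresh(force=True))  # True  (forced)
```

Holding the lock around both the check and the update is what keeps two concurrent callers from scraping the same sites twice.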

View File

@ -1,21 +1,28 @@
import json
import os
import random
import re
import shutil
import time
import traceback
from datetime import datetime
from functools import lru_cache
from multiprocessing.dummy import Pool as ThreadPool
from threading import Lock
from lxml import etree
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as es
from selenium.webdriver.support.wait import WebDriverWait
import log
from app.conf import SiteConf
from app.helper import ChromeHelper, SiteHelper, DbHelper
from app.message import Message
from app.utils import RequestUtils, StringUtils, ExceptionUtils, PathUtils, SystemUtils
from app.sites.site_user_info_factory import SiteUserInfoFactory
from app.conf import SiteConf
from app.utils import RequestUtils, StringUtils, ExceptionUtils
from app.utils.commons import singleton
from config import Config, RMT_SUBEXT
from config import Config
lock = Lock()
@singleton
@ -26,11 +33,13 @@ class Sites:
_sites = []
_siteByIds = {}
_siteByUrls = {}
_sites_data = {}
_site_favicons = {}
_rss_sites = []
_brush_sites = []
_statistic_sites = []
_signin_sites = []
_last_update_time = None
_MAX_CONCURRENCY = 10
@ -42,6 +51,10 @@ class Sites:
self.message = Message()
# 原始站点列表
self._sites = []
# 站点数据
self._sites_data = {}
# 站点数据更新时间
self._last_update_time = None
# ID存储站点
self._siteByIds = {}
# URL存储站点
@ -55,7 +68,7 @@ class Sites:
# 开启签到功能站点:
self._signin_sites = []
# 站点图标
self.init_favicons()
self.__init_favicons()
# 站点数据
self._sites = self.dbhelper.get_config_site()
for site in self._sites:
@ -100,8 +113,7 @@ class Sites:
"unread_msg_notify": True if site_note.get("message") == "Y" else False,
"chrome": True if site_note.get("chrome") == "Y" else False,
"proxy": True if site_note.get("proxy") == "Y" else False,
"subtitle": True if site_note.get("subtitle") == "Y" else False,
"strict_url": StringUtils.get_base_url(site_signurl or site_rssurl)
"subtitle": True if site_note.get("subtitle") == "Y" else False
}
# 以ID存储
self._siteByIds[site.ID] = site_info
@ -110,7 +122,7 @@ class Sites:
if site_strict_url:
self._siteByUrls[site_strict_url] = site_info
def init_favicons(self):
def __init_favicons(self):
"""
加载图标到内存
"""
@ -202,6 +214,129 @@ class Sites:
return site.get("download_setting")
return None
def __refresh_all_site_data(self, force=False, specify_sites=None):
"""
多线程刷新站点下载上传量默认间隔6小时
"""
if not self._sites:
return
with lock:
if not force \
and not specify_sites \
and self._last_update_time \
and (datetime.now() - self._last_update_time).seconds < 6 * 3600:
return
if specify_sites \
and not isinstance(specify_sites, list):
specify_sites = [specify_sites]
# 没有指定站点,默认使用全部站点
if not specify_sites:
refresh_sites = self.get_sites(statistic=True)
else:
refresh_sites = [site for site in self.get_sites(statistic=True) if site.get("name") in specify_sites]
if not refresh_sites:
return
# 并发刷新
with ThreadPool(min(len(refresh_sites), self._MAX_CONCURRENCY)) as p:
site_user_infos = p.map(self.__refresh_site_data, refresh_sites)
site_user_infos = [info for info in site_user_infos if info]
# 登记历史数据
self.dbhelper.insert_site_statistics_history(site_user_infos)
# 实时用户数据
self.dbhelper.update_site_user_statistics(site_user_infos)
# 更新站点图标
self.dbhelper.update_site_favicon(site_user_infos)
# 实时做种信息
self.dbhelper.update_site_seed_info(site_user_infos)
# 站点图标重新加载
self.__init_favicons()
# 更新时间
self._last_update_time = datetime.now()
def __refresh_site_data(self, site_info):
"""
更新单个site 数据信息
:param site_info:
:return:
"""
site_name = site_info.get("name")
site_url = self.__get_site_strict_url(site_info)
if not site_url:
return
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
unread_msg_notify = site_info.get("unread_msg_notify")
chrome = site_info.get("chrome")
proxy = site_info.get("proxy")
try:
site_user_info = SiteUserInfoFactory().build(url=site_url,
site_name=site_name,
site_cookie=site_cookie,
ua=ua,
emulate=chrome,
proxy=proxy)
if site_user_info:
log.debug(f"【Sites】站点 {site_name} 开始以 {site_user_info.site_schema()} 模型解析")
# 开始解析
site_user_info.parse()
log.debug(f"【Sites】站点 {site_name} 解析完成")
# 获取不到数据时,仅返回错误信息,不做历史数据更新
if site_user_info.err_msg:
self._sites_data.update({site_name: {"err_msg": site_user_info.err_msg}})
return
# 发送通知,存在未读消息
self.__notify_unread_msg(site_name, site_user_info, unread_msg_notify)
self._sites_data.update({site_name: {
"upload": site_user_info.upload,
"username": site_user_info.username,
"user_level": site_user_info.user_level,
"join_at": site_user_info.join_at,
"download": site_user_info.download,
"ratio": site_user_info.ratio,
"seeding": site_user_info.seeding,
"seeding_size": site_user_info.seeding_size,
"leeching": site_user_info.leeching,
"bonus": site_user_info.bonus,
"url": site_url,
"err_msg": site_user_info.err_msg,
"message_unread": site_user_info.message_unread}
})
return site_user_info
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.error("【Sites】站点 %s 获取流量数据失败:%s - %s" % (site_name, str(e), traceback.format_exc()))
def __notify_unread_msg(self, site_name, site_user_info, unread_msg_notify):
if site_user_info.message_unread <= 0:
return
if self._sites_data.get(site_name, {}).get('message_unread') == site_user_info.message_unread:
return
if not unread_msg_notify:
return
# 解析出内容,则发送内容
if len(site_user_info.message_unread_contents) > 0:
for head, date, content in site_user_info.message_unread_contents:
msg_title = f"【站点 {site_user_info.site_name} 消息】"
msg_text = f"时间:{date}\n标题:{head}\n内容:\n{content}"
self.message.send_site_message(title=msg_title, text=msg_text)
else:
self.message.send_site_message(
title=f"站点 {site_user_info.site_name} 收到 {site_user_info.message_unread} 条新消息,请登陆查看")
def test_connection(self, site_id):
"""
测试站点连通性
@ -255,6 +390,220 @@ class Sites:
else:
return False, "无法打开网站", seconds
def signin(self):
"""
站点并发签到
"""
sites = self.get_sites(signin=True)
if not sites:
return
with ThreadPool(min(len(sites), self._MAX_CONCURRENCY)) as p:
status = p.map(self.__signin_site, sites)
if status:
self.message.send_site_signin_message(status)
@staticmethod
def __signin_site(site_info):
"""
签到一个站点
"""
if not site_info:
return ""
site = site_info.get("name")
try:
site_url = site_info.get("signurl")
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
if not site_url or not site_cookie:
log.warn("【Sites】未配置 %s 的站点地址或Cookie无法签到" % str(site))
return ""
chrome = ChromeHelper()
if site_info.get("chrome") and chrome.get_status():
# 首页
log.info("【Sites】开始站点仿真签到%s" % site)
home_url = StringUtils.get_base_url(site_url)
if not chrome.visit(url=home_url, ua=ua, cookie=site_cookie):
log.warn("【Sites】%s 无法打开网站" % site)
return f"{site}】无法打开网站!"
# 循环检测是否过cf
cloudflare = chrome.pass_cloudflare()
if not cloudflare:
log.warn("【Sites】%s 跳转站点失败" % site)
return f"{site}】跳转站点失败!"
# 判断是否已签到
html_text = chrome.get_html()
if not html_text:
log.warn("【Sites】%s 获取站点源码失败" % site)
return f"{site}】获取站点源码失败!"
# 查找签到按钮
html = etree.HTML(html_text)
xpath_str = None
for xpath in SiteConf.SITE_CHECKIN_XPATH:
if html.xpath(xpath):
xpath_str = xpath
break
if re.search(r'已签|签到已得', html_text, re.IGNORECASE) \
and not xpath_str:
log.info("【Sites】%s 今日已签到" % site)
return f"{site}】今日已签到"
if not xpath_str:
if SiteHelper.is_logged_in(html_text):
log.warn("【Sites】%s 未找到签到按钮,模拟登录成功" % site)
return f"{site}】模拟登录成功"
else:
log.info("【Sites】%s 未找到签到按钮,且模拟登录失败" % site)
return f"{site}】模拟登录失败!"
# 开始仿真
try:
checkin_obj = WebDriverWait(driver=chrome.browser, timeout=6).until(
es.element_to_be_clickable((By.XPATH, xpath_str)))
if checkin_obj:
checkin_obj.click()
log.info("【Sites】%s 仿真签到成功" % site)
return f"{site}】仿真签到成功"
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.warn("【Sites】%s 仿真签到失败:%s" % (site, str(e)))
return f"{site}】签到失败!"
# 模拟登录
else:
if site_url.find("attendance.php") != -1:
checkin_text = "签到"
else:
checkin_text = "模拟登录"
log.info(f"【Sites】开始站点{checkin_text}{site}")
# 访问链接
res = RequestUtils(cookies=site_cookie,
headers=ua,
proxies=Config().get_proxies() if site_info.get("proxy") else None
).get_res(url=site_url)
if res and res.status_code == 200:
if not SiteHelper.is_logged_in(res.text):
log.warn(f"【Sites】{site} {checkin_text}失败请检查Cookie")
return f"{site}{checkin_text}失败请检查Cookie"
else:
log.info(f"【Sites】{site} {checkin_text}成功")
return f"{site}{checkin_text}成功"
elif res is not None:
log.warn(f"【Sites】{site} {checkin_text}失败,状态码:{res.status_code}")
return f"{site}{checkin_text}失败,状态码:{res.status_code}"
else:
log.warn(f"【Sites】{site} {checkin_text}失败,无法打开网站")
return f"{site}{checkin_text}失败,无法打开网站!"
except Exception as e:
log.error("【Sites】%s 签到出错:%s - %s" % (site, str(e), traceback.format_exc()))
return f"{site} 签到出错:{str(e)}"
def refresh_pt_date_now(self):
"""
强制刷新站点数据
"""
self.__refresh_all_site_data(force=True)
def get_pt_date(self, specify_sites=None, force=False):
"""
获取站点上传下载量
"""
self.__refresh_all_site_data(force=force, specify_sites=specify_sites)
return self._sites_data
def get_pt_site_statistics_history(self, days=7):
"""
获取站点上传下载量
"""
site_urls = []
for site in self.get_sites(statistic=True):
site_url = self.__get_site_strict_url(site)
if site_url:
site_urls.append(site_url)
return self.dbhelper.get_site_statistics_recent_sites(days=days, strict_urls=site_urls)
def get_site_user_statistics(self, sites=None, encoding="RAW"):
"""
获取站点用户数据
:param sites: 站点名称
:param encoding: RAW/DICT
:return:
"""
statistic_sites = self.get_sites(statistic=True)
if not sites:
site_urls = [self.__get_site_strict_url(site) for site in statistic_sites]
else:
site_urls = [self.__get_site_strict_url(site) for site in statistic_sites
if site.get("name") in sites]
raw_statistics = self.dbhelper.get_site_user_statistics(strict_urls=site_urls)
if encoding == "RAW":
return raw_statistics
return self.__todict(raw_statistics)
@staticmethod
def __todict(raw_statistics):
statistics = []
for site in raw_statistics:
statistics.append({"site": site.SITE,
"username": site.USERNAME,
"user_level": site.USER_LEVEL,
"join_at": site.JOIN_AT,
"update_at": site.UPDATE_AT,
"upload": site.UPLOAD,
"download": site.DOWNLOAD,
"ratio": site.RATIO,
"seeding": site.SEEDING,
"leeching": site.LEECHING,
"seeding_size": site.SEEDING_SIZE,
"bonus": site.BONUS,
"url": site.URL,
"msg_unread": site.MSG_UNREAD
})
return statistics
def get_pt_site_activity_history(self, site, days=365 * 2):
"""
查询站点 上传下载做种数据
:param site: 站点名称
:param days: 最大数据量
:return:
"""
site_activities = [["time", "upload", "download", "bonus", "seeding", "seeding_size"]]
sql_site_activities = self.dbhelper.get_site_statistics_history(site=site, days=days)
for sql_site_activity in sql_site_activities:
timestamp = datetime.strptime(sql_site_activity.DATE, '%Y-%m-%d').timestamp() * 1000
site_activities.append(
[timestamp,
sql_site_activity.UPLOAD,
sql_site_activity.DOWNLOAD,
sql_site_activity.BONUS,
sql_site_activity.SEEDING,
sql_site_activity.SEEDING_SIZE])
return site_activities
def get_pt_site_seeding_info(self, site):
"""
查询站点 做种分布信息
:param site: 站点名称
:return: seeding_info:[uploader_num, seeding_size]
"""
site_seeding_info = {"seeding_info": []}
seeding_info = self.dbhelper.get_site_seeding_info(site=site)
if not seeding_info:
return site_seeding_info
site_seeding_info["seeding_info"] = json.loads(seeding_info[0])
return site_seeding_info
@staticmethod
def __get_site_strict_url(site):
if not site:
return
site_url = site.get("signurl") or site.get("rssurl")
if site_url:
return StringUtils.get_base_url(site_url)
return ""
def get_site_attr(self, url):
"""
整合公有站点和私有站点的属性
@ -424,116 +773,3 @@ class Sites:
if note:
infos = json.loads(note)
return infos
def download_subtitle_from_site(self, media_info, cookie, ua, download_dir):
"""
从站点下载字幕文件并保存到本地
"""
def __get_url_subtitle_name(disposition, url):
"""
从下载请求中获取字幕文件名
"""
fname = re.findall(r"filename=\"?(.+)\"?", disposition or "")
if fname:
fname = str(fname[0].encode('ISO-8859-1').decode()).split(";")[0].strip()
if fname.endswith('"'):
fname = fname[:-1]
elif url and os.path.splitext(url)[-1] in (RMT_SUBEXT + ['.zip']):
fname = url.split("/")[-1]
else:
fname = str(datetime.now())
return fname
def __transfer_subtitle(source_sub_file, media_file):
"""
转移字幕
"""
new_sub_file = "%s%s" % (os.path.splitext(media_file)[0], os.path.splitext(source_sub_file)[-1])
if os.path.exists(new_sub_file):
return 1
else:
return SystemUtils.copy(source_sub_file, new_sub_file)
if not media_info.page_url:
return
# 字幕下载目录
log.info("【Sites】开始从站点下载字幕%s" % media_info.page_url)
if not download_dir:
log.warn("【Sites】未找到字幕下载目录")
return
# 读取网站代码
request = RequestUtils(cookies=cookie, headers=ua)
res = request.get_res(media_info.page_url)
if res and res.status_code == 200:
if not res.text:
log.warn(f"【Sites】读取页面代码失败{media_info.page_url}")
return
html = etree.HTML(res.text)
sublink_list = []
for xpath in SiteConf.SITE_SUBTITLE_XPATH:
sublinks = html.xpath(xpath)
if sublinks:
for sublink in sublinks:
if not sublink:
continue
if not sublink.startswith("http"):
base_url = StringUtils.get_base_url(media_info.page_url)
if sublink.startswith("/"):
sublink = "%s%s" % (base_url, sublink)
else:
sublink = "%s/%s" % (base_url, sublink)
sublink_list.append(sublink)
# 下载所有字幕文件
for sublink in sublink_list:
log.info(f"【Sites】找到字幕下载链接{sublink},开始下载...")
# 下载
ret = request.get_res(sublink)
if ret and ret.status_code == 200:
# 创建目录
if not os.path.exists(download_dir):
os.makedirs(download_dir)
# 保存ZIP
file_name = __get_url_subtitle_name(ret.headers.get('content-disposition'), sublink)
if not file_name:
log.warn(f"【Sites】链接不是字幕文件{sublink}")
continue
if file_name.lower().endswith(".zip"):
# ZIP包
zip_file = os.path.join(self._save_tmp_path, file_name)
# 解压路径
zip_path = os.path.splitext(zip_file)[0]
with open(zip_file, 'wb') as f:
f.write(ret.content)
# 解压文件
shutil.unpack_archive(zip_file, zip_path, format='zip')
# 遍历转移文件
for sub_file in PathUtils.get_dir_files(in_path=zip_path, exts=RMT_SUBEXT):
target_sub_file = os.path.join(download_dir,
os.path.splitext(os.path.basename(sub_file))[0])
log.info(f"【Sites】转移字幕 {sub_file}{target_sub_file}")
__transfer_subtitle(sub_file, target_sub_file)
# 删除临时文件
try:
shutil.rmtree(zip_path)
os.remove(zip_file)
except Exception as err:
ExceptionUtils.exception_traceback(err)
else:
sub_file = os.path.join(self._save_tmp_path, file_name)
# 保存
with open(sub_file, 'wb') as f:
f.write(ret.content)
target_sub_file = os.path.join(download_dir,
os.path.splitext(os.path.basename(sub_file))[0])
log.info(f"【Sites】转移字幕 {sub_file}{target_sub_file}")
__transfer_subtitle(sub_file, target_sub_file)
else:
log.error(f"【Sites】下载字幕文件失败{sublink}")
continue
if sublink_list:
log.info(f"【Sites】{media_info.page_url} 页面字幕下载完成")
elif res is not None:
log.warn(f"【Sites】连接 {media_info.page_url} 失败,状态码:{res.status_code}")
else:
log.warn(f"【Sites】无法打开链接{media_info.page_url}")
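The nested `__get_url_subtitle_name` helper above resolves a subtitle filename in three steps: the `Content-Disposition` header first, then the URL's last path segment for known subtitle/zip extensions, then a timestamp fallback. A standalone version (with an assumed `RMT_SUBEXT` list, since the real one lives in `config`):

```python
import os
import re
from datetime import datetime

RMT_SUBEXT = ['.srt', '.ass', '.ssa']  # assumed subtitle extensions

def get_url_subtitle_name(disposition, url):
    # Prefer the Content-Disposition filename; the greedy regex may keep a
    # trailing quote, hence the endswith('"') trim, matching the original.
    fname = re.findall(r'filename="?(.+)"?', disposition or "")
    if fname:
        fname = str(fname[0].encode('ISO-8859-1').decode()).split(";")[0].strip()
        if fname.endswith('"'):
            fname = fname[:-1]
    elif url and os.path.splitext(url)[-1] in (RMT_SUBEXT + ['.zip']):
        fname = url.split("/")[-1]
    else:
        fname = str(datetime.now())  # last resort: timestamp name
    return fname

print(get_url_subtitle_name('attachment; filename="movie.srt"', None))
# → movie.srt
```

The ISO-8859-1 round-trip exists because HTTP headers arrive latin-1 encoded while filenames from these trackers are usually UTF-8.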

View File

@ -1,31 +0,0 @@
# -*- coding: utf-8 -*-
from abc import ABCMeta, abstractmethod
from app.utils import StringUtils
class _ISiteSigninHandler(metaclass=ABCMeta):
"""
实现站点签到的基类所有站点签到类都需要继承此类并实现match和signin方法
实现类放置到sitesignin目录下将会自动加载
"""
# 匹配的站点Url每一个实现类都需要设置为自己的站点Url
site_url = ""
@abstractmethod
def match(self, url):
"""
根据站点Url判断是否匹配当前站点签到类大部分情况使用默认实现即可
:param url: 站点Url
:return: 是否匹配如匹配则会调用该类的signin方法
"""
return True if StringUtils.url_equal(url, self.site_url) else False
@abstractmethod
def signin(self, site_info: dict):
"""
执行签到操作
:param site_info: 站点信息含有站点Url站点CookieUA等信息
:return: 签到结果信息
"""
pass
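A handler plugs into this base by setting `site_url` and implementing `signin`; the loader then picks the first class whose `match` accepts the URL and falls back to the generic signin path otherwise. A simplified sketch of that dispatch (substring matching instead of `StringUtils.url_equal`, and a hypothetical `DemoSignin` handler):

```python
from abc import ABCMeta, abstractmethod

class ISiteSigninHandler(metaclass=ABCMeta):
    # Simplified _ISiteSigninHandler: subclasses set site_url and are
    # dispatched to when match() accepts the signin URL.
    site_url = ""

    @classmethod
    def match(cls, url):
        # Default: match when the URL contains this handler's site_url.
        return bool(cls.site_url) and cls.site_url in url

    @abstractmethod
    def signin(self, site_info: dict):
        ...

class DemoSignin(ISiteSigninHandler):
    site_url = "demo.example"  # hypothetical site

    def signin(self, site_info):
        return f"【{site_info.get('name')}】仿真签到成功"

handlers = [DemoSignin]  # stand-in for the auto-discovered submodules

def build_handler(url):
    # Mirrors SiteSignin.__build_class: first handler whose match()
    # accepts the URL wins; None means use the generic signin path.
    for handler in handlers:
        if handler.match(url):
            return handler()
    return None

h = build_handler("https://demo.example/attendance.php")
print(h.signin({"name": "Demo"}))
```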

212
app/speedlimiter.py Normal file
View File

@ -0,0 +1,212 @@
from app.conf import SystemConfig
from app.downloader import Downloader
from app.mediaserver import MediaServer
from app.utils import ExceptionUtils
from app.utils.commons import singleton
from app.utils.types import DownloaderType, MediaServerType
from app.helper.security_helper import SecurityHelper
from apscheduler.schedulers.background import BackgroundScheduler
from config import Config
import log
@singleton
class SpeedLimiter:
downloader = None
mediaserver = None
limit_enabled = False
limit_flag = False
qb_limit = False
qb_download_limit = 0
qb_upload_limit = 0
qb_upload_ratio = 0
tr_limit = False
tr_download_limit = 0
tr_upload_limit = 0
tr_upload_ratio = 0
unlimited_ips = {"ipv4": "0.0.0.0/0", "ipv6": "::/0"}
auto_limit = False
bandwidth = 0
_scheduler = None
def __init__(self):
self.init_config()
def init_config(self):
self.downloader = Downloader()
self.mediaserver = MediaServer()
config = SystemConfig().get_system_config("SpeedLimit")
if config:
try:
self.bandwidth = int(float(config.get("bandwidth") or 0)) * 1000000
residual_ratio = float(config.get("residual_ratio") or 1)
if residual_ratio > 1:
residual_ratio = 1
allocation = (config.get("allocation") or "1:1").split(":")
if len(allocation) != 2 or not str(allocation[0]).isdigit() or not str(allocation[-1]).isdigit():
allocation = ["1", "1"]
self.qb_upload_ratio = round(int(allocation[0]) / (int(allocation[-1]) + int(allocation[0])) * residual_ratio, 2)
self.tr_upload_ratio = round(int(allocation[-1]) / (int(allocation[-1]) + int(allocation[0])) * residual_ratio, 2)
except Exception as e:
ExceptionUtils.exception_traceback(e)
self.bandwidth = 0
self.qb_upload_ratio = 0
self.tr_upload_ratio = 0
self.auto_limit = True if self.bandwidth and (self.qb_upload_ratio or self.tr_upload_ratio) else False
try:
self.qb_download_limit = int(float(config.get("qb_download") or 0)) * 1024
self.qb_upload_limit = int(float(config.get("qb_upload") or 0)) * 1024
except Exception as e:
ExceptionUtils.exception_traceback(e)
self.qb_download_limit = 0
self.qb_upload_limit = 0
self.qb_limit = True if self.qb_download_limit or self.qb_upload_limit or self.auto_limit else False
try:
self.tr_download_limit = int(float(config.get("tr_download") or 0))
self.tr_upload_limit = int(float(config.get("tr_upload") or 0))
except Exception as e:
self.tr_download_limit = 0
self.tr_upload_limit = 0
ExceptionUtils.exception_traceback(e)
self.tr_limit = True if self.tr_download_limit or self.tr_upload_limit or self.auto_limit else False
self.limit_enabled = True if self.qb_limit or self.tr_limit else False
self.unlimited_ips["ipv4"] = config.get("ipv4") or "0.0.0.0/0"
self.unlimited_ips["ipv6"] = config.get("ipv6") or "::/0"
else:
self.limit_enabled = False
# 移除现有任务
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
except Exception as e:
ExceptionUtils.exception_traceback(e)
# 启动限速任务
if self.limit_enabled:
self._scheduler = BackgroundScheduler(timezone=Config().get_timezone())
self._scheduler.add_job(func=self.__check_playing_sessions,
args=[self.mediaserver.get_type(), True],
trigger='interval',
seconds=300)
self._scheduler.print_jobs()
self._scheduler.start()
log.info("播放限速服务启动")
def __start(self):
"""
开始限速
"""
if self.qb_limit:
self.downloader.set_speed_limit(
downloader=DownloaderType.QB,
download_limit=self.qb_download_limit,
upload_limit=self.qb_upload_limit
)
if not self.limit_flag:
log.info(f"【SpeedLimiter】Qbittorrent下载器开始限速")
if self.tr_limit:
self.downloader.set_speed_limit(
downloader=DownloaderType.TR,
download_limit=self.tr_download_limit,
upload_limit=self.tr_upload_limit
)
if not self.limit_flag:
log.info(f"【SpeedLimiter】Transmission下载器开始限速")
self.limit_flag = True
def __stop(self):
"""
停止限速
"""
if self.qb_limit:
self.downloader.set_speed_limit(
downloader=DownloaderType.QB,
download_limit=0,
upload_limit=0
)
if self.limit_flag:
log.info(f"【SpeedLimiter】Qbittorrent下载器停止限速")
if self.tr_limit:
self.downloader.set_speed_limit(
downloader=DownloaderType.TR,
download_limit=0,
upload_limit=0
)
if self.limit_flag:
log.info(f"【SpeedLimiter】Transmission下载器停止限速")
self.limit_flag = False
def emby_action(self, message):
"""
检查emby Webhook消息
"""
if self.limit_enabled and message.get("Event") in ["playback.start", "playback.stop"]:
self.__check_playing_sessions(mediaserver_type=MediaServerType.EMBY, time_check=False)
def jellyfin_action(self, message):
"""
检查jellyfin Webhook消息
"""
pass
def plex_action(self, message):
"""
检查plex Webhook消息
"""
pass
def __check_playing_sessions(self, mediaserver_type, time_check=False):
"""
检查是否限速
"""
if mediaserver_type != self.mediaserver.get_type():
return
playing_sessions = self.mediaserver.get_playing_sessions()
limit_flag = False
if mediaserver_type == MediaServerType.EMBY:
total_bit_rate = 0
for session in playing_sessions:
if not SecurityHelper.allow_access(self.unlimited_ips, session.get("RemoteEndPoint")) \
and session.get("NowPlayingItem").get("MediaType") == "Video":
total_bit_rate += int(session.get("NowPlayingItem").get("Bitrate") or 0)
if total_bit_rate:
limit_flag = True
if self.auto_limit:
residual_bandwidth = (self.bandwidth - total_bit_rate)
if residual_bandwidth < 0:
self.qb_upload_limit = 10*1024
self.tr_upload_limit = 10
else:
qb_upload_limit = residual_bandwidth / 8 / 1024 * self.qb_upload_ratio
tr_upload_limit = residual_bandwidth / 8 / 1024 * self.tr_upload_ratio
self.qb_upload_limit = qb_upload_limit * 1024 if qb_upload_limit > 10 else 10*1024
self.tr_upload_limit = tr_upload_limit if tr_upload_limit > 10 else 10
elif mediaserver_type == MediaServerType.JELLYFIN:
pass
elif mediaserver_type == MediaServerType.PLEX:
pass
else:
return
if time_check or self.auto_limit:
if limit_flag:
self.__start()
else:
self.__stop()
else:
if not self.limit_flag and limit_flag:
self.__start()
elif self.limit_flag and not limit_flag:
self.__stop()
else:
pass
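The auto-limit arithmetic spread across `init_config` and `__check_playing_sessions` can be sketched in one place. This is a simplified model (the function name and signature are illustrative, not the project's API): bandwidth is entered in Mbps, playing sessions report bitrates in bits/s, and the residual bandwidth is split between qBittorrent and Transmission by the configured `allocation` ratio, with qB expecting bytes/s and TR expecting KB/s, floored at 10 KB/s each.

```python
def compute_upload_limits(bandwidth_mbps, total_bitrate, allocation="1:1", residual_ratio=1.0):
    """Illustrative sketch of the playback speed-limit allocation.

    bandwidth_mbps -- total uplink as entered in the UI (Mbps)
    total_bitrate  -- sum of active playing-session bitrates (bits/s)
    Returns (qb_upload_limit_bytes_per_s, tr_upload_limit_kb_per_s).
    """
    if residual_ratio > 1:
        residual_ratio = 1
    bandwidth = int(bandwidth_mbps) * 1000000          # Mbps -> bits/s
    parts = (allocation or "1:1").split(":")
    if len(parts) != 2 or not all(p.isdigit() for p in parts):
        parts = ["1", "1"]
    total_parts = int(parts[0]) + int(parts[1])
    qb_ratio = int(parts[0]) / total_parts * residual_ratio
    tr_ratio = int(parts[1]) / total_parts * residual_ratio
    residual = bandwidth - total_bitrate               # bits/s left for seeding
    if residual < 0:
        return 10 * 1024, 10                           # floor: 10 KB/s each
    qb_kb = residual / 8 / 1024 * qb_ratio             # bits/s -> KB/s, then share
    tr_kb = residual / 8 / 1024 * tr_ratio
    return (qb_kb * 1024 if qb_kb > 10 else 10 * 1024, # qB takes bytes/s
            tr_kb if tr_kb > 10 else 10)               # TR takes KB/s
```

For example, a 100 Mbps uplink with one 20 Mbps stream leaves 80 Mbps, split 1:1 into roughly 4883 KB/s per client.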

363
app/subtitle.py Normal file
View File

@ -0,0 +1,363 @@
import datetime
import os.path
import re
import shutil
from lxml import etree
import log
from app.conf import SiteConf
from app.helper import OpenSubtitles
from app.utils import RequestUtils, PathUtils, SystemUtils, StringUtils, ExceptionUtils
from app.utils.commons import singleton
from app.utils.types import MediaType
from config import Config, RMT_SUBEXT
@singleton
class Subtitle:
opensubtitles = None
_save_tmp_path = None
_server = None
_host = None
_api_key = None
_remote_path = None
_local_path = None
_opensubtitles_enable = False
def __init__(self):
self.init_config()
def init_config(self):
self.opensubtitles = OpenSubtitles()
self._save_tmp_path = Config().get_temp_path()
if not os.path.exists(self._save_tmp_path):
os.makedirs(self._save_tmp_path)
subtitle = Config().get_config('subtitle')
if subtitle:
self._server = subtitle.get("server")
if self._server == "chinesesubfinder":
self._api_key = subtitle.get("chinesesubfinder", {}).get("api_key")
self._host = subtitle.get("chinesesubfinder", {}).get('host')
if self._host:
if not self._host.startswith('http'):
self._host = "http://" + self._host
if not self._host.endswith('/'):
self._host = self._host + "/"
self._local_path = subtitle.get("chinesesubfinder", {}).get("local_path")
self._remote_path = subtitle.get("chinesesubfinder", {}).get("remote_path")
else:
self._opensubtitles_enable = subtitle.get("opensubtitles", {}).get("enable")
def download_subtitle(self, items, server=None):
"""
字幕下载入口
:param items: {"type":, "file", "file_ext":, "name":, "title", "year":, "season":, "episode":, "bluray":}
:param server: 字幕下载服务器
:return: 是否成功消息内容
"""
if not items:
return False, "参数有误"
_server = self._server if not server else server
if not _server:
return False, "未配置字幕下载器"
if _server == "opensubtitles":
if server or self._opensubtitles_enable:
return self.__download_opensubtitles(items)
elif _server == "chinesesubfinder":
return self.__download_chinesesubfinder(items)
return False, "未配置字幕下载器"
def __search_opensubtitles(self, item):
"""
爬取OpenSubtitles.org字幕
"""
if not self.opensubtitles:
return []
return self.opensubtitles.search_subtitles(item)
def __download_opensubtitles(self, items):
"""
调用OpenSubtitles Api下载字幕
"""
if not self.opensubtitles:
return False, "未配置OpenSubtitles"
subtitles_cache = {}
success = False
ret_msg = ""
for item in items:
if not item:
continue
if not item.get("name") or not item.get("file"):
continue
if item.get("type") == MediaType.TV and not item.get("imdbid"):
log.warn("【Subtitle】电视剧类型需要imdbid检索字幕跳过...")
ret_msg = "电视剧需要imdbid检索字幕"
continue
subtitles = subtitles_cache.get(item.get("name"))
if subtitles is None:
log.info(
"【Subtitle】开始从Opensubtitle.org检索字幕: %simdbid=%s" % (item.get("name"), item.get("imdbid")))
subtitles = self.__search_opensubtitles(item)
if not subtitles:
subtitles_cache[item.get("name")] = []
log.info("【Subtitle】%s 未检索到字幕" % item.get("name"))
ret_msg = "%s 未检索到字幕" % item.get("name")
else:
subtitles_cache[item.get("name")] = subtitles
log.info("【Subtitle】opensubtitles.org返回数据%s" % len(subtitles))
if not subtitles:
continue
# 成功数
subtitle_count = 0
for subtitle in subtitles:
# 标题
if not item.get("imdbid"):
if str(subtitle.get('title')) != "%s (%s)" % (item.get("name"), item.get("year")):
continue
# 季
if item.get('season') \
and str(subtitle.get('season').replace("Season", "").strip()) != str(item.get('season')):
continue
# 集
if item.get('episode') \
and str(subtitle.get('episode')) != str(item.get('episode')):
continue
# 字幕文件名
SubFileName = subtitle.get('description')
# 下载链接
Download_Link = subtitle.get('link')
# 下载后的字幕文件路径
Media_File = "%s.chi.zh-cn%s" % (item.get("file"), item.get("file_ext"))
log.info("【Subtitle】正在从opensubtitles.org下载字幕 %s%s " % (SubFileName, Media_File))
# 下载
ret = RequestUtils(cookies=self.opensubtitles.get_cookie(),
headers=self.opensubtitles.get_ua()).get_res(Download_Link)
if ret and ret.status_code == 200:
# 保存ZIP
file_name = self.__get_url_subtitle_name(ret.headers.get('content-disposition'), Download_Link)
if not file_name:
continue
zip_file = os.path.join(self._save_tmp_path, file_name)
zip_path = os.path.splitext(zip_file)[0]
with open(zip_file, 'wb') as f:
f.write(ret.content)
# 解压文件
shutil.unpack_archive(zip_file, zip_path, format='zip')
# 遍历转移文件
for sub_file in PathUtils.get_dir_files(in_path=zip_path, exts=RMT_SUBEXT):
self.__transfer_subtitle(sub_file, Media_File)
# 删除临时文件
try:
shutil.rmtree(zip_path)
os.remove(zip_file)
except Exception as err:
ExceptionUtils.exception_traceback(err)
else:
log.error("【Subtitle】下载字幕文件失败%s" % Download_Link)
continue
# 最多下载3个字幕
subtitle_count += 1
if subtitle_count > 2:
break
if not subtitle_count:
if item.get('episode'):
log.info("【Subtitle】%s%s季 第%s集 未找到符合条件的字幕" % (
item.get("name"), item.get("season"), item.get("episode")))
ret_msg = "%s%s季 第%s集 未找到符合条件的字幕" % (
item.get("name"), item.get("season"), item.get("episode"))
else:
log.info("【Subtitle】%s 未找到符合条件的字幕" % item.get("name"))
ret_msg = "%s 未找到符合条件的字幕" % item.get("name")
else:
log.info("【Subtitle】%s 共下载了 %s 个字幕" % (item.get("name"), subtitle_count))
ret_msg = "%s 共下载了 %s 个字幕" % (item.get("name"), subtitle_count)
success = True
if success:
return True, ret_msg
else:
return False, ret_msg
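The per-result filtering above (title, season, episode) can be isolated into a predicate. A simplified sketch (field names follow the dicts used above; the helper name is illustrative, and `get(..., "")` defaults are added for safety):

```python
def subtitle_matches(subtitle, item):
    """Decide whether an OpenSubtitles search hit fits the wanted item."""
    # Without an IMDb id, fall back to exact 'Title (Year)' comparison
    if not item.get("imdbid"):
        if str(subtitle.get("title")) != "%s (%s)" % (item.get("name"), item.get("year")):
            return False
    # Season strings come back as e.g. 'Season 1'
    if item.get("season") and \
            str(subtitle.get("season", "").replace("Season", "").strip()) != str(item.get("season")):
        return False
    if item.get("episode") and str(subtitle.get("episode")) != str(item.get("episode")):
        return False
    return True
```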
def __download_chinesesubfinder(self, items):
"""
调用ChineseSubFinder下载字幕
"""
if not self._host or not self._api_key:
return False, "未配置ChineseSubFinder"
req_url = "%sapi/v1/add-job" % self._host
notify_items = []
success = False
ret_msg = ""
for item in items:
if not item:
continue
if not item.get("name") or not item.get("file"):
continue
if item.get("bluray"):
file_path = "%s.mp4" % item.get("file")
else:
if os.path.splitext(item.get("file"))[-1] != item.get("file_ext"):
file_path = "%s%s" % (item.get("file"), item.get("file_ext"))
else:
file_path = item.get("file")
# 路径替换
if self._local_path and self._remote_path and file_path.startswith(self._local_path):
file_path = file_path.replace(self._local_path, self._remote_path).replace('\\', '/')
# 一个名称只建一个任务
if file_path not in notify_items:
notify_items.append(file_path)
log.info("【Subtitle】通知ChineseSubFinder下载字幕: %s" % file_path)
params = {
"video_type": 0 if item.get("type") == MediaType.MOVIE else 1,
"physical_video_file_full_path": file_path,
"task_priority_level": 3,
"media_server_inside_video_id": "",
"is_bluray": item.get("bluray")
}
try:
res = RequestUtils(headers={
"Authorization": "Bearer %s" % self._api_key
}).post(req_url, json=params)
if not res or res.status_code != 200:
log.error("【Subtitle】调用ChineseSubFinder API失败")
ret_msg = "调用ChineseSubFinder API失败"
else:
# 如果文件目录没有识别的nfo元数据此接口会返回控制符推测是ChineseSubFinder的原因
# emby refresh元数据是异步的
if res.text:
job_id = res.json().get("job_id")
message = res.json().get("message")
if not job_id:
log.warn("【Subtitle】ChineseSubFinder下载字幕出错%s" % message)
ret_msg = "ChineseSubFinder下载字幕出错%s" % message
else:
log.info("【Subtitle】ChineseSubFinder任务添加成功%s" % job_id)
ret_msg = "ChineseSubFinder任务添加成功%s" % job_id
success = True
else:
log.error("【Subtitle】%s 目录缺失nfo元数据" % file_path)
ret_msg = "%s 目录下缺失nfo元数据" % file_path
except Exception as e:
ExceptionUtils.exception_traceback(e)
log.error("【Subtitle】连接ChineseSubFinder出错" + str(e))
ret_msg = "连接ChineseSubFinder出错%s" % str(e)
if success:
return True, ret_msg
else:
return False, ret_msg
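The local/remote path translation above matters when NASTool and ChineseSubFinder see the same storage mounted at different paths. A standalone sketch of the mapping plus the add-job payload (field values mirror the code above; the function names are illustrative):

```python
def map_path(file_path, local_path=None, remote_path=None):
    """Translate a NASTool-side path into ChineseSubFinder's view of it."""
    if local_path and remote_path and file_path.startswith(local_path):
        # Swap the mount prefix and normalize Windows separators
        file_path = file_path.replace(local_path, remote_path).replace("\\", "/")
    return file_path

def build_add_job(file_path, is_movie=True, is_bluray=False):
    """Payload sent to ChineseSubFinder's /api/v1/add-job endpoint."""
    return {
        "video_type": 0 if is_movie else 1,
        "physical_video_file_full_path": file_path,
        "task_priority_level": 3,
        "media_server_inside_video_id": "",
        "is_bluray": is_bluray,
    }
```

The request itself is a plain POST with an `Authorization: Bearer <api_key>` header, as shown above.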
@staticmethod
def __transfer_subtitle(sub_file, media_file):
"""
转移字幕
"""
new_sub_file = "%s%s" % (os.path.splitext(media_file)[0], os.path.splitext(sub_file)[-1])
if os.path.exists(new_sub_file):
return 1
else:
return SystemUtils.copy(sub_file, new_sub_file)
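`__transfer_subtitle` places the subtitle next to the media file: the media file's stem plus the subtitle's own extension. Sketched standalone (no copy performed here; the helper name is illustrative). Note that `splitext` only strips the last extension, so any language tag such as `.chi.zh-cn` baked into `media_file` by the caller survives in the stem.

```python
import os

def target_subtitle_path(sub_file, media_file):
    """Build the destination path for a subtitle beside its media file."""
    media_stem = os.path.splitext(media_file)[0]   # strip media extension only
    sub_ext = os.path.splitext(sub_file)[-1]       # keep subtitle extension
    return "%s%s" % (media_stem, sub_ext)
```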
def download_subtitle_from_site(self, media_info, cookie, ua, download_dir):
"""
从站点下载字幕文件并保存到本地
"""
if not media_info.page_url:
return
# 字幕下载目录
log.info("【Subtitle】开始从站点下载字幕%s" % media_info.page_url)
if not download_dir:
log.warn("【Subtitle】未找到字幕下载目录")
return
# 读取网站代码
request = RequestUtils(cookies=cookie, headers=ua)
res = request.get_res(media_info.page_url)
if res and res.status_code == 200:
if not res.text:
log.warn(f"【Subtitle】读取页面代码失败{media_info.page_url}")
return
html = etree.HTML(res.text)
sublink_list = []
for xpath in SiteConf.SITE_SUBTITLE_XPATH:
sublinks = html.xpath(xpath)
if sublinks:
for sublink in sublinks:
if not sublink:
continue
if not sublink.startswith("http"):
base_url = StringUtils.get_base_url(media_info.page_url)
if sublink.startswith("/"):
sublink = "%s%s" % (base_url, sublink)
else:
sublink = "%s/%s" % (base_url, sublink)
sublink_list.append(sublink)
# 下载所有字幕文件
for sublink in sublink_list:
log.info(f"【Subtitle】找到字幕下载链接{sublink},开始下载...")
# 下载
ret = request.get_res(sublink)
if ret and ret.status_code == 200:
# 创建目录
if not os.path.exists(download_dir):
os.makedirs(download_dir)
# 保存ZIP
file_name = self.__get_url_subtitle_name(ret.headers.get('content-disposition'), sublink)
if not file_name:
log.warn(f"【Subtitle】链接不是字幕文件{sublink}")
continue
if file_name.lower().endswith(".zip"):
# ZIP包
zip_file = os.path.join(self._save_tmp_path, file_name)
# 解压路径
zip_path = os.path.splitext(zip_file)[0]
with open(zip_file, 'wb') as f:
f.write(ret.content)
# 解压文件
shutil.unpack_archive(zip_file, zip_path, format='zip')
# 遍历转移文件
for sub_file in PathUtils.get_dir_files(in_path=zip_path, exts=RMT_SUBEXT):
target_sub_file = os.path.join(download_dir,
os.path.splitext(os.path.basename(sub_file))[0])
log.info(f"【Subtitle】转移字幕 {sub_file}{target_sub_file}")
self.__transfer_subtitle(sub_file, target_sub_file)
# 删除临时文件
try:
shutil.rmtree(zip_path)
os.remove(zip_file)
except Exception as err:
ExceptionUtils.exception_traceback(err)
else:
sub_file = os.path.join(self._save_tmp_path, file_name)
# 保存
with open(sub_file, 'wb') as f:
f.write(ret.content)
target_sub_file = os.path.join(download_dir,
os.path.splitext(os.path.basename(sub_file))[0])
log.info(f"【Subtitle】转移字幕 {sub_file}{target_sub_file}")
self.__transfer_subtitle(sub_file, target_sub_file)
else:
log.error(f"【Subtitle】下载字幕文件失败{sublink}")
continue
if sublink_list:
log.info(f"【Subtitle】{media_info.page_url} 页面字幕下载完成")
elif res is not None:
log.warn(f"【Subtitle】连接 {media_info.page_url} 失败,状态码:{res.status_code}")
else:
log.warn(f"【Subtitle】无法打开链接{media_info.page_url}")
@staticmethod
def __get_url_subtitle_name(disposition, url):
"""
从下载请求中获取字幕文件名
"""
file_name = re.findall(r"filename=\"?(.+)\"?", disposition or "")
if file_name:
file_name = str(file_name[0].encode('ISO-8859-1').decode()).split(";")[0].strip()
if file_name.endswith('"'):
file_name = file_name[:-1]
elif url and os.path.splitext(url)[-1] in (RMT_SUBEXT + ['.zip']):
file_name = url.split("/")[-1]
else:
file_name = str(datetime.datetime.now())
return file_name
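The `content-disposition` parsing above hinges on a greedy regex that may capture a trailing quote, which is then stripped manually. A minimal standalone sketch of just that branch (the helper name is illustrative; the fallback-to-URL and timestamp branches are omitted):

```python
import re

def filename_from_disposition(disposition):
    """Extract a filename from a Content-Disposition header value."""
    matches = re.findall(r'filename="?(.+)"?', disposition or "")
    if not matches:
        return None
    # The greedy (.+) may swallow the closing quote; mirror the original
    # clean-up: take the first ;-separated token, then strip a stray quote.
    name = matches[0].split(";")[0].strip()
    if name.endswith('"'):
        name = name[:-1]
    return name
```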

View File

@ -243,8 +243,6 @@ class StringUtils:
"""
获取URL根地址
"""
if not url:
return ""
scheme, netloc = StringUtils.get_url_netloc(url)
return f"{scheme}://{netloc}"
@ -275,7 +273,7 @@ class StringUtils:
if season_re:
mtype = MediaType.TV
season_num = int(cn2an.cn2an(season_re.group(1), mode='smart'))
episode_re = re.search(r"\s*([0-9一二三四五六七八九十百零]+)\s*集", content, re.IGNORECASE)
episode_re = re.search(r"\s*([0-9一二三四五六七八九十]+)\s*集", content, re.IGNORECASE)
if episode_re:
mtype = MediaType.TV
episode_num = int(cn2an.cn2an(episode_re.group(1), mode='smart'))
@ -285,7 +283,7 @@ class StringUtils:
if year_re:
year = year_re.group(1)
key_word = re.sub(
r'\s*[0-9一二三四五六七八九十]+\s*季|第\s*[0-9一二三四五六七八九十百零]+\s*集|[\s(]+(\d{4})[\s)]*', '',
r'\s*[0-9一二三四五六七八九十]+\s*季|第\s*[0-9一二三四五六七八九十]+\s*集|[\s(]+(\d{4})[\s)]*', '',
content,
flags=re.IGNORECASE).strip()
if key_word:

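The hunk above widens the numeral class from `十` to `十百零` so that `cn2an` can convert compound Chinese numerals (e.g. 第一百零三集). A simplified sketch of the season/episode extraction using plain digits, avoiding the `cn2an` dependency (names are illustrative):

```python
import re

def parse_season_episode(content):
    """Pull season/episode numbers out of a query like '西部世界 第2季 第3集'."""
    season = episode = None
    season_re = re.search(r"第\s*(\d+)\s*季", content)
    if season_re:
        season = int(season_re.group(1))
    episode_re = re.search(r"第\s*(\d+)\s*集", content)
    if episode_re:
        episode = int(episode_re.group(1))
    # Strip the matched fragments to leave the bare keyword, as the original does
    key_word = re.sub(r"第\s*\d+\s*季|第\s*\d+\s*集", "", content).strip()
    return key_word, season, episode
```

In the real code, `cn2an.cn2an(..., mode='smart')` handles both Arabic and Chinese numerals in the captured group.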
View File

@ -11,6 +11,9 @@ class MediaType(Enum):
class DownloaderType(Enum):
QB = 'Qbittorrent'
TR = 'Transmission'
Client115 = '115网盘'
Aria2 = 'Aria2'
PikPak = 'PikPak'
class SyncType(Enum):
@ -56,6 +59,8 @@ class OsType(Enum):
class IndexerType(Enum):
JACKETT = "Jackett"
PROWLARR = "Prowlarr"
BUILTIN = "Indexer"
@ -90,33 +95,5 @@ class SiteSchema(Enum):
TNode = "TNode"
# 可监听事件
class EventType(Enum):
# Emby Webhook通知
EmbyWebhook = "emby.webhook"
# Jellyfin Webhook通知
JellyfinWebhook = "jellyfin.webhook"
# Plex Webhook通知
PlexWebhook = "plex.webhook"
# 新增下载
DownloadAdd = "download.add"
# 下载失败
DownloadFail = "download.fail"
# 入库完成
TransferFinished = "transfer.finished"
# 入库失败
TransferFail = "transfer.fail"
# 下载字幕
SubtitleDownload = "subtitle.download"
# 新增订阅
SubscribeAdd = "subscribe.add"
# 订阅完成
SubscribeFinished = "subscribe.finished"
# 交互消息
MessageIncoming = "message.incoming"
# 电影类型关键字
MovieTypes = ['MOV', '电影']
# 电视剧类型关键字
TvTypes = ['TV', '电视剧']
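The `EventType` enum above backs a publish/subscribe bus that plugins listen on. A minimal sketch of how such a dispatcher might work (the class and method names are illustrative, not the project's actual `EventManager` API):

```python
from collections import defaultdict
from enum import Enum

class EventType(Enum):
    DownloadAdd = "download.add"
    TransferFinished = "transfer.finished"

class MiniEventBus:
    """Illustrative pub/sub dispatcher keyed by EventType."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def register(self, etype, handler):
        self._handlers[etype].append(handler)

    def send_event(self, etype, data=None):
        # Fan the event out to every registered handler
        for handler in self._handlers[etype]:
            handler(data or {})
```

A plugin registers interest once, then receives a payload dict each time the core fires that event.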

View File

@ -2,7 +2,6 @@ import json
import os
from werkzeug.security import generate_password_hash
from app.helper import DbHelper
from app.plugins import PluginManager
from app.utils import StringUtils, ExceptionUtils
from config import Config
@ -233,7 +232,7 @@ def update_config():
if not _config.get("security", {}).get("api_key"):
_config['security']['api_key'] = _config.get("security",
{}).get("subscribe_token") \
or StringUtils.generate_random_str()
or StringUtils.generate_random_str()
if _config.get('security', {}).get('subscribe_token'):
_config['security'].pop('subscribe_token')
overwrite_cofig = True
@ -277,8 +276,7 @@ def update_config():
"season_poster": True,
"season_banner": True,
"season_thumb": True,
"episode_thumb": False,
"episode_thumb_ffmpeg": False}
"episode_thumb": False}
}
overwrite_cofig = True
@ -319,6 +317,18 @@ def update_config():
_config['transmission'].pop('save_path')
if _config.get('transmission', {}).get('save_containerpath'):
_config['transmission'].pop('save_containerpath')
if _config.get('client115', {}).get('save_path'):
_config['client115'].pop('save_path')
if _config.get('client115', {}).get('save_containerpath'):
_config['client115'].pop('save_containerpath')
if _config.get('aria2', {}).get('save_path'):
_config['aria2'].pop('save_path')
if _config.get('aria2', {}).get('save_containerpath'):
_config['aria2'].pop('save_containerpath')
if _config.get('pikpak', {}).get('save_path'):
_config['pikpak'].pop('save_path')
if _config.get('pikpak', {}).get('save_containerpath'):
_config['pikpak'].pop('save_containerpath')
overwrite_cofig = True
elif isinstance(_config.get('downloaddir'), dict):
downloaddir_list = []
@ -742,30 +752,6 @@ def update_config():
except Exception as e:
ExceptionUtils.exception_traceback(e)
# 字幕兼容旧配置
try:
subtitle = Config().get_config('subtitle') or {}
if subtitle:
if subtitle.get("server") == "opensubtitles":
PluginManager().save_plugin_config(pid="OpenSubtitles",
conf={
"enable": subtitle.get("opensubtitles", {}).get("enable")
})
else:
chinesesubfinder = subtitle.get("chinesesubfinder", {})
PluginManager().save_plugin_config(pid="ChineseSubFinder", conf={
"host": chinesesubfinder.get("host"),
"api_key": chinesesubfinder.get("api_key"),
"local_path": chinesesubfinder.get("local_path"),
"remote_path": chinesesubfinder.get("remote_path")
})
# 删除旧配置
_config.pop("subtitle")
overwrite_cofig = True
except Exception as e:
ExceptionUtils.exception_traceback(e)
# 重写配置文件
if overwrite_cofig:
Config().save_config(_config)

View File

@ -5,7 +5,7 @@ from threading import Lock
import ruamel.yaml
# 种子名/文件名要素分隔字符
SPLIT_CHARS = r"\.|\s+|\(|\)|\[|]|-|\+|【|】|/||;|&|\||#|_|「|」|~"
SPLIT_CHARS = r"\.|\s+|\(|\)|\[|]|-|\+|【|】|/||;|&|\||#|_|「|」|||~"
# 默认User-Agent
DEFAULT_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36"
# 收藏了的媒体的目录名名字可以改在Emby中点击红星则会自动将电影转移到此分类下需要在Emby Webhook中配置用户行为通知
@ -179,9 +179,6 @@ class Config(object):
def get_inner_config_path(self):
return os.path.join(self.get_root_path(), "config")
def get_script_path(self):
return os.path.join(self.get_inner_config_path(), "scripts")
def get_domain(self):
domain = (self.get_config('app') or {}).get('domain')
if domain and not domain.startswith('http'):

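The `SPLIT_CHARS` pattern changed in the hunk above drives tokenization of torrent and file names throughout the app. A quick illustration with a trimmed-down separator set (the full pattern also covers CJK brackets, `・`, `~` and similar punctuation):

```python
import re

# Trimmed-down version of the SPLIT_CHARS separator set (illustrative)
SPLIT_CHARS = r"\.|\s+|\(|\)|\[|]|-|\+|&|\||#|_"

def tokenize(torrent_name):
    """Split a torrent name into elements, dropping empty fragments."""
    return [t for t in re.split(SPLIT_CHARS, torrent_name) if t]
```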
View File

@ -52,8 +52,6 @@ app:
wallpaper: bing
# Debug mode
debug: true
# 开启后只有Releases更新才会有更新提示
releases_update_only: false
# 【配置媒体库信息】
media:
@ -167,8 +165,11 @@ scraper_pic:
season_thumb: true
# 集
episode_thumb: false
# 开启后,读取视频文件生成缩略图
episode_thumb_ffmpeg: false
# 【配置消息通知服务】
message:
# 【Emby播放状态通知白名单】配置了Emby webhooks插件回调时用户播放媒体库中的媒体时会发送消息通知本处配置哪些用户的设备不通知避免打扰配置格式用户:设备名称,可用 - 增加多项
webhook_ignore:
# 【配置文件夹监控】:文件夹内容发生变化时自动识别转移
sync:
@ -178,7 +179,7 @@ sync:
# 【配置站点检索信息】
pt:
# 【下载使用的客户端软件】qbittorrent、transmission
# 【下载使用的客户端软件】qbittorrent、transmission、client115、aria2
pt_client: qbittorrent
# 【下载软件监控开关】是否监控下载软件true、false如为true则下载完成会自动转移和重命名如为false则不会处理
# 下载软件监控与Sync下载目录同步不要同时开启否则功能存在重复
@ -187,7 +188,9 @@ pt:
pt_monitor_only: true
# 【下载完成后转移到媒体库的转移模式】link、copy、softlink、move、rclone、rclonecopy、minio、miniocopy详情参考顶部说明
rmt_mode: link
#【聚合检索使用的检索器】builtin
#【聚合检索使用的检索器】jackett、prowlarr、builtin需要配置jackett或prowlarr对应的配置区域builtin为内置索引器需要在配置文件目录/sites目录下存入对应的站点配置文件
# 1、通过微信发送关键字实时检索下载发送格式示例电视剧 西部世界、西部世界第1季、西部世界第1季第2集、西部世界 2022只会匹配真实名称命中后会自动下载使用说明参考https://github.com/jxxghp/nas-tools/wiki/
# 2、使用WEB UI中的搜索界面搜索资源会识别显示真实名称并显示媒体图片和评分等信息会同时匹配种子名称跟真实名称
search_indexer: builtin
# 【内建索引器使用的站点】:只有在该站点列表中内建索引器搜索时才会使用
indexer_sites:
@ -209,6 +212,22 @@ pt:
# 【搜索结果数量限制】:每个站点返回搜索结果的最大数量
site_search_result_num: 100
# 【配置Jackett检索器】
jackett:
# 【Jackett地址】Jackett地址和端口格式http(s)://IP:PORT
host:
# 【Jackett ApiKey】Jackett配置页面右上角复制API Key
api_key:
# 【Jackett管理密码】如未设置可为空
password:
# 【配置prowlarr检索器】
prowlarr:
# 【Prowlarr地址】
host:
# 【Prowlarr ApiKey】Prowlarr设置页面获取API Key
api_key:
# 【配置qBittorrent下载软件】pt区的pt_client如配置为qbittorrent则需要同步配置该项
qbittorrent:
# 【qBittorrent IP地址和端口】注意如果qb启动了HTTPS证书则需要配置为https://IP
@ -231,6 +250,29 @@ transmission:
trusername:
trpassword:
# 配置 115 网盘下载器
client115:
# 115 Cookie 抓包获取
cookie:
# 配置Aria2下载器
aria2:
# Aria2地址
host:
# Aria2 RPC端口
port:
# 密码令牌
secret:
# 配置 pikpak 网盘下载器
pikpak:
# 用户名
username:
# 密码
password:
# 代理
proxy:
# 【下载目录】:配置下载目录,自按分类下载到指定目录
downloaddir:
@ -248,11 +290,30 @@ douban:
interval:
# 【同步数据类型】同步哪些类型的收藏数据do 在看wish 想看collect 看过,用逗号分隔配置
types: "wish"
# 【自动下载开关】:同步到豆瓣的数据后是否自动检索站点并下载
# 【自动下载开关】同步到豆瓣的数据后是否自动检索站点并下载需要配置Jackett
auto_search: true
# 【自动添加RSS开关】站点检索找不到的记录是否自动添加RSS订阅可实现未搜索到的自动追更
auto_rss: true
# 【配置字幕自动下载】
subtitle:
# 【下载渠道】opensubtitles、chinesesubfinder
server: opensubtitles
# opensubtitles.org
opensubtitles:
# 是否启用
enable: true
# 配置ChineseSubFinder的服务器地址和API KeyAPI Key在ChineseSubFinder->配置中心->实验室->API Key处生成
chinesesubfinder:
# IP地址和端口
host:
# API KEY
api_key:
# NASTOOL媒体的映射路径
local_path:
# ChineseSubFinder媒体的映射路径
remote_path:
# 【配置安全】
security:
# 【媒体服务器webhook允许ip范围】即只有如下范围的IP才允许调用webhook

View File

@ -1,6 +1,6 @@
FROM alpine
RUN apk add --no-cache libffi-dev \
&& apk add --no-cache $(echo $(wget --no-check-certificate -qO- https://raw.githubusercontent.com/NAStool/nas-tools/master/package_list.txt)) \
&& apk add --no-cache $(echo $(wget --no-check-certificate -qO- https://raw.githubusercontent.com/jxxghp/nas-tools/master/package_list.txt)) \
&& ln -sf /usr/share/zoneinfo/${TZ} /etc/localtime \
&& echo "${TZ}" > /etc/timezone \
&& ln -sf /usr/bin/python3 /usr/bin/python \
@ -10,7 +10,7 @@ RUN apk add --no-cache libffi-dev \
&& chmod +x /usr/bin/mc \
&& pip install --upgrade pip setuptools wheel \
&& pip install cython \
&& pip install -r https://raw.githubusercontent.com/NAStool/nas-tools/master/requirements.txt \
&& pip install -r https://raw.githubusercontent.com/jxxghp/nas-tools/master/requirements.txt \
&& apk del libffi-dev \
&& npm install pm2 -g \
&& rm -rf /tmp/* /root/.cache /var/cache/apk/*
@ -21,7 +21,7 @@ ENV LANG="C.UTF-8" \
NASTOOL_CN_UPDATE=true \
NASTOOL_VERSION=master \
PS1="\u@\h:\w \$ " \
REPO_URL="https://github.com/NAStool/nas-tools.git" \
REPO_URL="https://github.com/jxxghp/nas-tools.git" \
PYPI_MIRROR="https://pypi.tuna.tsinghua.edu.cn/simple" \
ALPINE_MIRROR="mirrors.ustc.edu.cn" \
PUID=0 \

View File

@ -1,6 +1,6 @@
FROM alpine
RUN apk add --no-cache libffi-dev \
&& apk add --no-cache $(echo $(wget --no-check-certificate -qO- https://raw.githubusercontent.com/NAStool/nas-tools/dev/package_list.txt)) \
&& apk add --no-cache $(echo $(wget --no-check-certificate -qO- https://raw.githubusercontent.com/jxxghp/nas-tools/dev/package_list.txt)) \
&& ln -sf /usr/share/zoneinfo/${TZ} /etc/localtime \
&& echo "${TZ}" > /etc/timezone \
&& ln -sf /usr/bin/python3 /usr/bin/python \
@ -10,7 +10,7 @@ RUN apk add --no-cache libffi-dev \
&& chmod +x /usr/bin/mc \
&& pip install --upgrade pip setuptools wheel \
&& pip install cython \
&& pip install -r https://raw.githubusercontent.com/NAStool/nas-tools/dev/requirements.txt \
&& pip install -r https://raw.githubusercontent.com/jxxghp/nas-tools/dev/requirements.txt \
&& apk del libffi-dev \
&& npm install pm2 -g \
&& rm -rf /tmp/* /root/.cache /var/cache/apk/*
@ -21,7 +21,7 @@ ENV LANG="C.UTF-8" \
NASTOOL_CN_UPDATE=true \
NASTOOL_VERSION=dev \
PS1="\u@\h:\w \$ " \
REPO_URL="https://github.com/NAStool/nas-tools.git" \
REPO_URL="https://github.com/jxxghp/nas-tools.git" \
PYPI_MIRROR="https://pypi.tuna.tsinghua.edu.cn/simple" \
ALPINE_MIRROR="mirrors.ustc.edu.cn" \
PUID=0 \

View File

@ -16,7 +16,7 @@ RUN apk add --no-cache libffi-dev \
&& ln -sf /usr/bin/python3 /usr/bin/python \
&& pip install --upgrade pip setuptools wheel \
&& pip install cython \
&& pip install -r https://raw.githubusercontent.com/NAStool/nas-tools/master/requirements.txt \
&& pip install -r https://raw.githubusercontent.com/jxxghp/nas-tools/master/requirements.txt \
&& npm install pm2 -g \
&& apk del --purge libffi-dev gcc musl-dev libxml2-dev libxslt-dev \
&& pip uninstall -y cython \
@ -28,7 +28,7 @@ ENV LANG="C.UTF-8" \
NASTOOL_CN_UPDATE=true \
NASTOOL_VERSION=lite \
PS1="\u@\h:\w \$ " \
REPO_URL="https://github.com/NAStool/nas-tools.git" \
REPO_URL="https://github.com/jxxghp/nas-tools.git" \
PYPI_MIRROR="https://pypi.tuna.tsinghua.edu.cn/simple" \
ALPINE_MIRROR="mirrors.ustc.edu.cn" \
PUID=0 \

View File

@ -12,7 +12,7 @@ services:
- PGID=0 # 想切换为哪个用户来运行程序该用户的gid
- UMASK=000 # 掩码权限默认000可以考虑设置为022
- NASTOOL_AUTO_UPDATE=false # 如需在启动容器时自动升级程序请设置为true
#- REPO_URL=https://ghproxy.com/https://github.com/NAStool/nas-tools.git # 当你访问github网络很差时可以考虑解除本行注释
#- REPO_URL=https://ghproxy.com/https://github.com/jxxghp/nas-tools.git # 当你访问github网络很差时可以考虑解除本行注释
restart: always
network_mode: bridge
hostname: nas-tools

View File

@ -18,11 +18,11 @@
**注意**
- 媒体目录的设置必须符合 [配置说明](https://github.com/NAStool/nas-tools#%E9%85%8D%E7%BD%AE) 的要求。
- 媒体目录的设置必须符合 [配置说明](https://github.com/jxxghp/nas-tools#%E9%85%8D%E7%BD%AE) 的要求。
- umask含义详见http://www.01happy.com/linux-umask-analyze 。
- 创建后请根据 [配置说明](https://github.com/NAStool/nas-tools#%E9%85%8D%E7%BD%AE) 及该文件本身的注释,修改`config/config.yaml`,修改好后再重启容器,最后访问`http://<ip>:<web_port>`
- 创建后请根据 [配置说明](https://github.com/jxxghp/nas-tools#%E9%85%8D%E7%BD%AE) 及该文件本身的注释,修改`config/config.yaml`,修改好后再重启容器,最后访问`http://<ip>:<web_port>`
**docker cli**
@ -41,7 +41,7 @@ docker run -d \
jxxghp/nas-tools
```
如果你访问github的网络不太好可以考虑在创建容器时增加设置一个环境变量`-e REPO_URL="https://ghproxy.com/https://github.com/NAStool/nas-tools.git" \`。
如果你访问github的网络不太好可以考虑在创建容器时增加设置一个环境变量`-e REPO_URL="https://ghproxy.com/https://github.com/jxxghp/nas-tools.git" \`。
**docker-compose**
@ -63,7 +63,7 @@ services:
- UMASK=000 # 掩码权限默认000可以考虑设置为022
- NASTOOL_AUTO_UPDATE=false # 如需在启动容器时自动升级程序请设置为true
- NASTOOL_CN_UPDATE=false # 如果开启了容器启动自动升级程序并且网络不太友好时可以设置为true会使用国内源进行软件更新
#- REPO_URL=https://ghproxy.com/https://github.com/NAStool/nas-tools.git # 当你访问github网络很差时可以考虑解除本行注释
#- REPO_URL=https://ghproxy.com/https://github.com/jxxghp/nas-tools.git # 当你访问github网络很差时可以考虑解除本行注释
restart: always
network_mode: bridge
hostname: nas-tools

View File

@ -49,6 +49,7 @@ parsel==1.6.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
pikpakapi==0.1.1
proces==0.1.2
prompt-toolkit==3.0.31
ptyprocess==0.7.0

26
run.py
View File

@ -37,13 +37,13 @@ from web.main import App
from app.utils import SystemUtils, ConfigLoadCache
from app.utils.commons import INSTANCES
from app.db import init_db, update_db, init_data
from app.helper import IndexerHelper, DisplayHelper, init_chrome
from app.helper import IndexerHelper, DisplayHelper, ChromeHelper
from app.brushtask import BrushTask
from app.rsschecker import RssChecker
from app.scheduler import run_scheduler, restart_scheduler
from app.sync import run_monitor, restart_monitor
from app.torrentremover import TorrentRemover
from app.plugins import PluginManager
from app.speedlimiter import SpeedLimiter
from check_config import update_config, check_config
from version import APP_VERSION
@ -60,7 +60,7 @@ def sigal_handler(num, stack):
sys.exit()
def get_run_config(forcev4=False):
def get_run_config():
"""
获取运行配置
"""
@ -72,9 +72,7 @@ def get_run_config(forcev4=False):
app_conf = Config().get_config('app')
if app_conf:
if forcev4:
_web_host = "0.0.0.0"
elif app_conf.get("web_host"):
if app_conf.get("web_host"):
_web_host = app_conf.get("web_host").replace('[', '').replace(']', '')
_web_port = int(app_conf.get('web_port')) if str(app_conf.get('web_port', '')).isdigit() else 3000
_ssl_cert = app_conf.get('ssl_cert')
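The `web_host` handling above strips IPv6 brackets so the configured value can be passed straight to gunicorn's bind address. A sketch of that normalization (simplified; the function name and defaults are illustrative):

```python
def normalize_bind(web_host=None, web_port=None, default_host="::", default_port=3000):
    """Normalize UI-configured host/port into a gunicorn-style bind pair."""
    host = default_host
    if web_host:
        # '[::]' or '[2001:db8::1]' -> bare IPv6 address
        host = web_host.replace("[", "").replace("]", "")
    port = int(web_port) if str(web_port or "").isdigit() else default_port
    return host, port
```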
@ -110,8 +108,6 @@ def init_system():
def start_service():
log.console("开始启动服务...")
# 加载索引器配置
IndexerHelper()
# 启动虚拟显示
DisplayHelper()
# 启动定时服务
@ -124,8 +120,13 @@ def start_service():
RssChecker()
# 启动自动删种服务
TorrentRemover()
# 加载插件
PluginManager()
# 启动播放限速服务
SpeedLimiter()
# 加载索引器配置
IndexerHelper()
# 初始化浏览器
if not is_windows_exe:
ChromeHelper().init_driver()
def monitor_config():
@ -193,9 +194,6 @@ if __name__ == '__main__':
if len(os.popen("tasklist| findstr %s" % os.path.basename(sys.executable), 'r').read().splitlines()) <= 2:
p1 = threading.Thread(target=traystart, daemon=True)
p1.start()
else:
# 初始化浏览器驱动
init_chrome()
# gunicorn 启动
App.run(**get_run_config(is_windows_exe))
App.run(**get_run_config())

View File

@ -1 +1 @@
APP_VERSION = 'v2.9.2'
APP_VERSION = 'v2.9.1'

View File

@ -28,18 +28,19 @@ from app.media import Category, Media, Bangumi, DouBan
from app.media.meta import MetaInfo, MetaBase
from app.mediaserver import MediaServer
from app.message import Message, MessageCenter
from app.plugins import PluginManager, EventManager
from app.rss import Rss
from app.rsschecker import RssChecker
from app.scheduler import stop_scheduler
from app.sites import Sites, SiteUserInfo, SiteSignin, SiteCookie
from app.sites import Sites
from app.sites.sitecookie import SiteCookie
from app.subscribe import Subscribe
from app.subtitle import Subtitle
from app.sync import Sync, stop_monitor
from app.torrentremover import TorrentRemover
from app.speedlimiter import SpeedLimiter
from app.utils import StringUtils, EpisodeFormat, RequestUtils, PathUtils, \
SystemUtils, ExceptionUtils, Torrent
from app.utils.types import RmtMode, OsType, SearchType, DownloaderType, SyncType, MediaType, MovieTypes, TvTypes, \
EventType
from app.utils.types import RmtMode, OsType, SearchType, DownloaderType, SyncType, MediaType, MovieTypes, TvTypes
from config import RMT_MEDIAEXT, TMDB_IMAGE_W500_URL, RMT_SUBEXT, Config
from web.backend.search_torrents import search_medias_for_web, search_media_by_message
from web.backend.web_utils import WebUtils
@ -48,6 +49,7 @@ from web.backend.web_utils import WebUtils
class WebAction:
dbhelper = None
_actions = {}
TvTypes = ['TV', '电视剧']
def __init__(self):
self.dbhelper = DbHelper()
@ -163,7 +165,6 @@ class WebAction:
"get_rss_history": self.get_rss_history,
"get_transfer_history": self.get_transfer_history,
"get_unknown_list": self.get_unknown_list,
"get_unknown_list_by_page": self.get_unknown_list_by_page,
"get_customwords": self.get_customwords,
"get_directorysync": self.get_directorysync,
"get_users": self.get_users,
@ -187,7 +188,6 @@ class WebAction:
"get_download_dirs": self.__get_download_dirs,
"find_hardlinks": self.__find_hardlinks,
"update_sites_cookie_ua": self.__update_sites_cookie_ua,
"update_site_cookie_ua": self.__update_site_cookie_ua,
"set_site_captcha_code": self.__set_site_captcha_code,
"update_torrent_remove_task": self.__update_torrent_remove_task,
"get_torrent_remove_task": self.__get_torrent_remove_task,
@ -207,8 +207,7 @@ class WebAction:
"media_person": self.__media_person,
"person_medias": self.__person_medias,
"save_user_script": self.__save_user_script,
"run_directory_sync": self.__run_directory_sync,
"update_plugin_config": self.__update_plugin_config
"run_directory_sync": self.__run_directory_sync
}
def action(self, cmd, data=None):
@ -251,16 +250,10 @@ class WebAction:
stop_scheduler()
# 停止监控
stop_monitor()
# 关闭虚拟显示
DisplayHelper().stop_service()
# 关闭刷流
BrushTask().stop_service()
# 关闭自定义订阅
RssChecker().stop_service()
# 关闭插件
PluginManager().stop_service()
# 签退
logout_user()
# 关闭虚拟显示
DisplayHelper().quit()
# 重启进程
if os.name == "nt":
os.kill(os.getpid(), getattr(signal, "SIGKILL", signal.SIGTERM))
@ -280,7 +273,7 @@ class WebAction:
commands = {
"/ptr": {"func": TorrentRemover().auto_remove_torrents, "desp": "删种"},
"/ptt": {"func": Downloader().transfer, "desp": "下载文件转移"},
"/pts": {"func": SiteSignin().signin, "desp": "站点签到"},
"/pts": {"func": Sites().signin, "desp": "站点签到"},
"/rst": {"func": Sync().transfer_all_sync, "desp": "目录同步"},
"/rss": {"func": Rss().rssdownload, "desp": "RSS订阅"},
"/db": {"func": DoubanSync().sync, "desp": "豆瓣同步"},
@ -290,16 +283,6 @@ class WebAction:
"/utf": {"func": WebAction().unidentification, "desp": "重新识别"},
"/udt": {"func": WebAction().update_system, "desp": "系统更新"}
}
# 触发事件
EventManager().send_event(EventType.MessageIncoming, {
"channel": in_from.value,
"user_id": user_id,
"user_name": user_name,
"message": msg
})
command = commands.get(msg)
message = Message()
@ -334,7 +317,7 @@ class WebAction:
"https": "http://%s" % cfg_value, "http": "http://%s" % cfg_value}
else:
cfg['app']['proxies'] = {"https": "%s" %
cfg_value, "http": "%s" % cfg_value}
cfg_value, "http": "%s" % cfg_value}
else:
cfg['app']['proxies'] = {"https": None, "http": None}
return cfg
@ -343,6 +326,11 @@ class WebAction:
vals = cfg_value.split(",")
cfg['douban']['users'] = vals
return cfg
# 索引器
if cfg_key == "jackett.indexers":
vals = cfg_value.split("\n")
cfg['jackett']['indexers'] = vals
return cfg
# 最大支持三层赋值
keys = cfg_key.split(".")
if keys:
@ -431,7 +419,7 @@ class WebAction:
commands = {
"autoremovetorrents": TorrentRemover().auto_remove_torrents,
"pttransfer": Downloader().transfer,
"ptsignin": SiteSignin().signin,
"ptsignin": Sites().signin,
"sync": Sync().transfer_all_sync,
"rssdownload": Rss().rssdownload,
"douban": DoubanSync().sync,
@ -642,7 +630,42 @@ class WebAction:
progress = round(torrent.get('progress') * 100)
# 主键
key = torrent.get('hash')
elif Client == DownloaderType.TR:
elif Client == DownloaderType.Client115:
state = "Downloading"
dlspeed = StringUtils.str_filesize(torrent.get('peers'))
upspeed = StringUtils.str_filesize(torrent.get('rateDownload'))
speed = "%s%sB/s %s%sB/s" % (chr(8595),
dlspeed, chr(8593), upspeed)
# 进度
progress = round(torrent.get('percentDone'), 1)
# 主键
key = torrent.get('info_hash')
elif Client == DownloaderType.Aria2:
if torrent.get('status') != 'active':
state = "Stoped"
speed = "已暂停"
else:
state = "Downloading"
dlspeed = StringUtils.str_filesize(
torrent.get('downloadSpeed'))
upspeed = StringUtils.str_filesize(
torrent.get('uploadSpeed'))
speed = "%s%sB/s %s%sB/s" % (chr(8595),
dlspeed, chr(8593), upspeed)
# 进度
progress = round(int(torrent.get('completedLength')) /
int(torrent.get("totalLength")), 1) * 100
# 主键
key = torrent.get('gid')
elif Client == DownloaderType.PikPak:
key = torrent.get('id')
if torrent.get('finish'):
speed = "PikPak: 下载完成"
else:
speed = "PikPak: 下载中"
state = ""
progress = ""
else:
if torrent.status in ['stopped']:
state = "Stoped"
speed = "已暂停"
@ -656,14 +679,9 @@ class WebAction:
progress = round(torrent.progress, 1)
# 主键
key = torrent.id
else:
continue
torrent_info = {
'id': key,
'speed': speed,
'state': state,
'progress': progress
}
torrent_info = {'id': key, 'speed': speed,
'state': state, 'progress': progress}
if torrent_info not in DispTorrents:
DispTorrents.append(torrent_info)
return {"retcode": 0, "torrents": DispTorrents}
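The hunk above builds each torrent's speed string by wrapping humanized byte rates in `chr(8595)`/`chr(8593)` (the ↓/↑ arrows). A minimal standalone sketch of that formatting, with a simplified stand-in for `StringUtils.str_filesize` (the real helper in nas-tools is more elaborate; names here are illustrative):

```python
def str_filesize(size):
    # Simplified humanizer: walk up binary (1024-based) units.
    for unit in ("", "K", "M", "G", "T"):
        if size < 1024:
            return f"{size:.1f}{unit}"
        size /= 1024
    return f"{size:.1f}P"


def format_speed(dlspeed, upspeed):
    # Same pattern as the hunk above: down/up arrows around the rates.
    return "%s%sB/s %s%sB/s" % (chr(8595), str_filesize(dlspeed),
                                chr(8593), str_filesize(upspeed))


print(format_speed(1536000, 204800))
```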
@ -859,8 +877,6 @@ class WebAction:
# 根据flag删除文件
source_path = paths[0].SOURCE_PATH
source_filename = paths[0].SOURCE_FILENAME
# 删除该识别记录对应的转移记录
self.dbhelper.delete_transfer_blacklist("%s/%s" % (source_path, source_filename))
dest = paths[0].DEST
dest_path = paths[0].DEST_PATH
dest_filename = paths[0].DEST_FILENAME
@ -2161,7 +2177,7 @@ class WebAction:
else:
res = RequestUtils(timeout=5).get_res(target)
seconds = int((datetime.datetime.now() -
start_time).microseconds / 1000)
start_time).microseconds / 1000)
if not res:
return {"res": False, "time": "%s 毫秒" % seconds}
elif res.ok:
@ -2182,7 +2198,7 @@ class WebAction:
resp = {"code": 0}
resp.update(
{"dataset": SiteUserInfo().get_pt_site_activity_history(data["name"])})
{"dataset": Sites().get_pt_site_activity_history(data["name"])})
return resp
@staticmethod
@ -2196,12 +2212,13 @@ class WebAction:
return {"code": 1, "msg": "查询参数错误"}
resp = {"code": 0}
_, _, site, upload, download = SiteUserInfo().get_pt_site_statistics_history(data["days"] + 1)
_, _, site, upload, download = Sites(
).get_pt_site_statistics_history(data["days"] + 1)
# 调整为dataset组织数据
dataset = [["site", "upload", "download"]]
dataset.extend([[site, upload, download]
for site, upload, download in zip(site, upload, download)])
for site, upload, download in zip(site, upload, download)])
resp.update({"dataset": dataset})
return resp
@ -2217,7 +2234,7 @@ class WebAction:
resp = {"code": 0}
seeding_info = SiteUserInfo().get_pt_site_seeding_info(
seeding_info = Sites().get_pt_site_seeding_info(
data["name"]).get("seeding_info", [])
# 调整为dataset组织数据
dataset = [["seeders", "size"]]
@ -2416,13 +2433,14 @@ class WebAction:
page=CurrentPage)
# 补充存在与订阅状态
filetransfer = FileTransfer()
for res in res_list:
fav, rssid = self.get_media_exists_flag(mtype=Type,
title=res.get(
"title"),
year=res.get(
"year"),
mediaid=res.get("id"))
fav, rssid = filetransfer.get_media_exists_flag(mtype=Type,
title=res.get(
"title"),
year=res.get(
"year"),
mediaid=res.get("id"))
res.update({
'fav': fav,
'rssid': rssid
@ -3763,49 +3781,6 @@ class WebAction:
return {"code": 0, "items": Items}
def get_unknown_list_by_page(self, data):
"""
查询所有未识别记录
"""
PageNum = data.get("pagenum")
if not PageNum:
PageNum = 30
SearchStr = data.get("keyword")
CurrentPage = data.get("page")
if not CurrentPage:
CurrentPage = 1
else:
CurrentPage = int(CurrentPage)
totalCount, Records = self.dbhelper.get_transfer_unknown_paths_by_page(
SearchStr, CurrentPage, PageNum)
Items = []
for rec in Records:
if not rec.PATH:
continue
path = rec.PATH.replace("\\", "/") if rec.PATH else ""
path_to = rec.DEST.replace("\\", "/") if rec.DEST else ""
sync_mode = rec.MODE or ""
rmt_mode = ModuleConf.get_dictenum_key(ModuleConf.RMT_MODES,
sync_mode) if sync_mode else ""
Items.append({
"id": rec.ID,
"path": path,
"to": path_to,
"name": path,
"sync_mode": sync_mode,
"rmt_mode": rmt_mode,
})
TotalPage = floor(totalCount / PageNum) + 1
return {
"code": 0,
"total": totalCount,
"items": Items,
"totalPage": TotalPage,
"pageNum": PageNum,
"currentPage": CurrentPage
}
def unidentification(self):
"""
重新识别所有未识别记录
@ -3914,7 +3889,8 @@ class WebAction:
查询所有过滤规则
"""
RuleGroups = Filter().get_rule_infos()
sql_file = os.path.join(Config().get_script_path(), "init_filter.sql")
sql_file = os.path.join(Config().get_root_path(),
"config", "init_filter.sql")
with open(sql_file, "r", encoding="utf-8") as f:
sql_list = f.read().split(';\n')
Init_RuleGroups = []
@ -4078,15 +4054,21 @@ class WebAction:
if not media.imdb_id:
media.set_tmdb_info(Media().get_tmdb_info(mtype=media.type,
tmdbid=media.tmdb_id))
event_item = media.to_dict()
event_item.update({
"file": os.path.splitext(path)[0],
"file_ext": os.path.splitext(name)[-1],
"bluray": False
})
# 触发字幕下载事件
EventManager().send_event(EventType.SubtitleDownload, event_item)
return {"code": 0, "msg": "字幕下载任务已提交,正在后台运行。"}
subtitle_item = [{"type": media.type,
"file": os.path.splitext(path)[0],
"file_ext": os.path.splitext(name)[-1],
"name": media.en_name if media.en_name else media.cn_name,
"title": media.title,
"year": media.year,
"season": media.begin_season,
"episode": media.begin_episode,
"bluray": False,
"imdbid": media.imdb_id}]
success, retmsg = Subtitle().download_subtitle(items=subtitle_item)
if success:
return {"code": 0, "msg": retmsg}
else:
return {"code": -1, "msg": retmsg}
@staticmethod
def __get_download_setting(data):
@ -4276,17 +4258,6 @@ class WebAction:
Sites().init_config()
return {"code": retcode, "messages": messages}
def __update_site_cookie_ua(self, data):
"""
更新单个站点的Cookie和UA
"""
siteid = data.get("site_id")
cookie = data.get("site_cookie")
ua = data.get("site_ua")
self.dbhelper.update_site_cookie_ua(tid=siteid, cookie=cookie, ua=ua)
Sites().init_config()
return {"code": 0, "messages": "请求发送成功"}
@staticmethod
def __set_site_captcha_code(data):
"""
@ -4396,6 +4367,8 @@ class WebAction:
return {"code": 1}
try:
SystemConfig().set_system_config(key=key, value=value)
if key == "SpeedLimit":
SpeedLimiter().init_config()
return {"code": 0}
except Exception as e:
ExceptionUtils.exception_traceback(e)
@ -4411,7 +4384,7 @@ class WebAction:
sort_by = data.get("sort_by")
sort_on = data.get("sort_on")
site_hash = data.get("site_hash")
statistics = SiteUserInfo().get_site_user_statistics(sites=sites, encoding=encoding)
statistics = Sites().get_site_user_statistics(sites=sites, encoding=encoding)
if sort_by and sort_on in ["asc", "desc"]:
if sort_on == "asc":
statistics.sort(key=lambda x: x[sort_by])
@ -4469,7 +4442,7 @@ class WebAction:
cookie_str = ""
for content in content_list:
cookie_str += content.get("name") + \
"=" + content.get("value") + ";"
"=" + content.get("value") + ";"
if not cookie_str:
continue
site_info = Sites().get_sites(siteurl=domain)
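The cookie-sync loop above flattens browser-exported cookies into a single `name=value;` string before it is written back to each site. The join on its own, as a sketch (the dict shape mirrors the `content_list` entries in the hunk):

```python
def build_cookie_str(content_list):
    # Each entry carries "name" and "value", as in the sync payload above.
    return "".join(f'{c["name"]}={c["value"]};' for c in content_list)


print(build_cookie_str([{"name": "uid", "value": "1001"},
                        {"name": "pass", "value": "abc"}]))
```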
@ -4484,7 +4457,8 @@ class WebAction:
return {"code": 0, "msg": f"成功更新 {success_count} 个站点的Cookie数据"}
return {"code": 0, "msg": "同步完成但未更新任何站点的Cookie"}
def media_detail(self, data):
@staticmethod
def media_detail(data):
"""
获取媒体详情
"""
@ -4503,10 +4477,10 @@ class WebAction:
"msg": "无法查询到TMDB信息"
}
# 查询存在及订阅状态
fav, rssid = self.get_media_exists_flag(mtype=mtype,
title=media_info.title,
year=media_info.year,
mediaid=media_info.tmdb_id)
fav, rssid = FileTransfer().get_media_exists_flag(mtype=mtype,
title=media_info.title,
year=media_info.year,
mediaid=media_info.tmdb_id)
MediaHander = Media()
return {
"code": 0,
@ -4614,52 +4588,3 @@ class WebAction:
"""
Sync().transfer_all_sync(sid=data.get("sid"))
return {"code": 0, "msg": "执行成功"}
@staticmethod
def __update_plugin_config(data):
"""
保存插件配置
"""
plugin_id = data.get("plugin")
config = data.get("config")
if not plugin_id:
return {"code": 1, "msg": "数据错误"}
PluginManager().save_plugin_config(pid=plugin_id, conf=config)
PluginManager().reload_plugin(plugin_id)
return {"code": 0, "msg": "保存成功"}
def get_media_exists_flag(self, mtype, title, year, mediaid):
"""
获取媒体存在标记是否存在是否订阅
:param: mtype 媒体类型
:param: title 媒体标题
:param: year 媒体年份
:param: mediaid TMDBID/DB:豆瓣ID/BG:Bangumi的ID
:return: 1-已订阅/2-已下载/0-不存在未订阅, RSSID
"""
if str(mediaid).isdigit():
tmdbid = mediaid
else:
tmdbid = None
if mtype in MovieTypes:
rssid = self.dbhelper.get_rss_movie_id(title=title, year=year, tmdbid=tmdbid)
else:
if not tmdbid:
meta_info = MetaInfo(title=title)
title = meta_info.get_name()
season = meta_info.get_season_string()
if season:
year = None
else:
season = None
rssid = self.dbhelper.get_rss_tv_id(title=title, year=year, season=season, tmdbid=tmdbid)
if rssid:
# 已订阅
fav = "1"
elif MediaServer().check_item_exists(title=title, year=year, tmdbid=tmdbid):
# 已下载
fav = "2"
else:
# 未订阅、未下载
fav = "0"
return fav, rssid
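`get_media_exists_flag` above reduces two lookups (a subscription RSS id and media-server presence) to a three-valued string flag, with subscription taking precedence. The decision itself is a small pure function; a sketch with the DbHelper/MediaServer lookups stubbed out as plain arguments:

```python
def exists_flag(rssid, in_library):
    # "1" = subscribed, "2" = already in the library, "0" = neither;
    # subscription wins over library presence, as in the method above.
    if rssid:
        return "1"
    if in_library:
        return "2"
    return "0"
```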

View File

@ -337,21 +337,6 @@ class SiteDelete(ClientResource):
return WebAction().api_action(cmd='del_site', data=self.parser.parse_args())
@site.route('/cookie/update')
class SiteUpdateCookie(ApiResource):
parser = reqparse.RequestParser()
parser.add_argument('site_id', type=int, help='更新站点ID', location='form')
parser.add_argument('site_cookie', type=str, help='Cookie', location='form')
parser.add_argument('site_ua', type=str, help='Ua', location='form')
@site.doc(parser=parser)
def post(self):
"""
更新站点Cookie和Ua
"""
return WebAction().api_action(cmd='update_site_cookie_ua', data=self.parser.parse_args())
@site.route('/statistics/activity')
class SiteStatisticsActivity(ClientResource):
parser = reqparse.RequestParser()
@ -612,7 +597,7 @@ class DownloadConfigUpdate(ClientResource):
parser.add_argument('download_limit', type=int, help='下载速度限制', location='form')
parser.add_argument('ratio_limit', type=int, help='分享率限制', location='form')
parser.add_argument('seeding_time_limit', type=int, help='做种时间限制', location='form')
parser.add_argument('downloader', type=str, help='下载器Qbittorrent/Transmission', location='form')
parser.add_argument('downloader', type=str, help='下载器Qbittorrent/Transmission/115网盘/Aria2', location='form')
@download.doc(parser=parser)
def post(self):

View File

@ -15,7 +15,7 @@ def get_login_wallpaper(today=datetime.datetime.strftime(datetime.datetime.now()
wallpaper = Config().get_config('app').get('wallpaper')
tmdbkey = Config().get_config('app').get('rmt_tmdbkey')
if (not wallpaper or wallpaper == "themoviedb") and tmdbkey:
img_url = __get_themoviedb_wallpaper()
img_url = __get_themoviedb_wallpaper(today)
else:
img_url = __get_bing_wallpaper(today)
if img_url:
@ -25,7 +25,7 @@ def get_login_wallpaper(today=datetime.datetime.strftime(datetime.datetime.now()
return ""
def __get_themoviedb_wallpaper():
def __get_themoviedb_wallpaper(today):
"""
获取TheMovieDb的随机背景图
"""

View File

@ -45,18 +45,14 @@ class WebUtils:
获取最新版本号
"""
try:
releases_update_only = Config().get_config("app").get("releases_update_only")
version_res = RequestUtils(proxies=Config().get_proxies()).get_res(
"https://api.github.com/repos/NAStool/nas-tools/releases/latest")
"https://api.github.com/repos/jxxghp/nas-tools/releases/latest")
commit_res = RequestUtils(proxies=Config().get_proxies()).get_res(
"https://api.github.com/repos/NAStool/nas-tools/commits/master")
"https://api.github.com/repos/jxxghp/nas-tools/commits/master")
if version_res and commit_res:
ver_json = version_res.json()
commit_json = commit_res.json()
if releases_update_only:
version = f"{ver_json['tag_name']}"
else:
version = f"{ver_json['tag_name']} {commit_json['sha'][:7]}"
version = f"{ver_json['tag_name']} {commit_json['sha'][:7]}"
url = ver_json["html_url"]
return version, url, True
except Exception as e:
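The hunk above appends the 7-character short commit SHA to the release tag unless `releases_update_only` is set. The string assembly on its own (field names mirror the GitHub API payload; the SHA value is illustrative):

```python
def format_version(tag_name, sha, releases_update_only=False):
    # Tag only, or tag plus short SHA, as in get_latest_version.
    if releases_update_only:
        return tag_name
    return f"{tag_name} {sha[:7]}"


print(format_version("v2.9.2", "e3a43d4abcdef01"))
```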
@ -159,27 +155,3 @@ class WebUtils:
tmp_info.title = "%s%s" % (tmp_info.title, meta_info.begin_episode)
medias.append(tmp_info)
return medias
@staticmethod
def get_page_range(current_page, total_page):
"""
计算分页范围
"""
if total_page <= 5:
StartPage = 1
EndPage = total_page
else:
if current_page <= 3:
StartPage = 1
EndPage = 5
elif current_page >= total_page - 2:
StartPage = total_page - 4
EndPage = total_page
else:
StartPage = current_page - 2
if total_page > current_page + 2:
EndPage = current_page + 2
else:
EndPage = total_page
return range(StartPage, EndPage + 1)
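The `get_page_range` helper above centralizes the five-page sliding window that `history()` and `tmdbcache()` otherwise compute inline: at most five page numbers, clamped to the valid range and centered on the current page when possible. The same logic as a standalone, testable function:

```python
def get_page_range(current_page, total_page):
    # Yield at most five page numbers, clamped to [1, total_page].
    if total_page <= 5:
        start, end = 1, total_page
    elif current_page <= 3:
        start, end = 1, 5
    elif current_page >= total_page - 2:
        start, end = total_page - 4, total_page
    else:
        start = current_page - 2
        end = min(current_page + 2, total_page)
    return range(start, end + 1)


print(list(get_page_range(7, 20)))  # window centered on page 7
```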

View File

@ -26,11 +26,11 @@ from app.filter import Filter
from app.helper import SecurityHelper, MetaHelper, ChromeHelper, ThreadHelper
from app.indexer import Indexer
from app.media.meta import MetaInfo
from app.mediaserver import MediaServer
from app.mediaserver import WebhookEvent
from app.message import Message
from app.plugins import EventManager, PluginManager
from app.rsschecker import RssChecker
from app.sites import Sites, SiteUserInfo
from app.sites import Sites
from app.speedlimiter import SpeedLimiter
from app.subscribe import Subscribe
from app.sync import Sync
from app.torrentremover import TorrentRemover
@ -560,7 +560,7 @@ def statistics():
SiteRatios = []
SiteErrs = {}
# 站点上传下载
SiteData = SiteUserInfo().get_pt_date(specify_sites=refresh_site, force=refresh_force)
SiteData = Sites().get_pt_date(specify_sites=refresh_site, force=refresh_force)
if isinstance(SiteData, dict):
for name, data in SiteData.items():
if not data:
@ -589,7 +589,7 @@ def statistics():
SiteRatios.append(round(float(ratio), 1))
# 近期上传下载各站点汇总
CurrentUpload, CurrentDownload, _, _, _ = SiteUserInfo().get_pt_site_statistics_history(
CurrentUpload, CurrentDownload, _, _, _ = Sites().get_pt_site_statistics_history(
days=2)
# 站点用户数据
@ -873,8 +873,23 @@ def history():
keyword = request.args.get("s") or ""
current_page = request.args.get("page")
Result = WebAction().get_transfer_history({"keyword": keyword, "page": current_page, "pagenum": pagenum})
PageRange = WebUtils.get_page_range(current_page=Result.get("currentPage"),
total_page=Result.get("totalPage"))
if Result.get("totalPage") <= 5:
StartPage = 1
EndPage = Result.get("totalPage")
else:
if Result.get("currentPage") <= 3:
StartPage = 1
EndPage = 5
elif Result.get("currentPage") >= Result.get("totalPage") - 2:
StartPage = Result.get("totalPage") - 4
EndPage = Result.get("totalPage")
else:
StartPage = Result.get("currentPage") - 2
if Result.get("totalPage") > Result.get("currentPage") + 2:
EndPage = Result.get("currentPage") + 2
else:
EndPage = Result.get("totalPage")
PageRange = range(StartPage, EndPage + 1)
return render_template("rename/history.html",
TotalCount=Result.get("total"),
@ -903,9 +918,24 @@ def tmdbcache():
else:
current_page = int(current_page)
total_count, tmdb_caches = MetaHelper().dump_meta_data(search_str, current_page, page_num)
total_page = floor(total_count / page_num) + 1
page_range = WebUtils.get_page_range(current_page=current_page,
total_page=total_page)
if total_page <= 5:
start_page = 1
end_page = total_page
else:
if current_page <= 3:
start_page = 1
end_page = 5
else:
start_page = current_page - 3
if total_page > current_page + 3:
end_page = current_page + 3
else:
end_page = total_page
page_range = range(start_page, end_page + 1)
return render_template("rename/tmdbcache.html",
TotalCount=total_count,
@ -922,21 +952,10 @@ def tmdbcache():
@App.route('/unidentification', methods=['POST', 'GET'])
@login_required
def unidentification():
pagenum = request.args.get("pagenum")
keyword = request.args.get("s") or ""
current_page = request.args.get("page")
Result = WebAction().get_unknown_list_by_page({"keyword": keyword, "page": current_page, "pagenum": pagenum})
PageRange = WebUtils.get_page_range(current_page=Result.get("currentPage"),
total_page=Result.get("totalPage"))
Items = WebAction().get_unknown_list().get("items")
return render_template("rename/unidentification.html",
TotalCount=Result.get("total"),
Count=len(Result.get("items")),
Items=Result.get("items"),
Search=keyword,
CurrentPage=Result.get("currentPage"),
TotalPage=Result.get("totalPage"),
PageRange=PageRange,
PageNum=Result.get("currentPage"))
TotalCount=len(Items),
Items=Items)
# 文件管理页面
@ -1075,6 +1094,16 @@ def notification():
MessageClients=MessageClients)
# 字幕设置页面
@App.route('/subtitle', methods=['POST', 'GET'])
@login_required
def subtitle():
ChromeOk = ChromeHelper().get_status()
return render_template("setting/subtitle.html",
Config=Config().get_config(),
ChromeOk=ChromeOk)
# 用户管理页面
@App.route('/users', methods=['POST', 'GET'])
@login_required
@ -1124,15 +1153,6 @@ def rss_parser():
Count=len(RssParsers))
# 插件页面
@App.route('/plugin', methods=['POST', 'GET'])
@login_required
def plugin():
Plugins = PluginManager().get_plugins_conf()
return render_template("setting/plugin.html",
Plugins=Plugins)
# 事件响应
@App.route('/do', methods=['POST'])
@action_login_check
@ -1268,10 +1288,6 @@ def wechat():
# 解析消息内容
content = ""
if msg_type == "event":
# 校验用户有权限执行交互命令
if conf.get("adminUser") and not any(user_id == admin_user for admin_user in str(conf.get("adminUser")).split(";")):
Message().send_channel_msg(channel=SearchType.WX, title="用户无权限执行菜单命令", user_id=user_id)
return make_response(content, 200)
# 事件消息
event_key = DomUtils.tag_value(root_node, "EventKey")
if event_key:
@ -1303,11 +1319,8 @@ def plex_webhook():
return '不允许的IP地址请求'
request_json = json.loads(request.form.get('payload', {}))
log.debug("收到Plex Webhook报文%s" % str(request_json))
# 发送消息
ThreadHelper().start_thread(MediaServer().webhook_message_handler,
(request_json, MediaServerType.PLEX))
# 触发事件
EventManager().send_event(EventType.PlexWebhook, request_json)
ThreadHelper().start_thread(WebhookEvent().plex_action, (request_json,))
ThreadHelper().start_thread(SpeedLimiter().plex_action, (request_json,))
return 'Ok'
@ -1319,11 +1332,8 @@ def jellyfin_webhook():
return '不允许的IP地址请求'
request_json = request.get_json()
log.debug("收到Jellyfin Webhook报文%s" % str(request_json))
# 发送消息
ThreadHelper().start_thread(MediaServer().webhook_message_handler,
(request_json, MediaServerType.JELLYFIN))
# 触发事件
EventManager().send_event(EventType.JellyfinWebhook, request_json)
ThreadHelper().start_thread(WebhookEvent().jellyfin_action, (request_json,))
ThreadHelper().start_thread(SpeedLimiter().jellyfin_action, (request_json,))
return 'Ok'
@ -1335,11 +1345,8 @@ def emby_webhook():
return '不允许的IP地址请求'
request_json = json.loads(request.form.get('data', {}))
log.debug("收到Emby Webhook报文%s" % str(request_json))
# 发送消息
ThreadHelper().start_thread(MediaServer().webhook_message_handler,
(request_json, MediaServerType.EMBY))
# 触发事件
EventManager().send_event(EventType.EmbyWebhook, request_json)
ThreadHelper().start_thread(WebhookEvent().emby_action, (request_json,))
ThreadHelper().start_thread(SpeedLimiter().emby_action, (request_json,))
return 'Ok'

View File

@ -476,6 +476,19 @@ const navbar_list = [
</svg>
`,
},
{
name: "字幕",
page: "subtitle",
icon: html`
<!-- https://tabler-icons.io/static/tabler-icons/icons-png/badge-cc.png -->
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-badge-cc" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
<path d="M3 5m0 2a2 2 0 0 1 2 -2h14a2 2 0 0 1 2 2v10a2 2 0 0 1 -2 2h-14a2 2 0 0 1 -2 -2z"></path>
<path d="M10 10.5a1.5 1.5 0 0 0 -3 0v3a1.5 1.5 0 0 0 3 0"></path>
<path d="M17 10.5a1.5 1.5 0 0 0 -3 0v3a1.5 1.5 0 0 0 3 0"></path>
</svg>
`,
},
{
name: "豆瓣",
page: "douban",
@ -491,23 +504,6 @@ const navbar_list = [
</svg>
`,
},
{
name: "插件",
page: "plugin",
icon: html`
<!-- https://tabler-icons.io/static/tabler-icons/icons-png/brand-codesandbox.png -->
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-brand-codesandbox" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
<path d="M20 7.5v9l-4 2.25l-4 2.25l-4 -2.25l-4 -2.25v-9l4 -2.25l4 -2.25l4 2.25z"></path>
<path d="M12 12l4 -2.25l4 -2.25"></path>
<path d="M12 12l0 9"></path>
<path d="M12 12l-4 -2.25l-4 -2.25"></path>
<path d="M20 12l-4 2v4.75"></path>
<path d="M4 12l4 2l0 4.75"></path>
<path d="M8 5.25l4 2.25l4 -2.25"></path>
</svg>
`,
},
],
},
];
@ -530,7 +526,7 @@ export class LayoutNavbar extends CustomElement {
this.layout_userpris = navbar_list.map((item) => (item.name));
this._active_name = "";
this._update_appversion = "";
this._update_url = "https://github.com/NAStool/nas-tools";
this._update_url = "https://github.com/jxxghp/nas-tools";
this._is_update = false;
this.classList.add("navbar","navbar-vertical","navbar-expand-lg","lit-navbar-fixed","lit-navbar","lit-navbar-hide-scrollbar");
}
@ -575,7 +571,7 @@ export class LayoutNavbar extends CustomElement {
url = ret.url;
break;
case 2:
url = "https://github.com/NAStool/nas-tools/commits/master"
url = "https://github.com/jxxghp/nas-tools/commits/master"
break;
}
if (url) {

View File

@ -1,5 +1,4 @@
export * from "./custom/index.js";
export * from "./card/index.js";
export * from "./page/index.js";
export * from "./layout/index.js";
export * from "./plugin/index.js";
export * from "./layout/index.js";

View File

@ -28,6 +28,11 @@ export class PageDiscovery extends CustomElement {
title:"TMDB流行趋势",
subtype :"tmdb",
},
{
type: "MOV",
title:"豆瓣最新电影",
subtype :"dbnm",
},
{
type: "MOV",
title:"豆瓣热门电影",
@ -40,7 +45,7 @@ export class PageDiscovery extends CustomElement {
},
{
type: "TV",
title:"豆瓣热门",
title:"豆瓣热门电视剧",
subtype :"dbht",
},
{

View File

@ -1 +0,0 @@
export * from "./modal/index.js";

View File

@ -1,144 +0,0 @@
import { html, nothing } from "../../utility/lit-core.min.js";
import { CustomElement } from "../../utility/utility.js";
export class PluginModal extends CustomElement {
static properties = {
id: {attribute: "plugin-id"},
name: {attribute: "plugin-name"},
config: {attribute: "plugin-config", type: Object},
fields: {attribute: "plugin-fields", type: Array},
prefix: {attribute: "plugin-prefix"},
};
constructor() {
super();
this.id = "";
this.name = "";
this.config = {};
this.fields = [];
this.prefix = "";
}
__render_fields() {
let content = html``;
for (let field of this.fields) {
switch(field["type"]) {
case "div":
content = html`${content}${this.__render_div(field)}`;
break;
case "details":
content = html`${content}${this.__render_details(field)}`;
break;
}
}
return content;
}
__render_div(field) {
let field_content = field["content"];
let div_content = html``;
for (let row of field_content) {
let row_content = html``;
for (let col of row) {
let col_type = col["type"];
switch(col_type) {
case "text":
row_content = html`${row_content}${this.__render_text(col)}`;
break;
case "switch":
row_content = html`${row_content}${this.__render_switch(col)}`;
break;
}
}
div_content = html`${div_content}<div class="row mb-2">${row_content}</div>`;
}
return div_content
}
__render_details(field) {
let title = field["summary"];
let tooltip = field["tooltip"];
return html`<details class="mb-2">
<summary class="summary mb-2">
${title} ${this.__render_note(tooltip)}
</summary>
${this.__render_div(field)}
</details>`
}
__render_text(field_content) {
let text_content = html``;
let title = field_content["title"];
let required = field_content["required"];
let tooltip = field_content["tooltip"];
let content = field_content["content"];
for (let index in content) {
let id = content[index]["id"];
let placeholder = content[index]["placeholder"];
if (index === "0") {
text_content = html`<div class="mb-1">
<label class="form-label ${required}">${title} ${this.__render_note(tooltip)}</label>
<input type="text" value="${this.config[id] || ""}" class="form-control" id="${this.prefix}${id}" placeholder="${placeholder}" autocomplete="off">
</div>`
} else {
text_content = html`${text_content}<div class="mb-3">
<input type="text" value="${this.config[id] || ""}" class="form-control" id="${this.prefix}${id}" placeholder="${placeholder}" autoComplete="off">
</div>`
}
}
return html`<div class="col-12 col-lg">${text_content}</div>`
}
__render_switch(field_content) {
let title = field_content["title"];
let required = field_content["required"];
let tooltip = field_content["tooltip"];
let id = field_content["id"];
let checkbox = html``;
if (this.config[id]) {
checkbox = html`<input class="form-check-input" type="checkbox" id="${this.prefix}${id}" checked>`
} else {
checkbox = html`<input class="form-check-input" type="checkbox" id="${this.prefix}${id}">`
}
return html`<div class="col-12 col-lg">
<div class="mb-1">
<label class="form-check form-switch ${required}">
${checkbox}
<span class="form-check-label">${title} ${this.__render_note(tooltip)}</span>
</label>
</div>
</div>`
}
__render_note(tooltip) {
if (tooltip) {
return html`<span class="form-help" data-bs-toggle="tooltip" title="${tooltip}">?</span>`;
}
}
render() {
return html`<div class="modal modal-blur fade" id="modal-plugin-${this.id}" tabindex="-1" role="dialog" aria-hidden="true"
data-bs-backdrop="static" data-bs-keyboard="false">
<div class="modal-dialog modal-lg modal-dialog-centered" role="document">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">${this.name}</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
${this.__render_fields()}
</div>
<div class="modal-footer">
<a href="javascript:save_plugin_config('${this.id}', '${this.prefix}')" class="btn btn-primary">
确定
</a>
</div>
</div>
</div>
</div>`
}
}
window.customElements.define("plugin-modal", PluginModal);

web/static/css/font-awesome.min.css vendored Normal file

File diff suppressed because one or more lines are too long

web/static/css/jsoneditor.min.css vendored Normal file

File diff suppressed because one or more lines are too long

Binary file not shown.

Binary file not shown.

File diff suppressed because it is too large

New image (Size: 434 KiB)

Binary file not shown.

Binary file not shown.

Some files were not shown because too many files have changed in this diff.