mirror of
https://github.com/stashapp/CommunityScripts.git
synced 2026-05-02 06:27:20 -05:00
Update plugins RenameFile to 0.5.6 and FileMonitor to 1.0.3 (#490)
@@ -1,4 +1,4 @@
# FileMonitor: Ver 0.9.0 (By David Maisonave)
# FileMonitor: Ver 1.0.3 (By David Maisonave)

FileMonitor is a [Stash](https://github.com/stashapp/stash) plugin with the following two main features:

@@ -10,8 +10,19 @@ FileMonitor is a [Stash](https://github.com/stashapp/stash) plugin with the foll

From the GUI, FileMonitor can be started as a service or as a plugin. The recommended method is to start it as a service. When started as a service, it will appear on the Task Queue momentarily, and then disappear as it starts running in the background.

- To start monitoring file changes, go to **Stash->Settings->Task->[Plugin Tasks]->FileMonitor**, and click on the [Start Library Monitor Service] button.
- (screenshot)
- **Important Note**: At first, this will show up as a plugin in the Task Queue momentarily. It will then disappear from the Task Queue and run in the background as a service.
- **Important Note**: At first, it will show up as a plugin in the Task Queue momentarily. It will then disappear from the Task Queue and run in the background as a service.
- To check the running status of FileMonitor, use the Settings->Tools->FileMonitor option.
- (screenshot)
- If FileMonitor is running, it'll display the following screen:
- (screenshot)
- There's also an icon displayed in the top right corner of the Stash page. When FileMonitor is running, this icon has a checkmark on it.
- (screenshot)
- When FileMonitor is not running, the icon has an **X**.
- (screenshot)
- However, this icon is not very practical, since the user still has to go to the Settings->Tools->FileMonitor page to force the icon to update.

- To stop FileMonitor, click the [Stop Library Monitor] button.
- The **[Monitor as a Plugin]** option is available mainly for backwards compatibility and for test purposes.

@@ -37,10 +48,10 @@ To enable the scheduler go to **Stash->Settings->Plugins->Plugins->FileMonitor**

- Auto Tag -> [Auto Tag] (Daily)
- Maintenance -> [Clean] (every 2 days)
- Maintenance -> [Clean Generated Files] (every 2 days)
- Maintenance -> [Optimise Database] (Daily)
- Maintenance -> [Optimize Database] (Daily)
- Generated Content -> [Generate] (Every Sunday at 7AM)
- Library -> [Scan] (Weekly) (Every Sunday at 3AM)
- Backup -> [Backup] 2nd sunday of the month at 1AM
- Backup -> [Backup] 2nd Sunday of the month at 1AM
- The example tasks are disabled by default because they either have a zero frequency value or the time field is set to **DISABLED**.

To configure the schedule or to add a new task, edit the **task_scheduler** section in the **filemonitor_config.py** file.

@@ -92,7 +103,7 @@ To configure the schedule or to add new task, edit the **task_scheduler** sectio

- The **validateDir** field can be used to define the plugin subdirectory, which is checked to verify it exists before running the task.
- The **taskName** field is used to name the task to call for the associated plugin. It cannot be used with "taskQue":False.
- The **taskQue** field is used to call the plugin without using the Task Queue, i.e. "taskQue":False. When this field is set to False, the taskName field can NOT be used. Instead, use taskMode to identify the task to call.
- The **taskMode** field is used in order to run the plugin without using the Task Queue. The plugin runs immediatly. Be careful not to confuse taskMode with taskName. Look in the plugin \*.yml file under the **tasks** section where it defines both the task-name and the task-mode.
- The **taskMode** field is used in order to run the plugin without using the Task Queue. The plugin runs immediately. Be careful not to confuse taskMode with taskName. Look in the plugin \*.yml file under the **tasks** section where it defines both the task-name and the task-mode.
- Tasks can be scheduled to run monthly, weekly, hourly, and by minutes.
- The scheduler list uses two types of syntax. One is **weekday** based, and the other is **frequency** based.

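The weekday/frequency distinction above can be sketched in plain Python. Note this is a hedged illustration: the keys (`"weekday"`, `"hours"`, `"time"`) are illustrative stand-ins, not the exact **filemonitor_config.py** schema; check the shipped config file for the real field names.

```python
# Hedged sketch: classify a scheduler entry as weekday-based or
# frequency-based. Field names here are illustrative only.
WEEKDAYS = {"monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"}

def schedule_kind(entry):
    # Weekday-based entries name a day (e.g. "Every Sunday at 3AM");
    # everything else is frequency-based (e.g. "every 2 days").
    if str(entry.get("weekday", "")).lower() in WEEKDAYS:
        return "weekday"
    return "frequency"

tasks = [
    {"task": "Scan", "weekday": "sunday", "time": "03:00"},  # weekly scan
    {"task": "Clean", "hours": 48},                          # every 2 days
]
kinds = [schedule_kind(t) for t in tasks]  # → ["weekday", "frequency"]
```

The same split is what a scheduler loop would use to decide between a day-anchored job and an interval-based job.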
@@ -142,9 +153,12 @@ To configure the schedule or to add new task, edit the **task_scheduler** sectio

- pip install -r requirements.txt
- Or manually install each requirement:
  - `pip install stashapp-tools --upgrade`
  - `pip install pyYAML`
  - `pip install requests`
  - `pip install watchdog`
  - `pip install schedule`
  - `pip install pyyaml`

Note: pyyaml is only needed for a Docker setup.

## Installation

@@ -170,3 +184,88 @@ Please use the following link to report FileMonitor bugs:

Please use the following link to report a FileMonitor Feature Request: [FileMonitor Feature Request](https://github.com/David-Maisonave/Axter-Stash/issues/new?assignees=&labels=Enhancement&projects=&template=feature_request_plugin.yml&title=%F0%9F%92%A1%EF%B8%8F%5BEnhancement%5D%3A%5BFileMonitor%5D+Your_Short_title)

Please do **NOT** use the feature request form to report errors. Instead, use the bug report for error issues.

## Docker

### Single Stash Docker Installation

**Note:** This section is for users who have a single instance of Stash Docker installed, and do NOT have Stash installed on the host machine.

- FileMonitor requires the watchdog module in order to work. Although the watchdog module loads and runs on Docker, it fails to function because Docker fails to report file changes.
- FileMonitor can work with a Docker Stash setup if it's executed externally on the host OS. To do this, start FileMonitor on the command line and pass the Stash URL and Docker YML file (**--url** and **--docker**).
- Example 1:

```
python filemonitor.py --url http://localhost:9999 --docker "C:\Users\MyUser\AppData\Local\Docker\wsl\Stash27.2\docker-compose.yml"
```

- Example 2 (with ApiKey):
- If Stash Docker is configured with a password, an ApiKey is needed, and has to be passed on the command line (**--apikey**).

```
python filemonitor.py --url http://localhost:9999 --docker "C:\Users\MyUser\AppData\Local\Docker\wsl\Stash27.2\docker-compose.yml" --apikey "zNDU0MDk3N30.4nZVLk3xikjJZfZ0JTPA_Fic8JveycCI6IkpXVCJ9.eyJ1aWQiOiJheHRlJhbGciOiJIUzI1NiIsInR5I6IkFQSUtleSIsImlhdCI6MTcFx3DZe5U21ZDcC3c"
```

- The **docker-compose.yml** file should be located in the folder associated with the Docker Stash container, and it lists the mapped paths, which FileMonitor uses to determine the host path mapped to each Docker path.
- For more information, see [Using FileMonitor as a script](https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/FileMonitor#Using-FileMonitor-as-a-script)
- For more information on creating a Docker Stash setup, see (https://github.com/David-Maisonave/Axter-Stash/tree/main/Docker)
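The compose-file parsing described above boils down to splitting each `volumes` entry into a host path and a container path. A minimal sketch, using the same split the script itself applies (`volume.replace(":ro", "").split(":/")`); the helper name is hypothetical:

```python
def split_volume(volume):
    """Split a docker-compose volume entry such as
    'C:\\Users\\MyUser\\Videos:/external:ro' into its host path and
    container path, mirroring FileMonitor's volSplit logic."""
    host, _, container = volume.replace(":ro", "").partition(":/")
    return host, f"/{container}"

host_path, docker_path = split_volume(r"C:\Users\MyUser\Videos:/external:ro")
# host_path   → r"C:\Users\MyUser\Videos"
# docker_path → "/external"
```

The `:ro` (read-only) suffix is stripped first so it never ends up in the container path.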

### Multiple Stash Docker Configuration

**Note:** This section applies to users who have multiple Stash Docker instances running, and also have Stash installed and running on the host machine.

- FileMonitor can be configured to run on the host machine and update all the Stash Docker instances when an associated file change occurs. To activate this option, edit the filemonitor_config.py file, setting the **dockers** field with the information associated with each Stash Docker instance.
- There are three examples commented out in the **dockers** field, which users can easily modify for their particular Stash Docker instances.
- The following is the uncommented example from the **filemonitor_config.py** file.

```Python
# Docker notification from host machine
"dockers": [
    # A simple basic example with only one bind mount path.
    {"GQL":"http://localhost:9995", "apiKey":"", "bindMounts":[{r"C:\Video":"/mnt/Video"}]},

    # Example having 8 bind mount paths.
    {"GQL":"http://localhost:9997", "apiKey":"", "bindMounts":[
        {r"C:\Users\admin3\AppData\Local\Docker\wsl\ManyMnt\data":"/data"},
        {r"C:\Users\admin3\Videos":"/external"},
        {r"C:\Users\admin3\Pictures":"/external2"},
        {r"C:\Users\admin3\Downloads":"/external3"},
        {r"E:\Downloads":"/external4"},
        {r"E:\Celeb":"/external5"},
        {r"F:\Hentai":"/external6"},
        {r"Z:\Temp":"/external7"},
    ]
    },

    # Example using the apiKey for a password configured Stash installation.
    {"GQL":"http://localhost:9994", "apiKey":"eyJhb3676zgdUzI1NiIsInR5cCI6IwfXVCJ9.ewJ1aWQiOiJheHRlweIsInN1YiI6IkFQSUtleSIsImlhdewrweczNDU0MDk3N30.4nZVLk3xikjJZfZ0JTPA_Fic8JvFx3DZe5U21Zasdag", "bindMounts":[
        {r"C:\Users\admin3\AppData\Local\Docker\wsl\MyStashContainer\data":"/data"},
        {r"C:\Vid":"/mnt/Vid"},
        {r"C:\Users\admin3\Downloads":"/mnt/Downloads"},
    ]
    },
],
```

- Each Stash Docker instance requires three fields, which are case-sensitive.
  - **GQL**: This is the Stash URL which is used by the host machine to access the particular Stash Docker instance. Note: Do **NOT** include graphql in the URL.
  - **apiKey**: This is a required field, but the value can be empty if the Stash instance doesn't require a password.
  - **bindMounts**: At least one bind mount path must be specified.
    - The first string defines the host path (**C:\Video**), and the second string defines the Docker mount path (**/mnt/Video**). These paths are listed in Docker-Desktop under the Containers->ContainerName->[Bind Mounts] tab.
    - The host path must be a fully qualified host local path. It can **not** be a relative path **(./../Videos)** and it can **not** be a URL with a local network domain name **(\\\\MyComputerName\\SharedPath\\MyFolder)**.
    - If the host path contains a backslash, start the string with an r. Example: **r"C:\Vid"**
- If any of the below mount paths are included, they will be ignored because they could trigger a feedback loop.
  - /etc/localtime:/etc/localtime:ro
  - ./config:/root/.stash
  - ./metadata:/metadata
  - ./cache:/cache
  - ./blobs:/blobs
  - ./generated:/generated
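Putting the bindMounts rules above together, the host-to-container translation can be sketched as a small helper (the function name is hypothetical, not FileMonitor's actual API):

```python
def to_docker_path(host_file, bind_mounts):
    """Translate a host file path to its container path using a
    bindMounts-style list of {host_root: container_root} dicts
    (illustrative sketch, not FileMonitor's real implementation)."""
    for mount in bind_mounts:
        for host_root, container_root in mount.items():
            if host_file.startswith(host_root):
                # Re-root the remainder under the container path,
                # converting Windows separators to POSIX ones.
                rest = host_file[len(host_root):].replace("\\", "/")
                return container_root + rest
    return None  # not under any configured bind mount

bind_mounts = [{r"C:\Video": "/mnt/Video"}]
mapped = to_docker_path(r"C:\Video\clips\a.mp4", bind_mounts)
# mapped → "/mnt/Video/clips/a.mp4"
```

A file change detected on the host can then be reported to the right Stash Docker instance under the path that instance actually sees.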
### Stash Docker Installer

If you need help installing Stash Docker, use the Stash Docker installer at the following link: (https://github.com/David-Maisonave/Axter-Stash/tree/main/Docker)

## Future Planned Features or Fixes

- Have the FileMonitor running-status icon update without the user having to go to the Settings->Tools->FileMonitor page. Planned for version 1.2.0.

@@ -3,15 +3,22 @@

# Get the latest developers version from following link: https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/FileMonitor
# Note: To call this script outside of Stash, pass argument --url and the Stash URL.
# Example: python filemonitor.py --url http://localhost:9999
try:
    import ModulesValidate
    ModulesValidate.modulesInstalled(["stashapp-tools", "watchdog", "schedule", "requests"], silent=True)
except Exception as e:
    import traceback, sys
    tb = traceback.format_exc()
    print(f"ModulesValidate Exception. Error: {e}\nTraceBack={tb}", file=sys.stderr)
import os, sys, time, pathlib, argparse, platform, traceback, logging
from StashPluginHelper import StashPluginHelper
import watchdog  # pip install watchdog # https://pythonhosted.org/watchdog/
from watchdog.observers import Observer  # This is also needed for event attributes
from StashPluginHelper import taskQueue
from threading import Lock, Condition
from multiprocessing import shared_memory
from filemonitor_config import config
from filemonitor_task_examples import task_examples
from filemonitor_self_unit_test import self_unit_test
from datetime import datetime

config['task_scheduler'] = config['task_scheduler'] + task_examples['task_scheduler']
if self_unit_test['selfUnitTest_repeat']:

@@ -25,12 +32,15 @@ STOP_RUNNING_SIG = 32

parser = argparse.ArgumentParser()
parser.add_argument('--url', '-u', dest='stash_url', type=str, help='Add Stash URL')
parser.add_argument('--trace', '-t', dest='trace', action='store_true', help='Enables debug trace mode.')
parser.add_argument('--stop', '-s', dest='stop', action='store_true', help='Stop (kill) a running FileMonitor task.')
parser.add_argument('--stop', '-s', dest='stop', action='store_true', help='Stop a running FileMonitor task.')
parser.add_argument('--kill_que', '-k', dest='kill_job_task_que', type=str, help='Kill job on Task Queue while running in service mode (command line mode).')
parser.add_argument('--restart', '-r', dest='restart', action='store_true', help='Restart FileMonitor.')
parser.add_argument('--silent', '--quit', '-q', dest='quit', action='store_true', help='Run in silent mode. No output to console or stderr. Use this when running from pythonw.exe')
parser.add_argument('--apikey', '-a', dest='apikey', type=str, help='API Key')
parser.add_argument('--docker', '-d', dest='docker', type=str, help='Docker compose YML file.')
parse_args = parser.parse_args()

logToErrSet = 0
logToNormSet = 0
if parse_args.quit:

@@ -54,8 +64,20 @@ stash = StashPluginHelper(
    maxbytes=5*1024*1024,
    apiKey=parse_args.apikey
)
stash.Status(logLevel=logging.DEBUG)

doJsonReturnModeTypes = ["getFileMonitorRunningStatus", "start_library_monitor_service_json", "stop_library_monitor_json"]
doJsonReturn = False
doJsonReturnFileMonitorStatus = False
if len(sys.argv) < 2 and stash.PLUGIN_TASK_NAME in doJsonReturnModeTypes:
    doJsonReturn = True
    doJsonReturnFileMonitorStatus = True
    stash.log_to_norm = stash.LogTo.FILE
if stash.PLUGIN_TASK_NAME.endswith("_json"):
    stash.PLUGIN_TASK_NAME = stash.PLUGIN_TASK_NAME[:-5]

stash.status(logLevel=logging.DEBUG)
stash.Log(f"\nStarting (__file__={__file__}) (stash.CALLED_AS_STASH_PLUGIN={stash.CALLED_AS_STASH_PLUGIN}) (stash.DEBUG_TRACING={stash.DEBUG_TRACING}) (stash.DRY_RUN={stash.DRY_RUN}) (stash.PLUGIN_TASK_NAME={stash.PLUGIN_TASK_NAME})************************************************")
stash.Trace(f"stash.JSON_INPUT={stash.JSON_INPUT}")

exitMsg = "Change success!!"
mutex = Lock()

@@ -63,6 +85,7 @@ signal = Condition(mutex)
shouldUpdate = False

SHAREDMEMORY_NAME = "DavidMaisonaveAxter_FileMonitor"  # Unique name for shared memory
SHAREDMEMORY_SIZE = 4
RECURSIVE = stash.pluginSettings["recursiveDisabled"] == False
SCAN_MODIFIED = stash.pluginConfig["scanModified"]
RUN_CLEAN_AFTER_DELETE = stash.pluginConfig["runCleanAfterDelete"]
@@ -84,8 +107,89 @@ if CREATE_SPECIAL_FILE_TO_EXIT and os.path.isfile(SPECIAL_FILE_NAME):

fileExtTypes = stash.pluginConfig['fileExtTypes'].split(",") if stash.pluginConfig['fileExtTypes'] != "" else []
includePathChanges = stash.pluginConfig['includePathChanges'] if len(stash.pluginConfig['includePathChanges']) > 0 else stash.STASH_PATHS
includePathChanges = includePathChanges[:]  # Make a copy of the list, and not a reference
hostIncludePathChanges = includePathChanges[:]
excludePathChanges = stash.pluginConfig['excludePathChanges']
turnOnSchedulerDeleteDup = stash.pluginSettings['turnOnSchedulerDeleteDup']
NotInLibraryTagName = stash.pluginConfig['NotInLibraryTagName']

filemonitor_config_dev_file = f"{stash.PLUGINS_PATH}{os.sep}FileMonitor{os.sep}filemonitor_config_dev.py"
if os.path.exists(filemonitor_config_dev_file):
    stash.Log(f"Getting {filemonitor_config_dev_file} configuration settings.")
    from filemonitor_config_dev import config_dev
    config['dockers'] = config_dev['dockers']

dockerMapVolumes = {}
dockerReverseMapVolumes = {}
dockerObservedPaths = {}
if parse_args.docker is not None and len(parse_args.docker) > 0:
    if stash.IS_DOCKER:
        stash.Error("You are running this script from within Docker. This is NOT supported. Run this script on the host machine instead. Performing early exit due to unsupported action.")
        sys.exit(50)  # ERROR_NOT_SUPPORTED: The request is not supported.
    stash.Log(f"Docker compose YML file = {parse_args.docker}")
    ModulesValidate.modulesInstalled(["pyyaml"], silent=True)
    import yaml
    dockerStashPath = pathlib.Path(parse_args.docker).resolve().parent
    with open(parse_args.docker, "r", encoding='utf-8-sig') as stream:
        try:
            data_loaded = yaml.safe_load(stream)
            for service in data_loaded['services']:
                for volume in data_loaded['services'][service]['volumes']:
                    volSplit = volume.replace(":ro", "").split(":/")
                    hostPath = volSplit[0]
                    # Do not scan Stash internal working folders
                    if volSplit[1] == "root/.stash" or volSplit[1] == "metadata" or volSplit[1] == "cache" or volSplit[1] == "blobs" or volSplit[1] == "generated":
                        continue
                    if volSplit[0].startswith("./../../"):
                        dockerStashParentPath = pathlib.Path(dockerStashPath).resolve().parent
                        hostPath = f"{pathlib.Path(dockerStashParentPath).resolve().parent}{hostPath[8:]}"
                    elif volSplit[0].startswith("./../"):
                        hostPath = f"{pathlib.Path(dockerStashPath).resolve().parent}{hostPath[5:]}"
                    elif volSplit[0].startswith("./"):
                        hostPath = f"{dockerStashPath}{hostPath[1:]}"
                    elif volSplit[0].startswith("/"):
                        continue
                    dockerMapVolumes[hostPath] = f"/{volSplit[1]}"
                    dockerReverseMapVolumes[f"/{volSplit[1]}"] = hostPath
            for hostPath in dockerMapVolumes:
                stash.Log(f"Host-Path = {hostPath}, Docker-Path = {dockerMapVolumes[hostPath]}")
        except yaml.YAMLError as e:
            import traceback
            tb = traceback.format_exc()
            stash.Error(f"Exception while parsing Docker file {parse_args.docker}; Error: {e}\nTraceBack={tb}")

if stash.IS_DOCKER and stash.PLUGIN_TASK_NAME != "stop_library_monitor" and not parse_args.stop and stash.PLUGIN_TASK_NAME != "getFileMonitorRunningStatus":
    stash.Error("You are running this script from within Docker. This is NOT supported. Run this script on the host machine instead.")
    stash.Warn("For more information on running FileMonitor on the host machine see the following link:\n https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/FileMonitor#Docker")
    stash.Warn("Performing early exit because FileMonitor has to run on the host machine, and can NOT run on Docker directly.")
    sys.exit(10)  # ERROR_BAD_ENVIRONMENT: The environment is incorrect.
    # Alternate error: sys.exit(160) # ERROR_BAD_ARGUMENTS: One or more arguments are not correct.

dockerStashes = {}
for docker in stash.pluginConfig['dockers']:
    stash.Log(f"Adding monitoring to Docker Stash {docker['GQL']}")
    dockerStashes[docker['GQL']] = StashPluginHelper(
        stash_url=docker['GQL'],
        debugTracing=parse_args.trace,
        settings=settings,
        config=config,
        logToErrSet=logToErrSet,
        logToNormSet=8,
        maxbytes=5*1024*1024,
        apiKey=docker['apiKey']
    )
    for bindMount in docker['bindMounts']:
        for key in bindMount:
            if len(key) == 0:
                continue
            # Do not scan Stash internal working folders
            if bindMount[key] == "/root/.stash" or bindMount[key] == "/metadata" or bindMount[key] == "/cache" or bindMount[key] == "/blobs" or bindMount[key] == "/generated":
                continue
            stash.Log(f"Adding monitoring for host path '{key}' which is Docker mount path '{bindMount[key]}' for Stash {docker['GQL']}")
            includePathChanges += [key]
stash.Log(f"This Stash instance GQL = {stash.STASH_URL}")
# for path in includePathChanges:
#     stash.Log(f"[post] includePathChange = {path}")

if stash.DRY_RUN:
    stash.Log("Dry run mode is enabled.")

@@ -93,34 +197,52 @@ stash.Trace(f"(SCAN_MODIFIED={SCAN_MODIFIED}) (SCAN_ON_ANY_EVENT={SCAN_ON_ANY_EV

StartFileMonitorAsAPluginTaskName = "Monitor as a Plugin"
StartFileMonitorAsAServiceTaskName = "Start Library Monitor Service"

StartFileMonitorAsAPluginTaskID = "start_library_monitor"
StartFileMonitorAsAServiceTaskID = "start_library_monitor_service"
StopFileMonitorAsAPluginTaskID = "stop_library_monitor"
SYNC_LIBRARY_REMOVE = "sync_library_remove"
SYNC_LIBRARY_TAG = "sync_library_tag"
CLEAR_SYNC_LIBRARY_TAG = "clear_sync_tags_task"

FileMonitorPluginIsOnTaskQue = stash.CALLED_AS_STASH_PLUGIN
StopLibraryMonitorWaitingInTaskQueue = False
JobIdInTheQue = 0
def isJobWaitingToRun():
JobIdOf_StartAsAServiceTask = None
def isJobWaitingToRun(getJobIdOf_StartAsAServiceTask = False):
    global StopLibraryMonitorWaitingInTaskQueue
    global JobIdInTheQue
    global JobIdOf_StartAsAServiceTask
    global FileMonitorPluginIsOnTaskQue
    FileMonitorPluginIsOnTaskQue = False
    jobIsWaiting = False
    taskQue = stash.job_queue()
    if taskQue is None:
        return jobIsWaiting
    for jobDetails in taskQue:
        stash.Trace(f"(Job ID({jobDetails['id']})={jobDetails})")
        if jobDetails['status'] == "READY":
            if jobDetails['description'] == "Running plugin task: Stop Library Monitor":
                StopLibraryMonitorWaitingInTaskQueue = True
            JobIdInTheQue = jobDetails['id']
            jobIsWaiting = True
        elif jobDetails['status'] == "RUNNING" and jobDetails['description'].find(StartFileMonitorAsAPluginTaskName) > -1:
            FileMonitorPluginIsOnTaskQue = True
        if getJobIdOf_StartAsAServiceTask:
            if jobDetails['status'] == "RUNNING" and jobDetails['description'].find(StartFileMonitorAsAServiceTaskName) > -1:
                JobIdOf_StartAsAServiceTask = jobDetails['id']
                stash.Trace(f"Found current running task '{jobDetails['description']}' with Job ID {JobIdOf_StartAsAServiceTask}")
                return True
        else:
            if jobDetails['status'] == "READY":
                if jobDetails['description'] == "Running plugin task: Stop Library Monitor":
                    StopLibraryMonitorWaitingInTaskQueue = True
                JobIdInTheQue = jobDetails['id']
                jobIsWaiting = True
            elif jobDetails['status'] == "RUNNING" and jobDetails['description'].find(StartFileMonitorAsAPluginTaskName) > -1:
                FileMonitorPluginIsOnTaskQue = True
    JobIdInTheQue = 0
    return jobIsWaiting

if stash.CALLED_AS_STASH_PLUGIN and stash.PLUGIN_TASK_NAME == StartFileMonitorAsAPluginTaskID:
if stash.PLUGIN_TASK_NAME == StartFileMonitorAsAPluginTaskID:
    stash.Trace(f"isJobWaitingToRun() = {isJobWaitingToRun()})")

elif stash.PLUGIN_TASK_NAME == StartFileMonitorAsAServiceTaskID:
    stash.Trace(f"isJobWaitingToRun() = {isJobWaitingToRun(True)})")


class StashScheduler:  # Stash Scheduler
    def __init__(self):
        import schedule  # pip install schedule # https://github.com/dbader/schedule

@@ -224,16 +346,24 @@ class StashScheduler: # Stash Scheduler

        result = None
        if task['task'] == "Clean":
            result = self.jobIdOutput(stash.metadata_clean(dry_run=stash.DRY_RUN))
        elif task['task'] == "Clean Path":
            result = self.jobIdOutput(stash.metadata_clean(paths=targetPaths, dry_run=stash.DRY_RUN))
        elif task['task'] == "Clean Generated Files":
            result = self.jobIdOutput(stash.metadata_clean_generated())
        elif task['task'] == "Generate":
            result = self.jobIdOutput(stash.metadata_generate())
        elif task['task'] == "Generate Phashes":
            result = self.jobIdOutput(stash.metadata_generate({"phashes": True}))
        elif task['task'] == "Backup":
            result = self.jobIdOutput(self.runBackupTask(task))
        elif task['task'] == "Scan":
            result = self.jobIdOutput(stash.metadata_scan())
        elif task['task'] == "Scan Path":
            result = self.jobIdOutput(stash.metadata_scan(paths=targetPaths))
        elif task['task'] == "Auto Tag":
            result = self.jobIdOutput(stash.metadata_autotag())
        elif task['task'] == "Auto Tag Path":
            result = self.jobIdOutput(stash.metadata_autotag(paths=targetPaths))
        elif task['task'] == "Optimise Database":
            result = self.jobIdOutput(stash.optimise_database())

@@ -261,6 +391,11 @@ class StashScheduler: # Stash Scheduler
            if 'msg' in task and task['msg'] != "":
                Msg = task['msg']
            result = stash.TraceOnce(Msg)
        elif task['task'] == "DebugOnce":
            Msg = "Scheduled DebugOnce."
            if 'msg' in task and task['msg'] != "":
                Msg = task['msg']
            result = stash.DebugOnce(Msg)
        elif task['task'] == "CheckStashIsRunning":
            result = self.checkStashIsRunning(task)
        elif task['task'] == "python":
@@ -292,7 +427,7 @@ class StashScheduler: # Stash Scheduler
            if 'args' in task and len(task['args']) > 0:
                args = args + [task['args']]
            stash.Log(f"Executing command arguments {args}.")
            return f"Execute process PID = {stash.ExecuteProcess(args)}"
            return f"Execute process PID = {stash.executeProcess(args)}"
        else:
            stash.Error(f"Can not run task '{task['task']}', because it's missing the 'command' field.")
            return None

@@ -307,7 +442,7 @@ class StashScheduler: # Stash Scheduler
            detached = True
            if 'detach' in task:
                detached = task['detach']
            return f"Python process PID = {stash.ExecutePythonScript(args, ExecDetach=detached)}"
            return f"Python process PID = {stash.executePythonScript(args, ExecDetach=detached)}"
        else:
            stash.Error(f"Can not run task '{task['task']}', because it's missing the 'script' field.")
            return None

@@ -345,8 +480,8 @@ class StashScheduler: # Stash Scheduler
            taskMode = task['taskMode']
        if ('taskQue' in task and task['taskQue'] == False) or taskName is None:
            stash.Log(f"Running plugin task pluginID={task['task']}, task mode = {taskMode}. {validDirMsg}")
            # Asynchronous threading logic to call run_plugin, because it's a blocking call.
            stash.run_plugin(plugin_id=task['task'], task_mode=taskMode, asyn=True)
            # Asynchronous threading logic to call runPlugin, because it's a blocking call.
            stash.runPlugin(plugin_id=task['task'], task_mode=taskMode, asyn=True)
            return None
        else:
            stash.Trace(f"Adding to Task Queue plugin task pluginID={task['task']}, task name = {taskName}. {validDirMsg}")

@@ -362,11 +497,11 @@ class StashScheduler: # Stash Scheduler
        except:
            pass
        stash.Error("Failed to get response from Stash.")
        if platform.system() == "Windows":
        if stash.IS_WINDOWS:
            execPath = f"{pathlib.Path(stash.PLUGINS_PATH).resolve().parent}{os.sep}stash-win.exe"
        elif platform.system() == "Darwin":  # MacOS
        elif stash.IS_MAC_OS:
            execPath = f"{pathlib.Path(stash.PLUGINS_PATH).resolve().parent}{os.sep}stash-macos"
        elif platform.system().lower().startswith("linux"):
        elif stash.IS_LINUX:
            # ToDo: Need to verify this method will work for (stash-linux-arm32v6, stash-linux-arm32v7, and stash-linux-arm64v8)
            if platform.system().lower().find("32v6") > -1:
                execPath = f"{pathlib.Path(stash.PLUGINS_PATH).resolve().parent}{os.sep}stash-linux-arm32v6"
@@ -376,7 +511,7 @@ class StashScheduler: # Stash Scheduler
                execPath = f"{pathlib.Path(stash.PLUGINS_PATH).resolve().parent}{os.sep}stash-linux-arm64v8"
            else:
                execPath = f"{pathlib.Path(stash.PLUGINS_PATH).resolve().parent}{os.sep}stash-linux"
        elif platform.system().lower().startswith("freebsd"):
        elif stash.IS_FREEBSD:
            execPath = f"{pathlib.Path(stash.PLUGINS_PATH).resolve().parent}{os.sep}stash-freebsd"
        elif 'command' not in task or task['command'] == "":
            stash.Error("Can not start Stash, because failed to determine the platform OS. As a workaround, add a 'command' field to this task.")

@@ -391,7 +526,7 @@ class StashScheduler: # Stash Scheduler
        else:
            stash.Error(f"Could not start Stash, because could not find executable Stash file '{execPath}'")
            return None
        result = f"Execute process PID = {stash.ExecuteProcess(args)}"
        result = f"Execute process PID = {stash.executeProcess(args)}"
        time.sleep(sleepAfterStart)
        if "RunAfter" in task and len(task['RunAfter']) > 0:
            for runAfterTask in task['RunAfter']:

@@ -456,12 +591,14 @@ lastScanJob = {
JOB_ENDED_STATUSES = ["FINISHED", "CANCELLED"]

def start_library_monitor():
    from watchdog.observers import Observer  # This is also needed for event attributes
    import watchdog  # pip install watchdog # https://pythonhosted.org/watchdog/
    global shouldUpdate
    global TargetPaths
    global lastScanJob
    try:
        # Create shared memory buffer which can be used as singleton logic or to get a signal to quit task from external script
        shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=True, size=4)
        shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=True, size=SHAREDMEMORY_SIZE)
    except:
        stash.Error(f"Could not open shared memory map ({SHAREDMEMORY_NAME}). Change File Monitor must be running. Cannot run multiple instances of Change File Monitor. Stop FileMonitor before trying to start it again.")
        return

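The shared-memory guard in the try/except above can be exercised in isolation. A minimal sketch, assuming illustrative names and signal values (FileMonitor uses its own SHAREDMEMORY_NAME and signal constants):

```python
from multiprocessing import shared_memory

NAME = "sketch_singleton_demo"  # illustrative; FileMonitor uses SHAREDMEMORY_NAME
CONTINUE_RUNNING_SIG = 0        # illustrative signal value

try:
    # First instance: creation succeeds, so this process owns the singleton.
    shm = shared_memory.SharedMemory(name=NAME, create=True, size=4)
    owner = True
except FileExistsError:
    # A second instance attaches to the existing buffer instead,
    # e.g. to write a stop signal that the first instance polls.
    shm = shared_memory.SharedMemory(name=NAME)
    owner = False

shm.buf[0] = CONTINUE_RUNNING_SIG
shm.close()
if owner:
    shm.unlink()  # the owner releases the segment on shutdown
```

Creating a named segment that already exists raises an exception, which is what makes the buffer double as a single-instance lock and a cross-process signal channel.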
@@ -559,8 +696,24 @@ def start_library_monitor():

    # Iterate through includePathChanges
    for path in includePathChanges:
        observer.schedule(event_handler, path, recursive=RECURSIVE)
        stash.Log(f"Observing {path}")
        pathToObserve = path
        if parse_args.docker is not None and len(parse_args.docker) > 0:
            if not pathToObserve.startswith("/"):
                pathToObserve = f"/{pathToObserve}"
            stash.Debug(f"Converting Docker path '{pathToObserve}' to Host-path; original-path={path}")
            if pathToObserve in dockerReverseMapVolumes:
                pathToObserve = dockerReverseMapVolumes[pathToObserve]
            for dockerPath in dockerReverseMapVolumes:
                if pathToObserve.startswith(f"{dockerPath}/"):
                    pathToObserve = pathToObserve.replace(f"{dockerPath}/", f"{dockerReverseMapVolumes[dockerPath]}/")
                    break
            pathToObserve = pathToObserve.replace('/', os.sep)
            dockerObservedPaths[f"{pathToObserve}{os.sep}"] = path
        stash.Log(f"Observing {pathToObserve}")
        if not os.path.exists(pathToObserve):
            stash.Error(f"Skipping path '{pathToObserve}' because it does not exist!!!")
            continue
        observer.schedule(event_handler, pathToObserve, recursive=RECURSIVE)
    observer.schedule(event_handler, SPECIAL_FILE_DIR, recursive=RECURSIVE)
    stash.Trace(f"Observing FileMonitor path {SPECIAL_FILE_DIR}")
    observer.start()

@@ -592,9 +745,9 @@ def start_library_monitor():
            if lastScanJob['timeOutDelayProcess'] > MAX_TIMEOUT_FOR_DELAY_PATH_PROCESS:
                lastScanJob['timeOutDelayProcess'] = MAX_TIMEOUT_FOR_DELAY_PATH_PROCESS
            timeOutInSeconds = lastScanJob['timeOutDelayProcess']
            stash.LogOnce(f"Awaiting file change-trigger, with a short timeout ({timeOutInSeconds} seconds), because of active delay path processing.")
            stash.Log(f"Awaiting file change-trigger, with a short timeout ({timeOutInSeconds} seconds), because of active delay path processing.")
        else:
            stash.LogOnce(f"Waiting for a file change-trigger. Timeout = {timeOutInSeconds} seconds.")
            stash.Log(f"Waiting for a file change-trigger. Timeout = {timeOutInSeconds} seconds.")
        signal.wait(timeout=timeOutInSeconds)
        if lastScanJob['DelayedProcessTargetPaths'] != []:
            stash.TraceOnce(f"Processing delay scan for path(s) {lastScanJob['DelayedProcessTargetPaths']}")

@@ -613,18 +766,19 @@ def start_library_monitor():
if TargetPath == SPECIAL_FILE_NAME:
if os.path.isfile(SPECIAL_FILE_NAME):
shm_buffer[0] = STOP_RUNNING_SIG
stash.Log(f"[SpFl]Detected trigger file to kill FileMonitor. {SPECIAL_FILE_NAME}", printTo = stash.LOG_TO_FILE + stash.LOG_TO_CONSOLE + stash.LOG_TO_STASH)
stash.Log(f"[SpFl]Detected trigger file to kill FileMonitor. {SPECIAL_FILE_NAME}", printTo = stash.LogTo.FILE + stash.LogTo.CONSOLE + stash.LogTo.STASH)
else:
stash.Trace(f"[SpFl]Did not find file {SPECIAL_FILE_NAME}.")

# Make sure special file does not exist, in case change was missed.
if CREATE_SPECIAL_FILE_TO_EXIT and os.path.isfile(SPECIAL_FILE_NAME) and shm_buffer[0] == CONTINUE_RUNNING_SIG:
shm_buffer[0] = STOP_RUNNING_SIG
stash.Log(f"[SpFl]Detected trigger file to kill FileMonitor. {SPECIAL_FILE_NAME}", printTo = stash.LOG_TO_FILE + stash.LOG_TO_CONSOLE + stash.LOG_TO_STASH)
stash.Log(f"[SpFl]Detected trigger file to kill FileMonitor. {SPECIAL_FILE_NAME}", printTo = stash.LogTo.FILE + stash.LogTo.CONSOLE + stash.LogTo.STASH)
TargetPaths = []
TmpTargetPaths = list(set(TmpTargetPaths))
if TmpTargetPaths != [] or lastScanJob['DelayedProcessTargetPaths'] != []:
stash.Log(f"Triggering Stash scan for path(s) {TmpTargetPaths}")
# ToDo: Add check to see if Docker Map path
stash.Log(f"Triggering Stash scan for path(s) {TmpTargetPaths} and/or {lastScanJob['DelayedProcessTargetPaths']}")
if lastScanJob['DelayedProcessTargetPaths'] != [] or len(TmpTargetPaths) > 1 or TmpTargetPaths[0] != SPECIAL_FILE_DIR:
if not stash.DRY_RUN:
if lastScanJob['id'] > -1:
@@ -657,11 +811,35 @@ def start_library_monitor():
lastScanJob['DelayedProcessTargetPaths'].append(path)
stash.Trace(f"lastScanJob['DelayedProcessTargetPaths'] = {lastScanJob['DelayedProcessTargetPaths']}")
if lastScanJob['id'] == -1:
stash.Trace(f"Calling metadata_scan for paths '{TmpTargetPaths}'")
lastScanJob['id'] = int(stash.metadata_scan(paths=TmpTargetPaths))
lastScanJob['TargetPaths'] = TmpTargetPaths
lastScanJob['timeAddedToTaskQueue'] = time.time()
stash.Trace(f"metadata_scan JobId = {lastScanJob['id']}, Start-Time = {lastScanJob['timeAddedToTaskQueue']}, paths = {lastScanJob['TargetPaths']}")
taskqueue = taskQueue(stash.job_queue())
if taskqueue.tooManyScanOnTaskQueue(7):
stash.Log(f"[metadata_scan] Skipping updating Stash for paths '{TmpTargetPaths}', because too many scans on Task Queue.")
else:
if not parse_args.docker == None and len(parse_args.docker) > 0:
CpyTmpTargetPaths = list(set(TmpTargetPaths))
TmpTargetPaths = []
for CpyTmpTargetPath in CpyTmpTargetPaths:
for key in dockerObservedPaths:
if CpyTmpTargetPath.startswith(key):
HostTmpTargetPath = CpyTmpTargetPath
CpyTmpTargetPath = f"{dockerObservedPaths[key]}/{CpyTmpTargetPath[len(key):]}"
stash.Log(f"Converted Host-Path {HostTmpTargetPath} to Docker-Path {CpyTmpTargetPath}")
TmpTargetPaths += [CpyTmpTargetPath]
break
if len(stash.pluginConfig['dockers']) > 0:
for TmpTargetPath in TmpTargetPaths:
for docker in stash.pluginConfig['dockers']:
for bindMount in docker['bindMounts']:
for key in bindMount:
if TmpTargetPath.startswith(key):
stash.Log(f"Sending notification to Stash Docker {docker['GQL']} for file system change in path '{bindMount[key]}' which is host path {key}.")
dockerStashes[docker['GQL']].Log(f"File system change in path '{bindMount[key]}' which is host path {key}.")
dockerStashes[docker['GQL']].metadata_scan(paths=bindMount[key])
stash.Trace(f"[metadata_scan] Calling metadata_scan for paths '{TmpTargetPaths}'")
lastScanJob['id'] = int(stash.metadata_scan(paths=TmpTargetPaths))
lastScanJob['TargetPaths'] = TmpTargetPaths
lastScanJob['timeAddedToTaskQueue'] = time.time()
stash.Trace(f"metadata_scan JobId = {lastScanJob['id']}, Start-Time = {lastScanJob['timeAddedToTaskQueue']}, paths = {lastScanJob['TargetPaths']}")
if RUN_CLEAN_AFTER_DELETE and RunCleanMetadata:
stash.metadata_clean(paths=TmpTargetPaths, dry_run=stash.DRY_RUN)
if RUN_GENERATE_CONTENT:
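The hunk above also performs the opposite translation: a changed host path is mapped back to its container path before notifying a Docker Stash instance. A standalone sketch of that prefix lookup follows; the `dockerObservedPaths` contents are hypothetical, and the separator normalization is an addition not present in the original code:

```python
# Hypothetical observed-path table keyed by host prefix (trailing separator
# included, as the hunk above builds dockerObservedPaths).
dockerObservedPaths = {"C:\\Video\\": "/mnt/Video"}

def to_container_path(host_path):
    """Translate a changed host path back to the container path (sketch)."""
    for hostPrefix, containerPath in dockerObservedPaths.items():
        if host_path.startswith(hostPrefix):
            # Re-join the remainder under the container mount point,
            # normalizing Windows separators (an addition for this sketch).
            remainder = host_path[len(hostPrefix):].replace("\\", "/")
            return f"{containerPath}/{remainder}"
    return host_path
```

Only the first matching prefix wins, mirroring the `break` after the first hit in the loop above.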
@@ -698,7 +876,7 @@ def stop_library_monitor():
os.remove(SPECIAL_FILE_NAME)
stash.Trace("Opening shared memory map.")
try:
shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=False, size=4)
shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=False, size=SHAREDMEMORY_SIZE)
except:
# If FileMonitor is running as plugin, then it's expected behavior that SharedMemory will not be available.
stash.Trace(f"Could not open shared memory map ({SHAREDMEMORY_NAME}). Change File Monitor must not be running.")
@@ -710,11 +888,13 @@ def stop_library_monitor():
stash.Trace(f"Shared memory map opened, and flag set to {shm_buffer[0]}")
shm_a.close()
shm_a.unlink() # Call unlink only once to release the shared memory
if doJsonReturnFileMonitorStatus:
sys.stdout.write("{" + f"{stash.PLUGIN_TASK_NAME} : 'complete', FileMonitorStatus:'NOT running', IS_DOCKER:'{stash.IS_DOCKER}'" + "}")

def start_library_monitor_service():
# First check if FileMonitor is already running
try:
shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=False, size=4)
shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=False, size=SHAREDMEMORY_SIZE)
shm_a.close()
shm_a.unlink()
stash.Error("FileMonitor is already running. Need to stop FileMonitor before trying to start it again.")
@@ -723,33 +903,150 @@ def start_library_monitor_service():
pass
stash.Trace("FileMonitor is not running, so it's safe to start it as a service.")
args = [f"{pathlib.Path(__file__).resolve().parent}{os.sep}filemonitor.py", '--url', f"{stash.STASH_URL}"]
if JobIdOf_StartAsAServiceTask != None:
args += ["-k", JobIdOf_StartAsAServiceTask]
if stash.API_KEY:
args = args + ["-a", stash.API_KEY]
stash.ExecutePythonScript(args)

if parse_args.stop or parse_args.restart or stash.PLUGIN_TASK_NAME == "stop_library_monitor":
stop_library_monitor()
if parse_args.restart:
args += ["-a", stash.API_KEY]
results = stash.executePythonScript(args)
stash.Trace(f"executePythonScript results='{results}'")
if doJsonReturnFileMonitorStatus:
time.sleep(5)
stash.run_plugin_task(plugin_id=stash.PLUGIN_ID, task_name=StartFileMonitorAsAPluginTaskName)
stash.Trace(f"Restart FileMonitor EXIT")
sys.stdout.write("{" + f"{stash.PLUGIN_TASK_NAME} : 'complete', FileMonitorStatus:'RUNNING', IS_DOCKER:'{stash.IS_DOCKER}'" + "}")
def synchronize_library(removeScene=False):
stash.startSpinningProcessBar()
scenes = stash.find_scenes(fragment='id tags {id name} files {path}')
qtyResults = len(scenes)
Qty = 0
stash.Log(f"count = {qtyResults}")
stash.stopSpinningProcessBar()
sceneIDs = stash.find_scenes(fragment='id files {path}')
for scene in scenes:
Qty += 1
stash.progressBar(Qty, qtyResults)
scenePartOfLibrary = False
for path in stash.STASH_PATHS:
if scene['files'][0]['path'].startswith(path):
scenePartOfLibrary = True
break
if scenePartOfLibrary == False:
stash.Log(f"Scene ID={scene['id']}; path={scene['files'][0]['path']} not part of Stash Library")
if removeScene:
stash.destroy_scene(scene['id'])
stash.Log(f"Removed Scene ID={scene['id']}; path={scene['files'][0]['path']}")
else:
stash.addTag(scene, NotInLibraryTagName, ignoreAutoTag=True)
stash.Trace(f"Tagged ({NotInLibraryTagName}) Scene ID={scene['id']}; path={scene['files'][0]['path']}")

def manageTagggedScenes(clearTag=True):
tagId = stash.find_tags(q=NotInLibraryTagName)
if len(tagId) > 0 and 'id' in tagId[0]:
tagId = tagId[0]['id']
else:
stash.Trace(f"Stop FileMonitor EXIT")
elif stash.PLUGIN_TASK_NAME == StartFileMonitorAsAServiceTaskID:
start_library_monitor_service()
stash.Trace(f"{StartFileMonitorAsAServiceTaskID} EXIT")
elif stash.PLUGIN_TASK_NAME == StartFileMonitorAsAPluginTaskID:
start_library_monitor()
stash.Trace(f"{StartFileMonitorAsAPluginTaskID} EXIT")
elif not stash.CALLED_AS_STASH_PLUGIN:
stash.Warn(f"Could not find tag ID for tag '{NotInLibraryTagName}'.")
return
QtyDup = 0
QtyRemoved = 0
QtyClearedTags = 0
QtyFailedQuery = 0
stash.Trace("#########################################################################")
stash.startSpinningProcessBar()
stash.Trace(f"Calling find_scenes with tagId={tagId}")
sceneIDs = stash.find_scenes(f={"tags": {"value":tagId, "modifier":"INCLUDES"}}, fragment='id')
stash.stopSpinningProcessBar()
qtyResults = len(sceneIDs)
stash.Trace(f"Found {qtyResults} scenes with tag ({NotInLibraryTagName}): sceneIDs = {sceneIDs}")
for sceneID in sceneIDs:
# stash.Trace(f"Getting scene data for scene ID {sceneID['id']}.")
QtyDup += 1
prgs = QtyDup / qtyResults
stash.progressBar(QtyDup, qtyResults)
scene = stash.find_scene(sceneID['id'])
if scene == None or len(scene) == 0:
stash.Warn(f"Could not get scene data for scene ID {sceneID['id']}.")
QtyFailedQuery += 1
continue
# stash.Trace(f"scene={scene}")
if clearTag:
tags = [int(item['id']) for item in scene["tags"] if item['id'] != tagId]
stash.TraceOnce(f"tagId={tagId}, len={len(tags)}, tags = {tags}")
dataDict = {'id' : scene['id']}
dataDict.update({'tag_ids' : tags})
stash.Log(f"Updating scene with {dataDict}")
stash.update_scene(dataDict)
# stash.removeTag(scene, NotInLibraryTagName)
QtyClearedTags += 1
else:
stash.destroy_scene(scene['id'])
stash.Log(f"Removed Scene ID={scene['id']}; path={scene['files'][0]['path']}")
QtyRemoved += 1
stash.Log(f"QtyDup={QtyDup}, QtyClearedTags={QtyClearedTags}, QtyRemoved={QtyRemoved}, QtyFailedQuery={QtyFailedQuery}")

runTypeID=0
runTypeName=["NothingToDo", "stop_library_monitor", "StartFileMonitorAsAServiceTaskID", "StartFileMonitorAsAPluginTaskID", "CommandLineStartLibMonitor"]

def getFileMonitorRunningStatus():
FileMonitorStatus = "NOT running"
try:
shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=False, size=SHAREDMEMORY_SIZE)
shm_a.close()
shm_a.unlink()
FileMonitorStatus = "RUNNING"
stash.Log("FileMonitor is running...")
except:
pass
stash.Log("FileMonitor is NOT running!!!")
stash.Log(f"{stash.PLUGIN_TASK_NAME} complete")
sys.stdout.write("{" + f"{stash.PLUGIN_TASK_NAME} : 'complete', FileMonitorStatus:'{FileMonitorStatus}', IS_DOCKER:'{stash.IS_DOCKER}'" + "}")

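`getFileMonitorRunningStatus` above decides whether the service is running by attempting to attach to the named shared-memory segment. A minimal stdlib sketch of that pattern follows; the segment name and size are placeholders, not the plugin's actual values, and unlike the plugin this checker does not unlink the segment it attaches to:

```python
from multiprocessing import shared_memory

# Placeholder values; the plugin defines its own SHAREDMEMORY_NAME/SIZE.
SHAREDMEMORY_NAME = "FileMonitor_demo_flag"
SHAREDMEMORY_SIZE = 4

def is_running():
    """True if another process currently holds the named segment."""
    try:
        shm = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME,
                                         create=False, size=SHAREDMEMORY_SIZE)
    except FileNotFoundError:
        return False          # nobody created the segment: not running
    shm.close()               # detach without destroying it
    return True
```

A running service would create the segment once at startup, poll byte 0 for a stop signal, and unlink it on shutdown, which is the lifecycle the surrounding hunks implement.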
try:
if parse_args.stop or parse_args.restart or stash.PLUGIN_TASK_NAME == "stop_library_monitor":
runTypeID=1
stop_library_monitor()
if parse_args.restart:
time.sleep(5)
stash.run_plugin_task(plugin_id=stash.PLUGIN_ID, task_name=StartFileMonitorAsAPluginTaskName)
stash.Trace(f"Restart FileMonitor EXIT")
else:
stash.Trace(f"Stop FileMonitor EXIT")
elif stash.PLUGIN_TASK_NAME == StartFileMonitorAsAServiceTaskID:
runTypeID=2
start_library_monitor_service()
stash.Trace(f"{StartFileMonitorAsAServiceTaskID} transitioning to service mode.")
elif stash.PLUGIN_TASK_NAME == StartFileMonitorAsAPluginTaskID:
runTypeID=3
start_library_monitor()
stash.Trace(f"{StartFileMonitorAsAPluginTaskID} EXIT")
elif stash.PLUGIN_TASK_NAME == SYNC_LIBRARY_REMOVE:
runTypeID=5
synchronize_library(removeScene=True)
stash.Trace(f"{SYNC_LIBRARY_REMOVE} EXIT")
elif stash.PLUGIN_TASK_NAME == SYNC_LIBRARY_TAG:
runTypeID=6
synchronize_library()
stash.Trace(f"{SYNC_LIBRARY_TAG} EXIT")
elif stash.PLUGIN_TASK_NAME == CLEAR_SYNC_LIBRARY_TAG:
runTypeID=7
manageTagggedScenes()
stash.Trace(f"{CLEAR_SYNC_LIBRARY_TAG} EXIT")
elif stash.PLUGIN_TASK_NAME == "getFileMonitorRunningStatus":
getFileMonitorRunningStatus()
stash.Debug(f"{stash.PLUGIN_TASK_NAME} EXIT")
elif not stash.CALLED_AS_STASH_PLUGIN:
runTypeID=4
if parse_args.kill_job_task_que != None and parse_args.kill_job_task_que != "":
# Removing the job from the Task Queue is really only needed for Linux, but it should be OK to do in general.
stash.Log(f"Removing job ID {parse_args.kill_job_task_que} from the Task Queue, because transitioning to service mode.")
stash.stop_job(parse_args.kill_job_task_que)
start_library_monitor()
stash.Trace("Command line FileMonitor EXIT")
except Exception as e:
tb = traceback.format_exc()
stash.Error(f"Exception while running FileMonitor from the command line. Error: {e}\nTraceBack={tb}")
stash.log.exception('Got exception on main handler')
else:
stash.Log(f"Nothing to do!!! (stash.PLUGIN_TASK_NAME={stash.PLUGIN_TASK_NAME})")

else:
stash.Log(f"Nothing to do!!! (stash.PLUGIN_TASK_NAME={stash.PLUGIN_TASK_NAME})")
except Exception as e:
tb = traceback.format_exc()
stash.Error(f"Exception while running FileMonitor. runType='{runTypeName[runTypeID]}'; Error: {e}\nTraceBack={tb}")
if doJsonReturn:
sys.stdout.write("{" + f"Exception : '{e}; See log file for TraceBack' " + "}")
stash.Trace("\n*********************************\nEXITING ***********************\n*********************************")

# ToDo: Add option to add path to library if path not included when calling metadata_scan
@@ -1,7 +1,10 @@
name: FileMonitor
description: Monitors the Stash library folders, and updates Stash if any changes occur in the Stash library paths.
version: 0.9.0
version: 1.0.3
url: https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/FileMonitor
ui:
javascript:
- filemonitor.js
settings:
recursiveDisabled:
displayName: No Recursive
@@ -40,3 +43,15 @@ tasks:
description: Run [Library Monitor] as a plugin (*Not recommended*)
defaultArgs:
mode: start_library_monitor
- name: Synchronize Library Tag
description: Tag (_NoLongerPartOfLibrary) scenes from database with paths no longer in Stash Library.
defaultArgs:
mode: sync_library_tag
- name: Synchronize Library Clean
description: Remove scenes from database with paths no longer in Stash Library.
defaultArgs:
mode: sync_library_remove
- name: Clear Sync Tags
description: Clear tag _NoLongerPartOfLibrary. Remove this tag from all files.
defaultArgs:
mode: clear_sync_tags_task

@@ -21,9 +21,11 @@ config = {

# The following tasks are scheduled weekly
# Optional field for task "Scan", "Auto Tag", and "Clean" is 'paths'. For detail usage, see examples #A3: in filemonitor_task_examples.py
{"task" : "Scan", "weekday" : "saturday", "time" : "03:00"}, # Library -> [Scan] (Weekly) (Every saturday at 3AM)
{"task" : "Auto Tag", "weekday" : "saturday", "time" : "03:30"}, # Auto Tag -> [Auto Tag] (Weekly) (Every saturday at 3:30AM)
{"task" : "Generate", "weekday" : "saturday", "time" : "04:00"}, # Generated Content-> [Generate] (Every saturday at 4AM)
{"task" : "Backup", "weekday" : "saturday", "time" : "01:00"}, # Backup -> [Backup] (Weekly) (Every saturday at 1AM)
{"task" : "Scan", "weekday" : "saturday", "time" : "02:30"}, # Library -> [Scan] (Weekly) (Every saturday at 2:30AM)
{"task" : "Auto Tag", "weekday" : "saturday", "time" : "03:00"}, # Auto Tag -> [Auto Tag] (Weekly) (Every saturday at 3AM)
{"task" : "Generate", "weekday" : "saturday", "time" : "03:30"}, # Generated Content-> [Generate] (Every saturday at 3:30AM)
{"task" : "Generate Phashes", "weekday" : "saturday", "time" : "04:00"}, # [Generate Phashes] (Every saturday at 4AM)
{"task" : "Clean", "weekday" : "saturday", "time" : "04:30"}, # Maintenance -> [Clean] (Every saturday at 4:30AM)
{"task" : "Clean Generated Files", "weekday" : "saturday", "time" : "05:00"}, # Maintenance -> [Clean Generated Files] (Every saturday at 5AM)
{"task" : "Optimise Database", "weekday" : "saturday", "time" : "05:30"}, # Maintenance -> [Optimise Database] (Every saturday at 5:30AM)
@@ -42,7 +44,7 @@ config = {
# 4 = 4th specified weekday of the month.
# The Backup task is scheduled monthly
# Optional field for task "Backup" is maxBackup. For detail usage, see example #A5 in filemonitor_task_examples.py
{"task" : "Backup", "weekday" : "sunday", "time" : "01:00", "monthly" : 2}, # Backup -> [Backup] 2nd sunday of the month at 1AM (01:00)
# {"task" : "Backup", "weekday" : "saturday", "time" : "01:00", "monthly" : 2}, # Backup -> [Backup] 2nd sunday of the month at 1AM (01:00)

# The [CheckStashIsRunning] task checks if Stash is running. If not running, it will start up stash.
# This task only works if FileMonitor is started as a service or in command line mode.
@@ -63,6 +65,8 @@ config = {
"runCleanAfterDelete": False,
# Enable to run metadata_generate (Generate Content) after metadata scan.
"runGenerateContent": False,
# Tag name when tagging files that are no longer in Stash Library paths.
"NotInLibraryTagName" : "_NoLongerPartOfLibrary",

# When populated (comma separated list [lower-case]), only scan for changes for specified file extension
"fileExtTypes" : "", # Example: "mp4,mpg,mpeg,m2ts,wmv,avi,m4v,flv,mov,asf,mkv,divx,webm,ts,mp2t"
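The schedule entries above all share a `{"task", "weekday", "time"}` shape. A small validator sketch for that shape follows; the field names come from the config, but the validation rules themselves are this sketch's assumption, not part of FileMonitor:

```python
# Weekday tokens observed in the config examples; "every" means daily.
WEEKDAYS = {"monday", "tuesday", "wednesday", "thursday", "friday",
            "saturday", "sunday", "every"}

def is_valid_entry(entry):
    """Sanity-check a schedule dict like the ones in the config above."""
    if "task" not in entry:
        return False
    days = entry.get("weekday", "")
    if not all(day in WEEKDAYS for day in days.split(",")):
        return False                      # comma list of weekdays required
    t = entry.get("time", "")
    if t == "DISABLED":                   # examples use DISABLED as an off switch
        return True
    hh, _, mm = t.partition(":")          # expect 24-hour "HH:MM"
    return hh.isdigit() and mm.isdigit() and int(hh) < 24 and int(mm) < 60
```

Such a check could catch a mistyped weekday or an out-of-range time before the entry silently never fires.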
@@ -85,6 +89,32 @@ config = {
"createSpecFileToExit": True,
# Enable to delete special file immediately after it's created in stop process.
"deleteSpecFileInStop": False,
# Docker notification from host machine
"dockers": [ # Example Stash Docker configurations. For more details see https://github.com/David-Maisonave/Axter-Stash/blob/main/plugins/FileMonitor#Multiple-Stash-Docker-Configuration
# # A simple basic example with only one bind mount path.
# {"GQL":"http://localhost:9995", "apiKey":"", "bindMounts":[{r"C:\Video":"/mnt/Video"}]},

# # Example having 8 bind mount paths.
# {"GQL":"http://localhost:9997", "apiKey":"", "bindMounts":[
# {r"C:\Users\admin3\AppData\Local\Docker\wsl\ManyMnt\data":"/data"},
# {r"C:\Users\admin3\Videos":"/external"},
# {r"C:\Users\admin3\Pictures":"/external2"},
# {r"C:\Users\admin3\Downloads":"/external3"},
# {r"E:\Downloads":"/external4"},
# {r"E:\Celeb":"/external5"},
# {r"F:\Hentai":"/external6"},
# {r"Z:\Temp":"/external7"},
# ]
# },

# # Example using the apiKey for a password configured Stash installation.
# {"GQL":"http://localhost:9994", "apiKey":"eyJhb3676zgdUzI1NiIsInR5cCI6IwfXVCJ9.ewJ1aWQiOiJheHRlweIsInN1YiI6IkFQSUtleSIsImlhdewrweczNDU0MDk3N30.4nZVLk3xikjJZfZ0JTPA_Fic8JvFx3DZe5U21Zasdag", "bindMounts":[
# {r"C:\Users\admin3\AppData\Local\Docker\wsl\MyStashContainer\data":"/data"},
# {r"C:\Vid":"/mnt/Vid"},
# {r"C:\Users\admin3\Downloads":"/mnt/Downloads"},
# ]
# },
],

# Below are place holders for **possible** future features.
# !!! Not yet implemented !!!

@@ -15,6 +15,7 @@ self_unit_test = {
{"task" : "Trace", "minutes" : 1}, # Test plugin trace logging
{"task" : "LogOnce", "seconds" :15}, # Test LogOnce
{"task" : "TraceOnce", "seconds" : 5}, # Test TraceOnce
{"task" : "DebugOnce", "seconds" : 5}, # Test DebugOnce
{"task" : "CheckStashIsRunning", "RunAfter" : [{"task" : "Scan"},{"task" : "Backup", "maxBackup" : 0},{"task" : "Clean"}], "seconds" :15}, # Test RunAfter
{"task" : "CheckStashIsRunning", "command" : "<stash_path>stash-win.exe", "seconds" :10}, # Check if Stash is running. If not running, start up Stash.
# {"task" : "CheckStashIsRunning", "RunAfter" : [{"task" : "Scan"}], "seconds" :15}, # To test CheckStashIsRunning, kill Stash after starting FileMonitor service via following command:taskkill /F /IM "stash-win.exe"
@@ -23,14 +24,18 @@ self_unit_test = {
# Test [Delete Duplicates] with [Delete Duplicate Scheduler] disabled, and then with it enabled.
{"task" : "DupFileManager", "taskName" : "Delete Duplicates", "validateDir" : "DupFileManager", "weekday" : "every", "time" : "06:17"}, # [Plugin Tasks] -> DupFileManager -> [Delete Duplicates]
{"task" : "Generate", "weekday" : "every", "time" : "06:17"},
{"task" : "Generate Phashes", "weekday" : "every", "time" : "06:17"},
{"task" : "Clean", "weekday" : "every", "time" : "06:17"},
{"task" : "Clean Path", "weekday" : "every", "time" : "06:17"},
{"task" : "Auto Tag", "weekday" : "every", "time" : "06:17"},
{"task" : "Auto Tag Path", "weekday" : "every", "time" : "06:17"},
{"task" : "Optimise Database", "weekday" : "every", "time" : "06:17"},
{"task" : "pathParser", "taskName" : "Create Tags", "validateDir" : "pathParser", "weekday" : "every", "time" : "06:17"}, # In task queue as -> Running plugin task: Create Tags
{"task" : "DupFileManager", "taskMode" : "tag_duplicates_task", "taskQue":False, "weekday" : "every", "time" : "06:17"}, # Does NOT run in the task queue
{"task" : "DupFileManager", "taskName" : "Tag Duplicates", "validateDir" : "DupFileManager", "weekday" : "every", "time" : "06:17"}, # [Plugin Tasks] -> DupFileManager -> [Tag Duplicates]
{"task" : "DupFileManager", "taskName" : "Delete Tagged Duplicates", "weekday" : "every", "time" : "06:17"}, # [Plugin Tasks] -> DupFileManager -> [Tag Duplicates]
{"task" : "Scan","paths": [r"B:\_\SpecialSet", r"C:\foo"], "weekday" : "every", "time" : "06:17"},
{"task" : "Scan", "weekday" : "every", "time" : "06:17"},
{"task" : "Scan Path","paths": [r"B:\_\SpecialSet", r"C:\foo"], "weekday" : "every", "time" : "06:17"},
{"task" : "GQL", "input" : "mutation OptimiseDatabase { optimiseDatabase }", "weekday" : "every", "time" : "06:17"}, # In task queue as -> Optimising database...
{"task" : "Clean Generated Files", "weekday" : "every", "time" : "06:17"},
{"task" : "RenameGeneratedFiles", "weekday" : "every", "time" : "06:17"}, # In task queue as -> Migrating scene hashes...

@@ -11,9 +11,9 @@ task_examples = {
{"task" : "python", "script" : "<plugin_path>test_script_hello_world.py", "args" : "--MyArguments Hello", "weekday" : "monday", "time" : "DISABLED"}, # change "DISABLED" to valid time

# Example#A3: The following task types can optionally take a [paths] field. If the paths field does not exist, the paths in the Stash library are used.
{"task" : "Scan", "paths" : [r"E:\MyVideos\downloads", r"V:\MyOtherVideos"], "weekday" : "sunday", "time" : "DISABLED"}, # Library -> [Scan]
{"task" : "Auto Tag", "paths" : [r"E:\MyVideos\downloads", r"V:\MyOtherVideos"], "weekday" : "monday,tuesday,wednesday,thursday,friday,saturday,sunday", "time" : "DISABLED"}, # Auto Tag -> [Auto Tag]
{"task" : "Clean", "paths" : ["E:\\MyVideos\\downloads", "V:\\MyOtherVideos"], "weekday" : "sunday", "time" : "DISABLED"}, # Generated Content-> [Generate]
{"task" : "Scan Path", "paths" : [r"E:\MyVideos\downloads", r"V:\MyOtherVideos"], "weekday" : "sunday", "time" : "DISABLED"}, # Library -> [Scan]
{"task" : "Auto Tag Path", "paths" : [r"E:\MyVideos\downloads", r"V:\MyOtherVideos"], "weekday" : "monday,tuesday,wednesday,thursday,friday,saturday,sunday", "time" : "DISABLED"}, # Auto Tag -> [Auto Tag]
{"task" : "Clean Path", "paths" : ["E:\\MyVideos\\downloads", "V:\\MyOtherVideos"], "weekday" : "sunday", "time" : "DISABLED"}, # Generated Content-> [Generate]

# Example#A4: Task which calls Migrations -> [Rename generated files]
{"task" : "RenameGeneratedFiles", "weekday" : "tuesday,thursday", "time" : "DISABLED"}, # (bi-weekly) example

@@ -1,3 +1,5 @@
stashapp-tools >= 0.2.50
pyYAML
watchdog
requests
watchdog
schedule
pyyaml
11
plugins/FileMonitor/version_history/README.md
Normal file
@@ -0,0 +1,11 @@
##### This page was added starting on version 1.0.0 to keep track of newly added features between versions.
### 1.0.0
- Added Tools-UI option to get FileMonitor running status.
- Added Stash toolbar icon to get FileMonitor running status.
### 1.0.1
- Added Docker support.
### 1.0.2
- Added ability to monitor host file system for multiple Docker Stash installations.
### 1.0.3
- Added start and stop FileMonitor button to Tools-UI FileMonitor Status.
- Fixed bug associated with starting FileMonitor service with no jobs waiting.
@@ -1,20 +1,43 @@
# RenameFile: Ver 0.4.6 (By David Maisonave)
RenameFile is a [Stash](https://github.com/stashapp/stash) plugin which performs the following tasks.
- **Rename Scene File Name** (On-The-Fly)
- **Append tag names** to file name
- **Append Performer names** to file name
# RenameFile: Ver 0.5.6 (By David Maisonave)

It allows users to rename the video (scene) file name by editing the [Title] field located in the scene [Edit] tab.
In addition, the plugin optionally also appends tags and performers to the file name if the name does not already exist in the original file name.
RenameFile is a [Stash](https://github.com/stashapp/stash) plugin. Starting with version 0.5.5, users can add the current title to the title input field by clicking on the current title. Also, the Stash database gets updated directly instead of running a scan task, as long as the database is version 68.

Note: This script is **largely** based on the [Renamer](https://github.com/Serechops/Serechops-Stash/tree/main/plugins/Renamer) script.
- The plugin allows the user to rename one scene at a time by editing the **[Title]** field and then clicking **[Save]**.

<img width="270" alt="RenameFileViaTitleUnderEditTab" src="https://github.com/user-attachments/assets/f27d0205-d4ed-44fb-9bb2-5b9a75cba2e0">
<img width="270" alt="RenameFileViaTitle_AfterSaved" src="https://github.com/user-attachments/assets/bf5779ea-77b3-478a-8f72-2dba695db6f0">

- The [Title] field is located under the [Edit] tab.
- After clicking **[Save]**, the change can be seen in File Explorer momentarily.

- <img width="560" alt="RenameFileViaTitle_AfterSaved_InExplorer" src="https://github.com/user-attachments/assets/60cd807b-dd49-4ac8-9eee-801050e20a2c">

- The plugin can optionally append the following fields if they do not already exist in the file name:

- title, performers, tags, studio, galleries, resolution, width, height, video_codec, frame_rate, date

- The newly added UI options allow the user to perform the following actions when clicking on the fixed title heading.
- Mouse-click: Append the title heading to the input title field.
- Ctrl-click: Copy title heading to clipboard.
- Shift-click: Replace content of input title field with title heading.
- Alt-click: Copy URI (local file path) to clipboard.

### RenameFile vs RenameOnUpdate

- Although RenameFile has a similar name to other plugins (RenameOnUpdate, Renamer, etc.), its main purpose is entirely different.
- The main purpose of RenameFile is to rename one scene at a time, which is the scene being displayed in the web browser. The scene is renamed by using the Title field, which is used to rename the base (stem) of the file name.
- Other plugins with similar names are used for mass renaming (renaming all your scenes), and do not edit the base (stem).

### Using RenameFile
- Open a scene (via Stash), and click on the [**Edit**] tab. Populate the [**Title**] field with the desired file name.
- Note: Do **NOT** include the file folder name and do **NOT** include file extension.

- Open a scene (via Stash), and click on the [**Edit**] tab. Populate the [**Title**] field with the desired file name.
- Note: Do **NOT** include the file folder name and do **NOT** include file extension.
- After populating the Title field, click the save button.
- **Warning:** On Windows, if Stash or any other player is playing the video, the RenameFile plugin will get an access denied error. Use one of the following two methods to avoid this error:
- **Option#1:** Populate **handleExe** in renamefile_settings.py with the full path of handle.exe. RenameFile will use this program to close all opened file handles before renaming a file. See options section for more details.
- **Option#2:** Refresh the browser page playing the video before renaming the file via the Title field.
- After a few seconds, the file will get renamed and the screen will get updated with the new file name.
- The append tags and perfomers option is disable by default. To enable these options go to the Settings->Plugins->Plugins->[RenameFile] field options, and enable the associated field.
- The append tags and performers option is disabled by default. To enable these options, go to the Settings->Plugins->Plugins->[RenameFile] field options, and enable the associated field.
- When [Append Tags] is enabled, by default tag names are appended to the file name only if the tags do not exist in the original name. Same applies to the [Append Performers] option.
- Since this plugin is largely based on the [Renamer](https://github.com/Serechops/Serechops-Stash/tree/main/plugins/Renamer) plugin, it inherited some of its features, like being able to include any of the following fields when auto-renaming is executed:
- studio, performers, date, height, video_codec, frame_rate
@@ -25,27 +48,25 @@ Note: This script is **largely** based on the [Renamer](https://github.com/Serec
- Define the key fields used to format the file name. This is a comma-separated list, and the list should be in the desired format order. (Default=title,performers,studio,tags)
  - For example, if the user wants the performer names before the title, list the performers field first.
  - Example: "performers,title,tags".
  - This is an example of a user adding height: "title,performers,tags,height"
  - Here's an example using all of the supported fields: "title,performers,tags,studio,galleries,resolution,width,height,video_codec,frame_rate,date".
- The **resolution** field equals width + height.
- The date field is **not** populated by default unless the user explicitly adds the date value to a scene.
- If **[Key Fields]** is empty, the default value is used. (Default=title,performers,studio,tags)
- There are additional options in renamefile_settings.py, but these options should only be changed by advanced users, and any changes should be tested first with the [Dry-Run] option enabled.
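As an illustration of the [Key Fields] ordering described above, the following minimal sketch (not the plugin's actual code; the separator and the scene values are made up for the example) shows how a comma-separated key list could drive the output order:

```python
# Hypothetical sketch of [Key Fields] ordering -- not RenameFile's real implementation.
DEFAULT_FIELD_KEY_LIST = "title,performers,studio,tags"

def form_name(field_key_list, scene, separator="-"):
    # Fall back to the default list when the setting is empty,
    # normalize separators, and split into ordered keys.
    keys = (field_key_list or DEFAULT_FIELD_KEY_LIST).replace(" ", "").replace(";", ",").split(",")
    # Keep only the fields that have a value, in the requested order.
    parts = [scene[key] for key in keys if scene.get(key)]
    return separator.join(parts)

scene = {"title": "MyScene", "performers": "Jane Doe", "studio": "MyStudio", "tags": ""}
print(form_name("performers,title,tags", scene))  # Jane Doe-MyScene
```

Setting the key list to "performers,title,tags" puts the performer names first, matching the example above.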

**Note:** On Windows 10/11, the file cannot be renamed while it's playing. Doing so results in the following error:

`Error: [WinError 32] The process cannot access the file because it is being used by another process`

To avoid this error, refresh the URL before changing the Title field.

### Requirements

- pip install -r requirements.txt
- Or manually install each requirement:
  - `pip install stashapp-tools --upgrade`
  - `pip install pyYAML`
  - `pip install requests`
  - `pip install psutil`
- For the (Windows-only) optional **handleExe** feature, download handle.exe:
  - https://learn.microsoft.com/en-us/sysinternals/downloads/handle

### Installation

- Follow the **Requirements** instructions.
- Create a folder named **RenameFile** in the Stash plugin directory (C:\Users\MyUserName\.stash\plugins).
- Download the latest version from the following link: [RenameFile](https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/RenameFile), and copy the plugin files to the folder (**C:\Users\MyUserName\\.stash\plugins\RenameFile**).
@@ -54,10 +75,17 @@ To avoid this error, refresh the URL before changing the Title field.
That's it!!!

### Options

- Main options are accessible in the GUI via Settings->Plugins->Plugins->[RenameFile].
- Advanced options are available in the **renamefile_settings.py** file. After making changes, go to http://localhost:9999/settings?tab=plugins, and click [Reload Plugins].
- **handleExe** - Populate this field to allow the RenameFile plugin to close all open file handles.
  - On Windows, a file can't be renamed while it is opened by another process. In other words, if a file is being played by Stash or any other video player, the RenameFile plugin will get an access-denied error when trying to rename the file.
  - As a workaround, the **handleExe** field can be populated with the full path to handle.exe or handle64.exe. (See the requirements section for the download link.)
  - RenameFile can use the handle.exe program to close all file handles opened by any process before renaming the file.
  - **Warning:** This feature can cause the process playing the video to crash.

## Bugs and Feature Requests

Please use the following link to report RenameFile bugs:
[RenameFile Bug Report](https://github.com/David-Maisonave/Axter-Stash/issues/new?assignees=&labels=Plugin_Bug&projects=&template=bug_report_plugin.yml&title=%F0%9F%AA%B2%5BRenameFile%5D+Your_Short_title)

@@ -65,3 +93,6 @@ Please use the following link to report RenameFile Feature Request:[RenameFile F

Please do **NOT** use a feature request to report problems associated with errors. Instead, use the bug report for error issues.

**Note:** This script is **largely** based on the [Renamer](https://github.com/Serechops/Serechops-Stash/tree/main/plugins/Renamer) script.

### Future Planned Features or Fixes

File diff suppressed because it is too large
@@ -2,11 +2,22 @@
# By David Maisonave (aka Axter) Jul-2024 (https://www.axter.com/)
# Get the latest developers version from following link: https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/RenameFile
# Based on source code from https://github.com/Serechops/Serechops-Stash/tree/main/plugins/Renamer

# To automatically install missing modules, uncomment the following lines of code.
# try:
#     import ModulesValidate
#     ModulesValidate.modulesInstalled(["stashapp-tools", "requests"])
# except Exception as e:
#     import traceback, sys
#     tb = traceback.format_exc()
#     print(f"ModulesValidate Exception. Error: {e}\nTraceBack={tb}", file=sys.stderr)

import os, sys, shutil, json, hashlib, pathlib, logging, time, traceback
from pathlib import Path
import stashapi.log as log  # Importing stashapi.log as log for critical events ONLY
from stashapi.stashapp import StashInterface
from StashPluginHelper import StashPluginHelper
from StashPluginHelper import taskQueue
from renamefile_settings import config  # Import settings from renamefile_settings.py

# **********************************************************************
@@ -25,16 +36,14 @@ QUERY_ALL_SCENES = """
# **********************************************************************
# Global variables --------------------------------------------
inputToUpdateScenePost = False
doNothing = False
exitMsg = "Change success!!"

# **********************************************************************
# ----------------------------------------------------------------------
settings = {
    "performerAppend": False,
    "studioAppend": False,
    "tagAppend": False,
    "yRenameEvenIfTitleEmpty": False,
    "z_keyFIeldsIncludeInFileName": False,
    "zafileRenameViaMove": False,
    "zfieldKeyList": DEFAULT_FIELD_KEY_LIST,
    "zmaximumTagKeys": 12,
    "zseparators": DEFAULT_SEPERATOR,
@@ -46,21 +55,31 @@ stash = StashPluginHelper(
    config=config,
    maxbytes=10*1024*1024,
)
stash.Status(logLevel=logging.DEBUG)
# stash.status(logLevel=logging.DEBUG)
if stash.PLUGIN_ID in stash.PLUGIN_CONFIGURATION:
    stash.pluginSettings.update(stash.PLUGIN_CONFIGURATION[stash.PLUGIN_ID])
if stash.IS_DOCKER:
    stash.log_to_wrn_set = stash.LogTo.STASH + stash.LogTo.FILE
# ----------------------------------------------------------------------
WRAPPER_STYLES = config["wrapper_styles"]
POSTFIX_STYLES = config["postfix_styles"]

renameEvenIfTitleEmpty = stash.pluginSettings["yRenameEvenIfTitleEmpty"]

# Extract dry_run setting from settings
dry_run = stash.pluginSettings["zzdryRun"]
dry_run_prefix = ''
try:
    stash.Trace(f"hookContext={stash.JSON_INPUT['args']['hookContext']}")
    if stash.JSON_INPUT['args']['hookContext']['input']:
        if stash.JSON_INPUT['args']['hookContext']['input'] == None:
            doNothing = True
            stash.Log("input = None")
        else:
            inputToUpdateScenePost = True  # This avoids calling rename logic twice
except:
    pass
stash.Trace("settings: %s " % (stash.pluginSettings,))

if dry_run:
    stash.Log("Dry run mode is enabled.")
@@ -69,34 +88,42 @@ max_tag_keys = stash.pluginSettings["zmaximumTagKeys"] if stash.pluginSettings["
# ToDo: Add split logic here to split a possible string array into an array
exclude_paths = config["pathToExclude"]
exclude_paths = exclude_paths.split()
if len(exclude_paths) > 0:
    stash.Trace(f"(exclude_paths={exclude_paths})")
excluded_tags = config["excludeTags"]
# Extract tag whitelist from settings
tag_whitelist = config["tagWhitelist"]
if not tag_whitelist:
    tag_whitelist = ""
if len(tag_whitelist) > 0:
    stash.Trace(f"(tag_whitelist={tag_whitelist})")
handleExe = stash.pluginConfig['handleExe']
openedfile = None
if handleExe != None and handleExe != "" and os.path.isfile(handleExe):
    # ModulesValidate.modulesInstalled(["psutil"], silent=True)
    from openedFile import openedFile
    openedfile = openedFile(handleExe, stash)

endpointHost = stash.JSON_INPUT['server_connection']['Host']
if endpointHost == "0.0.0.0":
    endpointHost = "localhost"
endpoint = f"{stash.JSON_INPUT['server_connection']['Scheme']}://{endpointHost}:{stash.JSON_INPUT['server_connection']['Port']}/graphql"

# stash.Trace(f"(endpoint={endpoint})")
move_files = stash.Setting("fileRenameViaMove")
fieldKeyList = stash.pluginSettings["zfieldKeyList"]  # Default Field Key List with the desired order
if not fieldKeyList or fieldKeyList == "":
    fieldKeyList = DEFAULT_FIELD_KEY_LIST
fieldKeyList = fieldKeyList.replace(" ", "")
fieldKeyList = fieldKeyList.replace(";", ",")
fieldKeyList = fieldKeyList.split(",")
# stash.Trace(f"(fieldKeyList={fieldKeyList})")
separator = stash.pluginSettings["zseparators"]
# ----------------------------------------------------------------------
# **********************************************************************

double_separator = separator + separator
# stash.Trace(f"(WRAPPER_STYLES={WRAPPER_STYLES}) (POSTFIX_STYLES={POSTFIX_STYLES})")

# Function to replace illegal characters in filenames
def replace_illegal_characters(filename):
@@ -112,13 +139,56 @@ def should_exclude_path(scene_details):
            return True
    return False

include_keyField_if_in_name = stash.pluginSettings["z_keyFIeldsIncludeInFileName"]
excludeIgnoreAutoTags = config["excludeIgnoreAutoTags"]

def getPerformers(scene, title):
    title = title.lower()
    results = ""
    for performer in scene['performers']:
        name = performer['name']
        stash.Trace(f"performer = {name}")
        if not include_keyField_if_in_name:
            if name.lower() in title:
                stash.Trace(f"Skipping performer name '{name}' because already in title: '{title}'")
                continue
        results += f"{name}, "
    return results.strip(", ")

def getGalleries(scene, title):
    title = title.lower()  # Lowercase once so the substring checks below are case-insensitive
    results = ""
    for gallery in scene['galleries']:
        name = stash.find_gallery(gallery['id'])['title']
        stash.Trace(f"gallery = {name}")
        if not include_keyField_if_in_name:
            if name.lower() in title:
                stash.Trace(f"Skipping gallery name '{name}' because already in title: '{title}'")
                continue
        results += f"{name}, "
    return results.strip(", ")

def getTags(scene, title):
    title = title.lower()
    results = ""
    for tag in scene['tags']:
        name = tag['name']
        stash.Trace(f"tag = {name}")
        if excludeIgnoreAutoTags == True and tag['ignore_auto_tag'] == True:
            stash.Trace(f"Skipping tag name '{name}' because ignore_auto_tag is True.")
            continue
        if not include_keyField_if_in_name:
            if name.lower() in title:
                stash.Trace(f"Skipping tag name '{name}' because already in title: '{title}'")
                continue
        results += f"{name}, "
    return results.strip(", ")

# Function to form the new filename based on scene details and user settings
def form_filename(original_file_stem, scene_details):
    filename_parts = []
    tag_keys_added = 0
    default_title = ''
    if_notitle_use_org_filename = config["if_notitle_use_org_filename"]
    include_keyField_if_in_name = stash.pluginSettings["z_keyFIeldsIncludeInFileName"]
    if if_notitle_use_org_filename:
        default_title = original_file_stem
    # ...................
@@ -152,21 +222,26 @@ def form_filename(original_file_stem, scene_details):
                stash.Log(f"Skipping tag not in whitelist: {tag_name}")
    stash.Trace(f"(tag_keys_added={tag_keys_added})")

    stash.Trace(f"scene_details = {scene_details}")

    for key in fieldKeyList:
        if key == 'studio':
            if stash.Setting("studioAppendEnable"):
                studio = scene_details.get('studio')
                if studio != None:
                    studio_name = studio.get('name')
                else:
                    studio_name = None
                stash.Trace(f"(studio_name={studio_name})")
                if studio_name:
                    studio_name += POSTFIX_STYLES.get('studio')
                    if include_keyField_if_in_name or studio_name.lower() not in title.lower():
                        if WRAPPER_STYLES.get('studio'):
                            filename_parts.append(f"{WRAPPER_STYLES['studio'][0]}{studio_name}{WRAPPER_STYLES['studio'][1]}")
                        else:
                            filename_parts.append(studio_name)
            else:
                stash.Trace("Skipping studio because of user setting studioAppend disabled.")
        elif key == 'title':
            if title:  # This value has already been fetched at the start of the function because it needs to be defined before tags and performers
                title += POSTFIX_STYLES.get('title')
@@ -175,19 +250,17 @@ def form_filename(original_file_stem, scene_details):
                else:
                    filename_parts.append(title)
        elif key == 'performers':
            if stash.Setting("performerAppendEnable"):
                performers = getPerformers(scene_details, title)
                if performers != "":
                    performers += POSTFIX_STYLES.get('performers')
                    stash.Trace(f"(performers={performers})")
                    if WRAPPER_STYLES.get('performers'):
                        filename_parts.append(f"{WRAPPER_STYLES['performers'][0]}{performers}{WRAPPER_STYLES['performers'][1]}")
                    else:
                        filename_parts.append(performers)
        elif key == 'date':
            scene_date = scene_details.get('date')
            if scene_date:
                scene_date += POSTFIX_STYLES.get('date')
                if WRAPPER_STYLES.get('date'):
@@ -195,8 +268,10 @@ def form_filename(original_file_stem, scene_details):
                    if scene_date not in title:
                        filename_parts.append(scene_date)
        elif key == 'resolution':
            # width = str(scene_details.get('files', [{}])[0].get('width', ''))  # Convert width to string
            # height = str(scene_details.get('files', [{}])[0].get('height', ''))  # Convert height to string
            width = str(scene_details['files'][0]['width'])
            height = str(scene_details['files'][0]['height'])
            if width and height:
                resolution = width + POSTFIX_STYLES.get('width_height_seperator') + height + POSTFIX_STYLES.get('resolution')
                if WRAPPER_STYLES.get('resolution'):
@@ -204,7 +279,7 @@ def form_filename(original_file_stem, scene_details):
                    if resolution not in title:
                        filename_parts.append(resolution)
        elif key == 'width':
            width = str(scene_details['files'][0]['width'])
            if width:
                width += POSTFIX_STYLES.get('width')
                if WRAPPER_STYLES.get('width'):
@@ -212,7 +287,7 @@ def form_filename(original_file_stem, scene_details):
                    if width not in title:
                        filename_parts.append(width)
        elif key == 'height':
            height = str(scene_details['files'][0]['height'])
            if height:
                height += POSTFIX_STYLES.get('height')
                if WRAPPER_STYLES.get('height'):
@@ -220,7 +295,7 @@ def form_filename(original_file_stem, scene_details):
                    if height not in title:
                        filename_parts.append(height)
        elif key == 'video_codec':
            video_codec = scene_details['files'][0]['video_codec'].upper()  # Convert to uppercase
            if video_codec:
                video_codec += POSTFIX_STYLES.get('video_codec')
                if WRAPPER_STYLES.get('video_codec'):
@@ -228,7 +303,7 @@ def form_filename(original_file_stem, scene_details):
                    if video_codec not in title:
                        filename_parts.append(video_codec)
        elif key == 'frame_rate':
            frame_rate = str(scene_details['files'][0]['frame_rate']) + 'FPS'  # Convert to string and append 'FPS'
            if frame_rate:
                frame_rate += POSTFIX_STYLES.get('frame_rate')
                if WRAPPER_STYLES.get('frame_rate'):
@@ -236,24 +311,24 @@ def form_filename(original_file_stem, scene_details):
                    if frame_rate not in title:
                        filename_parts.append(frame_rate)
        elif key == 'galleries':
            galleries = getGalleries(scene_details, title)
            if galleries != "":
                galleries += POSTFIX_STYLES.get('galleries')
                if WRAPPER_STYLES.get('galleries'):
                    filename_parts.append(f"{WRAPPER_STYLES['galleries'][0]}{galleries}{WRAPPER_STYLES['galleries'][1]}")
                else:
                    filename_parts.append(galleries)
                stash.Trace(f"(galleries={galleries})")
        elif key == 'tags':
            if stash.Setting("tagAppendEnable"):
                tags = getTags(scene_details, title)
                if tags != "":
                    tags += POSTFIX_STYLES.get('tag')
                    if WRAPPER_STYLES.get('tag'):
                        filename_parts.append(f"{WRAPPER_STYLES['tag'][0]}{tags}{WRAPPER_STYLES['tag'][1]}")
                    else:
                        filename_parts.append(tags)
                    stash.Trace(f"(tags={tags})")

    stash.Trace(f"(filename_parts={filename_parts})")
    new_filename = separator.join(filename_parts).replace(double_separator, separator)
@@ -268,13 +343,34 @@ def form_filename(original_file_stem, scene_details):

def rename_scene(scene_id):
    global exitMsg
    POST_SCAN_DELAY = 3
    fragment = 'id title performers {name} tags {id name ignore_auto_tag} studio {name} galleries {id} files {id path width height video_codec frame_rate} date'
    scene_details = stash.find_scene(scene_id, fragment)
    stash.Trace(f"(scene_details={scene_details})")
    if not scene_details:
        stash.Error(f"Scene with ID {scene_id} not found.")
        return None
    taskqueue = taskQueue(stash.job_queue())
    original_file_path = scene_details['files'][0]['path']
    original_parent_directory = Path(original_file_path).parent
    maxScanCountDefault = 5
    maxScanCountForUpdate = 10
    if scene_details['title'] == None or scene_details['title'] == "":
        if renameEvenIfTitleEmpty == False:
            stash.Log("Nothing to do because title is empty.")
            return None
        stash.Warn("Title is empty.")
        maxScanCountDefault = 1
        maxScanCountForUpdate = 1
    if not os.path.isfile(original_file_path) and not taskqueue.clearDupTagsJobOnTaskQueue() and not taskqueue.deleteTaggedScenesJobOnTaskQueue() and not taskqueue.tooManyScanOnTaskQueue(maxScanCountDefault):
        stash.Warn(f"[metadata_scan] Have to rescan scene ID {scene_id}, because Stash library path '{original_file_path}' does not exist. Scanning path: {original_parent_directory.resolve().as_posix()}")
        stash.metadata_scan(paths=[original_parent_directory.resolve().as_posix()])
        time.sleep(POST_SCAN_DELAY)  # After a scan, need a few seconds delay before fetching data.
        scene_details = stash.find_scene(scene_id)
        original_file_path = scene_details['files'][0]['path']
    if not os.path.isfile(original_file_path):
        stash.Error(f"Can not rename file because path {original_file_path} doesn't exist.")
        return None
    stash.Trace(f"(original_file_path={original_file_path})")
    # Check if the scene's path matches any of the excluded paths
    if exclude_paths and any(Path(original_file_path).match(exclude_path) for exclude_path in exclude_paths):
@@ -293,31 +389,82 @@ def rename_scene(scene_id):
        new_filename = truncated_filename + '_' + hash_suffix + Path(original_file_path).suffix
    newFilenameWithExt = new_filename + Path(original_file_path).suffix
    new_file_path = f"{original_parent_directory}{os.sep}{new_filename}{Path(original_file_name).suffix}"
    stash.Trace(f"(original_file_name={original_file_name}) (newFilenameWithExt={newFilenameWithExt}) (new_file_path={new_file_path}) (FileID={scene_details['files'][0]['id']})")
    if original_file_name == newFilenameWithExt or original_file_name == new_filename:
        stash.Log(f"Nothing to do, because new file name matches original file name: (newFilenameWithExt={newFilenameWithExt})")
        return None
    targetDidExist = True if os.path.isfile(new_file_path) else False
    try:
        if openedfile != None:
            results = openedfile.closeFile(original_file_path)
            if results != None:
                stash.Warn(f"Had to close '{original_file_path}', because it was opened by following pids:{results['pids']}")
        if move_files:
            if not dry_run:
                shutil.move(original_file_path, new_file_path)
            exitMsg = f"{dry_run_prefix}Moved file to '{new_file_path}' from '{original_file_path}'"
        else:
            stash.Trace(f"Rename('{original_file_path}', '{new_file_path}')")
            if not dry_run:
                os.rename(original_file_path, new_file_path)
            exitMsg = f"{dry_run_prefix}Renamed file to '{new_file_path}' from '{original_file_path}'"
    except OSError as e:
        exitMsg = f"Failed to move/rename file: From {original_file_path} to {new_file_path}; targetDidExist={targetDidExist}. Error: {e}"
        stash.Error(exitMsg)
        if not targetDidExist and os.path.isfile(new_file_path):
            if not taskqueue.tooManyScanOnTaskQueue(maxScanCountDefault):
                stash.Trace(f"Calling [metadata_scan] for path {original_parent_directory.resolve().as_posix()}")
                stash.metadata_scan(paths=[original_parent_directory.resolve().as_posix()])
        if targetDidExist:
            raise
        if os.path.isfile(new_file_path):
            if os.path.isfile(original_file_path):
                os.remove(original_file_path)
            pass
        else:
            # ToDo: Add delay rename here
            raise

    if stash.renameFileNameInDB(scene_details['files'][0]['id'], original_file_name, newFilenameWithExt):
        stash.Trace("DB rename success")
    elif not taskqueue.tooManyScanOnTaskQueue(maxScanCountForUpdate):
        stash.Trace(f"Calling [metadata_scan] for path {original_parent_directory.resolve().as_posix()}")
        stash.metadata_scan(paths=[original_parent_directory.resolve().as_posix()])
        time.sleep(POST_SCAN_DELAY)  # After a scan, need a few seconds delay before fetching data.
        scene_details = stash.find_scene(scene_id)
        if new_file_path != scene_details['files'][0]['path'] and not targetDidExist and not taskqueue.tooManyScanOnTaskQueue(maxScanCountDefault):
            stash.Trace(f"Calling [metadata_scan] for path {original_parent_directory.resolve().as_posix()}")
            stash.metadata_scan(paths=[original_parent_directory.resolve().as_posix()])
            time.sleep(POST_SCAN_DELAY)  # After a scan, need a few seconds delay before fetching data.
            scene_details = stash.find_scene(scene_id)
        if new_file_path != scene_details['files'][0]['path']:
            if not os.path.isfile(new_file_path):
                stash.Error(f"Failed to rename file from {scene_details['files'][0]['path']} to {new_file_path}.")
            elif os.path.isfile(scene_details['files'][0]['path']):
                stash.Warn(f"Failed to rename file from {scene_details['files'][0]['path']} to {new_file_path}. Old file still exists. Will attempt delayed deletion.")
                for i in range(1, 5*60):
                    time.sleep(60)
                    if not os.path.isfile(new_file_path):
                        stash.Error(f"Not deleting old file name {original_file_path} because new file name {new_file_path} does NOT exist.")
                        break
                    os.remove(original_file_path)
                    if not os.path.isfile(original_file_path):
                        stash.Log(f"Deleted {original_file_path} in delayed deletion after {i} minutes.")
                        stash.Trace(f"Calling [metadata_scan] for path {original_parent_directory.resolve().as_posix()}")
                        stash.metadata_scan(paths=[original_parent_directory.resolve().as_posix()])
                        break
            else:
                org_stem = Path(scene_details['files'][0]['path']).stem
                new_stem = Path(new_file_path).stem
                file_id = scene_details['files'][0]['id']
                stash.Warn(f"Failed to update Stash library with new name. Will try direct SQL update. org_name={org_stem}; new_name={new_stem}; file_id={file_id}")
                # stash.set_file_basename(file_id, new_stem)
    else:
        stash.Warn(f"Not performing [metadata_scan] because too many scan jobs are already on the Task Queue. Recommend running a full scan, and a clean job to make sure Stash DB is up to date.")
        if not taskqueue.cleanJobOnTaskQueue():
            stash.metadata_scan()
            stash.metadata_clean()
        if not taskqueue.cleanGeneratedJobOnTaskQueue():
            stash.metadata_clean_generated()
    stash.Log(exitMsg)
    return new_filename
@@ -326,7 +473,7 @@ def rename_files_task():
    all_scenes = scene_result['allScenes']
    if not all_scenes:
        stash.Error("No scenes found.")
        sys.exit(13)
    # Find the scene with the latest updated_at timestamp
    latest_scene = max(all_scenes, key=lambda scene: scene['updated_at'])
    # Extract the ID of the latest scene
@@ -340,12 +487,18 @@ def rename_files_task():
        stash.Log("No changes were made.")
        return

try:
    if stash.PLUGIN_TASK_NAME == "rename_files_task":
        stash.Trace(f"PLUGIN_TASK_NAME={stash.PLUGIN_TASK_NAME}")
        rename_files_task()
    elif inputToUpdateScenePost:
        rename_files_task()
    else:
        stash.Trace(f"Nothing to do. doNothing={doNothing}")
except Exception as e:
    tb = traceback.format_exc()
    stash.Error(f"Exception while running Plugin. Error: {e}\nTraceBack={tb}")
    # stash.log.exception('Got exception on main handler')

stash.Trace("\n*********************************\nEXITING ***********************\n*********************************")

# ToDo: Wish List
# Add code to get tags from duplicate filenames
@@ -1,28 +1,21 @@
name: RenameFile
description: Renames video (scene) file names when the user edits the [Title] field located in the scene [Edit] tab.
version: 0.4.6
version: 0.5.6
url: https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/RenameFile
ui:
  css:
    - renamefile.css
  javascript:
    - renamefile.js
settings:
  performerAppend:
    displayName: Append Performers
    description: Enable to append the performer names to the file name when renaming a file. Requires performers to be included in the [Key Fields] list, which is included by default.
    type: BOOLEAN
  studioAppend:
    displayName: Append Studio
    description: Enable to append the studio name to the file name when renaming a file. Requires studio to be included in the [Key Fields] list, which is included by default.
    type: BOOLEAN
  tagAppend:
    displayName: Append Tags
    description: Enable to append tag names to the file name when renaming a file. Requires tags to be included in the [Key Fields] list, which is included by default.
  yRenameEvenIfTitleEmpty:
    displayName: Empty Title Rename
    description: If enabled, rename files even if the TITLE field is empty.
    type: BOOLEAN
  z_keyFIeldsIncludeInFileName: # Prefixing z_ to variable names so that the GUI will place these fields after the above fields (alphabetically listed)
    displayName: Include Existing Key Field
    description: Enable to append performer, tags, studios, & galleries even if the name already exists in the original file name.
    type: BOOLEAN
  zafileRenameViaMove:
    displayName: Move Instead of Rename
    description: Enable to move the file instead of renaming it. (Not recommended for Windows OS)
    type: BOOLEAN
  zfieldKeyList:
    displayName: Key Fields
    description: '(Default=title,performers,studio,tags) Define the key fields used to format the file name. This is a comma separated list, and the list should be in the desired format order. For example, if the user wants the performer name before the title, set the performer name first. Example: "performers,title,tags". This is an example of a user adding height: "title,performers,tags,height". Here''s an example using all of the supported fields: "title,performers,tags,studio,galleries,resolution,width,height,video_codec,frame_rate,date".'
@@ -37,7 +30,7 @@ settings:
    type: STRING
  zzdebugTracing:
    displayName: Debug Tracing
    description: (Default=false) [***For Advanced Users***] Enable debug tracing. When enabled, additional tracing logging is added to Stash\plugins\RenameFile\renamefile.log
    description: Enable debug tracing to add additional debug logging in Stash\plugins\RenameFile\renamefile.log
    type: BOOLEAN
  zzdryRun:
    displayName: Dry Run

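The [Key Fields] setting above is essentially an ordered, comma-separated template for assembling the new file name. As a rough illustration only (a hypothetical helper, not the plugin's actual code), building a name from such a list might look like:

```python
def build_filename(scene, field_key_list, separator="-", max_len=255):
    """Assemble a file name from scene fields in the order given by
    field_key_list (e.g. "performers,title,tags").

    Hypothetical sketch: field names and the separator are assumptions,
    not RenameFile's real implementation."""
    parts = []
    for key in field_key_list.split(","):
        value = scene.get(key.strip())
        if isinstance(value, list):  # multi-valued fields such as performers or tags
            value = separator.join(value)
        if value:
            parts.append(str(value))
    # Stash's DB schema caps the base file name at 255 characters
    return separator.join(parts)[:max_len]

scene = {"title": "MyTitle", "performers": ["Alice", "Bob"], "tags": ["tagA"]}
build_filename(scene, "performers,title,tags")  # "Alice-Bob-MyTitle-tagA"
```

Reordering the list reorders the output, which is why the description says to put the fields in the desired format order.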
@@ -38,7 +38,7 @@ config = {
        "date": '',
    },
    # Add tags to exclude from RenameFile.
    "excludeTags": ["DuplicateMarkForDeletion", "DuplicateMarkForSwap", "DuplicateWhitelistFile","_DuplicateMarkForDeletion","_DuplicateMarkForSwap", "_DuplicateWhitelistFile"],
    "excludeTags": ["DuplicateMarkForDeletion", "DuplicateMarkForSwap", "DuplicateWhitelistFile","_DuplicateMarkForDeletion","_DuplicateMarkForSwap", "_DuplicateWhitelistFile","ExcludeDuplicateMarkForDeletion", "_ExcludeDuplicateMarkForDeletion"],
    # Add path(s) to exclude from RenameFile. Example usage: r"/path/to/exclude1". When entering multiple paths, separate them with spaces. Example: r"/path_1_to/exclude" r"/someOtherPath2Exclude" r"/yetAnotherPath"
    "pathToExclude": "",
    # Define a whitelist of allowed tags, or leave EMPTY to allow all tags. Example usage: "tag1", "tag2", "tag3"
@@ -47,4 +47,26 @@ config = {
    "if_notitle_use_org_filename": True, # Warning: Setting this to False is not recommended.
    # The current Stash DB schema only allows a maximum base file name length of 255
    "max_filename_length": 255,
    # Exclude tags that have ignore_auto_tag set to True
    "excludeIgnoreAutoTags": True,
    # Enable to append the performer names to the file name when renaming a file. Requires performers to be included in the [Key Fields] list, which is included by default.
    "performerAppendEnable": True,
    # Enable to append the studio name to the file name when renaming a file. Requires studio to be included in the [Key Fields] list, which is included by default.
    "studioAppendEnable": True,
    # Enable to append tag names to the file name when renaming a file. Requires tags to be included in the [Key Fields] list, which is included by default.
    "tagAppendEnable": True,
    # Enable to move the file instead of renaming it. (Not recommended for Windows OS)
    "fileRenameViaMove": False,

    # handleExe is for Windows only.
    # In Windows, a file can't be renamed while it is opened by another process.
    # In other words, if a file is being played by Stash or any other video player, the RenameFile plugin
    # will get an access denied error when trying to rename the file.
    # As a workaround, the 'handleExe' field can be populated with the full path to handle.exe or handle64.exe.
    # This executable can be downloaded from the following link:
    # https://learn.microsoft.com/en-us/sysinternals/downloads/handle
    # RenameFile can use the handle.exe program to close all open file handles held by any process before renaming the file.
    #
    # Warning: This feature can cause the process playing the video to crash.
    "handleExe": r"C:\Sysinternals\handle64.exe", # https://learn.microsoft.com/en-us/sysinternals/downloads/handle
}

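The handleExe workaround described in the comments above can be sketched roughly as follows. This is a hedged illustration, not RenameFile's actual code: the output line format parsed here matches current Sysinternals handle64.exe releases but is not a stable contract, and forcibly closing a handle with `-c` carries the crash risk noted in the warning.

```python
import re
import subprocess

def parse_handle_output(text):
    """Extract (pid, handle_id) pairs from handle.exe output lines such as:
    'vlc.exe  pid: 1234  type: File  1A8: C:\\Videos\\scene.mp4'
    Assumption: this format may change between handle.exe releases."""
    pairs = []
    for line in text.splitlines():
        m = re.search(r"pid:\s*(\d+)\s+type:\s*File\s+([0-9A-Fa-f]+):", line)
        if m:
            pairs.append((int(m.group(1)), m.group(2)))
    return pairs

def close_open_handles(handle_exe, file_path):
    """List the processes holding file_path open, then close each handle
    (handle.exe -c <id> -p <pid> -y). Warning: closing a handle out from
    under a video player can crash that player."""
    out = subprocess.run(
        [handle_exe, "-accepteula", "-nobanner", file_path],
        capture_output=True, text=True,
    ).stdout
    for pid, handle_id in parse_handle_output(out):
        subprocess.run([handle_exe, "-accepteula", "-c", handle_id,
                        "-p", str(pid), "-y"])
```

After the handles are closed, the rename itself can proceed without the access denied error.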
@@ -1,3 +1,3 @@
stashapp-tools >= 0.2.50
pyYAML
requests
psutil
plugins/RenameFile/version_history/README.md (new file, 6 additions)
@@ -0,0 +1,6 @@
##### This page was added starting with version 0.5.6 to keep track of newly added features between versions.
### 0.5.6
- Fixed a bug where the studio ID was used instead of the studio name during the rename process.
- Improved performance by getting all required scene details in a single call to Stash.
- To remove UI clutter, moved rarely used options (performerAppendEnable, studioAppendEnable, tagAppendEnable, & fileRenameViaMove) to renamefile_settings.py.
- Changed options (performerAppendEnable, studioAppendEnable, tagAppendEnable) to default to True (enabled).