commit aa520dfa90
Tweeticoats 2023-12-27 23:49:57 +10:30
47 changed files with 6527 additions and 1394 deletions

.github/workflows/deploy.yml vendored Normal file

@ -0,0 +1,51 @@
name: Deploy repository to Github Pages

on:
  push:
    branches: [ main, stable ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout main
        uses: actions/checkout@v2
        with:
          path: main
          ref: main
          fetch-depth: '0'
      - run: |
          cd main
          ./build_site.sh ../_site/stable
      - name: Checkout dev
        uses: actions/checkout@v2
        with:
          path: dev
          # change this ref to whatever dev branch/tag we need when necessary
          ref: main
          fetch-depth: '0'
      - run: |
          cd dev
          ../main/build_site.sh ../_site/develop
      - uses: actions/upload-pages-artifact@v2
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-22.04
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v2

.gitignore vendored Normal file

@ -0,0 +1 @@
/_site

README.md

@ -14,35 +14,40 @@ When downloading directly click on the file you want and then make sure to click
# Plugin and Script Directory
This list keeps track of scripts and plugins in this repository. Please ensure the list is kept in alphabetical order.
## NOTE: BREAKING CHANGES
The upcoming v24 release (and the current development branch) contains breaking changes to the plugin schema, as well as other plugin changes.
We're beginning to review the plugins and scripts here and patch them to work, but it's an ongoing process.
We'll update the tables below as we do this, but we STRONGLY recommend you do not use the development branch unless you are prepared to help with the patching.
We will also be rearranging things a bit and updating documentation (including this page).
## Plugins
Category|Triggers|Plugin Name|Description|Minimum Stash version|Updated for v24|
--------|-----------|-----------|-----------|---------------------|-----
Scraper|Task|[GHScraper_Checker](plugins/GHScraper_Checker)|Compares local files against the GitHub files from the community scraper repo.|v0.8|:x:
Maintenance|Task<br />Scene.Update|[renamerOnUpdate](plugins/renamerOnUpdate)|Renames/moves your files based on Stash metadata.|v0.7|:x:
Maintenance|Set Scene Cover|[setSceneCoverFromFile](plugins/setSceneCoverFromFile)|Searches Stash for scenes with a cover image in the same folder and sets the cover image in Stash to that image|v0.7|:x:
Scenes|SceneMarker.Create<br />SceneMarker.Update|[markerTagToScene](plugins/markerTagToScene)|Adds the primary tag of a Scene Marker to the Scene on marker create/update.|v0.8 ([46bbede](https://github.com/stashapp/stash/commit/46bbede9a07144797d6f26cf414205b390ca88f9))|:x:
Scanning|Scene.Create<br />Gallery.Create<br />Image.Create|[defaultDataForPath](plugins/defaultDataForPath)|Adds configured Tags, Performers and/or Studio to all newly scanned Scenes, Images and Galleries.|v0.8|:x:
Scanning|Scene.Create<br />Gallery.Create|[filenameParser](plugins/filenameParser)|Tries to parse filenames, primarily in {studio}.{year}.{month}.{day}.{performer1firstname}.{performer1lastname}.{performer2}.{title} format, into the respective fields|v0.10|:x:
Scanning|Scene.Create|[pathParser](plugins/pathParser)|Updates scene info based on the file path.|v0.17|:x:
Scanning|Scene.Create|[titleFromFilename](plugins/titleFromFilename)|Sets the scene title to its filename|v0.17|:x:
Reporting||[TagGraph](plugins/tagGraph)|Creates a visual of the Tag relations.|v0.7|:x:
## Themes
Theme Name|Description |Updated for v24|
----------|--------------------------------------------|----
[Plex](themes/plex) |Theme inspired by the popular Plex interface|:x:
## Userscripts
|Category|Userscript Name|Description|Updated for v24|
---------|---------------|-----------|----
StashDB |[StashDB Submission Helper](/userscripts/StashDB_Submission_Helper)|Adds handy functions for StashDB submissions, like buttons to add aliases in bulk to a performer|:x:
## Utility Scripts
Category|Script Name|Description|Minimum Stash version|Updated for v24|
--------|-----------|-----------|---------------------|----
Kodi|[Kodi Helper](scripts/kodi-helper)|Generates `nfo` and `strm` files for use with Kodi.|v0.7|:x:

build_site.sh Executable file

@ -0,0 +1,72 @@
#!/bin/bash

# builds a repository of plugins
# outputs to _site with the following structure:
# index.yml
# <plugin_id>.zip
# Each zip file contains the plugin .yml file and any other files in the same directory

outdir="$1"
if [ -z "$outdir" ]; then
    outdir="_site"
fi

rm -rf "$outdir"
mkdir -p "$outdir"

buildPlugin()
{
    f=$1

    if grep -q "^#pkgignore" "$f"; then
        return
    fi

    # get the plugin id from the yml filename
    dir=$(dirname "$f")
    plugin_id=$(basename "$f" .yml)

    echo "Processing $plugin_id"

    # determine the version and last-updated date from git
    version=$(git log -n 1 --pretty=format:%h -- "$dir"/*)
    updated=$(TZ=UTC0 git log -n 1 --date="format-local:%F %T" --pretty=format:%ad -- "$dir"/*)

    # create the zip file, copying all files in the plugin directory
    zipfile=$(realpath "$outdir/$plugin_id.zip")

    pushd "$dir" > /dev/null
    zip -r "$zipfile" . > /dev/null
    popd > /dev/null

    name=$(grep "^name:" "$f" | head -n 1 | cut -d' ' -f2- | sed -e 's/\r//' -e 's/^"\(.*\)"$/\1/')
    description=$(grep "^description:" "$f" | head -n 1 | cut -d' ' -f2- | sed -e 's/\r//' -e 's/^"\(.*\)"$/\1/')
    ymlVersion=$(grep "^version:" "$f" | head -n 1 | cut -d' ' -f2- | sed -e 's/\r//' -e 's/^"\(.*\)"$/\1/')
    version="$ymlVersion-$version"
    dep=$(grep "^# requires:" "$f" | cut -c 12- | sed -e 's/\r//')

    # write to spec index
    echo "- id: $plugin_id
  name: $name
  metadata:
    description: $description
  version: $version
  date: $updated
  path: $plugin_id.zip
  sha256: $(sha256sum "$zipfile" | cut -d' ' -f1)" >> "$outdir"/index.yml

    # handle dependencies
    if [ ! -z "$dep" ]; then
        echo "  requires:" >> "$outdir"/index.yml
        for d in ${dep//,/ }; do
            echo "    - $d" >> "$outdir"/index.yml
        done
    fi

    echo "" >> "$outdir"/index.yml
}

find ./plugins -mindepth 1 -name "*.yml" | while read file; do
    buildPlugin "$file"
done
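For reference, a single entry appended to `index.yml` by `buildPlugin` looks roughly like the following sketch. The version hash, date, and sha256 values here are made-up placeholders; the `requires` list is only emitted when the plugin's .yml carries a `# requires:` comment, as stashBatchResultToggle's does later in this commit:

```yaml
- id: stashBatchResultToggle
  name: Stash Batch Result Toggle
  metadata:
    description: In Scene Tagger, adds buttons to toggle all StashDB scene match result fields.
  version: 1.0-1a2b3c4
  date: 2023-12-27 13:19:57
  path: stashBatchResultToggle.zip
  sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
  requires:
    - StashUserscriptLibrary
```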

plugins/CropperJS/cropperJS.yml Normal file

@ -0,0 +1,11 @@
name: Cropper.JS
description: Exports cropper.js functionality for JS/Userscripts
version: 1.6.1
ui:
css:
- cropper.css
javascript:
- cropper.js
# note - not minified, for more transparency around updates & diffs against source code
# https://github.com/fengyuanchen/cropperjs/tree/main/dist

plugins/CropperJS/cropper.css Normal file

@ -0,0 +1,308 @@
/*!
* Cropper.js v1.6.1
* https://fengyuanchen.github.io/cropperjs
*
* Copyright 2015-present Chen Fengyuan
* Released under the MIT license
*
* Date: 2023-09-17T03:44:17.565Z
*/
.cropper-container {
direction: ltr;
font-size: 0;
line-height: 0;
position: relative;
-ms-touch-action: none;
touch-action: none;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
.cropper-container img {
backface-visibility: hidden;
display: block;
height: 100%;
image-orientation: 0deg;
max-height: none !important;
max-width: none !important;
min-height: 0 !important;
min-width: 0 !important;
width: 100%;
}
.cropper-wrap-box,
.cropper-canvas,
.cropper-drag-box,
.cropper-crop-box,
.cropper-modal {
bottom: 0;
left: 0;
position: absolute;
right: 0;
top: 0;
}
.cropper-wrap-box,
.cropper-canvas {
overflow: hidden;
}
.cropper-drag-box {
background-color: #fff;
opacity: 0;
}
.cropper-modal {
background-color: #000;
opacity: 0.5;
}
.cropper-view-box {
display: block;
height: 100%;
outline: 1px solid #39f;
outline-color: rgba(51, 153, 255, 0.75);
overflow: hidden;
width: 100%;
}
.cropper-dashed {
border: 0 dashed #eee;
display: block;
opacity: 0.5;
position: absolute;
}
.cropper-dashed.dashed-h {
border-bottom-width: 1px;
border-top-width: 1px;
height: calc(100% / 3);
left: 0;
top: calc(100% / 3);
width: 100%;
}
.cropper-dashed.dashed-v {
border-left-width: 1px;
border-right-width: 1px;
height: 100%;
left: calc(100% / 3);
top: 0;
width: calc(100% / 3);
}
.cropper-center {
display: block;
height: 0;
left: 50%;
opacity: 0.75;
position: absolute;
top: 50%;
width: 0;
}
.cropper-center::before,
.cropper-center::after {
background-color: #eee;
content: ' ';
display: block;
position: absolute;
}
.cropper-center::before {
height: 1px;
left: -3px;
top: 0;
width: 7px;
}
.cropper-center::after {
height: 7px;
left: 0;
top: -3px;
width: 1px;
}
.cropper-face,
.cropper-line,
.cropper-point {
display: block;
height: 100%;
opacity: 0.1;
position: absolute;
width: 100%;
}
.cropper-face {
background-color: #fff;
left: 0;
top: 0;
}
.cropper-line {
background-color: #39f;
}
.cropper-line.line-e {
cursor: ew-resize;
right: -3px;
top: 0;
width: 5px;
}
.cropper-line.line-n {
cursor: ns-resize;
height: 5px;
left: 0;
top: -3px;
}
.cropper-line.line-w {
cursor: ew-resize;
left: -3px;
top: 0;
width: 5px;
}
.cropper-line.line-s {
bottom: -3px;
cursor: ns-resize;
height: 5px;
left: 0;
}
.cropper-point {
background-color: #39f;
height: 5px;
opacity: 0.75;
width: 5px;
}
.cropper-point.point-e {
cursor: ew-resize;
margin-top: -3px;
right: -3px;
top: 50%;
}
.cropper-point.point-n {
cursor: ns-resize;
left: 50%;
margin-left: -3px;
top: -3px;
}
.cropper-point.point-w {
cursor: ew-resize;
left: -3px;
margin-top: -3px;
top: 50%;
}
.cropper-point.point-s {
bottom: -3px;
cursor: s-resize;
left: 50%;
margin-left: -3px;
}
.cropper-point.point-ne {
cursor: nesw-resize;
right: -3px;
top: -3px;
}
.cropper-point.point-nw {
cursor: nwse-resize;
left: -3px;
top: -3px;
}
.cropper-point.point-sw {
bottom: -3px;
cursor: nesw-resize;
left: -3px;
}
.cropper-point.point-se {
bottom: -3px;
cursor: nwse-resize;
height: 20px;
opacity: 1;
right: -3px;
width: 20px;
}
@media (min-width: 768px) {
.cropper-point.point-se {
height: 15px;
width: 15px;
}
}
@media (min-width: 992px) {
.cropper-point.point-se {
height: 10px;
width: 10px;
}
}
@media (min-width: 1200px) {
.cropper-point.point-se {
height: 5px;
opacity: 0.75;
width: 5px;
}
}
.cropper-point.point-se::before {
background-color: #39f;
bottom: -50%;
content: ' ';
display: block;
height: 200%;
opacity: 0;
position: absolute;
right: -50%;
width: 200%;
}
.cropper-invisible {
opacity: 0;
}
.cropper-bg {
background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQAQMAAAAlPW0iAAAAA3NCSVQICAjb4U/gAAAABlBMVEXMzMz////TjRV2AAAACXBIWXMAAArrAAAK6wGCiw1aAAAAHHRFWHRTb2Z0d2FyZQBBZG9iZSBGaXJld29ya3MgQ1M26LyyjAAAABFJREFUCJlj+M/AgBVhF/0PAH6/D/HkDxOGAAAAAElFTkSuQmCC');
}
.cropper-hide {
display: block;
height: 0;
position: absolute;
width: 0;
}
.cropper-hidden {
display: none !important;
}
.cropper-move {
cursor: move;
}
.cropper-crop {
cursor: crosshair;
}
.cropper-disabled .cropper-drag-box,
.cropper-disabled .cropper-face,
.cropper-disabled .cropper-line,
.cropper-disabled .cropper-point {
cursor: not-allowed;
}

plugins/CropperJS/cropper.js Normal file

File diff suppressed because it is too large

plugins/stashBatchResultToggle/stashBatchResultToggle.js Normal file

@ -0,0 +1,308 @@
(function() {
let running = false;
const buttons = [];
let maxCount = 0;
function resolveToggle(el) {
let button = null;
if (el?.classList.contains('optional-field-content')) {
button = el.previousElementSibling;
} else if (el?.tagName === 'SPAN' && el?.classList.contains('ml-auto')) {
button = el.querySelector('.optional-field button');
} else if (el?.parentElement?.classList.contains('optional-field-content')) {
button = el.parentElement.previousElementSibling;
}
const state = button?.classList.contains('text-success');
return {
button,
state
};
}
function toggleSearchItem(searchItem, toggleMode) {
const searchResultItem = searchItem.querySelector('li.search-result.selected-result.active');
if (!searchResultItem) return;
const {
urlNode,
url,
id,
data,
nameNode,
name,
queryInput,
performerNodes
} = stash.parseSearchItem(searchItem);
const {
remoteUrlNode,
remoteId,
remoteUrl,
remoteData,
urlNode: matchUrlNode,
detailsNode,
imageNode,
titleNode,
codeNode,
dateNode,
studioNode,
performerNodes: matchPerformerNodes,
matches
} = stash.parseSearchResultItem(searchResultItem);
const studioMatchNode = matches.find(o => o.matchType === 'studio')?.matchNode;
const performerMatchNodes = matches.filter(o => o.matchType === 'performer').map(o => o.matchNode);
const includeTitle = document.getElementById('result-toggle-title').checked;
const includeCode = document.getElementById('result-toggle-code').checked;
const includeDate = document.getElementById('result-toggle-date').checked;
const includeCover = document.getElementById('result-toggle-cover').checked;
const includeStashID = document.getElementById('result-toggle-stashid').checked;
const includeURL = document.getElementById('result-toggle-url').checked;
const includeDetails = document.getElementById('result-toggle-details').checked;
const includeStudio = document.getElementById('result-toggle-studio').checked;
const includePerformers = document.getElementById('result-toggle-performers').checked;
let options = [];
options.push(['title', includeTitle, titleNode, resolveToggle(titleNode)]);
options.push(['code', includeCode, codeNode, resolveToggle(codeNode)]);
options.push(['date', includeDate, dateNode, resolveToggle(dateNode)]);
options.push(['cover', includeCover, imageNode, resolveToggle(imageNode)]);
options.push(['stashid', includeStashID, remoteUrlNode, resolveToggle(remoteUrlNode)]);
options.push(['url', includeURL, matchUrlNode, resolveToggle(matchUrlNode)]);
options.push(['details', includeDetails, detailsNode, resolveToggle(detailsNode)]);
options.push(['studio', includeStudio, studioMatchNode, resolveToggle(studioMatchNode)]);
options = options.concat(performerMatchNodes.map(o => ['performer', includePerformers, o, resolveToggle(o)]));
for (const [optionType, optionValue, optionNode, {
button,
state
}] of options) {
let wantedState = optionValue;
if (toggleMode === 1) {
wantedState = true;
} else if (toggleMode === -1) {
wantedState = false;
}
if (optionNode && wantedState !== state) {
button.click();
}
}
}
function run() {
if (!running) return;
const button = buttons.pop();
stash.setProgress((maxCount - buttons.length) / maxCount * 100);
if (button) {
const searchItem = getClosestAncestor(button, '.search-item');
let toggleMode = 0;
if (btn === btnOn) {
toggleMode = 1;
} else if (btn === btnOff) {
toggleMode = -1;
} else if (btn === btnMixed) {
toggleMode = 0;
}
toggleSearchItem(searchItem, toggleMode);
setTimeout(run, 0);
} else {
stop();
}
}
const btnGroup = document.createElement('div');
const btnGroupId = 'batch-result-toggle';
btnGroup.setAttribute('id', btnGroupId);
btnGroup.classList.add('btn-group', 'ml-3');
const checkLabel = '<svg aria-hidden="true" focusable="false" data-prefix="fas" data-icon="check" class="svg-inline--fa fa-check fa-w-16 fa-icon fa-fw" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M173.898 439.404l-166.4-166.4c-9.997-9.997-9.997-26.206 0-36.204l36.203-36.204c9.997-9.998 26.207-9.998 36.204 0L192 312.69 432.095 72.596c9.997-9.997 26.207-9.997 36.204 0l36.203 36.204c9.997 9.997 9.997 26.206 0 36.204l-294.4 294.401c-9.998 9.997-26.207 9.997-36.204-.001z"></path></svg>';
const timesLabel = '<svg aria-hidden="true" focusable="false" data-prefix="fas" data-icon="times" class="svg-inline--fa fa-times fa-w-11 fa-icon fa-fw" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 352 512"><path fill="currentColor" d="M242.72 256l100.07-100.07c12.28-12.28 12.28-32.19 0-44.48l-22.24-22.24c-12.28-12.28-32.19-12.28-44.48 0L176 189.28 75.93 89.21c-12.28-12.28-32.19-12.28-44.48 0L9.21 111.45c-12.28 12.28-12.28 32.19 0 44.48L109.28 256 9.21 356.07c-12.28 12.28-12.28 32.19 0 44.48l22.24 22.24c12.28 12.28 32.2 12.28 44.48 0L176 322.72l100.07 100.07c12.28 12.28 32.2 12.28 44.48 0l22.24-22.24c12.28-12.28 12.28-32.19 0-44.48L242.72 256z"></path></svg>';
const startLabel = '<svg aria-hidden="true" focusable="false" data-prefix="fas" data-icon="circle" class="svg-inline--fa fa-circle fa-w-16 fa-icon fa-fw" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M512 256C512 397.4 397.4 512 256 512C114.6 512 0 397.4 0 256C0 114.6 114.6 0 256 0C397.4 0 512 114.6 512 256zM256 48C141.1 48 48 141.1 48 256C48 370.9 141.1 464 256 464C370.9 464 464 370.9 464 256C464 141.1 370.9 48 256 48z"/></svg>';
let btn;
const btnOffId = 'batch-result-toggle-off';
const btnOff = document.createElement("button");
btnOff.setAttribute("id", btnOffId);
btnOff.title = 'Result Toggle All Off';
btnOff.classList.add('btn', 'btn-primary');
btnOff.innerHTML = timesLabel;
btnOff.onclick = () => {
if (running) {
stop();
} else {
btn = btnOff;
start();
}
};
btnGroup.appendChild(btnOff);
const btnMixedId = 'batch-result-toggle-mixed';
const btnMixed = document.createElement("button");
btnMixed.setAttribute("id", btnMixedId);
btnMixed.title = 'Result Toggle All';
btnMixed.classList.add('btn', 'btn-primary');
btnMixed.innerHTML = startLabel;
btnMixed.onclick = () => {
if (running) {
stop();
} else {
btn = btnMixed;
start();
}
};
btnGroup.appendChild(btnMixed);
const btnOnId = 'batch-result-toggle-on';
const btnOn = document.createElement("button");
btnOn.setAttribute("id", btnOnId);
btnOn.title = 'Result Toggle All On';
btnOn.classList.add('btn', 'btn-primary');
btnOn.innerHTML = checkLabel;
btnOn.onclick = () => {
if (running) {
stop();
} else {
btn = btnOn;
start();
}
};
btnGroup.appendChild(btnOn);
function start() {
// btn.innerHTML = stopLabel;
btn.classList.remove('btn-primary');
btn.classList.add('btn-danger');
btnMixed.disabled = true;
btnOn.disabled = true;
btnOff.disabled = true;
btn.disabled = false;
running = true;
stash.setProgress(0);
buttons.length = 0;
for (const button of document.querySelectorAll('.btn.btn-primary')) {
if (button.innerText === 'Search') {
buttons.push(button);
}
}
maxCount = buttons.length;
run();
}
function stop() {
// btn.innerHTML = startLabel;
btn.classList.remove('btn-danger');
btn.classList.add('btn-primary');
running = false;
stash.setProgress(0);
btnMixed.disabled = false;
btnOn.disabled = false;
btnOff.disabled = false;
}
stash.addEventListener('tagger:mutations:header', evt => {
const el = getElementByXpath("//button[text()='Scrape All']");
if (el && !document.getElementById(btnGroupId)) {
const container = el.parentElement;
container.appendChild(btnGroup);
sortElementChildren(container);
el.classList.add('ml-3');
}
});
const resultToggleConfigId = 'result-toggle-config';
stash.addEventListener('tagger:configuration', evt => {
const el = evt.detail;
if (!document.getElementById(resultToggleConfigId)) {
const configContainer = el.parentElement;
const resultToggleConfig = createElementFromHTML(`
<div id="${resultToggleConfigId}" class="col-md-6 mt-4">
<h5>Result Toggle ${startLabel} Configuration</h5>
<div class="row">
<div class="align-items-center form-group col-md-6">
<div class="form-check">
<input type="checkbox" id="result-toggle-title" class="form-check-input" data-default="true">
<label title="" for="result-toggle-title" class="form-check-label">Title</label>
</div>
</div>
<div class="align-items-center form-group col-md-6">
<div class="form-check">
<input type="checkbox" id="result-toggle-code" class="form-check-input" data-default="true">
<label title="" for="result-toggle-code" class="form-check-label">Code</label>
</div>
</div>
<div class="align-items-center form-group col-md-6">
<div class="form-check">
<input type="checkbox" id="result-toggle-date" class="form-check-input" data-default="true">
<label title="" for="result-toggle-date" class="form-check-label">Date</label>
</div>
</div>
<div class="align-items-center form-group col-md-6">
<div class="form-check">
<input type="checkbox" id="result-toggle-cover" class="form-check-input" data-default="true">
<label title="" for="result-toggle-cover" class="form-check-label">Cover</label>
</div>
</div>
<div class="align-items-center form-group col-md-6">
<div class="form-check">
<input type="checkbox" id="result-toggle-stashid" class="form-check-input" data-default="true">
<label title="" for="result-toggle-stashid" class="form-check-label">Stash ID</label>
</div>
</div>
<div class="align-items-center form-group col-md-6">
<div class="form-check">
<input type="checkbox" id="result-toggle-url" class="form-check-input" data-default="true">
<label title="" for="result-toggle-url" class="form-check-label">URL</label>
</div>
</div>
<div class="align-items-center form-group col-md-6">
<div class="form-check">
<input type="checkbox" id="result-toggle-details" class="form-check-input" data-default="true">
<label title="" for="result-toggle-details" class="form-check-label">Details</label>
</div>
</div>
<div class="align-items-center form-group col-md-6">
<div class="form-check">
<input type="checkbox" id="result-toggle-studio" class="form-check-input" data-default="true">
<label title="" for="result-toggle-studio" class="form-check-label">Studio</label>
</div>
</div>
<div class="align-items-center form-group col-md-6">
<div class="form-check">
<input type="checkbox" id="result-toggle-performers" class="form-check-input" data-default="true">
<label title="" for="result-toggle-performers" class="form-check-label">Performers</label>
</div>
</div>
</div>
</div>
`);
configContainer.appendChild(resultToggleConfig);
loadSettings();
}
});
async function loadSettings() {
for (const input of document.querySelectorAll(`#${resultToggleConfigId} input`)) {
input.checked = await sessionStorage.getItem(input.id, input.dataset.default === 'true');
input.addEventListener('change', async () => {
await sessionStorage.setItem(input.id, input.checked);
});
}
}
stash.addEventListener('tagger:mutation:add:remoteperformer', evt => toggleSearchItem(getClosestAncestor(evt.detail.node, '.search-item'), 0));
stash.addEventListener('tagger:mutation:add:remotestudio', evt => toggleSearchItem(getClosestAncestor(evt.detail.node, '.search-item'), 0));
stash.addEventListener('tagger:mutation:add:local', evt => toggleSearchItem(getClosestAncestor(evt.detail.node, '.search-item'), 0));
stash.addEventListener('tagger:mutation:add:container', evt => toggleSearchItem(getClosestAncestor(evt.detail.node, '.search-item'), 0));
stash.addEventListener('tagger:mutation:add:subcontainer', evt => toggleSearchItem(getClosestAncestor(evt.detail.node, '.search-item'), 0));
function checkSaveButtonDisplay() {
const taggerContainer = document.querySelector('.tagger-container');
const saveButton = getElementByXpath("//button[text()='Save']", taggerContainer);
btnGroup.style.display = saveButton ? 'inline-block' : 'none';
}
stash.addEventListener('tagger:mutations:searchitems', checkSaveButtonDisplay);
})();

plugins/stashBatchResultToggle/stashBatchResultToggle.yml Normal file

@ -0,0 +1,9 @@
name: Stash Batch Result Toggle
# requires: StashUserscriptLibrary
description: In Scene Tagger, adds buttons to toggle all StashDB scene match result fields. Saves clicks when you only want to save a few metadata fields: instead of turning off every field, you batch-toggle them all off, then toggle on the ones you want.
version: 1.0
ui:
requires:
- StashUserscriptLibrary
javascript:
- stashBatchResultToggle.js

plugins/comicInfoExtractor/README.md Normal file

@ -0,0 +1,12 @@
# Comic Archive Metadata Extractor
Follows the ComicRack standard for saving comic metadata in .cbz files: it reads the ComicInfo.xml file in the archive and writes the result into the Stash gallery.
Use the ImportList in config.yml to define which XML names should be mapped to which Stash fields.
Currently, Bookmark and Type page entries are also recognized and imported as chapters.
Because the hook runs on gallery updates, the current configuration will overwrite any value you set in Stash that is also present in the ComicInfo.xml. To change that, change the hook condition in the yml.
### Installation
Move the `comicInfoExtractor` directory into Stash's plugins directory, then reload plugins.
### Tasks
* Load all cbz Metadata - Fetch metadata for all galleries.
* Post update hook - Fetch metadata for that gallery.
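As an illustration, here is a minimal, made-up ComicInfo.xml that the sample ImportList shipped in config.yml (shown further down in this commit) would import; `Pages` entries with `Bookmark` or `Type` become gallery chapters:

```xml
<?xml version="1.0" encoding="utf-8"?>
<ComicInfo>
  <Title>Example Comic</Title>            <!-- mapped to title -->
  <Genre>Outdoor, Blonde</Genre>          <!-- mapped to tags (comma-separated) -->
  <Writer>Example Studio</Writer>         <!-- mapped to studio -->
  <Year>2021</Year>                       <!-- mapped to date (becomes 2021-01-01) -->
  <Summary>A short description.</Summary> <!-- mapped to details -->
  <Pages>
    <Page Image="0" Bookmark="Chapter 1" /> <!-- becomes a chapter at image index 1 -->
  </Pages>
</ComicInfo>
```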

plugins/comicInfoExtractor/comicInfoExtractor.py Normal file

@ -0,0 +1,124 @@
import json
import os
import sys
import xml.etree.ElementTree as ET
import zipfile

import yaml
import stashapi.log as log
from stashapi.stashapp import StashInterface

per_page = 100

def processGallery(g):
    # Read the ComicInfo.xml file from the archive
    if len(g["files"]) == 0:
        log.info(g["id"] + " is not an archive. No scanning for comic metadata.")
        return
    comicInfo = None
    with zipfile.ZipFile(g["files"][0]["path"], 'r') as archive:
        for archivefile in archive.namelist():
            # match the filename case-insensitively, but read the actual entry name
            if archivefile.lower() == "comicinfo.xml":
                comicInfo = ET.fromstring(archive.read(archivefile))
    if comicInfo is None:
        log.info(g["files"][0]["path"] + " does not contain a ComicInfo.xml file. No scan will be triggered.")
        return

    # Adjust names for fields that take ids
    for key in ImportList.keys():
        if ImportList[key] == "tags":
            ImportList[key] = "tag_ids"
        if ImportList[key] == "performers":
            ImportList[key] = "performer_ids"
        if ImportList[key] == "studio":
            ImportList[key] = "studio_id"

    # Get metadata from ComicInfo.xml
    galleryData = {"id": g["id"]}
    for item in ImportList.keys():
        value = comicInfo.find(item)
        if value is not None:
            galleryData[ImportList[item]] = value.text

    chapterData = []
    pageData = comicInfo.find("Pages")
    if pageData is not None:
        for page in pageData:
            if page.get("Bookmark"):
                chapterData.append({"image_index": int(page.get("Image")) + 1, "title": page.get("Bookmark")})
            if page.get("Type"):
                chapterData.append({"image_index": int(page.get("Image")) + 1, "title": page.get("Type")})

    # Adjust the retrieved data where necessary
    for data in galleryData.keys():
        if data in ["tag_ids", "performer_ids"]:
            galleryData[data] = [x.strip() for x in galleryData[data].split(",")]
        if data == "tag_ids":
            tagids = []
            for tag in galleryData[data]:
                tagids.append(stash.find_tag(tag, create=True)["id"])
            galleryData[data] = tagids
        if data == "performer_ids":
            performerids = []
            for performer in galleryData[data]:
                performerids.append(stash.find_performer(performer, create=True)["id"])
            galleryData[data] = performerids
        if data == "studio_id":
            galleryData[data] = stash.find_studio(galleryData[data], create=True)["id"]
        if data == "date":
            galleryData[data] = galleryData[data] + "-01-01"
        if data == "organized":
            galleryData[data] = galleryData[data].strip().lower() == "true"
        if data == "rating100":
            galleryData[data] = int(galleryData[data])

    # Add chapters that do not exist yet, then update the gallery metadata
    for chapter in chapterData:
        addChapter = True
        for existingChapter in g["chapters"]:
            if existingChapter["title"] == chapter["title"] and existingChapter["image_index"] == chapter["image_index"]:
                addChapter = False
        if addChapter:
            stash.create_gallery_chapter({"title": chapter["title"], "image_index": chapter["image_index"], "gallery_id": g["id"]})
    stash.update_gallery(galleryData)

def processAll():
    log.info('Getting gallery count')
    count = stash.find_galleries(f={}, filter={"per_page": 1}, get_count=True)[0]
    log.info(str(count) + ' galleries to scan.')
    # round up so the final partial page is included
    for r in range(1, -(-count // per_page) + 1):
        log.info('processing ' + str(r * per_page) + ' - ' + str(count))
        galleries = stash.find_galleries(f={}, filter={"page": r, "per_page": per_page})
        for g in galleries:
            processGallery(g)

# Start of the program
json_input = json.loads(sys.stdin.read())
FRAGMENT_SERVER = json_input["server_connection"]
stash = StashInterface(FRAGMENT_SERVER)

# Load config
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), "config.yml"), "r") as f:
    try:
        config = yaml.safe_load(f)
    except yaml.YAMLError as exc:
        log.error("Could not load config.yml: " + str(exc))
        sys.exit(1)
try:
    ImportList = config["ImportList"]
except KeyError as key:
    log.error(str(key) + " is not defined in config.yml, but is needed for this script to proceed")
    sys.exit(1)

if 'mode' in json_input['args']:
    PLUGIN_ARGS = json_input['args']["mode"]
    if 'process' in PLUGIN_ARGS:
        processAll()
elif 'hookContext' in json_input['args']:
    id = json_input['args']['hookContext']['id']
    gallery = stash.find_gallery(id)
    processGallery(gallery)

plugins/comicInfoExtractor/comicInfoExtractor.yml Normal file

@ -0,0 +1,19 @@
name: Comic Info Extractor
description: Extracts metadata from cbz archives following the ComicRack standard (ComicInfo.xml)
version: 0.1
url: https://github.com/stashapp/CommunityScripts/
exec:
- "/usr/bin/python3"
- "{pluginDir}/comicInfoExtractor.py"
interface: raw
hooks:
- name: Add Metadata to Gallery
description: Update Metadata for Gallery by evaluating the ComicInfo.xml.
triggeredBy:
- Gallery.Update.Post
- Gallery.Create.Post
tasks:
- name: Load all cbz Metadata
description: Get Metadata for all Galleries by looking for ComicInfo.xml files in the Archive.
defaultArgs:
mode: process

plugins/comicInfoExtractor/config.yml Normal file

@ -0,0 +1,12 @@
#pkgignore
# ImportList is a dictionary
# that maps an XML element from ComicInfo.xml to the corresponding value in Stash (using the GraphQL naming).
# Fields that refer to other types of media are resolved by name and created if necessary (tags, studio, performers).
# Fields that can contain multiple values (tags, performers) are expected as a comma-separated string, like
# <Genre>Outdoor, Blonde</Genre>
ImportList:
Genre: tags
Title: title
Writer: studio
Year: date
Summary: details

plugins/comicInfoExtractor/requirements.txt Normal file

@ -0,0 +1,2 @@
stashapp-tools
pyyaml

plugins/dupeMarker/README.md Normal file

@ -0,0 +1,8 @@
Marks duplicate markers with a tag: `[Marker: Duplicate]`
Tasks -> Search for duplicate markers
It will add the tag to any markers that have an **exact** match for title, time **and** primary tag.
It only tags existing markers; it is up to the user to go to the tag and navigate to the scenes, where the duplicates will be highlighted with the tag.
(It's technically a Dupe Marker Marker.)

plugins/dupeMarker/dupeMarker.py Normal file

@ -0,0 +1,69 @@
import json
import sys

import stashapi.log as log
from stashapi.stashapp import StashInterface

FRAGMENT = json.loads(sys.stdin.read())
MODE = FRAGMENT['args']['mode']
stash = StashInterface(FRAGMENT["server_connection"])

dupe_marker_tag = stash.find_tag('[Marker: Duplicate]', create=True).get("id")

def findScenesWithMarkers():
    totalDupes = 0
    scenes = stash.find_scenes(f={"has_markers": "true"}, fragment="id")
    for scene in scenes:
        totalDupes += checkScene(scene)
    log.info("Found %d duplicate markers across %d scenes" % (totalDupes, len(scenes)))

def addMarkerTag(marker):
    query = """
    mutation SceneMarkerUpdate($input:SceneMarkerUpdateInput!) {
        sceneMarkerUpdate(input: $input) {
            id
        }
    }
    """
    oldTags = [tag["id"] for tag in marker["tags"]]
    if dupe_marker_tag in oldTags:
        return
    oldTags.append(dupe_marker_tag)
    newMarker = {
        "id": marker["id"],
        "tag_ids": oldTags
    }
    stash._callGraphQL(query, {"input": newMarker})
    #stash.update_scene_marker(newMarker, "id")

def checkScene(scene):
    seen = set()
    dupes = []
    markers = stash.find_scene_markers(scene['id'])
    # find duplicate pairs: exact match on title, time and primary tag
    for marker in markers:
        sortidx = ";".join([
            str(marker["title"]),
            str(marker["seconds"]),
            str(marker["primary_tag"]["id"])
        ])
        if sortidx not in seen:
            seen.add(sortidx)
        else:
            dupes.append(marker)
    # add tag
    if dupes:
        log.debug("Found %d duplicate markers in scene %s" % (len(dupes), scene['id']))
    for dupe in dupes:
        addMarkerTag(dupe)
    return len(dupes)

def main():
    if MODE == "search":
        findScenesWithMarkers()
    log.exit("Plugin exited normally.")

if __name__ == '__main__':
    main()

plugins/dupeMarker/dupeMarker.yml Normal file

@ -0,0 +1,13 @@
name: Dupe Marker Detector
description: Finds and marks duplicate markers
version: 0.1
url: https://github.com/stashapp/CommunityScripts/
exec:
- python
- "{pluginDir}/dupeMarker.py"
interface: raw
tasks:
- name: 'Search'
description: Search for duplicate markers
defaultArgs:
mode: search

plugins/dupeMarker/requirements.txt Normal file

@ -0,0 +1 @@
stashapp-tools

plugins/filenameParser/filenameParser.js

@ -162,6 +162,7 @@ function cleanFilename(name) {
var blockList = [
'mp4',
'mov',
'zip',
'xxx',
'4k',
'4096x2160',

plugins/phashDuplicateTagger/README.md

@ -1,27 +1,49 @@
# PHASH Duplicate Tagger

## Requirements
* python >= 3.10.X
* `pip install -r requirements.txt`

## Title Syntax
This plugin changes the titles of scenes that are matched as duplicates to the following format:
`[PDT: 0.0GB|<group_id><keep_flag>] <Scene Title>`
* group_id: usually the scene ID of the scene that was selected to Keep
* keep_flag: K=Keep, R=Remove, U=Unknown

## Tags
Various tags may be created by this plugin:
* Keep - Applied to scenes that are determined to be the "best"
* Remove - Applied to scenes that are determined to be the "worst"
* Unknown - Applied to scenes where a best scene could not be determined
* Ignore - Applied to scenes by the user to ignore known duplicates
* Reason - Applied to Remove scenes; their category matches the determining factor for why a scene was chosen for removal

## Tasks
### Tag Dupes (EXACT/HIGH/MEDIUM)
These tasks search Stash for scenes with similar PHASHs; the closeness (distance) of the hashes to each other depends on which option you select.
* EXACT - Matches have a distance of 0 and should be exact matches
* HIGH - Matches have a distance of 3 and are very similar to each other
* MEDIUM - Matches have a distance of 6 and resemble each other
### Delete Managed Tags
Removes any generated tags within Stash created by the plugin, excluding the `Ignore` tag, which you may want to retain.
### Scene Cleanup
Reverts the changes made to scene titles and tags back to before they were tagged.
### Generate Scene PHASHs
Starts a generate task within Stash to generate PHASHs.

PS. This plugin started as a hack-and-slash job on scripts from Belley and WithoutPants, and was later rewritten by stg-annon (it now requires his stashapp-tools module) - thanks, all!

## Custom Compare Functions
You can create custom compare functions inside config.py; all the current compare functions are provided there as reference. A custom function must return two values when a better file is determined: the better object and a message string. Optionally, you can set `remove_reason` on the worse file and it will be tagged with that reason. Custom functions must start with "compare_", otherwise they will not be detected, and make sure to add your function name to the PRIORITY list. A minimal sketch is shown below.
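A minimal sketch, assuming a hypothetical `compare_duration` function that prefers the longer file (`duration` is one of the attributes phashDuplicateTagger.py sets on each scene):

```py
# Hypothetical custom compare function for config.py: prefer the longer file.
def compare_duration(self, other):
    # treat differences under one second as a tie, so the next
    # function in the PRIORITY list decides instead
    if abs(self.duration - other.duration) < 1:
        return
    if self.duration > other.duration:
        better, worse = self, other
    else:
        worse, better = self, other
    worse.remove_reason = "duration"
    return better, f"Longer Duration {better.duration}s > {worse.duration}s"
```

To activate it, add `'duration'` (without the `compare_` prefix) to the PRIORITY list in config.py.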

plugins/phashDuplicateTagger/config.py Normal file

@ -0,0 +1,110 @@
import stashapi.log as log
from stashapi.tools import human_bytes, human_bits

PRIORITY = ['bitrate_per_pixel', 'resolution', 'bitrate', 'encoding', 'size', 'age']
CODEC_PRIORITY = {'AV1': 0, 'H265': 1, 'HEVC': 1, 'H264': 2, 'MPEG4': 3, 'MPEG1VIDEO': 3, 'WMV3': 4, 'WMV2': 5, 'VC1': 6, 'SVQ3': 7}

KEEP_TAG_NAME = "[PDT: Keep]"
REMOVE_TAG_NAME = "[PDT: Remove]"
UNKNOWN_TAG_NAME = "[PDT: Unknown]"
IGNORE_TAG_NAME = "[PDT: Ignore]"

def compare_bitrate_per_pixel(self, other):
    try:
        self_bpp = self.bitrate / (self.width * self.height * self.frame_rate)
    except ZeroDivisionError:
        log.warning(f'scene {self.id} has 0 in file value ({self.width}x{self.height} {self.frame_rate}fps)')
        return
    try:
        other_bpp = other.bitrate / (other.width * other.height * other.frame_rate)
    except ZeroDivisionError:
        log.warning(f'scene {other.id} has 0 in file value ({other.width}x{other.height} {other.frame_rate}fps)')
        return
    bpp_diff = abs(self_bpp - other_bpp)
    if bpp_diff <= 0.01:
        return
    if self_bpp > other_bpp:
        better_bpp, worse_bpp = self_bpp, other_bpp
        better, worse = self, other
    else:
        worse_bpp, better_bpp = self_bpp, other_bpp
        worse, better = self, other
    worse.remove_reason = "bitrate_per_pxl"
    message = f'bitrate/pxl {better_bpp:.3f}bpp > {worse_bpp:.3f}bpp Δ:{bpp_diff:.3f}'
    return better, message

def compare_frame_rate(self, other):
    if not self.frame_rate:
        log.warning(f'scene {self.id} has no value for frame_rate')
    if not other.frame_rate:
        log.warning(f'scene {other.id} has no value for frame_rate')
    if abs(self.frame_rate - other.frame_rate) < 5:
        return
    if self.frame_rate > other.frame_rate:
        better, worse = self, other
    else:
        worse, better = self, other
    worse.remove_reason = "frame_rate"
    return better, f'Better FPS {better.frame_rate} vs {worse.frame_rate}'

def compare_resolution(self, other):
    if self.height == other.height:
        return
    if self.height > other.height:
        better, worse = self, other
    else:
        worse, better = self, other
    worse.remove_reason = "resolution"
    return better, f"Better Resolution {better.id}:{better.height}p > {worse.id}:{worse.height}p"

def compare_bitrate(self, other):
    if self.bitrate == other.bitrate:
        return
    if self.bitrate > other.bitrate:
        better, worse = self, other
    else:
        worse, better = self, other
    worse.remove_reason = "bitrate"
    return better, f"Better Bitrate {human_bits(better.bitrate)}ps > {human_bits(worse.bitrate)}ps Δ:({human_bits(better.bitrate - worse.bitrate)}ps)"

def compare_size(self, other):
    if abs(self.size - other.size) <= 100000:  # diff is <= 0.1 MB
        return
    if self.size > other.size:
        better, worse = self, other
    else:
        worse, better = self, other
    worse.remove_reason = "file_size"
    return better, f"Better Size {human_bytes(better.size)} > {human_bytes(worse.size)} Δ:({human_bytes(better.size - worse.size)})"

def compare_age(self, other):
    if not (self.mod_time and other.mod_time):
        return
    if self.mod_time == other.mod_time:
        return
    if self.mod_time < other.mod_time:
        better, worse = self, other
    else:
        worse, better = self, other
    worse.remove_reason = "age"
    return better, f"Choose Oldest: Δ:{worse.mod_time - better.mod_time} | {better.id} older than {worse.id}"

def compare_encoding(self, other):
    if self.codec_priority == other.codec_priority:
        return
    if not (isinstance(self.codec_priority, int) and isinstance(other.codec_priority, int)):
        return
    if self.codec_priority < other.codec_priority:
        better, worse = self, other
    else:
        worse, better = self, other
    worse.remove_reason = "video_codec"
    return better, f"Prefer Codec {better.codec}({better.id}) over {worse.codec}({worse.id})"

plugins/phashDuplicateTagger/phashDuplicateTagger.py

@ -1,63 +1,59 @@
import json
import sys
import re
import re, sys, json
import datetime as dt
from inspect import getmembers, isfunction
try:
import stashapi.log as log
from stashapi.tools import human_bytes
from stashapi.types import PhashDistance
from stashapi.tools import human_bytes, human_bits
from stashapi.stash_types import PhashDistance
from stashapi.stashapp import StashInterface
except ModuleNotFoundError:
print("You need to install the stashapi module. (pip install stashapp-tools)",
file=sys.stderr)
PRIORITY = ['resolution', 'bitrate', 'size', 'age'] # 'encoding'
CODEC_PRIORITY = ['H265','HEVC','H264','MPEG4']
import config
FRAGMENT = json.loads(sys.stdin.read())
MODE = FRAGMENT['args']['mode']
stash = StashInterface(FRAGMENT["server_connection"])
SLIM_SCENE_FRAGMENT = """
id
title
id
title
date
tags { id }
files {
size
path
file_mod_time
tags { id }
file {
size
height
bitrate
video_codec
}
width
height
bit_rate
mod_time
duration
frame_rate
video_codec
}
"""
def main():
if MODE == "create":
stash.find_tag('[Dupe: Keep]', create=True)
stash.find_tag('[Dupe: Remove]', create=True)
stash.find_tag('[Dupe: Ignore]', create=True)
if MODE == "remove":
tag_id = stash.find_tag('[Dupe: Keep]').get("id")
stash.destroy_tag(tag_id)
tag_id = stash.find_tag('[Dupe: Remove]').get("id")
stash.destroy_tag(tag_id)
clean_scenes()
for tag in get_managed_tags():
stash.destroy_tag(tag["id"])
if MODE == "tagexact":
duplicate_list = stash.find_duplicate_scenes(PhashDistance.EXACT, fragment=SLIM_SCENE_FRAGMENT)
process_duplicates(duplicate_list)
if MODE == "taghigh":
duplicate_list = stash.find_duplicate_scenes(PhashDistance.HIGH, fragment=SLIM_SCENE_FRAGMENT)
process_duplicates(duplicate_list)
if MODE == "tagmid":
duplicate_list = stash.find_duplicate_scenes(PhashDistance.MEDIUM, fragment=SLIM_SCENE_FRAGMENT)
process_duplicates(duplicate_list)
if MODE == "tag_exact":
process_duplicates(PhashDistance.EXACT)
if MODE == "tag_high":
process_duplicates(PhashDistance.HIGH)
if MODE == "tag_medium":
process_duplicates(PhashDistance.MEDIUM)
if MODE == "clean_scenes":
clean_scenes()
if MODE == "generate_phash":
generate_phash()
if MODE == "cleantitle":
clean_titles()
log.exit("Plugin exited normally.")
@ -66,23 +62,39 @@ def parse_timestamp(ts, format="%Y-%m-%dT%H:%M:%S%z"):
ts = re.sub(r'\.\d+', "", ts) #remove fractional seconds
return dt.datetime.strptime(ts, format)
class StashScene:
def __init__(self, scene=None) -> None:
file = scene["files"][0]
self.id = int(scene['id'])
self.mod_time = parse_timestamp(scene['file_mod_time'])
self.height = scene['file']['height']
self.size = int(scene['file']['size'])
self.bitrate = int(scene['file']['bitrate'])
self.mod_time = parse_timestamp(file['mod_time'])
if scene.get("date"):
self.date = parse_timestamp(scene['date'], format="%Y-%m-%d")
else:
self.date = None
self.path = scene.get("path")
self.width = file['width']
self.height = file['height']
# File size in # of BYTES
self.size = int(file['size'])
self.frame_rate = int(file['frame_rate'])
self.bitrate = int(file['bit_rate'])
self.duration = float(file['duration'])
# replace any existing tagged title
self.title = re.sub(r'^\[Dupe: \d+[KR]\]\s+', '', scene['title'])
self.path = scene['path']
self.path = file['path']
self.tag_ids = [t["id"]for t in scene["tags"]]
self.codec = scene['file']['video_codec'].upper()
if self.codec in CODEC_PRIORITY:
self.codec = CODEC_PRIORITY.index(self.codec)
self.remove_reason = None
self.codec = file['video_codec'].upper()
if self.codec in config.CODEC_PRIORITY:
self.codec_priority = config.CODEC_PRIORITY[self.codec]
else:
log.warning(f"could not find codec {self.codec}")
self.codec_priority = None
log.warning(f"could not find codec {self.codec} used in SceneID:{self.id}")
def __repr__(self) -> str:
return f'<StashScene ({self.id})>'
@ -94,176 +106,165 @@ class StashScene:
if not (isinstance(other, StashScene)):
raise Exception(f"can only compare to <StashScene> not <{type(other)}>")
# Check if same scene
if self.id == other.id:
return None, "Matching IDs {self.id}=={other.id}"
return None, f"Matching IDs {self.id}=={other.id}"
def compare_not_found():
def compare_not_found(*args, **kwargs):
raise Exception("comparison not found")
for type in PRIORITY:
for type in config.PRIORITY:
try:
compare_function = getattr(self, f'compare_{type}', compare_not_found)
best, msg = compare_function(other)
if best:
result = compare_function(other)
if result and len(result) == 2:
best, msg = result
return best, msg
except Exception as e:
log.error(f"Issue Comparing <{type}> {e}")
log.error(f"Issue Comparing {self.id} {other.id} using <{type}> {e}")
return None, f"{self.id} worse than {other.id}"
def compare_resolution(self, other):
# Checking Resolution
if self.height != other.height:
if self.height > other.height:
return self, f"Better Resolution {self.height} > {other.height} | {self.id}>{other.id}"
else:
return other, f"Better Resolution {other.height} > {self.height} | {other.id}>{self.id}"
return None, None
def compare_bitrate(self, other):
# Checking Bitrate
if self.bitrate != other.bitrate:
if self.bitrate > other.bitrate:
return self, f"Better Bitrate {human_bytes(self.bitrate)} > {human_bytes(other.bitrate)} Δ:({human_bytes(self.bitrate-other.bitrate)}) | {self.id}>{other.id}"
else:
return other, f"Better Bitrate {human_bytes(other.bitrate)} > {human_bytes(self.bitrate)} Δ:({human_bytes(other.bitrate-self.bitrate)}) | {other.id}>{self.id}"
return None, None
def compare_size(self, other):
# Checking Size
if self.size != other.size:
if self.size > other.size:
return self, f"Better Size {human_bytes(self.size)} > {human_bytes(other.size)} Δ:({human_bytes(self.size-other.size)}) | {self.id} > {other.id}"
else:
return other, f"Better Size {human_bytes(other.size)} > {human_bytes(self.size)} Δ:({human_bytes(other.size-self.size)}) | {other.id} > {self.id}"
return None, None
def compare_age(self, other):
# Checking Age
if self.mod_time != other.mod_time:
if self.mod_time < other.mod_time:
return self, f"Choose Oldest: Δ:{other.mod_time-self.mod_time} | {self.id} older than {other.id}"
else:
return other, f"Choose Oldest: Δ:{self.mod_time-other.mod_time} | {other.id} older than {self.id}"
return None, None
def compare_encoding(self, other):
# could not find one of the codecs in priority list
if not isinstance(self.codec, int) or not isinstance(other.codec, int):
return None, None
if self.codec != other.codec:
if self.codec < other.codec:
return self, f"Preferred Codec {CODEC_PRIORITY[self.codec]} over {CODEC_PRIORITY[other.codec]} | {self.id} better than {other.id}"
else:
return other, f"Preferred Codec {CODEC_PRIORITY[other.codec]} over {CODEC_PRIORITY[self.codec]} | {other.id} better than {self.id}"
return None, None
def process_duplicates(distance:PhashDistance=PhashDistance.EXACT):
clean_scenes() # clean old results
ignore_tag_id = stash.find_tag(config.IGNORE_TAG_NAME, create=True).get("id")
duplicate_list = stash.find_duplicate_scenes(distance, fragment=SLIM_SCENE_FRAGMENT)
def process_duplicates(duplicate_list):
ignore_tag_id = stash.find_tag('[Dupe: Ignore]', create=True).get("id")
total = len(duplicate_list)
log.info(f"There is {total} sets of duplicates found.")
log.info(f"Found {total} sets of duplicates.")
for i, group in enumerate(duplicate_list):
log.progress(i/total)
group = [StashScene(s) for s in group]
filtered_group = []
for scene in group:
tag_ids = [ t['id'] for t in scene['tags'] ]
if ignore_tag_id in tag_ids:
log.debug(f"Ignore {scene['id']} {scene['title']}")
if ignore_tag_id in scene.tag_ids:
log.debug(f"Ignore {scene.id} {scene.title}")
else:
filtered_group.append(scene)
if len(filtered_group) > 1:
tag_files(filtered_group)
log.progress(i/total)
def tag_files(group):
tag_keep = stash.find_tag('[Dupe: Keep]', create=True).get("id")
tag_remove = stash.find_tag('[Dupe: Remove]', create=True).get("id")
group = [StashScene(s) for s in group]
keep_reasons = []
keep_scene = group[0]
keep_scene = None
total_size = group[0].size
for scene in group[1:]:
better, msg = scene.compare(keep_scene)
total_size += scene.size
better, msg = scene.compare(group[0])
if better:
keep_scene = better
keep_reasons.append(msg)
keep_reasons.append(msg)
total_size = human_bytes(total_size, round=2, prefix='G')
keep_scene.reasons = keep_reasons
if not keep_scene:
log.info(f"could not determine better scene from {group}")
if config.UNKNOWN_TAG_NAME:
group_id = group[0].id
for scene in group:
tag_ids = [stash.find_tag(config.UNKNOWN_TAG_NAME, create=True).get("id")]
stash.update_scenes({
'ids': [scene.id],
'title': f'[PDT: {total_size}|{group_id}U] {scene.title}',
'tag_ids': {
'mode': 'ADD',
'ids': tag_ids
}
})
return
log.info(f"{keep_scene.id} best of:{[s.id for s in group]} {keep_scene.reasons}")
log.info(f"{keep_scene.id} best of:{[s.id for s in group]} {keep_reasons}")
for scene in group:
if scene.id == keep_scene.id:
# log.debug(f"Tag for Keeping: {scene.id} {scene.path}")
tag_ids = [stash.find_tag(config.KEEP_TAG_NAME, create=True).get("id")]
stash.update_scenes({
'ids': [scene.id],
'title': f'[Dupe: {keep_scene.id}K] {scene.title}',
'title': f'[PDT: {total_size}|{keep_scene.id}K] {scene.title}',
'tag_ids': {
'mode': 'ADD',
'ids': [tag_keep]
}
'ids': tag_ids
}
})
else:
# log.debug(f"Tag for Removal: {scene.id} {scene.path}")
tag_ids = []
tag_ids.append(stash.find_tag(config.REMOVE_TAG_NAME, create=True).get("id"))
if scene.remove_reason:
tag_ids.append(stash.find_tag(f'[Reason: {scene.remove_reason}]', create=True).get('id'))
stash.update_scenes({
'ids': [scene.id],
'title': f'[Dupe: {keep_scene.id}R] {scene.title}',
'title': f'[PDT: {total_size}|{keep_scene.id}R] {scene.title}',
'tag_ids': {
'mode': 'ADD',
'ids': [tag_remove]
}
'ids': tag_ids
}
})
def clean_titles():
scenes = stash.find_scenes(f={
def clean_scenes():
scene_count, scenes = stash.find_scenes(f={
"title": {
"modifier": "MATCHES_REGEX",
"value": "^\\[Dupe: (\\d+)([KR])\\]"
"value": "^\\[PDT: .+?\\]"
}
},fragment="id title")
},fragment="id title", get_count=True)
log.info(f"Cleaning Titles/Tags of {len(scenes)} Scenes ")
log.info(f"Cleaning Titles/Tags of {scene_count} Scenes ")
for scene in scenes:
title = re.sub(r'\[Dupe: \d+[KR]\]\s+', '', scene['title'])
log.info(f"Removing Dupe Title String from: [{scene['id']}] {scene['title']}")
# Clean scene Title
for i, scene in enumerate(scenes):
title = re.sub(r'\[PDT: .+?\]\s+', '', scene['title'])
stash.update_scenes({
'ids': [scene['id']],
'title': title
})
log.progress(i/scene_count)
tag_keep = stash.find_tag('[Dupe: Keep]')
if tag_keep:
tag_keep = tag_keep['id']
scenes = stash.find_scenes(f={
"tags":{
"value": [tag_keep],
"modifier": "INCLUDES",
"depth": 0
}
},fragment="id title")
# Remove Tags
for tag in get_managed_tags():
scene_count, scenes = stash.find_scenes(f={
"tags":{"value": [tag['id']],"modifier": "INCLUDES","depth": 0}
}, fragment="id", get_count=True)
if not scene_count > 0:
continue
log.info(f'removing tag {tag["name"]} from {scene_count} scenes')
stash.update_scenes({
'ids': [s['id'] for s in scenes],
'ids': [s["id"] for s in scenes],
'tag_ids': {
'mode': 'REMOVE',
'ids': [tag_keep]
'ids': [tag['id']]
}
})
tag_remove = stash.find_tag('[Dupe: Remove]')
if tag_remove:
tag_remove = tag_remove['id']
scenes = stash.find_scenes(f={
"tags":{
"value": [tag_remove],
"modifier": "INCLUDES",
"depth": 0
}
},fragment="id title")
stash.update_scenes({
'ids': [s['id'] for s in scenes],
'tag_ids': {
'mode': 'REMOVE',
'ids': [tag_remove]
}
})
def get_managed_tags(fragment="id name"):
tags = stash.find_tags(f={
"name": {
"value": "^\\[Reason",
"modifier": "MATCHES_REGEX"
}}, fragment=fragment)
tag_name_list = [
config.REMOVE_TAG_NAME,
config.KEEP_TAG_NAME,
config.UNKNOWN_TAG_NAME,
# config.IGNORE_TAG_NAME,
]
for tag_name in tag_name_list:
if tag := stash.find_tag(tag_name):
tags.append(tag)
return tags
def generate_phash():
query = """mutation MetadataGenerate($input: GenerateMetadataInput!) {
metadataGenerate(input: $input)
}"""
variables = {"input": {"phashes": True}}
stash._callGraphQL(query, variables)
if __name__ == '__main__':
main()
for name, func in getmembers(config, isfunction):
if re.match(r'^compare_', name):
setattr(StashScene, name, func)
main()

plugins/phashDuplicateTagger/phashDuplicateTagger.yml

@ -1,53 +1,33 @@
name: "PHash Duplicate Tagger"
description: Will tag scenes based on duplicate PHashes for easier/safer removal.
version: 0.1.3
url: https://github.com/stashapp/CommunityScripts/tree/main/plugins/phashDuplicateTagger
exec:
  - python
  - "{pluginDir}/phashDuplicateTagger.py"
interface: raw
tasks:
  - name: 'Tag Dupes (EXACT)'
    description: 'Assign duplicate tags to Exact Match (Dist 0) scenes'
    defaultArgs:
      mode: tag_exact
  - name: 'Tag Dupes (HIGH)'
    description: 'Assign duplicate tags to High Match (Dist 3) scenes'
    defaultArgs:
      mode: tag_high
  - name: 'Tag Dupes (MEDIUM)'
    description: 'Assign duplicate tags to Medium Match (Dist 6) scenes (BE CAREFUL WITH THIS LEVEL)'
    defaultArgs:
      mode: tag_medium
  - name: 'Delete Managed Tags'
    description: 'Deletes tags managed by this plugin from stash'
    defaultArgs:
      mode: remove
  - name: 'Scene Cleanup'
    description: 'Removes titles from scenes and any generated tags excluding [Dupe: Ignore]'
    defaultArgs:
      mode: clean_scenes
  - name: 'Generate Scene PHASHs'
    description: 'Generate PHASHs for all scenes where they are missing'
    defaultArgs:
      mode: generate_phash

plugins/phashDuplicateTagger/requirements.txt

@ -1 +1 @@
stashapp-tools>=0.2.0
stashapp-tools>=0.2.33

scripts/Sqlite_Renamer/README.md

@ -1,74 +0,0 @@
# SQLITE Renamer for Stash (Task)
Using metadata from your stash to rename your file.
## Requirement
- Stash
- Python 3+ (Tested on Python v3.9.1 64bit, Win10)
- Request Module (https://pypi.org/project/requests/)
- Windows 10? (No idea if this works on all OSes)
## Installation
- Download the whole folder 'renamer' (config.py, log.py, renamerTask.py/.yml)
- Place it in your **plugins** folder (where the `config.yml` is)
- Reload plugins (Settings > Plugins)
- renamerTask should appear.
### :exclamation: Make sure to configure the plugin by editing `config.py` before running it :exclamation:
## Usage
- You have tasks (Settings > Task):
- **Dry-Run 🔍**: Don't edit any file, just show the changes in the log. It will create `renamer_scan.txt`, which contains every planned edit (format sketched below).
- **[DRYRUN] Check 10 scenes**: Check 10 scenes (by newest updated).
- **[DRYRUN] Check all scenes**: Check all scenes.
- **Process :pencil2:**: Edit your files, **don't touch Stash while doing this task**.
- **Process scanned scene from Dry-Run task**: Read `renamer_scan.txt` instead of checking all scenes.
- **Process 10 scenes**: Check 10 scenes (by newest updated).
- **Process all scenes**: Check all scenes.
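For reference, each line of `renamer_scan.txt` is written as `scene_id|old_filename|new_filename` (see the dry-run branch in `renamerTask.py` further down), and the Process task only needs the first field. A minimal sketch of reading it back:

```python
# Minimal sketch: read the scene ids recorded by a Dry-Run pass.
# Format per line: scene_id|old_filename|new_filename
def read_dryrun_ids(path="renamer_scan.txt"):
    with open(path, encoding="utf-8") as f:
        return [line.split("|")[0] for line in f if line.strip()]
```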
## Configuration
- Read/Edit `config.py`
- I recommend setting the **log_file** as it can be useful for reverting changes.
- If you have the **renamerOnUpdate** plugin, you can copy the `config.py` from it.
### Example
> Note: The priority is Tag > Studio > Default
The config will be:
```py
# Change filename for scenes from 'Vixen' or 'Slayed' studio.
studio_templates = {
"Slayed": "$date $performer - $title [$studio]",
"Vixen": "$performer - $title [$studio]"
}
# Change filename if the tag 'rename_tag' is present.
tag_templates = {
"rename_tag": "$year $title - $studio $resolution $video_codec",
}
# Change filename no matter what
use_default_template = True
default_template = "$date $title"
# Use space as a performer separator
performer_splitchar = " "
# If the scene has more than 3 performers, the $performer field will be ignored.
performer_limit = 3
```
The scene was just scanned, everything is default (Title = Filename).
Current filename: `Slayed.21.09.02.Ariana.Marie.Emily.Willis.And.Eliza.Ibarra.XXX.1080p.mp4`
|Stash Field | Value | Filename | Trigger template |
|--|:---:|--|--|
| - | *Default* |`Slayed.21.09.02.Ariana.Marie.Emily.Willis.And.Eliza.Ibarra.XXX.1080p.mp4` | default_template
| ~Title| **Driver**| `Driver.mp4` | default_template
| +Date| **2021-09-02**| `2021-09-02 Driver.mp4` | default_template
| +Performer | **Ariana Marie<br>Emily Willis<br>Eliza Ibarra**| `2021-09-02 Driver.mp4` | default_template
| +Studio | **Vixen**| `Ariana Marie Emily Willis Eliza Ibarra - Driver [Vixen].mp4` | studio_templates [Vixen]
| ~Studio | **Slayed**| `2021-09-02 Ariana Marie Emily Willis Eliza Ibarra - Driver [Slayed].mp4` | studio_templates [Slayed]
| +Performer | **Elsa Jean**| `2021-09-02 Driver [Slayed].mp4` | studio_templates [Slayed]<br>**Reached performer_limit**.
| +Tag | **rename_tag**| `2021 Driver - Slayed HD h264.mp4` | tag_templates [rename_tag]

View File

@ -1,88 +0,0 @@
###################################################################
#
# -----------------------------------------------------------------
# Available: $date $year $performer $title $height $resolution $studio $parent_studio $studio_family $video_codec $audio_codec
# -note:
# $studio_family: If parent studio exist use it, else use the studio name.
# $performer: If there are more than a set number of performers, this field is ignored. Set the limit in the Settings section below (default: 3)
# $resolution: SD/HD/UHD/VERTICAL (for phone) | $height: 720p 1080p 4k 8k
# -----------------------------------------------------------------
# e.g.:
# $title == Her Fantasy Ball
# $date $title == 2016-12-29 Her Fantasy Ball
# $year $title $height == 2016 Her Fantasy Ball 1080p
# $date $performer - $title [$studio] == 2016-12-29 Eva Lovia - Her Fantasy Ball [Sneaky Sex]
# $parent_studio $date $performer - $title == Reality Kings 2016-12-29 Eva Lovia - Her Fantasy Ball
#
####################################################################
# TEMPLATE #
# Priority : Tags > Studios > Default
# templates to use for given tags
# add or remove as needed
tag_templates = {
"!1. Western": "$date $performer - $title [$studio]",
"!1. JAV": "$title",
"!1. Anime": "$title $date [$studio]"
}
# adjust the below if you want to use studio names instead of tags for the renaming templates
studio_templates = {
}
# change to True to use the default template if no specific tag/studio is found
use_default_template = False
# default template, adjust as needed
default_template = "$date $title"
######################################
# Logging #
# File to save what is renamed, can be useful if you need to revert changes.
# Will look like: IDSCENE|OLD_PATH|NEW_PATH
# Leave blank ("") or use None if you don't want a log file, or set a valid path like: C:\Users\USERNAME\.stash\plugins\Hooks\rename_log.txt
log_file = r""
######################################
# Settings #
# Character to use as a performer separator.
performer_splitchar = " "
# Maximum number of performer names in the filename. If there are more than that in a scene the filename will not include any performer names!
performer_limit = 3
# ignore male performers.
performer_ignore_male = False
# If $performer is before $title, prevent having duplicate text.
# e.g.:
# Template used: $year $performer - $title
# 2016 Dani Daniels - Dani Daniels in ***.mp4 --> 2016 Dani Daniels in ***.mp4
prevent_title_performer = False
# Only rename 'Organized' scenes.
only_organized = False
# Fields to remove if the path is too long. The first in the list is removed, then the second, and so on while the length is still too long.
order_field = ["$video_codec", "$audio_codec", "$resolution", "$height", "$studio_family", "$studio", "$parent_studio","$performer"]
# Alternate way to show diff. Not useful at all.
alt_diff_display = False
######################################
# Module Related #
# ! OPTIONAL module settings. Not needed for basic operation !
# = psutil module (https://pypi.org/project/psutil/) =
# Gets a list of all processes instead of stopping after the first one. Enabling it slows down the plugin
process_getall = False
# If the file is used by a process, the plugin will kill it. IT CAN MAKE STASH CRASH TOO.
process_kill_attach = False
# =========================
# = Unidecode module (https://pypi.org/project/Unidecode/) =
# Check site mentioned for more details.
# TL;DR: Prevent having non common characters by replacing them.
# Warning: If you have non-latin characters (Cyrillic, Kanji, Arabic, ...), the result will be extremely different.
use_ascii = False
# =========================

View File

@ -1,52 +0,0 @@
import sys
# Log messages sent from a plugin instance are transmitted via stderr and are
# encoded with a prefix consisting of special character SOH, then the log
# level (one of t, d, i, w, e, or p - corresponding to trace, debug, info,
# warning, error and progress levels respectively), then special character
# STX.
#
# The LogTrace, LogDebug, LogInfo, LogWarning, and LogError methods, and their equivalent
# formatted methods are intended for use by plugin instances to transmit log
# messages. The LogProgress method is also intended for sending progress data.
#
def __prefix(level_char):
start_level_char = b'\x01'
end_level_char = b'\x02'
ret = start_level_char + level_char + end_level_char
return ret.decode()
def __log(level_char, s):
if level_char == "":
return
print(__prefix(level_char) + s + "\n", file=sys.stderr, flush=True)
def LogTrace(s):
__log(b't', s)
def LogDebug(s):
__log(b'd', s)
def LogInfo(s):
__log(b'i', s)
def LogWarning(s):
__log(b'w', s)
def LogError(s):
__log(b'e', s)
def LogProgress(p):
progress = min(max(0, p), 1)
__log(b'p', str(progress))

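To illustrate the framing described in the header comment above: `LogInfo("hello")` writes `\x01i\x02hello` (plus trailing newlines) to stderr, which Stash decodes back into an info-level log line. A minimal sketch of the receiving side, under that assumption:

```python
# Minimal sketch of decoding one plugin log line, assuming the
# SOH + level + STX framing described above (levels: t d i w e p).
import re

LINE_RE = re.compile(r"\x01([tdiwep])\x02(.*)")

def decode(line: str):
    m = LINE_RE.match(line)
    if not m:
        return None, line  # not a framed log line
    level, message = m.groups()
    names = {"t": "trace", "d": "debug", "i": "info",
             "w": "warning", "e": "error", "p": "progress"}
    return names[level], message

# decode("\x01i\x02hello") -> ("info", "hello")
```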
View File

@ -1,647 +0,0 @@
import difflib
import json
import os
import re
import sqlite3
import subprocess
import sys
import time
import requests
try:
import psutil # pip install psutil
MODULE_PSUTIL = True
except:
MODULE_PSUTIL = False
try:
import unidecode # pip install Unidecode
MODULE_UNIDECODE = True
except:
MODULE_UNIDECODE = False
import config
import log
FRAGMENT = json.loads(sys.stdin.read())
FRAGMENT_SERVER = FRAGMENT["server_connection"]
PLUGIN_DIR = FRAGMENT_SERVER["PluginDir"]
PLUGIN_ARGS = FRAGMENT['args'].get("mode")
log.LogDebug("--Starting Plugin 'Renammer'--")
#log.LogDebug("{}".format(FRAGMENT))
def callGraphQL(query, variables=None, raise_exception=True):
# Session cookie for authentication
graphql_port = FRAGMENT_SERVER['Port']
graphql_scheme = FRAGMENT_SERVER['Scheme']
graphql_cookies = {
'session': FRAGMENT_SERVER.get('SessionCookie').get('Value')
}
graphql_headers = {
"Accept-Encoding": "gzip, deflate, br",
"Content-Type": "application/json",
"Accept": "application/json",
"Connection": "keep-alive",
"DNT": "1"
}
graphql_domain = 'localhost'
# Stash GraphQL endpoint
graphql_url = graphql_scheme + "://" + graphql_domain + ":" + str(graphql_port) + "/graphql"
json = {'query': query}
if variables is not None:
json['variables'] = variables
try:
response = requests.post(graphql_url, json=json,headers=graphql_headers, cookies=graphql_cookies, timeout=20)
except Exception as e:
exit_plugin(err="[FATAL] Exception with GraphQL request. {}".format(e))
if response.status_code == 200:
result = response.json()
if result.get("error"):
for error in result["error"]["errors"]:
if raise_exception:
raise Exception("GraphQL error: {}".format(error))
else:
log.LogError("GraphQL error: {}".format(error))
return None
if result.get("data"):
return result.get("data")
elif response.status_code == 401:
exit_plugin(err="HTTP Error 401, Unauthorised.")
else:
raise ConnectionError("GraphQL query failed: {} - {}".format(response.status_code, response.content))
def graphql_getScene(scene_id):
query = """
query FindScene($id: ID!, $checksum: String) {
findScene(id: $id, checksum: $checksum) {
...SceneData
}
}
fragment SceneData on Scene {
id
checksum
oshash
title
details
url
date
rating
o_counter
organized
path
phash
interactive
file {
size
duration
video_codec
audio_codec
width
height
framerate
bitrate
}
studio {
...SlimStudioData
}
movies {
movie {
...MovieData
}
scene_index
}
tags {
...SlimTagData
}
performers {
...PerformerData
}
}
fragment SlimStudioData on Studio {
id
name
parent_studio {
id
name
}
details
rating
aliases
}
fragment MovieData on Movie {
id
checksum
name
aliases
date
rating
director
studio {
...SlimStudioData
}
synopsis
url
}
fragment SlimTagData on Tag {
id
name
aliases
}
fragment PerformerData on Performer {
id
checksum
name
url
gender
twitter
instagram
birthdate
ethnicity
country
eye_color
height
measurements
fake_tits
career_length
tattoos
piercings
aliases
favorite
tags {
...SlimTagData
}
rating
details
death_date
hair_color
weight
}
"""
variables = {
"id": scene_id
}
result = callGraphQL(query, variables)
return result.get('findScene')
def graphql_getConfiguration():
query = """
query Configuration {
configuration {
general {
databasePath
}
}
}
"""
result = callGraphQL(query)
return result.get('configuration')
def graphql_findScene(perPage,direc="DESC"):
query = """
query FindScenes($filter: FindFilterType) {
findScenes(filter: $filter) {
count
scenes {
...SlimSceneData
}
}
}
fragment SlimSceneData on Scene {
id
checksum
oshash
title
details
url
date
rating
o_counter
organized
path
phash
interactive
scene_markers {
id
title
seconds
}
galleries {
id
path
title
}
studio {
id
name
}
movies {
movie {
id
name
}
scene_index
}
tags {
id
name
}
performers {
id
name
gender
favorite
}
}
"""
# ASC DESC
variables = {'filter': {"direction": direc, "page": 1, "per_page": perPage, "sort": "updated_at"}}
result = callGraphQL(query, variables)
return result.get("findScenes")
def makeFilename(scene_information, query):
new_filename = str(query)
for field in TEMPLATE_FIELD:
field_name = field.replace("$","")
if field in new_filename:
if scene_information.get(field_name):
if field == "$performer":
if re.search(r"\$performer[-\s_]*\$title", new_filename) and scene_information.get('title') and PREVENT_TITLE_PERF:
if re.search("^{}".format(scene_information["performer"]), scene_information["title"]):
log.LogInfo("Ignoring the performer field because it's already in start of title")
new_filename = re.sub('\$performer[-\s_]*', '', new_filename)
continue
new_filename = new_filename.replace(field, scene_information[field_name])
else:
new_filename = re.sub('\${}[-\s_]*'.format(field_name), '', new_filename)
# remove []
new_filename = re.sub('\[\W*]', '', new_filename)
# Remove multiple space/_ in row
new_filename = re.sub('[\s_]{2,}', ' ', new_filename)
# Remove multiple - in row
new_filename = re.sub('(?:[\s_]-){2,}', ' -', new_filename)
# Remove space at start/end
new_filename = new_filename.strip(" -")
return new_filename
def find_diff_text(a, b):
addi = minus = stay = ""
minus_ = addi_ = 0
for _, s in enumerate(difflib.ndiff(a, b)):
if s[0] == ' ':
stay += s[-1]
minus += "*"
addi += "*"
elif s[0] == '-':
minus += s[-1]
minus_ += 1
elif s[0] == '+':
addi += s[-1]
addi_ += 1
if minus_ > 20 or addi_ > 20:
log.LogDebug("Diff Checker: +{}; -{};".format(addi_,minus_))
log.LogDebug("OLD: {}".format(a))
log.LogDebug("NEW: {}".format(b))
else:
log.LogDebug("Original: {}\n- Charac: {}\n+ Charac: {}\n Result: {}".format(a, minus, addi, b))
return
def has_handle(fpath,all_result=False):
lst = []
for proc in psutil.process_iter():
try:
for item in proc.open_files():
if fpath == item.path:
if all_result:
lst.append(proc)
else:
return proc
except Exception:
pass
return lst
def exit_plugin(msg=None, err=None):
if msg is None and err is None:
msg = "plugin ended"
output_json = {"output": msg, "error": err}
print(json.dumps(output_json))
sys.exit()
def renamer(scene_id):
filename_template = None
STASH_SCENE = graphql_getScene(scene_id)
# ================================================================ #
# RENAMER #
# Tags > Studios > Default
# Default
if config.use_default_template:
filename_template = config.default_template
# Change by Studio
if STASH_SCENE.get("studio") and config.studio_templates:
if config.studio_templates.get(STASH_SCENE["studio"]["name"]):
filename_template = config.studio_templates[STASH_SCENE["studio"]["name"]]
# by Parent
if STASH_SCENE["studio"].get("parent_studio"):
if config.studio_templates.get(STASH_SCENE["studio"]["name"]):
filename_template = config.studio_templates[STASH_SCENE["studio"]["name"]]
# Change by Tag
if STASH_SCENE.get("tags") and config.tag_templates:
for tag in STASH_SCENE["tags"]:
if config.tag_templates.get(tag["name"]):
filename_template = config.tag_templates[tag["name"]]
break
# END #
####################################################################
if config.only_organized and not STASH_SCENE["organized"]:
return("Scene ignored (not organized)")
if not filename_template:
return("No template for this scene.")
#log.LogDebug("Using this template: {}".format(filename_template))
current_path = STASH_SCENE["path"]
# note: contain the dot (.mp4)
file_extension = os.path.splitext(current_path)[1]
# note: basename contains the extension
current_filename = os.path.basename(current_path)
current_directory = os.path.dirname(current_path)
# Grabbing things from Stash
scene_information = {}
# Grab Title (without extension if present)
if STASH_SCENE.get("title"):
# Removing extension if present in title
scene_information["title"] = re.sub("{}$".format(file_extension), "", STASH_SCENE["title"])
# Grab Date
scene_information["date"] = STASH_SCENE.get("date")
# Grab Performer
if STASH_SCENE.get("performers"):
perf_list = ""
perf_count = 0
for perf in STASH_SCENE["performers"]:
#log.LogDebug(performer)
if PERFORMER_IGNORE_MALE and perf["gender"] == "MALE":
continue
if perf_count > PERFORMER_LIMIT:
# We've already exceeded the limit. No need to keep checking
break
perf_list += perf["name"] + PERFORMER_SPLITCHAR
perf_count += 1
# Remove last character
perf_list = perf_list[:-len(PERFORMER_SPLITCHAR)]
if perf_count > PERFORMER_LIMIT:
log.LogInfo("More than {} performer(s). Ignoring $performer".format(PERFORMER_LIMIT))
perf_list = ""
scene_information["performer"] = perf_list
# Grab Studio name
if STASH_SCENE.get("studio"):
scene_information["studio"] = STASH_SCENE["studio"].get("name")
scene_information["studio_family"] = scene_information["studio"]
# Grab Parent name
if STASH_SCENE["studio"].get("parent_studio"):
scene_information["parent_studio"] = STASH_SCENE["studio"]["parent_studio"]["name"]
scene_information["studio_family"] = scene_information["parent_studio"]
# Grab Height (720p,1080p,4k...)
scene_information["resolution"] = 'SD'
scene_information["height"] = "{}p".format(STASH_SCENE["file"]["height"])
if STASH_SCENE["file"]["height"] >= 720:
scene_information["resolution"] = 'HD'
if STASH_SCENE["file"]["height"] >= 2160:
scene_information["height"] = '4k'
scene_information["resolution"] = 'UHD'
if STASH_SCENE["file"]["height"] >= 4320:
scene_information["height"] = '8k'
# For Phone ?
if STASH_SCENE["file"]["height"] > STASH_SCENE["file"]["width"]:
scene_information["resolution"] = 'VERTICAL'
scene_information["video_codec"] = STASH_SCENE["file"]["video_codec"]
scene_information["audio_codec"] = STASH_SCENE["file"]["audio_codec"]
log.LogDebug("[{}] Scene information: {}".format(scene_id,scene_information))
if scene_information.get("date"):
scene_information["year"] = scene_information["date"][0:4]
# Create the new filename
new_filename = makeFilename(scene_information, filename_template) + file_extension
# Remove illegal character for Windows ('#' and ',' is not illegal you can remove it)
new_filename = re.sub('[\\/:"*?<>|#,]+', '', new_filename)
# Trying to remove non standard character
if MODULE_UNIDECODE and UNICODE_USE:
new_filename = unidecode.unidecode(new_filename, errors='preserve')
else:
# Using typewriter for Apostrophe
new_filename = re.sub("[]+", "'", new_filename)
# Replace the old filename by the new in the filepath
new_path = current_path[:-len(current_filename)] + new_filename
# Trying to prevent error with long path for Win10
# https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=cmd
if len(new_path) > 240:
log.LogWarning("The Path is too long ({})...".format(len(new_path)))
for word in ORDER_SHORTFIELD:
if word not in filename_template:
continue
filename_template = re.sub('\{}[-\s_]*'.format(word), '', filename_template).strip()
log.LogDebug("Removing field: {}".format(word))
new_filename = makeFilename(scene_information, filename_template) + file_extension
new_path = current_path[:-len(current_filename)] + new_filename
if len(new_path) < 240:
log.LogInfo("Reduced filename to: {}".format(new_filename))
break
if len(new_path) > 240:
return("Can't manage to reduce the path, operation aborted.")
#log.LogDebug("Filename: {} -> {}".format(current_filename,new_filename))
#log.LogDebug("Path: {} -> {}".format(current_path,new_path))
if (new_path == current_path):
return("Filename already correct. ({})".format(current_filename))
if ALT_DIFF_DISPLAY:
find_diff_text(current_filename,new_filename)
else:
log.LogDebug("[OLD] Filename: {}".format(current_filename))
log.LogDebug("[NEW] Filename: {}".format(new_filename))
if DRY_RUN:
with open(FILE_DRYRUN_RESULT, 'a', encoding='utf-8') as f:
f.write("{}|{}|{}\n".format(scene_id, current_filename, new_filename))
return("[Dry-run] Writing in {}".format(FILE_DRYRUN_RESULT))
# Connect to the DB
try:
sqliteConnection = sqlite3.connect(STASH_DATABASE)
cursor = sqliteConnection.cursor()
log.LogDebug("Python successfully connected to SQLite")
except sqlite3.Error as error:
return("FATAL SQLITE Error: {}".format(error))
# Looking for duplicate filename
folder_name = os.path.basename(os.path.dirname(new_path))
cursor.execute("SELECT id FROM scenes WHERE path LIKE ? AND NOT id=?;", ["%" + folder_name + "_" + new_filename, scene_id])
dupl_check = cursor.fetchall()
if len(dupl_check) > 0:
for dupl_row in dupl_check:
log.LogError("Same path: [{}]".format(dupl_row[0]))
return("Duplicate path detected, check log!")
cursor.execute("SELECT id FROM scenes WHERE path LIKE ? AND NOT id=?;", ["%" + new_filename, scene_id])
dupl_check = cursor.fetchall()
if len(dupl_check) > 0:
for dupl_row in dupl_check:
log.LogInfo("Same filename: [{}]".format(dupl_row[0]))
# OS Rename
if (os.path.isfile(current_path) == True):
try:
os.rename(current_path, new_path)
except PermissionError as err:
if "[WinError 32]" in str(err) and MODULE_PSUTIL:
log.LogWarning("A process use this file, trying to find it (Probably FFMPEG)")
# Find what process access the file, it's ffmpeg for sure...
process_use = has_handle(current_path, PROCESS_ALLRESULT)
if process_use:
# Terminate the process then try again to rename
log.LogDebug("Process that use this file: {}".format(process_use))
if PROCESS_KILL:
p = psutil.Process(process_use.pid)
p.terminate()
p.wait(10)
# If we don't manage to close it, this will raise an error again.
os.rename(current_path, new_path)
else:
return("A process prevent editing the file.")
else:
log.LogError(err)
return ""
if (os.path.isfile(new_path) == True):
log.LogInfo("[OS] File Renamed!")
if LOGFILE:
with open(LOGFILE, 'a', encoding='utf-8') as f:
f.write("{}|{}|{}\n".format(scene_id, current_path, new_path))
else:
# I don't think it's possible.
return("[OS] File failed to rename ? {}".format(new_path))
else:
return("[OS] File don't exist in your Disk/Drive ({})".format(current_path))
# Database rename
cursor.execute("UPDATE scenes SET path=? WHERE id=?;", [new_path, scene_id])
sqliteConnection.commit()
# Close DB
cursor.close()
sqliteConnection.close()
log.LogInfo("[SQLITE] Database updated and closed!")
return ""
# File that records what will be changed.
FILE_DRYRUN_RESULT = os.path.join(PLUGIN_DIR, "renamer_scan.txt")
STASH_CONFIG = graphql_getConfiguration()
STASH_DATABASE = STASH_CONFIG["general"]["databasePath"]
TEMPLATE_FIELD = "$date $year $performer $title $height $resolution $studio $parent_studio $studio_family $video_codec $audio_codec".split(" ")
# READING CONFIG
LOGFILE = config.log_file
PERFORMER_SPLITCHAR = config.performer_splitchar
PERFORMER_LIMIT = config.performer_limit
PERFORMER_IGNORE_MALE = config.performer_ignore_male
PREVENT_TITLE_PERF = config.prevent_title_performer
PROCESS_KILL = config.process_kill_attach
PROCESS_ALLRESULT = config.process_getall
UNICODE_USE = config.use_ascii
ORDER_SHORTFIELD = config.order_field
ALT_DIFF_DISPLAY = config.alt_diff_display
# Task
scenes = None
progress = 0
start_time = time.time()
if PLUGIN_ARGS in ["Process_test","Process_full","Process_dry"]:
DRY_RUN = False
else:
log.LogDebug("Dry-Run enable")
DRY_RUN = True
if PLUGIN_ARGS in ["DRYRUN_test","Process_test"]:
scenes = graphql_findScene(10, "DESC")
if PLUGIN_ARGS in ["DRYRUN_full","Process_full"]:
scenes = graphql_findScene(-1, "ASC")
if PLUGIN_ARGS == "Process_dry":
if os.path.exists(FILE_DRYRUN_RESULT):
scenes = {"scenes":[]}
with open(FILE_DRYRUN_RESULT, 'r', encoding='utf-8') as f:
for line in f:
scene_id_file = line.split("|")[0]
scenes["scenes"].append({"id": scene_id_file})
else:
exit_plugin(err="Can't find the file from the dry-run ({}). Be sure to run a Dry-Run task before.".format(FILE_DRYRUN_RESULT))
if not scenes or not scenes["scenes"]:
exit_plugin(err="no scene")
log.LogDebug("Count scenes: {}".format(len(scenes["scenes"])))
progress_step = 1 / len(scenes["scenes"])
for scene in scenes["scenes"]:
msg = renamer(scene["id"])
if msg:
log.LogDebug(msg)
progress += progress_step
log.LogProgress(progress)
if PLUGIN_ARGS == "Process_dry":
os.remove(FILE_DRYRUN_RESULT)
if DRY_RUN:
num_lines = 0
if os.path.exists(FILE_DRYRUN_RESULT):
num_lines = sum(1 for _ in open(FILE_DRYRUN_RESULT, encoding='utf-8'))
if num_lines > 0:
log.LogInfo("[DRY-RUN] There wil be {} file(s) changed. Check {} for more details".format(num_lines, FILE_DRYRUN_RESULT))
else:
log.LogInfo("[DRY-RUN] No change to do.")
log.LogInfo("Took {} seconds".format(round(time.time() - start_time)))
exit_plugin("Successful!")

View File

@ -1,29 +0,0 @@
name: renamerTask
description: Rename filename based to a template.
url: https://github.com/stashapp/CommunityScripts
version: 1.1
exec:
- python
- "{pluginDir}/renamerTask.py"
interface: raw
tasks:
- name: '[DRYRUN] Check 10 scenes'
description: Only check 10 scenes. Just show in log and create a file with the possible change.
defaultArgs:
mode: DRYRUN_test
- name: '[DRYRUN] Check all scenes'
description: Check all scenes. Just show in log and create a file with the possible change.
defaultArgs:
mode: DRYRUN_full
- name: 'Process scanned scene from Dry-Run task'
description: Edit scenes listed on the textfile from the Dry-Run task. ! Don't do anything in Stash in same time !
defaultArgs:
mode: Process_dry
- name: 'Process 10 scenes'
description: Edit the filename (if needed) for 10 scenes. ! Don't do anything in Stash in same time !
defaultArgs:
mode: Process_test
- name: 'Process all scenes'
description: Edit the filename (if needed) for all scenes. ! Don't do anything in Stash in same time !
defaultArgs:
mode: Process_full

View File

@ -0,0 +1,145 @@
// By ScruffyNerf
// Ported by feederbox826
(function () {
let cropping = false;
let cropper = null;
try {
const img = document.createElement('img');
new Cropper(img)
} catch (e) {
console.error("Cropper not loaded - please install 4. CropperJS from CommunityScripts")
}
try {
stash.getVersion()
} catch (e) {
console.error("Stash not loaded - please install 1. stashUserscriptLibrary from CommunityScripts")
}
function setupCropper() {
const cropBtnContainerId = "crop-btn-container";
if (document.getElementById(cropBtnContainerId)) return
const sceneId = window.location.pathname.replace('/scenes/', '').split('/')[0];
const sceneImage = document.querySelector("img.scene-cover")
var cropperModal = document.createElement("dialog");
cropperModal.style.width = "90%";
cropperModal.style.border = "none";
cropperModal.classList.add('bg-dark');
document.body.appendChild(cropperModal);
var cropperContainer = document.createElement("div");
cropperContainer.style.width = "100%";
cropperContainer.style.height = "auto";
cropperContainer.style.margin = "auto";
cropperModal.appendChild(cropperContainer);
var image = sceneImage.cloneNode();
image.style.display = "block";
image.style.maxWidth = "100%";
cropperContainer.appendChild(image);
var cropBtnContainer = document.createElement('div');
cropBtnContainer.setAttribute("id", cropBtnContainerId);
cropBtnContainer.classList.add('d-flex','flex-row','justify-content-center','align-items-center');
cropBtnContainer.style.gap = "10px";
cropperModal.appendChild(cropBtnContainer);
sceneImage.parentElement.parentElement.style.flexFlow = 'column';
const cropInfo = document.createElement('p');
cropInfo.style.all = "revert";
cropInfo.classList.add('text-white');
const cropStart = document.createElement('button');
cropStart.setAttribute("id", "crop-start");
cropStart.classList.add('btn', 'btn-primary');
cropStart.innerText = 'Crop Image';
cropStart.addEventListener('click', evt => {
cropping = true;
cropStart.style.display = 'none';
cropCancel.style.display = 'inline-block';
//const isVertical = image.naturalHeight > image.naturalWidth;
//const aspectRatio = isVertical ? 3/2 : NaN
const aspectRatio = NaN
cropper = new Cropper(image, {
viewMode: 1,
initialAspectRatio: aspectRatio,
movable: false,
rotatable: false,
scalable: false,
zoomable: false,
zoomOnTouch: false,
zoomOnWheel: false,
ready() {
cropAccept.style.display = 'inline-block';
},
crop(e) {
cropInfo.innerText = `X: ${Math.round(e.detail.x)}, Y: ${Math.round(e.detail.y)}, Width: ${Math.round(e.detail.width)}px, Height: ${Math.round(e.detail.height)}px`;
}
});
cropperModal.showModal();
});
sceneImage.parentElement.appendChild(cropStart);
const cropAccept = document.createElement('button');
cropAccept.setAttribute("id", "crop-accept");
cropAccept.classList.add('btn', 'btn-success', 'mr-2');
cropAccept.innerText = 'OK';
cropAccept.addEventListener('click', async evt => {
cropping = false;
cropStart.style.display = 'inline-block';
cropAccept.style.display = 'none';
cropCancel.style.display = 'none';
cropInfo.innerText = '';
const reqData = {
"operationName": "SceneUpdate",
"variables": {
"input": {
"cover_image": cropper.getCroppedCanvas().toDataURL(),
"id": sceneId
}
},
"query": `mutation SceneUpdate($input: SceneUpdateInput!) {
sceneUpdate(input: $input) {
id
}
}`
}
await stash.callGQL(reqData);
reloadImg(image.src);
cropper.destroy();
cropperModal.close("cropAccept");
});
cropBtnContainer.appendChild(cropAccept);
const cropCancel = document.createElement('button');
cropCancel.setAttribute("id", "crop-accept");
cropCancel.classList.add('btn', 'btn-danger');
cropCancel.innerText = 'Cancel';
cropCancel.addEventListener('click', evt => {
cropping = false;
cropStart.style.display = 'inline-block';
cropAccept.style.display = 'none';
cropCancel.style.display = 'none';
cropInfo.innerText = '';
cropper.destroy();
cropperModal.close("cropCancel");
});
cropBtnContainer.appendChild(cropCancel);
cropAccept.style.display = 'none';
cropCancel.style.display = 'none';
cropBtnContainer.appendChild(cropInfo);
}
stash.addEventListener('page:scene', function () {
waitForElementId('scene-edit-details', setupCropper);
});
})();

View File

@ -0,0 +1,10 @@
name: Scene Cover Cropper
# requires: CropperJS
description: Crop Scene Cover Images
version: 1.0
ui:
requires:
- CropperJS
css:
javascript:
- sceneCoverCropper.js

View File

@ -1,52 +0,0 @@
import sys
# Log messages sent from a plugin instance are transmitted via stderr and are
# encoded with a prefix consisting of special character SOH, then the log
# level (one of t, d, i, w, e, or p - corresponding to trace, debug, info,
# warning, error and progress levels respectively), then special character
# STX.
#
# The LogTrace, LogDebug, LogInfo, LogWarning, and LogError methods, and their equivalent
# formatted methods are intended for use by plugin instances to transmit log
# messages. The LogProgress method is also intended for sending progress data.
#
def __prefix(level_char):
start_level_char = b'\x01'
end_level_char = b'\x02'
ret = start_level_char + level_char + end_level_char
return ret.decode()
def __log(level_char, s):
if level_char == "":
return
print(__prefix(level_char) + s + "\n", file=sys.stderr, flush=True)
def trace(s):
__log(b't', s)
def debug(s):
__log(b'd', s)
def info(s):
__log(b'i', s)
def warning(s):
__log(b'w', s)
def error(s):
__log(b'e', s)
def progress(p):
progress = min(max(0, p), 1)
__log(b'p', str(progress))

View File

@ -4,8 +4,13 @@ import sys
import json
import base64
import log
from stash_interface import StashInterface
try:
import stashapi.log as log
from stashapi.tools import file_to_base64
from stashapi.stashapp import StashInterface
except ModuleNotFoundError:
print("You need to install the stashapi module. (pip install stashapp-tools)",
file=sys.stderr)
MANUAL_ROOT = None # /some/other/path to override scanning all stashes
cover_pattern = r'(?:thumb|poster|cover)\.(?:jpg|png)'
@ -21,7 +26,7 @@ def main():
if MANUAL_ROOT:
scan(MANUAL_ROOT, handle_cover)
else:
for stash_path in stash.get_root_paths():
for stash_path in get_stash_paths():
scan(stash_path, handle_cover)
except Exception as e:
log.error(e)
@ -34,30 +39,32 @@ def handle_cover(path, file):
filepath = os.path.join(path, file)
with open(filepath, "rb") as img:
b64img_bytes = base64.b64encode(img.read())
if not b64img_bytes:
b64img = file_to_base64(filepath)
if not b64img:
log.warning(f"Could not parse {filepath} to b64image")
return
b64img = f"data:image/jpeg;base64,{b64img_bytes.decode('utf-8')}"
scene_ids = stash.get_scenes_id(filter={
scenes = stash.find_scenes(f={
"path": {
"modifier": "INCLUDES",
"value": f"{path}\""
}
})
}, fragment="id")
log.info(f'Found Cover: {[int(s) for s in scene_ids]}|{filepath}')
log.info(f'Found Cover: {[int(s["id"]) for s in scenes]}|{filepath}')
if mode_arg == "set_cover":
for scene_id in scene_ids:
for scene in scenes:
stash.update_scene({
"id": scene_id,
"id": scene["id"],
"cover_image": b64img
})
log.info(f'Applied cover Scenes')
log.info(f'Applied cover to {len(scenes)} scenes')
def get_stash_paths():
config = stash.get_configuration("general { stashes { path excludeVideo } }")
stashes = config["configuration"]["general"]["stashes"]
return [s["path"] for s in stashes if not s["excludeVideo"]]
def scan(ROOT_PATH, _callback):
log.info(f'Scanning {ROOT_PATH}')
@ -66,4 +73,5 @@ def scan(ROOT_PATH, _callback):
if re.match(cover_pattern, file, re.IGNORECASE):
_callback(root, file)
main()
if __name__ == '__main__':
main()

View File

@ -1,6 +1,6 @@
name: Set Scene Cover
description: Searchs Stash for Scenes with a cover image in the same folder and sets the cover image in stash to that image
version: 0.3
description: searches Stash for Scenes with a cover image in the same folder and sets the cover image in stash to that image
version: 0.4
url: https://github.com/stg-annon/CommunityScripts/tree/main/plugins/setSceneCoverFromFile
exec:
- python
@ -8,7 +8,7 @@ exec:
interface: raw
tasks:
- name: Scan
description: searchs stash dirs for cover images and logs results
description: searches stash dirs for cover images and logs results
defaultArgs:
mode: scan
- name: Set Cover

View File

@ -1,137 +0,0 @@
import requests
import sys
import re
import log
class StashInterface:
port = ""
url = ""
headers = {
"Accept-Encoding": "gzip, deflate, br",
"Content-Type": "application/json",
"Accept": "application/json",
"Connection": "keep-alive",
"DNT": "1"
}
cookies = {}
def __init__(self, conn, fragments={}):
self.port = conn['Port']
scheme = conn['Scheme']
# Session cookie for authentication
self.cookies = {
'session': conn.get('SessionCookie').get('Value')
}
domain = conn.get('Domain') if conn.get('Domain') else 'localhost'
# Stash GraphQL endpoint
self.url = scheme + "://" + domain + ":" + str(self.port) + "/graphql"
log.debug(f"Using stash GraphQl endpoint at {self.url}")
self.fragments = fragments
self.fragments.update(stash_gql_fragments)
def __resolveFragments(self, query):
fragmentRefrences = list(set(re.findall(r'(?<=\.\.\.)\w+', query)))
fragments = []
for ref in fragmentRefrences:
fragments.append({
"fragment": ref,
"defined": bool(re.search("fragment {}".format(ref), query))
})
if all([f["defined"] for f in fragments]):
return query
else:
for fragment in [f["fragment"] for f in fragments if not f["defined"]]:
if fragment not in self.fragments:
raise Exception(f'GraphQL error: fragment "{fragment}" not defined')
query += self.fragments[fragment]
return self.__resolveFragments(query)
def __callGraphQL(self, query, variables=None):
query = self.__resolveFragments(query)
json = {'query': query}
if variables is not None:
json['variables'] = variables
response = requests.post(self.url, json=json, headers=self.headers, cookies=self.cookies)
if response.status_code == 200:
result = response.json()
if result.get("error", None):
for error in result["error"]["errors"]:
raise Exception("GraphQL error: {}".format(error))
if result.get("data", None):
return result.get("data")
elif response.status_code == 401:
sys.exit("HTTP Error 401, Unauthorised. Cookie authentication most likely failed")
else:
raise ConnectionError(
"GraphQL query failed:{} - {}. Query: {}. Variables: {}".format(
response.status_code, response.content, query, variables)
)
def get_scenes_id(self, filter={}):
query = """
query FindScenes($filter: FindFilterType, $scene_filter: SceneFilterType, $scene_ids: [Int!]) {
findScenes(filter: $filter, scene_filter: $scene_filter, scene_ids: $scene_ids) {
count
scenes {
id
}
}
}
"""
variables = {
"filter": { "per_page": -1 },
"scene_filter": filter
}
result = self.__callGraphQL(query, variables)
scene_ids = [s["id"] for s in result.get('findScenes').get('scenes')]
return scene_ids
def update_scene(self, scene_data):
query = """
mutation SceneUpdate($input:SceneUpdateInput!) {
sceneUpdate(input: $input) {
id
}
}
"""
variables = {'input': scene_data}
result = self.__callGraphQL(query, variables)
return result["sceneUpdate"]["id"]
def get_root_paths(self):
query = """
query Configuration {
configuration {
general{
stashes{
path
excludeVideo
}
}
}
}
"""
result = self.__callGraphQL(query)
stashes = result["configuration"]["general"]["stashes"]
paths = [s["path"] for s in stashes if not s["excludeVideo"]]
return paths
stash_gql_fragments = {}

View File

@ -0,0 +1,6 @@
name: Stash Userscript Library
description: Exports utility functions and a Stash class that emits events whenever a GQL response is received and whenever a page navigation change is detected
version: 1.0
ui:
javascript:
- stashUserscriptLibrary.js

File diff suppressed because it is too large

136
plugins/stats/stats.js Normal file
View File

@ -0,0 +1,136 @@
(function() {
function createStatElement(container, title, heading) {
const statEl = document.createElement('div');
statEl.classList.add('stats-element');
container.appendChild(statEl);
const statTitle = document.createElement('p');
statTitle.classList.add('title');
statTitle.innerText = title;
statEl.appendChild(statTitle);
const statHeading = document.createElement('p');
statHeading.classList.add('heading');
statHeading.innerText = heading;
statEl.appendChild(statHeading);
}
async function createSceneStashIDPct(row) {
const reqData = {
"variables": {
"scene_filter": {
"stash_id": {
"value": "",
"modifier": "NOT_NULL"
}
}
},
"query": "query FindScenes($filter: FindFilterType, $scene_filter: SceneFilterType, $scene_ids: [Int!]) {\n findScenes(filter: $filter, scene_filter: $scene_filter, scene_ids: $scene_ids) {\n count\n }\n}"
};
const stashIdCount = (await stash.callGQL(reqData)).data.findScenes.count;
const reqData2 = {
"variables": {
"scene_filter": {}
},
"query": "query FindScenes($filter: FindFilterType, $scene_filter: SceneFilterType, $scene_ids: [Int!]) {\n findScenes(filter: $filter, scene_filter: $scene_filter, scene_ids: $scene_ids) {\n count\n }\n}"
};
const totalCount = (await stash.callGQL(reqData2)).data.findScenes.count;
createStatElement(row, (stashIdCount / totalCount * 100).toFixed(2) + '%', 'Scene StashIDs');
}
async function createPerformerStashIDPct(row) {
const reqData = {
"variables": {
"performer_filter": {
"stash_id": {
"value": "",
"modifier": "NOT_NULL"
}
}
},
"query": "query FindPerformers($filter: FindFilterType, $performer_filter: PerformerFilterType) {\n findPerformers(filter: $filter, performer_filter: $performer_filter) {\n count\n }\n}"
};
const stashIdCount = (await stash.callGQL(reqData)).data.findPerformers.count;
const reqData2 = {
"variables": {
"performer_filter": {}
},
"query": "query FindPerformers($filter: FindFilterType, $performer_filter: PerformerFilterType) {\n findPerformers(filter: $filter, performer_filter: $performer_filter) {\n count\n }\n}"
};
const totalCount = (await stash.callGQL(reqData2)).data.findPerformers.count;
createStatElement(row, (stashIdCount / totalCount * 100).toFixed(2) + '%', 'Performer StashIDs');
}
async function createStudioStashIDPct(row) {
const reqData = {
"variables": {
"studio_filter": {
"stash_id": {
"value": "",
"modifier": "NOT_NULL"
}
}
},
"query": "query FindStudios($filter: FindFilterType, $studio_filter: StudioFilterType) {\n findStudios(filter: $filter, studio_filter: $studio_filter) {\n count\n }\n}"
};
const stashIdCount = (await stash.callGQL(reqData)).data.findStudios.count;
const reqData2 = {
"variables": {
"scene_filter": {}
},
"query": "query FindStudios($filter: FindFilterType, $studio_filter: StudioFilterType) {\n findStudios(filter: $filter, studio_filter: $studio_filter) {\n count\n }\n}"
};
const totalCount = (await stash.callGQL(reqData2)).data.findStudios.count;
createStatElement(row, (stashIdCount / totalCount * 100).toFixed(2) + '%', 'Studio StashIDs');
}
async function createPerformerFavorites(row) {
const reqData = {
"variables": {
"performer_filter": {
"filter_favorites": true
}
},
"query": "query FindPerformers($filter: FindFilterType, $performer_filter: PerformerFilterType) {\n findPerformers(filter: $filter, performer_filter: $performer_filter) {\n count\n }\n}"
};
const perfCount = (await stash.callGQL(reqData)).data.findPerformers.count;
createStatElement(row, perfCount, 'Favorite Performers');
}
async function createMarkersStat(row) {
const reqData = {
"variables": {
"scene_marker_filter": {}
},
"query": "query FindSceneMarkers($filter: FindFilterType, $scene_marker_filter: SceneMarkerFilterType) {\n findSceneMarkers(filter: $filter, scene_marker_filter: $scene_marker_filter) {\n count\n }\n}"
};
const totalCount = (await stash.callGQL(reqData)).data.findSceneMarkers.count;
createStatElement(row, totalCount, 'Markers');
}
stash.addEventListener('page:stats', function() {
waitForElementByXpath("//div[contains(@class, 'container-fluid')]/div[@class='mt-5']", function(xpath, el) {
if (!document.getElementById('custom-stats-row')) {
const changelog = el.querySelector('div.changelog');
const row = document.createElement('div');
row.setAttribute('id', 'custom-stats-row');
row.classList.add('col', 'col-sm-8', 'm-sm-auto', 'row', 'stats');
el.insertBefore(row, changelog);
createSceneStashIDPct(row);
createStudioStashIDPct(row);
createPerformerStashIDPct(row);
createPerformerFavorites(row);
createMarkersStat(row);
}
});
});
})();

9
plugins/stats/stats.yml Normal file
View File

@ -0,0 +1,9 @@
name: Extended Stats
# requires: StashUserscriptLibrary
description: Adds new stats to the stats page
version: 1.0
ui:
requires:
- StashUserscriptLibrary
javascript:
- stats.js

View File

@ -14,66 +14,77 @@ request_s = requests.Session()
def processScene(s):
if len(s['stash_ids']) > 0:
for sid in s['stash_ids']:
# print('looking up markers for stash id: '+sid['stash_id'])
res = request_s.post('https://timestamp.trade/get-markers/' + sid['stash_id'], json=s)
if res.status_code==200:
md = res.json()
if 'marker' in md:
log.info(
'api returned something, for scene: ' + s['title'] + ' marker count: ' + str(len(md['marker'])))
markers = []
for m in md['marker']:
# log.debug('-- ' + m['name'] + ", " + str(m['start'] / 1000))
marker = {}
marker["seconds"] = m['start'] / 1000
marker["primary_tag"] = m["tag"]
marker["tags"] = []
marker["title"] = m['name']
markers.append(marker)
if len(markers) > 0:
log.info('Saving markers')
mp.import_scene_markers(stash, markers, s['id'], 15)
if 'galleries' in md:
log.info(md['galleries'])
skip_sync_tag_id = stash.find_tag('[Timestamp: Skip Sync]', create=True).get("id")
for g in md['galleries']:
for f in g['files']:
res=stash.find_galleries(f={"checksum": {"value": f['md5'],"modifier": "EQUALS"},"tags":{"depth":0,"excludes":[skip_sync_tag_id],"modifier":"INCLUDES_ALL","value":[]}})
for gal in res:
# log.debug('Gallery=%s' %(gal,))
gallery={
'id':gal['id'],
'title':gal['title'],
'urls':gal['urls'],
'date':gal['date'],
'rating100':gal['rating100'],
'studio_id':gal['studio']['id'],
'performer_ids':[x['id'] for x in gal['performers']],
'tag_ids':[x['id'] for x in gal['tags']],
'scene_ids':[x['id'] for x in gal['scenes']],
'details':gal['details']
}
if len(gal['urls'])==0:
log.debug('no urls on gallery, needs new metadata')
gallery['urls'].extend([x['url'] for x in g['urls']])
if len(s['stash_ids']) == 0:
log.debug('no scenes to process')
return
skip_sync_tag_id = stash.find_tag('[Timestamp: Skip Sync]', create=True).get("id")
for sid in s['stash_ids']:
try:
if any(tag['id'] == str(skip_sync_tag_id) for tag in s['tags']):
log.debug('scene has skip sync tag')
return
log.debug('looking up markers for stash id: '+sid['stash_id'])
res = requests.post('https://timestamp.trade/get-markers/' + sid['stash_id'], json=s)
md = res.json()
if md.get('marker'):
log.info('api returned markers for scene: ' + s['title'] + ' marker count: ' + str(len(md['marker'])))
markers = []
for m in md['marker']:
# log.debug('-- ' + m['name'] + ", " + str(m['start'] / 1000))
marker = {}
marker["seconds"] = m['start'] / 1000
marker["primary_tag"] = m["tag"]
marker["tags"] = []
marker["title"] = m['name']
markers.append(marker)
if len(markers) > 0:
log.info('Saving markers')
mp.import_scene_markers(stash, markers, s['id'], 15)
else:
log.debug('api returned no markers for scene: ' + s['title'])
if 'galleries' in md:
log.info(md['galleries'])
skip_sync_tag_id = stash.find_tag('[Timestamp: Skip Sync]', create=True).get("id")
for g in md['galleries']:
for f in g['files']:
res = stash.find_galleries(f={"checksum": {"value": f['md5'], "modifier": "EQUALS"},
"tags": {"depth": 0, "excludes": [skip_sync_tag_id],
"modifier": "INCLUDES_ALL", "value": []}})
for gal in res:
# log.debug('Gallery=%s' %(gal,))
gallery = {
'id': gal['id'],
'title': gal['title'],
'urls': gal['urls'],
'date': gal['date'],
'rating100': gal['rating100'],
'studio_id': gal['studio']['id'],
'performer_ids': [x['id'] for x in gal['performers']],
'tag_ids': [x['id'] for x in gal['tags']],
'scene_ids': [x['id'] for x in gal['scenes']],
'details': gal['details']
}
if len(gal['urls']) == 0:
log.debug('no urls on gallery, needs new metadata')
gallery['urls'].extend([x['url'] for x in g['urls']])
if s['id'] not in gallery['scene_ids']:
log.debug('attaching scene %s to gallery %s '% (s['id'],gallery['id'],))
gallery['scene_ids'].append(s['id'])
log.info('updating gallery: %s' % (gal['id'],))
stash.update_gallery(gallery_data=gallery)
log.debug(res)
if 'movies' in md:
log.info(md['movies'])
if s['id'] not in gallery['scene_ids']:
log.debug('attaching scene %s to gallery %s ' % (s['id'], gallery['id'],))
gallery['scene_ids'].append(s['id'])
log.info('updating gallery: %s' % (gal['id'],))
stash.update_gallery(gallery_data=gallery)
except json.decoder.JSONDecodeError:
log.error('api returned invalid JSON for stash id: ' + sid['stash_id'])
def processAll():
log.info('Getting scene count')
count=stash.find_scenes(f={"stash_id_endpoint": { "endpoint": "", "modifier": "NOT_NULL", "stash_id": ""},"has_markers":"false"},filter={"per_page": 1},get_count=True)[0]
skip_sync_tag_id = stash.find_tag('[Timestamp: Skip Sync]', create=True).get("id")
count=stash.find_scenes(f={"stash_id_endpoint": { "endpoint": "", "modifier": "NOT_NULL", "stash_id": ""},"has_markers":"false","tags":{"depth":0,"excludes":[skip_sync_tag_id],"modifier":"INCLUDES_ALL","value":[]}},filter={"per_page": 1},get_count=True)[0]
log.info(str(count)+' scenes to submit.')
i=0
for r in range(1,int(count/per_page)+1):
@ -85,7 +96,7 @@ def processAll():
log.progress((i/count))
time.sleep(1)
def submit(f={"has_markers": "true"}):
def submit():
scene_fgmt = """title
details
url
@ -168,11 +179,15 @@ def submit(f={"has_markers": "true"}):
}
}
"""
count = stash.find_scenes(f=f, filter={"per_page": 1}, get_count=True,fragment=scene_fgmt)[0]
skip_submit_tag_id = stash.find_tag('[Timestamp: Skip Submit]', create=True).get("id")
count = stash.find_scenes(f={"has_markers": "true","tags":{"depth":0,"excludes":[skip_sync_tag_id],"modifier":"INCLUDES_ALL","value":[]}}, filter={"per_page": 1}, get_count=True)[0]
i=0
for r in range(1, math.ceil(count/per_page) + 1):
log.info('submitting scenes: %s - %s %0.1f%%' % ((r - 1) * per_page,r * per_page,(i/count)*100,))
scenes = stash.find_scenes(f=f, filter={"page": r, "per_page": per_page},fragment=scene_fgmt)
scenes = stash.find_scenes(f={"has_markers": "true"}, filter={"page": r, "per_page": per_page},fragment=scene_fgmt)
for s in scenes:
log.debug("submitting scene: " + str(s))
request_s.post('https://timestamp.trade/submit-stash', json=s)
@ -181,7 +196,6 @@ def submit(f={"has_markers": "true"}):
time.sleep(2)
def submitGallery():
scene_fgmt = """ title
url

View File

@ -1,6 +1,6 @@
name: Timestamp Trade
description: Sync Markers with timestamp.trade, a new database for sharing markers.
version: 0.4
version: 0.3
url: https://github.com/stashapp/CommunityScripts/
exec:
- python

View File

@ -0,0 +1,15 @@
FROM python:3.11.5-alpine3.18
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
#Create an empty config file so that we can just use the defaults. This file can be mounted if it needs to be
#modified
RUN touch /config.toml
#Apparently using -u causes the logs to output immediately
CMD [ "python", "-u", "./watcher.py", "/config.toml" ]

View File

@ -0,0 +1,63 @@
# Stash Watcher
Stash Watcher is a service that watches your Stash library directories for changes and triggers a Metadata Scan when new files are added to those directories. It then waits a period of time before triggering another scan, to keep Stash from constantly scanning while you're making many changes. Changes during that window are still detected; the resulting scan is merely delayed.
## Configuration
Modify a [config.toml](config.toml) for your environment. The defaults match the Stash docker defaults, so they may work for you. You are likely to have to update `Paths` and possibly `ApiKey`. Check out [default.toml](default.toml) for all configurable options. You can configure:
* Url (host, domain, port)
* Api Key (if your Stash is password protected)
* Paths
* Timeout - the minimum time between Metadata Scans
* Scan options - The options for the Metadata Scan
* Enable Polling - see [SMB/CIFS Shares](#smbcifs-shares)
## Running Stash Watcher
You can run Stash Watcher directly from the [command line](#running-directly-with-python) or from inside [docker](#running-with-docker).
### Running directly with python
The directions below are for Linux, but they should work on other operating systems.
#### Step 0: Create a Virtual Environment (optional, but recommended)
```
python -m venv venv
. venv/bin/activate
```
#### Step 1: Install dependencies
```
pip install -r requirements.txt
```
#### Step 2: Create/Modify Configuration
Following the directions in [Configuration](#configuration), modify [config.toml](config.toml) if necessary.
#### Step 3: Execute
```
python watcher.py path_to_config.toml
```
That's it. Now when you make changes to watched directories, Stash Watcher will make an API call to trigger a metadata scan.
### Running with docker
There is currently no published docker image, so you'll have to build it yourself. The easiest way to do this is with docker compose:
```
version: "3.4"
services:
stash-watcher:
container_name: stash-watcher
build: <path_to_stash-watcher_directory>
volumes:
#This is only required if you have to modify config.toml (if the defaults are fine you don't have to map this file)
- ./config.toml:/config.toml:ro
#This is the path to your stash content. If you have multiple paths, map them here
- /stash:/data:ro
restart: unless-stopped
```
Then you can run
```
docker compose up -d --build
```
to start the watcher.
## Notes
### SMB/CIFS shares
The library ([watchdog](https://pypi.org/project/watchdog/)) that Stash Watcher uses has some limitations when dealing with SMB/CIFS shares. If you encounter some problems, set [PollInterval in your config.toml](https://github.com/DuctTape42/CommunityScripts/blob/main/scripts/stash-watcher/defaults.toml#L28). This is a lot less efficient than the default mechanism, but is more likely to work.
In my testing (this is from Windows to a share on another machine), if the machine running Stash Watcher wrote to the share, then the normal watcher worked fine. However, if a different machine wrote to the share, then Stash Watcher did not see the write unless I used Polling.
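As a rough illustration of what the PollInterval switch does under the hood, watchdog's PollingObserver rescans the tree on a fixed interval instead of relying on OS file-system notifications. A minimal standalone sketch, assuming the watchdog package; the path and interval are placeholders:

```python
# Minimal sketch: force watchdog's polling backend, as Stash Watcher
# does when PollInterval is set. Path and interval are placeholders.
from watchdog.observers.polling import PollingObserver
from watchdog.events import FileSystemEventHandler

class PrintHandler(FileSystemEventHandler):
    def on_created(self, event):
        print("created:", event.src_path)

observer = PollingObserver(timeout=30)  # poll every 30 seconds
observer.schedule(PrintHandler(), "/data", recursive=True)
observer.start()
try:
    observer.join()
except KeyboardInterrupt:
    observer.stop()
    observer.join()
```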

View File

@ -0,0 +1,16 @@
#This is the information about your stash instance
[Host]
#The scheme (either http or https)
Scheme = http
#The full hostname for your stash instance. If you're running in docker you might want the
#service name and not localhost here.
Host = localhost
#The port number for your stash instance
Port = 9999
#The api key, if your stash instance is password protected
ApiKey =
#Configuration for the listener itself
[Config]
#A comma separated list of paths to watch.
Paths = /data

View File

@ -0,0 +1,48 @@
#This is the information about your stash instance
[Host]
#The scheme (either http or https)
Scheme = http
#The full hostname for your stash instance. If you're running in docker you might want the
#service name and not localhost here.
Host = localhost
#The port number for your stash instance
Port = 9999
#The api key, if your stash instance is password protected
ApiKey =
#Configuration for the listener itself
[Config]
#A comma separated list of paths to watch.
Paths = /data
#The minimum time to wait between triggering scans
Cooldown = 300
#A list of file extensions to watch. If this is omitted, it uses the extensions that are defined
#in your Stash library (for videos, images, and galleries)
Extensions =
#If this is set to a non-zero numeric value, this forces the use of polling to
#determine file system changes. If it is left blank, then the OS appropriate
#mechanism is used. This is much less efficient than the OS mechanism, so it
#should be used with care. The docs claim that this is required to watch SMB
#shares, though in my testing I could watch them on Windows with the regular
#WindowsApiObserver
PollInterval=
#This enables debug logging
Debug=
#Options for the Stash Scan. Stash defaults to everything disabled, so this is the default
#Generate options that match up with what we can do in Scan
[ScanOptions]
#"Generate scene covers" from the UI
Covers=true
#"Generate previews" from the UI
Previews=true
#"Generate animated image previews" from the UI
ImagePreviews=false
#"Generate scrubber sprites" from the UI
Sprites=false
#"Generate perceptual hashes" from the UI
Phashes=true
#"Generate thumbnails for images" from the UI
Thumbnails=true
#"Generate previews for image clips" from the UI
ClipPreviews=false

View File

@ -0,0 +1,3 @@
argparse
stashapp-tools
watchdog

View File

@ -0,0 +1,240 @@
#!/usr/bin/python -w
import argparse
import configparser
import time
import os
from threading import Lock, Condition
from watchdog.observers import Observer
from watchdog.observers.polling import PollingObserver
from watchdog.events import PatternMatchingEventHandler
from stashapi.stashapp import StashInterface
import logging
import sys
from enum import Enum
#the type of watcher being used; controls how to interpret the events
WatcherType = Enum('WatcherType', ['INOTIFY', 'WINDOWS', 'POLLING', 'KQUEUE'])
#Setup logger
logger = logging.getLogger("stash-watcher")
logger.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
ch.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(ch)
#This signals that we should trigger a metadata scan
shouldUpdate = False
mutex = Lock()
signal = Condition(mutex)
modifiedFiles = {}
currentWatcherType = None
def log(msg):
logger.info(msg)
def debug(msg):
logger.debug(msg)
def handleEvent(event):
    global shouldUpdate
    global currentWatcherType
    debug("========EVENT========")
    debug(str(event))
    #Record whether the file was modified. When a file is closed, check
    #whether it was modified; if so, trigger a scan.
    shouldTrigger = False
    if event.is_directory:
        return
    #Depending on the watcher type, we have to handle these events differently
    if currentWatcherType == WatcherType.WINDOWS:
        #On Windows, here's what happens:
        # File moved into a watched directory - Created event
        # File moved out of a watched directory - Deleted event
        # Moved within a watched directory (src and dst both watched) - Moved event
        # echo blah > foo.mp4 - Created, then Modified
        # Copying a small file - Created, then Modified
        # Copying a large file - Created, then two (or more) Modified events
        #  (one when the file is created and another when it's finished)
        #So you can get an optional Created event followed by one or two
        #Modified events, and you can also get Moved events.
        #On Windows, a local file can't be opened while it's still being
        #written. Therefore, on every event, attempt to open the file: if that
        #succeeds, assume the write is finished and trigger the update;
        #otherwise, wait for the next event and try again.
        if event.event_type == "created" or event.event_type == "modified":
            try:
                with open(event.src_path):
                    debug("Successfully opened file; triggering")
                    shouldTrigger = True
            except OSError:
                pass
        if event.event_type == "moved":
            shouldTrigger = True
    elif currentWatcherType == WatcherType.POLLING:
        #Every interval you get one event per changed file:
        # - If the file was not present in the previous poll, then Created
        # - If the file was present and has a new size, then Modified
        # - If the file was moved within the directory, then Moved
        # - If the file is gone, then Deleted
        #
        #For now, just trigger on the Created event. In the future, create a
        #timer at 2x the polling interval, reschedule it on each event, and
        #trigger the update when it fires.
        if event.event_type == "moved" or event.event_type == "created":
            shouldTrigger = True
    #Until someone tests this on a Mac, KQUEUE just does what INOTIFY does
    elif currentWatcherType == WatcherType.INOTIFY or currentWatcherType == WatcherType.KQUEUE:
        if event.event_type == "modified":
            modifiedFiles[event.src_path] = 1
        #A "closed" event for a previously modified file means it has finished
        #being copied into the target
        elif event.event_type == "closed":
            if event.src_path in modifiedFiles:
                del modifiedFiles[event.src_path]
                shouldTrigger = True
        #For download managers and the like that write to a temporary file and
        #then move it to the real destination path. Note that this triggers
        #whenever the destination is in a watched location, not just when a
        #file is moved out of a watched directory.
        elif event.event_type == "moved":
            shouldTrigger = True
    else:
        logger.error("Unknown watcher type " + str(currentWatcherType))
        sys.exit(1)
    #Trigger the update
    if shouldTrigger:
        debug("Triggering updates")
        with mutex:
            shouldUpdate = True
            signal.notify()
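
#handleEvent runs on watchdog's observer thread; main() below is the consumer
#side, blocking on the condition variable and performing the actual scan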
def main(stash, scanFlags, paths, extensions, timeout, pollInterval):
    global shouldUpdate
    global currentWatcherType
    if len(extensions) == 1 and extensions[0] == "*":
        patterns = ["*"]
    else:
        patterns = ["*." + ext for ext in extensions]
    eventHandler = PatternMatchingEventHandler(patterns, None, False, True)
    eventHandler.on_any_event = handleEvent
    observer = Observer()
    observerName = type(observer).__name__
    if pollInterval is not None and pollInterval > 0:
        currentWatcherType = WatcherType.POLLING
        #Pass the configured interval so the polling observer actually polls
        #at that rate instead of its default
        observer = PollingObserver(timeout=pollInterval)
    elif observerName == "WindowsApiObserver":
        currentWatcherType = WatcherType.WINDOWS
    elif observerName == "KqueueObserver":
        currentWatcherType = WatcherType.KQUEUE
    elif observerName == "InotifyObserver":
        currentWatcherType = WatcherType.INOTIFY
    else:
        logger.error("Unknown watcher type " + str(observer))
        sys.exit(1)
    debug(str(observer))
    for path in paths:
        observer.schedule(eventHandler, path, recursive=True)
    observer.start()
    try:
        while True:
            #Wait for a trigger, then release the lock before scanning
            with mutex:
                while not shouldUpdate:
                    signal.wait()
                shouldUpdate = False
            log("Triggering stash scan")
            stash.metadata_scan(flags=scanFlags)
            log("Sleeping for " + str(timeout) + " seconds")
            time.sleep(timeout)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
def listConverter(item):
    debug("listConverter(" + str(item) + ")")
    if not item:
        return None
    listItems = [i.strip() for i in item.split(',')]
    if not listItems or (len(listItems) == 1 and not listItems[0]):
        return None
    return listItems
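
#e.g. listConverter("mp4, mkv") returns ["mp4", "mkv"], while an empty value
#returns None, so unset config options read as "not configured"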
def makeArgParser():
    parser = argparse.ArgumentParser(description='Stash file watcher')
    parser.add_argument('config_path', nargs=1, help='Config file path (toml)')
    return parser
def parseConfig(path):
    config = configparser.ConfigParser(converters={'list': listConverter})
    #Load the defaults first; note __file__ (the module path), not the
    #literal string '__file__'
    defaults_path = os.path.join(os.path.dirname(__file__), 'defaults.toml')
    config.read(defaults_path)
    #Now read the user config; read() merges in order, so user values
    #override the defaults
    config.read(path)
    return config
if __name__ == '__main__':
    #Parse the arguments
    parser = makeArgParser()
    args = parser.parse_args()
    configPath = args.config_path[0]  #nargs=1 yields a single-element list
    config = parseConfig(configPath)
    #Set up Stash
    stashArgs = {
        "scheme": config["Host"]["Scheme"],
        "host": config["Host"]["Host"],
        "port": config["Host"]["Port"]
    }
    if config["Host"].get("ApiKey"):
        stashArgs["ApiKey"] = config["Host"]["ApiKey"]
    stash = StashInterface(stashArgs)
    #And now the flags for the scan
    scanFlags = {
        "scanGenerateCovers": config["ScanOptions"].getboolean("Covers"),
        "scanGeneratePreviews": config["ScanOptions"].getboolean("Previews"),
        "scanGenerateImagePreviews": config["ScanOptions"].getboolean("ImagePreviews"),
        "scanGenerateSprites": config["ScanOptions"].getboolean("Sprites"),
        "scanGeneratePhashes": config["ScanOptions"].getboolean("Phashes"),
        "scanGenerateThumbnails": config["ScanOptions"].getboolean("Thumbnails"),
        "scanGenerateClipPreviews": config["ScanOptions"].getboolean("ClipPreviews")
    }
    paths = config.getlist("Config", "Paths")
    timeout = config["Config"].getint("Cooldown")
    #If the extensions are in the config, use them. Otherwise pull them from Stash.
    extensions = config.getlist('Config', 'Extensions')
    if not extensions:
        stashConfig = stash.get_configuration()
        extensions = stashConfig['general']['videoExtensions'] + stashConfig['general']['imageExtensions'] + stashConfig['general']['galleryExtensions']
    pollIntervalStr = config.get('Config', 'PollInterval', fallback=None)
    if pollIntervalStr:
        pollInterval = int(pollIntervalStr)
    else:
        pollInterval = None
    if config["Config"].getboolean("Debug", fallback=False):
        logger.setLevel(logging.DEBUG)
        ch.setLevel(logging.DEBUG)
    main(stash, scanFlags, paths, extensions, timeout, pollInterval)
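
#Example invocation (the script filename here is illustrative):
#  python stash-watcher.py /path/to/config.toml
#Any key omitted from that config falls back to the value in defaults.toml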