slskd #124

Closed
opened 2025-09-09 19:43:57 -05:00 by giteasync · 22 comments

Originally created by @vhsdream on GitHub.

Originally assigned to: @vhsdream on GitHub.

https://github.com/community-scripts/ProxmoxVE/discussions/735

App name
slskd

Website
https://github.com/slskd/slskd

Description
slskd is a modern client-server application for the Soulseek file sharing network.

giteasync added the Started Migration To ProxmoxVE label 2025-09-09 19:43:57 -05:00
Author

@vhsdream commented on GitHub:

This also includes [Soularr](https://soularr.net).

@IReclaimer commented on GitHub:

> This might be out of scope a bit, but it's also important that people are able to start using the apps that they install, with relative ease.
>
> I think the user should at least be directed to the two config files that they need to edit to get set up, but they should then be pointed to the documentation for the two apps for how to set them up.
>
> They're located here:
> /opt/soularr/config.ini
> /opt/slskd/config/slskd.yml

> Can you share the part of the config with the directories config? In case there are credentials please remove.

Here's mine:

cat /opt/slskd/config/slskd.yml

# debug: false
# headless: false
 remote_configuration: true
 remote_file_management: true
# instance_name: default
# flags:
#   no_logo: false
#   no_start: false
#   no_config_watch: false
#   no_connect: false
#   no_share_scan: false
#   force_share_scan: false
#   no_version_check: false
#   log_sql: false
#   experimental: false
#   volatile: false
#   case_sensitive_reg_ex: false
#   legacy_windows_tcp_keepalive: false
#   optimistic_relay_file_info: false
# relay:
#   enabled: false
#   mode: controller # controller (default), agent, or debug (for local development)
#   # controller config is required when running in 'agent' mode
#   # this specifies the relay controller that will be controlling this agent
#   controller:
#     address: https://some.site.com:5000
#     ignore_certificate_errors: true
#     api_key: <a 16-255 character string corresponding to one of the controller's 'readwrite' or 'administrator' API keys>
#     secret: <a 16-255 character shared secret matching the controller's config for this agent>
#     downloads: true
#   # agent config is optional when running in 'controller' mode
#   # this specifies all of the agents capable of connecting
#   agents:
#     my_agent:
#       instance_name: my_agent # make sure the top-level instance_name of the agent matches!
#       secret: <a 16-255 character string unique to this agent>
#       cidr: 0.0.0.0/0,::/0 # Replace this with your subnet
# permissions:
#   file:
#     mode: ~ # not for Windows, chmod syntax, e.g. 644, 777. can't escalate beyond umask
 directories:
   incomplete: /mnt/slskd/incomplete
   downloads: /mnt/slskd/complete
 shares:
   directories:
     - '/mnt/media/music'
   filters:
     - \.ini$
     - Thumbs.db$
     - \.DS_Store$
   cache:
     storage_mode: memory
     workers: 16
     retention: ~ # retain indefinitely (do not automatically re-scan)
# rooms:
#   - ~
# global:
#   upload:
#     slots: 20
#     speed_limit: 1000 # in kibibytes
#   limits:
#     queued:
#       files: 500
#       megabytes: 5000
#     daily:
#       files: 1000
#       megabytes: 10000
#       failures: 200
#     weekly:
#       files: 5000
#       megabytes: 50000
#       failures: 1000
#   download:
#     slots: 500
#     speed_limit: 1000
# groups:
#   default:
#     upload:
#       priority: 500
#       strategy: roundrobin
#       slots: 10
#     limits:
#       queued:
#         files: 150
#         megabytes: 1500
#       daily: ~ # no daily limits (weekly still apply)
#       weekly:
#         files: 1500
#         megabytes: 15000
#         failures: 150
#   leechers:
#     thresholds:
#       files: 1
#       directories: 1
#     upload:
#       priority: 999
#       strategy: roundrobin
#       slots: 1
#       speed_limit: 100
#     limits:
#       queued:
#         files: 15
#         megabytes: 150
#       daily:
#         files: 30
#         megabytes: 300
#         failures: 10
#       weekly:
#         files: 150
#         megabytes: 1500
#         failures: 30
#   blacklisted:
#     members:
#       - <username to blacklist>
#     cidrs:
#       - <CIDR to blacklist, e.g. 255.255.255.255/32>
#   user_defined:
#     my_buddies:
#       upload:
#         priority: 250
#         strategy: firstinfirstout
#         slots: 10
#       limits:
#         queued:
#           files: 1000 # override global default
#       members:
#         - alice
#         - bob
# blacklist:
#   enabled: true
#   file: <path to file containing CIDRs to blacklist>
# filters:
#   search:
#     request:
#       - ^.{1,2}$
 web:
   port: 5030
   https:
     disabled: true
     port: 5031
     force: false
     certificate:
       pfx: ~
       password: ~
   url_base: /
   content_path: wwwroot
   logging: false
   authentication:
     disabled: false
     username: [webgui username]
     password: [webgui password]
     jwt:
       key: [jwt key]
       ttl: 604800000
     api_keys:
       my_api_key:
         key: [api key]
         role: readwrite # readonly, readwrite, administrator
         cidr: 0.0.0.0/0,::/0 # Replace this with your subnet
# retention:
#   searches: 10080 # 7 days, in minutes
#   transfers:
#     upload:
#       succeeded: 1440 # 1 day, in minutes
#       errored: 30
#       cancelled: 5
#     download:
#       succeeded: 1440 # 1 day, in minutes
#       errored: 20160 # 2 weeks, in minutes
#       cancelled: 5
#   files:
#     complete: 20160 # 2 weeks, in minutes
#     incomplete: 43200 # 30 days, in minutes
#   logs: 180 # days
# logger:
#   disk: false
#   no_color: false
#   loki: ~
# metrics:
#   enabled: false
#   url: /metrics
#   authentication:
#     disabled: false
#     username: slskd
#     password: slskd
# feature:
#   swagger: false
 soulseek:
   address: vps.slsknet.org
   port: 2271
   username: [soulseek username]
   password: [soulseek password]
   description: |
     A slskd user. https://github.com/slskd/slskd
   listen_ip_address: 0.0.0.0
   listen_port: 50300
   diagnostic_level: Info
   distributed_network:
     disabled: true
     disable_children: true
     child_limit: 25
     logging: true
   connection:
     timeout:
       connect: 10000
       inactivity: 15000
     buffer:
       read: 16384
       write: 16384
       transfer: 262144
       write_queue: 250
#     proxy:
#       enabled: true
#       address: ~
#       port: ~
#       username: ~
#       password: ~
# integration:
#   webhooks:
#     my_webhook:
#       on:
#         - DownloadFileComplete
#       call:
#         url: https://192.168.1.42:8080/slskd_webhook
#         headers:
#           - name: X-API-Key
#             value: foobar1234
#           - name: Authorization
#             value: Bearer eyJ...ssw5c
#           - name: User-Agent
#             value: slskd/0.0
#         ignore_certificate_errors: true
#       timeout: 5000 # in milliseconds
#       retry:
#         attempts: 3
#   scripts:
#     my_post_download_script:
#       on:
#         - DownloadFileComplete
#         - DownloadDirectoryComplete
#       run: data/my_script.sh --json-to-process $EVENT
#     my_logging_script:
#       on:
#        - All
#       run: data/log_slskd_events.sh $DATA
#   ftp:
#     enabled: true
#     address: ~
#     port: ~
#     username: ~
#     password: ~
#     remote_path: /
#     encryption_mode: auto
#     ignore_certificate_errors: true
#     overwrite_existing: true
#     connection_timeout: 5000
#     retry_attempts: 3
#   pushbullet:
#     enabled: true
#     access_token: ~
#     notification_prefix: "From slskd:"
#     notify_on_private_message: true
#     notify_on_room_mention: true
#     retry_attempts: 3
#     cooldown_time: 900000

cat /opt/soularr/config.ini

[Lidarr]
api_key = [lidarr api key]
host_url = [lidarr host/ip and port]
download_dir = /mnt/download/slskd/complete

[Slskd]
api_key = [slskd api key]
host_url = http://localhost:5030
url_base = /
download_dir = /mnt/slskd/complete
delete_searches = False
stalled_timeout = 3600

[Release Settings]
use_most_common_tracknum = True
allow_multi_disc = True
accepted_countries = Europe,Japan,United Kingdom,United States,[Worldwide],Australia,Canada
accepted_formats = CD,Digital Media,Vinyl

[Search Settings]
search_timeout = 5000
maximum_peer_queue = 50
minimum_peer_upload_speed = 0
minimum_filename_match_ratio = 0.5
allowed_filetypes = flac 32/192,flac 24/192,flac 24/176.4,flac 24/96,flac 24/88.2,flac 24/48,flac 24/44.1,flac 16/48,flac 16/44.1,flac,mp3 320,mp3
ignored_users = User1,User2,Fred,Bob
search_for_tracks = True
album_prepend_artist = False
track_prepend_artist = True
search_type = incrementing_page
number_of_albums_to_grab = 10
remove_wanted_on_failure = False
title_blacklist = BlacklistWord1,blacklistword2
search_source = missing

[Logging]
level = INFO
# https://docs.python.org/3/library/logging.html#logrecord-attributes
format = [%(levelname)s|%(module)s|L%(lineno)d] %(asctime)s: %(message)s
# https://docs.python.org/3/library/time.html#time.strftime
datefmt = %Y-%m-%dT%H:%M:%S%z

I've tested the slskd part of things and it seems to be working fine.

The Soularr part of things doesn't seem to be working, and I can't figure out why. It's using a lot of CPU, but it appears to be all I/O, as it's scanning my music library at 36 MB/s. I left it for a day and it's still doing it, so I'm not sure what's up there. It doesn't appear to be fulfilling its function, though. Any advice?

@joon-im commented on GitHub:

@MickLesk I'm currently testing the functionality but please be aware that I am a novice at this stuff. Apologies if I am approaching this in the incorrect way. Please give me some pointers if so.

Anyways, here is what I've found so far:

  1. After adding my username and password under the "soulseek:" section of the slskd.yml file, I was able to restart the container and actually connect to the network.
  2. Using the default settings for the "directories", I was able to connect to the Soulseek network, search for files, and download a file.
  3. However, after adding the mounts to the container's .conf file from the Proxmox shell and then editing the yml file to add my mounted SMB shares under "directories", the app would not load after a container reboot. I get a "cannot connect to the server" error in my browser.
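For anyone hitting the same wall: a bind mount is declared in the container's config file on the Proxmox host. A minimal sketch (the CTID and paths are illustrative, not taken from this thread):

```
# /etc/pve/lxc/<CTID>.conf on the Proxmox host
# mpN bind-mounts a host directory into the container at mp=
mp0: /mnt/pve/media/music,mp=/mnt/media/music
```

If the app then fails to start, checking that the container can actually read and write the mounted path is the first thing to rule out.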

@vhsdream commented on GitHub:

> This might be out of scope a bit, but it's also important that people are able to start using the apps that they install, with relative ease.

Can you share the part of the config with the directories config? In case there are credentials please remove.

Have you tested that you are able to read/write from the SMB share that you have bind-mounted in the LXC? If you can't, then it's a permissions issue that you'll need to fix before being able to download or upload files in slskd.

Also, can you share the output of journalctl -u slskd.service -f?

@MickLesk commented on GitHub:

I can only test the installation. I fixed some missing verbose flags.

Can anyone test the functionality? @joon-im | @Daviid-P | @Jokosch | @lklynet | @IReclaimer


@vhsdream
this should be added to the JSON, not to the CT script:
Finish configuring Soularr at `/opt/soularr/config.ini`. Then start with `systemctl start soularr.timer`
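For context, the timer-driven setup means soularr.service is started on a schedule rather than running continuously. A minimal sketch of what such a timer unit looks like (illustrative only; the unit actually shipped by the script may differ):

```
# /etc/systemd/system/soularr.timer (sketch, not the shipped unit)
[Unit]
Description=Run Soularr on a schedule

[Timer]
OnBootSec=5min          # first run, 5 minutes after boot
OnUnitActiveSec=15min   # then every 15 minutes
Unit=soularr.service

[Install]
WantedBy=timers.target
```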

@IReclaimer commented on GitHub:

Unfortunately that wasn't it. It went right back to 99% CPU after the restart.

@vhsdream commented on GitHub:

Oh this is embarrassing. 🤦🏻

So first, stop the service if it's running, then stop the timer so it doesn't fire while you are editing.

Then delete the while true; do line and the done line in /opt/soularr/run.sh, and then fix the indentation of the if statement.

Reboot the LXC.

This is what I get for trying to use sed instead of just quickly making my own script.

So the script should look like this:

#!/bin/bash

if ps aux | grep "[s]oularr.py" > /dev/null; then
        echo "Soularr is already running. Exiting..."
else
        python3 -u /opt/soularr/soularr.py
fi

Somehow my eyes just passed over the while loop.
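As an aside, the grep-based guard above works, but an flock(1)-based sketch avoids any race between two timer firings. The wrapper function and lock path here are illustrative, not part of the shipped script:

```shell
#!/bin/bash
# Run a command only if no other instance holds the lock file.
# run_once and the lock path are illustrative, not from the shipped script.
run_once() {
  local lock="$1"; shift
  (
    # Fail fast if another instance already holds the lock
    flock -n 9 || { echo "Soularr is already running. Exiting..."; exit 0; }
    "$@"
  ) 9>"$lock"
}

# Usage (script path from this thread):
# run_once /run/lock/soularr.lock python3 -u /opt/soularr/soularr.py
```

The lock is released automatically when the subshell exits, so there is no stale-pid cleanup to worry about.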

@vhsdream commented on GitHub:

Oops, yeah sorry about that, I didn't even bother to look at the username.

Actually I may have found the issue. Go to the run.sh script in /opt/soularr/ and remove the "$@" from the line that executes soularr, then restart the LXC.

Mind you I haven't tested it yet, but I think that is what is causing your soularr to go haywire.

@IReclaimer commented on GitHub:

> As far as helping people get started with configuring parts of applications that we can't do for them, the paths to the config files are in the slskd.json file, so when you go to the website to grab the one-liner command, the info is there as two notes. There are also links to documentation on the page.

I think this should be sufficient. As a general rule, it's probably not a good idea to provide much instruction beyond where things differ from the creator's docs; otherwise things change and you'll have to keep going back to update them.

It was @joon-im who had issues with the bind mounts. I got my NFS bind mounts set up fairly easily, so I don't think it's an issue with the script.

I hope this is mentioned in the official docs somewhere, but users should definitely add their music library bind mount as read-only!

> So now to Soularr. I will have to investigate further to see what might be happening; perhaps the run script is messed up. Thank you for the report 👍🏻

I'll keep looking at this from my end. Let me know if you'd like me to try something specific.
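On the read-only suggestion above: Proxmox mount points accept an ro flag, so the music share can be mounted read-only inside the container. A sketch (CTID and paths illustrative):

```
# /etc/pve/lxc/<CTID>.conf on the Proxmox host
mp0: /mnt/pve/media/music,mp=/mnt/media/music,ro=1
```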

@vhsdream commented on GitHub:

Thank you for getting me this info.

As far as helping people get started with configuring parts of applications that we can't do for them, the paths to the config files are in the slskd.json file, so when you go to the website to grab the oneliner command, the info is there as two notes. There are also links to documentation on the page.

[Image]

As you can surely appreciate, the space for information is limited, but since I haven't looked at every single LXC that the helper scripts serve, maybe there are examples of other LXCs that have more information on their page, at which point I'll feel comfortable fleshing out the info for slskd.

So it sounds like you fixed your issue with slskd being unable to write to your mounted file share; that's good. Bind mounts can be finicky, especially with unprivileged LXCs.

So now to soularr. I will have to investigate further to see what might be happening; perhaps the run script is messed up. Thank you for the report 👍🏻

@lklynet commented on GitHub:

Got it, it's scanning my library right now but everything seems to be working on my end.

Yeah, it's working great.

You should mention that you also have to update docker-compose.yml in the soularr folder to point at the slskd downloads folder, though. That held me up for a second.

@MickLesk commented on GitHub:

It's a core thing; it can be ignored for now.

@vhsdream commented on GitHub:

Soularr only logs to stdout by default, but you could configure it to log to a file as well; instructions are on their GitHub page.

Are you sure that Soularr is able to reach both Lidarr and Slskd, and that each instance can properly access the download folders?

When I was troubleshooting my own, I disabled the systemd timer, and then made sure I had something Wanted in Lidarr, then I just ran soularr manually to check if it was working.

[Image]

@lklynet commented on GitHub:

[slskd ASCII-art banner]

⚙️ Using Default Settings on node proxmox2
🖥️ Operating System: debian
🌟 Version: 12
📦 Container Type: Unprivileged
💾 Disk Size: 4 GB
🧠 CPU Cores: 1
🛠️ RAM Size: 512 MiB
🆔 Container ID: 134
🚀 Creating a slskd LXC using the above default settings

Posting to API...
✔️ Using local for Template Storage.
✔️ Using flash for Container Storage.
✔️ Updated LXC Template List
💡 Template debian-12-standard_12.7-1_amd64.tar.zst not found in storage or seems to be corrupted. Redownloading.
✔️ Template download successful.
✔️ LXC Template is ready to use.
✔️ LXC Container 134 was successfully created.
✔️ Started LXC Container
✔️ Set up Container OS
✔️ Network Connected: 192.168.1.133
✔️ IPv4 Internet Connected
✖️ IPv6 Internet Not Connected
✔️ DNS Resolved github.com to 140.82.113.4
✔️ Updated Container OS
✔️ Installed Dependencies
✔️ Setup slskd
✔️ Installed Soularr
✔️ Created Services
✔️ Customized Container
✔️ Cleaned
/dev/fd/62: line 167: exit_code: unbound variable
/dev/fd/63: line 76: SPINNER_PID: unbound variable
root@proxmox2:~#

I'm not sure what is up with the two bottom lines about unbound variables.
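Those messages are typical of a bash script running under set -u (nounset), which aborts on any reference to an unset variable. A minimal reproduction and the usual guard (the variable name here is illustrative, borrowed from the log above):

```shell
#!/bin/bash
# Under `set -u`, expanding an unset variable aborts the script with
# "<name>: unbound variable", exactly as in the install log.
# Demonstrate in a child shell so the failure is contained:
bash -uc 'echo "$exit_code"' 2>&1 | grep -o 'exit_code: unbound variable'

# The usual guard is a default expansion, which is safe even when unset:
bash -uc 'echo "exit code: ${exit_code:-0}"'
```

So the fix in the helper script is to either initialize those variables or expand them with `${var:-default}`.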

@IReclaimer commented on GitHub:

It happens to us all.

I made the change and that fixed the CPU issue.

However, it doesn't seem to be doing anything at all now. As far as I can tell (Soularr doesn't provide any logs, unfortunately), it's not looking at Lidarr, building a list of what it needs to find, and then going to slskd to find it. There are no logs in Lidarr to suggest that Soularr is talking to it, and none in slskd to suggest that Soularr is talking to slskd either.

@MickLesk commented on GitHub:

And after all that, is it ready for merge or does it need modification?

@github-actions[bot] commented on GitHub:

A PR has been created for slskd: community-scripts/ProxmoxVE#3516

@vhsdream commented on GitHub:

Uh, I don't think you need to do anything with that. This isn't using Docker in any way whatsoever.

@vhsdream commented on GitHub:

I just made some small changes to the update script to clean up some things (specify .service in the stop command for soularr, clearer naming for the files that are temporarily moved before the update, move the clean-up to the end); but I think it is ready 👍🏻

@lklynet commented on GitHub:

Well, I have no idea what I changed then, but it's working 100% for me after a fresh install.

@github-actions[bot] commented on GitHub:

Files deleted with PR #154

@michelroegl-brunner commented on GitHub:

Merged with #3516 in ProxmoxVE

Reference: community-scripts/ProxmoxVED#124