<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>rclone — LowEndSpirit</title>
        <link>https://staging.lowendspirit.com/index.php?p=/</link>
        <pubDate>Fri, 10 Apr 2026 03:54:21 +0000</pubDate>
        <language>en</language>
        <description>rclone — LowEndSpirit</description>
    <atom:link href="https://staging.lowendspirit.com/index.php?p=/discussions/tagged/rclone/feed.rss" rel="self" type="application/rss+xml"/>
    <item>
        <title>Plex, rclone mounts and bandwidth usage</title>
        <link>https://staging.lowendspirit.com/index.php?p=/discussion/2185/plex-rclone-mounts-and-bandwidth-usage</link>
        <pubDate>Tue, 01 Dec 2020 15:28:42 +0000</pubDate>
        <category>Technical</category>
        <dc:creator>flips</dc:creator>
        <guid isPermaLink="false">2185@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>I've been having some fun lately, setting up Plex on a new VPS, mounting my music share using <code>rclone mount</code>.</p>

<p>I've been using the same setup previously, on a slower node, but with only outbound bandwidth measured. There I mounted my music from pCloud.</p>

<p>On this new node I figured I would try out/use <a href="https://staging.lowendspirit.com/index.php?p=/profile/koofr" rel="nofollow">@koofr</a>. So I mounted the drive, using</p>

<pre><code>/usr/bin/rclone mount --uid 1000 --gid 1000 --syslog --stats 1m \
  -v --allow-other --read-only  myKoofr:/music/ /srv/music/
</code></pre>

<p>I installed plexmediaserver and added 149 GB of music to the library, from that Koofr-rclone mount.<br />
That triggered Plex to download ~900 GB of data ...<br />
(Actually, I jumped in at ~600 GB and tried figuring out what was going on; by then it was halfway through indexing/scanning the library.) <a href="https://staging.lowendspirit.com/index.php?p=/profile/AnthonySmith" rel="nofollow">@AnthonySmith</a> suggested I could try adding <code>--ignore-checksum</code>, so I did. This might have helped, as it "only" used 300 GB for scanning the last 150 GB.</p>

<p>I also tried pCloud, and I tried adding some more options for caching, using something like this:</p>

<pre><code>/usr/bin/rclone mount --uid 1000 --gid 1000 --syslog --stats 1m \
  --buffer-size 1G --no-modtime --dir-cache-time 90m \
  --cache-dir /tmp/rclone-cache --vfs-cache-mode full \
  --vfs-cache-max-size 1G \
  -v --allow-other --read-only --ignore-checksum \
  my-pcloud:/music/ /srv/music/
</code></pre>

<p>Still, a rescan of my library used <strong>150 GB</strong> of traffic. Maybe <code>--no-modtime</code> prevents it from seeing ctime/mtime, so it <em>has</em> to re-download every file to check ... D'oh!  <img src="https://staging.lowendspirit.com/plugins/emojiextender/emoji/twitter/open_mouth.png" title=":o" alt=":o" height="18" />  <img src="https://staging.lowendspirit.com/plugins/emojiextender/emoji/twitter/confounded.png" title=":s" alt=":s" height="18" />  <img src="https://staging.lowendspirit.com/plugins/emojiextender/emoji/twitter/tongue.png" title=":p" alt=":p" height="18" /></p>

<p>Trying again with a <em>Scan Library Files</em> after removing that option, and with only 3 albums (re)added, it seems that still triggered a full re-download ... (Still waiting for it to finish, but it looks like it's downloading a lot.) I would have thought Plex stored the timestamps etc. in its db, so it wouldn't have to re-download everything every time. <img src="https://staging.lowendspirit.com/plugins/emojiextender/emoji/twitter/astonished.png" title=":astonished:" alt=":astonished:" height="18" /></p>
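<p>What I plan to try next (untested, and the cache size/path are just examples — flag names per the rclone docs) is a VFS cache that outlives the scan: with <code>--vfs-cache-max-age</code> long enough and <code>--vfs-cache-max-size</code> bigger than the whole library, a rescan should mostly be served from local disk instead of re-downloading:</p>

<pre><code>/usr/bin/rclone mount --uid 1000 --gid 1000 --syslog --stats 1m \
  --cache-dir /var/cache/rclone --vfs-cache-mode full \
  --vfs-cache-max-size 160G --vfs-cache-max-age 168h \
  --dir-cache-time 72h \
  -v --allow-other --read-only --ignore-checksum \
  my-pcloud:/music/ /srv/music/
</code></pre>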

<p>I've seen recommendations to use sshfs instead, but this makes me curious ... I could of course try WebDAV/davfs2 (the cloud providers don't offer ssh, I think). But users like <a href="https://staging.lowendspirit.com/index.php?p=/profile/Mason" rel="nofollow">@Mason</a> and <a href="https://staging.lowendspirit.com/index.php?p=/profile/Wolveix" rel="nofollow">@Wolveix</a> do run rclone (albeit with GSuite, IIRC), so it should be feasible.<br />
If a nightly rescan triggers a re-download of every file, that's really not very nice ... Not a vital service for me, of course, but it triggers my curiosity. <img src="https://staging.lowendspirit.com/plugins/emojiextender/emoji/twitter/smile.png" title=":)" alt=":)" height="18" /></p>

<p>Maybe I'm missing something obvious, or maybe you have some insight to share on this? <img src="https://staging.lowendspirit.com/plugins/emojiextender/emoji/twitter/smile.png" title=":)" alt=":)" height="18" /></p>
]]>
        </description>
    </item>
    <item>
        <title>What does your backup setup look like?</title>
        <link>https://staging.lowendspirit.com/index.php?p=/discussion/325/what-does-your-backup-setup-look-like</link>
        <pubDate>Sun, 15 Dec 2019 19:01:22 +0000</pubDate>
        <category>Technical</category>
        <dc:creator>ulayer</dc:creator>
        <guid isPermaLink="false">325@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Curious to see how everyone does their backups, this is how we do ours <img src="https://staging.lowendspirit.com/plugins/emojiextender/emoji/twitter/smile.png" title=":smile:" alt=":smile:" height="18" />;</p>

<p>Currently we automate all of our backups using an Ansible role that manages our <a rel="nofollow" href="https://www.borgbackup.org/" title="borgbackup">borgbackup</a> server along with all of the borg clients. It adds/modifies scripts and crons on the clients (hypervisors) based on the variables we set, and makes sure each client can SSH into an unprivileged user that can only access a specific directory, as specified in <code>.ssh/authorized_keys</code>, so it can push all of its data into a borg repo. Before the borg portion of the script runs, though, another tool on Proxmox (vzdump) backs up all of the VMs daily to the local disk and compresses them with pigz (multi-threaded gzip). Borg then sends all of the specified directories &amp; files to our remote borg server for safekeeping.</p>
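<p>For illustration only (the paths, hostnames and key are made up), the restricted entry in the backup user's <code>.ssh/authorized_keys</code> looks roughly like this — <code>borg serve --restrict-to-path</code> pins each hypervisor to its own directory — and the client side is a plain <code>borg create</code> after vzdump finishes:</p>

<pre><code># on the borg server, in ~backup/.ssh/authorized_keys (one line per client):
command="borg serve --restrict-to-path /srv/borg/hv01",restrict ssh-ed25519 AAAA...example root@hv01

# on the client (hypervisor), after vzdump has finished:
borg create --compression lz4 --stats \
  backup@borg.example.com:/srv/borg/hv01::{hostname}-{now} \
  /var/lib/vz/dump /etc
</code></pre>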

<p>I just added support for rclone, so that after all of the clients finish their backups for the day, a weekly <code>rclone sync</code> runs on the borg server to our bucket on <a rel="nofollow" href="https://wasabi.com/" title="Wasabi">Wasabi</a> object storage. I picked Wasabi because they don't charge for egress (outgoing bandwidth), so in the event of a major disaster where we've lost our borg server, we could retrieve our borg repos from Wasabi.</p>
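<p>In case it helps anyone: the Wasabi remote is just rclone's S3 backend. Something like this (bucket, keys and paths are examples; check the endpoint against Wasabi's docs for your region):</p>

<pre><code># ~/.config/rclone/rclone.conf
[wasabi]
type = s3
provider = Wasabi
access_key_id = EXAMPLEKEY
secret_access_key = EXAMPLESECRET
endpoint = s3.wasabisys.com

# weekly, on the borg server:
rclone sync /srv/borg wasabi:my-borg-bucket --transfers 8 --fast-list -v
</code></pre>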
]]>
        </description>
    </item>
   </channel>
</rss>
