Unfortunately, the current Wyng storage model is not very convenient for file-based cloud syncing. Wyng stores every chunk as an individual file, which results in millions of small files. For example, in wyng issue 179 alvinstarr reported 224 million chunk files and 937526 subdirectories in the backup directory.
If I back up my Qubes via Wyng with a chunk size of 64 KB (to improve deduplication efficiency), I normally end up with 5-6+ million files in the Wyng backup folder (including hardlinked ones). In practice that means dedicating a separate partition to the backup folder, with a simpler filesystem (ext4 or xfs). When I kept the backup folder on a larger btrfs partition (on a single 7200rpm HDD), the time to traverse the whole filesystem (e.g., for a search) grew out of all proportion. When I synced the Wyng backup folder via Syncthing, its on-disk database swelled to 4.6 GB and I observed its memory usage spike up to 14 GB of RAM. That's not practical for me, unfortunately - not all my NAS instances have that much RAM to spare. Syncing the Wyng backup as a partition isn't practical either: blocksync'ing a 150 GB partition to an image stored on Keybase takes 2+ days, and it isn't safe - an interrupted transfer may leave the filesystem on the remote image replica in an inconsistent state.
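For a sense of scale, the file count follows directly from the chunk size. A rough back-of-the-envelope (the 350 GB of unique post-dedup data is an assumed figure for illustration, not a measurement):

```python
# Rough estimate of how many chunk files a one-file-per-chunk layout produces.
# unique_data_bytes is an assumed figure, not something I measured.
unique_data_bytes = 350 * 1024**3   # ~350 GB of unique data after dedup (assumption)
chunk_size = 64 * 1024              # 64 KB chunks
print(unique_data_bytes // chunk_size)  # -> 5734400, i.e. ~5.7 million chunk files
```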
I suppose it would be desirable to get a more scalable storage model in Wyng - merging chunks into pack files of configurable size (e.g., 16-100 MB), the way Restic does. (Another, more theoretical option could be storing the chunks not in the filesystem but in a database of some appropriate sort - databases handle millions of small records better than filesystems do.) It would also be good to have a dynamically adjusting directory depth to keep the number of files per directory reasonable.
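To make the pack-file idea concrete, here is a minimal sketch (Python, not actual Wyng code) of appending chunks to pack files of a configurable target size, with a small index mapping each chunk hash to (pack, offset, length) so individual chunks remain addressable. The class name, pack naming scheme, and JSON index are assumptions for illustration only:

```python
import hashlib
import json
from pathlib import Path

PACK_TARGET_SIZE = 32 * 1024 * 1024  # e.g. 32 MB packs; would be configurable


class PackWriter:
    """Hypothetical pack writer: many small chunks -> few large pack files."""

    def __init__(self, repo_dir: Path):
        self.repo_dir = repo_dir
        self.repo_dir.mkdir(parents=True, exist_ok=True)
        self.index = {}  # chunk sha256 -> (pack name, offset, length)
        self._open_new_pack()

    def _open_new_pack(self):
        # Name packs sequentially based on how many already exist.
        self.pack_name = f"pack-{len(list(self.repo_dir.glob('pack-*.bin')))}.bin"
        self.pack_file = open(self.repo_dir / self.pack_name, "wb")
        self.offset = 0

    def add_chunk(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.index:        # dedup: chunk already stored somewhere
            return digest
        self.pack_file.write(data)
        self.index[digest] = (self.pack_name, self.offset, len(data))
        self.offset += len(data)
        if self.offset >= PACK_TARGET_SIZE:  # pack is full, roll over to a new one
            self.pack_file.close()
            self._open_new_pack()
        return digest

    def close(self):
        self.pack_file.close()
        with open(self.repo_dir / "index.json", "w") as f:
            json.dump(self.index, f)


# Example usage:
# w = PackWriter(Path("/tmp/packs"))
# w.add_chunk(b"some chunk data")
# w.close()
```

Reading a chunk back then costs one open/seek into a large pack file instead of one small-file open per chunk, and a sync tool only has to track thousands of pack files rather than millions of chunk files.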
For the time being, the solution proposed by @solene remains more practical for me, though.