This is some great investigation, and a great solution.
Let me also add a couple of extra bits of information, just in case someone else encounters this issue. There were two separate issues that Bommi experienced, and we were able to replicate both of them:
1. Bash/Linux error for exceeding the maximum command length in Bash
This manifests in an error message reporting "Argument list too long":
./backupuploader.sh: line 10: /usr/bin/curl: Argument list too long
The reason this happened was that at that point the BASE64-encoded file was so large (i.e. the BASE64 string was so long) that it exceeded the maximum command length in Bash/Linux. The limit differs between systems, but in my testing I was able to push a BASE64-encoded file whose source was up to 64KB before encoding (a 128KB source file would no longer work).
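If you want to see what the limit is on your particular system, getconf should tell you (it reports the kernel limit on the combined size of arguments and environment passed to a new process, in bytes):

# Show the maximum combined size of command-line arguments and environment (bytes)
getconf ARG_MAX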
The solution is to do what Bommi described above: build the data part of the request in an external file and have curl read it from there, instead of passing it on the command line. You can build that file in whichever way works for you, e.g. with AWK as Bommi showcased above; a sketch of the idea follows below.
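For illustration, here is a minimal sketch of that approach. The endpoint path, token and JSON field names below are just placeholder assumptions, so adjust them to whatever your script actually sends:

#!/bin/bash
# Placeholders - replace with your own Unimus address, API token, device ID and backup file
API_URL="http://unimus.example.com/api/v2/devices/123/backups"
TOKEN="your-api-token"
BACKUP_FILE="/path/to/backup.bin"

# Build the JSON request body in a temporary file instead of on the command line,
# so the (potentially huge) BASE64 string never becomes a command-line argument
PAYLOAD=$(mktemp)
{
    printf '{"backup":"'
    base64 -w 0 "$BACKUP_FILE"   # -w 0 disables line wrapping (GNU coreutils)
    printf '","type":"BINARY"}'
} > "$PAYLOAD"

# The "@" prefix tells curl to read the request body from the file
curl -X POST \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     --data "@$PAYLOAD" \
     "$API_URL"

rm -f "$PAYLOAD"

Redirecting the base64 output straight into the file (rather than capturing it in a variable and handing it to an external command as an argument) is what keeps you clear of the argument length limit.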
2. MySQL error for exceeding the maximum packet size
This manifests with the following HTTP error response:
{"timestamp":1670442742149,"code":422,"error":"Unprocessable Entity","message":"Unable to process entity"}
While HTTP error 422 indicates an issue with the user-supplied data (in this case the encoded backup), the issue turned out to be rooted in MySQL, which restricts the maximum packet size. This also varies from system to system and from version to version, but in my case the global variable max_allowed_packet was set to 16MB, which naturally isn't enough when talking about very large files (e.g. large binary backups, which are practically impossible to compress). Here's some more information on the variable:
https://dev.mysql.com/doc/refman/8.0/en ... wed_packet
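To check or raise the limit, something along these lines should work from the shell (this is generic MySQL, not Unimus-specific, and the 64MB value is just an example; for a permanent change you'd set it in the [mysqld] section of your MySQL config file instead):

# Check the current value (shown in bytes)
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"

# Raise it on the running server to 64MB (reverts on MySQL restart, and only
# applies to connections opened after the change, so restart Unimus afterwards)
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 64*1024*1024;"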
Lastly, let me touch on HSQL (the built-in file-based DB) and PostgreSQL with regard to this particular limit. PostgreSQL doesn't implement such a limit, and while HSQL doesn't either, there is another important consideration when it comes to HSQL.
While HSQL doesn't pose such limits on file size, the way data is written to the drive means it is first held in Unimus' RAM, i.e. the Java heap space assigned to Unimus. So if your Unimus instance is already close to its default (or your own configured) memory allocation and you try to upload a large file, Unimus may hit the heap space limit. Personally I'd say that unless we're talking about files of a couple of hundred megabytes, you should be fine.
If that were to happen, the script would get back this HTTP 500 error:
{"timestamp":1670511996608,"code":500,"error":"Internal Server Error","message":"Unknown error, please contact us"}
In this case you'd need to increase the memory allocation for Unimus. We do have a Wiki article for that:
https://wiki.unimus.net/display/UNPUB/C ... mory+usage
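As a rough illustration only (the exact place to set this differs per deployment, so please follow the Wiki article above for your platform), the change boils down to raising the JVM maximum heap size for the Unimus process, for example:

# Hypothetical example: start Unimus with a 4 GB maximum heap instead of the default.
# Where this flag actually goes (service file, wrapper config, etc.) depends on how
# Unimus is installed - see the Wiki article above.
java -Xmx4g -jar /path/to/unimus.jar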
In either case, feel free to reach out to us if you experience any of these issues.