Hi!
I am currently trying to implement the API backup as described here:
https://blog.unimus.net/automating-mikr ... -to-guide/
For smaller files everything is OK, but I need to send binary files which are quite big.
In my tests the file is 55 megabytes, but in my customers' environments those files can be up to 500 megabytes.
The curl command ends with this error message:
/usr/bin/curl: Argument list too long
Do you have any hint for me?
Best Regards
bommi
[Solved] Uploading big files using API
I created a simplified backup script like this:
#!/bin/bash
encodedbackup=$(base64 -w 0 pcs-binary.conf)
testbackup=$(base64 -w 0 test-backup.conf)
curl -s -H "Accept: application/json" -H "Content-type: application/json" -H "Authorization: Bearer MYSECRETAPITOKEN" -d '{"backup":"'"$testbackup"'","type":"BINARY"}' "http://127.0.0.1:8085/api/v2/devices/3/backups"
printf "\n"
curl -s -H "Accept: application/json" -H "Content-type: application/json" -H "Authorization: Bearer MYSECRETAPITOKEN" -d '{"backup":"'"$encodedbackup"'","type":"BINARY"}' "http://127.0.0.1:8085/api/v2/devices/3/backups"
These are the file types and sizes of both files:
~# file test-backup.conf
test-backup.conf: ASCII text
~# file pcs-binary.conf
pcs-binary.conf: ASCII text
~# du -sh test-backup.conf
80K test-backup.conf
~# du -sh pcs-binary.conf
55M pcs-binary.conf
When I execute my script, I get this output:
~# ./test.sh
{"data":{"success":"true"}}
./test.sh: Zeile 6: /usr/bin/curl: Die Argumentliste ist zu lang
The first line is the response for the 80K test file, and the second line is the response for the bigger file.
In English, the second line means: "/usr/bin/curl: Argument list too long"
Best Regards
bommi
I found a solution for my issue!
Please find my quick and dirty backup script to verify this procedure:
#!/bin/bash
# Convert the original backup file into a base64 file:
base64 -w 0 backup-file > base64-backup-file
# Surround the base64 data with the JSON structure:
awk 'FNR==1 {print "{\"backup\":\""$0"\",\"type\":\"BINARY\"}"}' base64-backup-file > json-backup-file
# Upload it to Unimus by sending the file:
curl -H "Accept: application/json" -H "Content-type: application/json" -H "Authorization: Bearer MYSECRETAPITOKEN" -d @json-backup-file "http://127.0.0.1:8085/api/v2/devices/3/backups"
As my backups are several hundred megabytes in size, I also needed to change the configuration of my MySQL database.
There is a variable called "max_allowed_packet", which I changed to allow files up to 1GB (a sketch of the change follows below).
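For illustration, a minimal sketch of such a change could look like the following; the config file path is an assumption on my side (a Debian/Ubuntu-style layout), so adjust it to your own MySQL/MariaDB installation:
# Assumed example: raise max_allowed_packet persistently via a drop-in config file.
# The path below is only an example; place the setting wherever your [mysqld] config lives.
cat <<'EOF' | sudo tee /etc/mysql/conf.d/unimus.cnf
[mysqld]
max_allowed_packet=1G
EOF
# Restart the database so the new limit takes effect (service name may differ):
sudo systemctl restart mysql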
This is some great investigation, and a great solution.
Let me also add a couple of extra bits of information, just in case someone else encounters this issue. There were two separate issues which Bommi experienced, and we were able to replicate both:
1. Bash/Linux error for exceeding the command-line length limit
This manifests as an error message reporting "Argument list too long":
./backupuploader.sh: line 10: /usr/bin/curl: Argument list too long
The reason this happened was that the BASE64-encoded file was so large (i.e. the BASE64 string was so long) that it exceeded the maximum command-line length. This may differ between systems, but in my testing I was able to push a BASE64-encoded file whose source file was up to 64KB before encoding (a 128KB source file would no longer work). You can check your system's limit as shown below.
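As a quick sanity check (on Linux, assuming a POSIX getconf is available), you can inspect the limit on the combined size of command-line arguments and environment that the kernel will accept; note that Linux additionally caps the length of any single argument (MAX_ARG_STRLEN, typically 128KB on systems with 4KB pages), which lines up with the boundary observed above:
# Print the maximum combined size (in bytes) of arguments + environment for a new process:
getconf ARG_MAX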
The solution is to do what Bommi described above: build the data part of the request in an external file and let curl read it from there (curl's -d @file syntax). You can build that file in whichever way works for you, e.g. using AWK as Bommi showed above, or as in the sketch that follows.
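For reference, here is a minimal alternative sketch for building the JSON file without awk and without ever placing the BASE64 string into a shell variable or command argument; the file names are just examples:
# Stream the base64 output straight into the JSON body (no shell variables involved):
{
  printf '{"backup":"'
  base64 -w 0 backup-file | tr -d '\n'
  printf '","type":"BINARY"}'
} > json-backup-file
# Then upload it exactly as above, letting curl read the body from the file:
curl -H "Accept: application/json" -H "Content-type: application/json" -H "Authorization: Bearer MYSECRETAPITOKEN" -d @json-backup-file "http://127.0.0.1:8085/api/v2/devices/3/backups"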
2. MySQL error for exceeding the maximum packet size
This manifests as the following HTTP error response:
{"timestamp":1670442742149,"code":422,"error":"Unprocessable Entity","message":"Unable to process entity"}
While HTTP error 422 indicates an issue with the user data (in this case the encoded backup), the issue turned out to be rooted in MySQL, which restricts the maximum packet size. This also varies from system to system and from version to version, but in my case the global variable max_allowed_packet was set to 16MB, which naturally isn't enough for very large files (e.g. large binary backups, which are practically impossible to compress). Here's some more information on the variable, and a quick check follows the link:
https://dev.mysql.com/doc/refman/8.0/en ... wed_packet
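As a quick way to check the currently effective limit (assuming you can run the mysql client locally; user and credentials are placeholders):
# Show the current limit in bytes:
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
# Optionally raise it at runtime (1073741824 bytes = 1GB); new connections pick this up,
# but it does not survive a server restart, so also set it in the config file as Bommi described:
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 1073741824;"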
Lastly, let me touch on HSQL (the built-in file-based DB) and PostgreSQL with regard to this particular limit. PostgreSQL doesn't implement such a limit, and while HSQL doesn't either, there is another important consideration when it comes to HSQL.
While HSQL doesn't impose such a limit on the file size, the way the data is written to the drive means that it is first held in Unimus' RAM, i.e. in its Java-assigned memory allocation (heap space). So if your Unimus instance is already close to its default or custom memory allocation and you try to upload a large file, Unimus may hit the heap space limit (personally, I'd say that unless we're talking about files of a couple of hundred megabytes, you should be fine).
If that were to happen, the script would receive this HTTP error 500 response:
{"timestamp":1670511996608,"code":500,"error":"Internal Server Error","message":"Unknown error, please contact us"}
In this case you'd need to increase the memory allocation for Unimus. We have a Wiki article for that (a rough sketch also follows the link below):
https://wiki.unimus.net/display/UNPUB/C ... mory+usage
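As a rough illustration only (the file name and variable below are placeholders, not the actual Unimus configuration; the Wiki article above is authoritative for your installation), increasing the memory allocation generally means raising the JVM's -Xmx value that Unimus is started with:
# Hypothetical example: allow the Unimus JVM up to 4GB of heap via its Java options.
# Where these options live depends on your install; check the Wiki article above.
JAVA_OPTS="-Xms512M -Xmx4096M"
# After changing the memory settings, restart the Unimus service (service name may differ):
sudo systemctl restart unimus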
In either case, feel free to reach out to us if you experience any of these issues.
Hi both,
Thanks for the above information, it was super helpful!
I just wanted to add one extra point that I came across with the Nginx config. My file wasn't very large, only about 7MB, but Nginx threw a 413 Request Entity Too Large error.
I added the line 'client_max_body_size 20M;' to the server section of /etc/nginx/conf.d/<unimus>.conf and it accepted the upload after that (see the excerpt below).
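For context, the relevant part of such a reverse-proxy config would look roughly like this; the server_name and proxy details are placeholders, and only the client_max_body_size line is the actual change described above:
# /etc/nginx/conf.d/<unimus>.conf (excerpt with placeholder values)
server {
    listen 80;
    server_name unimus.example.com;

    # Allow request bodies (uploaded backups) up to 20MB:
    client_max_body_size 20M;

    location / {
        proxy_pass http://127.0.0.1:8085;
    }
}
# Test and reload Nginx after editing the config:
sudo nginx -t && sudo systemctl reload nginx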
Thanks,
Lee