First, here is the script that I am running into issues with for my custom game server.
#!/bin/sh
# back up the sqlite databases
for db in map players; do
    sqlite3 $db.sqlite ".timeout 1000" ".backup $db-backup_`date '+%m-%d-%Y'`.sqlite"
done
# move the backed-up sqlite files
mv *-backup_*.sqlite /home/user/backups
# back up the file-based contents
tar czf /home/user/backups/world_`date '+%m-%d-%Y'`.tar.gz --exclude='*.sqlite' *
The script is located where it should be, and both manual runs and the cron job produce a few issues…
If the game server is active (someone exploring the map), the backups get totally screwed up, since for some reason the .backup command doesn’t cope with this well.
The backed-up databases come out at just a few KB, when one of them is currently as much as 3xxMB and the other is still xxKB in size!
The tar.gz also backs itself up (the archive ends up containing its own file from ~/backups instead of just the files in the script’s current directory).
I also get “0 byte” sqlite files in a parent directory.
The only “error” I was able to get out of running it manually, while auto-walking in the server, was:
“tar: map.sqlite-journal: file changed as we read it”
Even though, as shown above, I “excluded” it and the other DBs from the tar step using wildcards, since the sqlite3 .backup commands were supposed to handle backing up the databases instead, as commented. Honestly I am not sure WHY this happens once the game server is active enough to throw it out of control.
Can anyone help me troubleshoot my backup script so it stops being this messed up?
Also, to my knowledge, a backup is aborted if another application requests a write lock on the database, and then a retry is automatically performed (which takes time). I’m on my phone currently so it’s a bit hard to read through the docs, but it should all be in there. 99% of all SQLite issues are related to locks.
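If it is a lock issue, the quickest thing to try is a much larger busy timeout for the .backup; something like this (30 seconds here is an arbitrary value, just far more than the 1 second your script currently allows, and the output filename is just an example):
# .timeout sets the SQLite CLI's busy handler in milliseconds; with only
# 1000 ms the .backup gives up quickly whenever the game server is holding a write lock.
sqlite3 map.sqlite ".timeout 30000" ".backup map-backup-test.sqlite"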
I see, so basically what may be happening here is that…
The backup script is started and tries to run.
It fails because of the lock request you mentioned, and while it is trying to “retry”, tar takes over; tar then obviously fails too, and so the script ends.
If that is the case, then how can we make sure we still get a “still good” backup of the associated non-database files once .backup does eventually manage to take its backup?
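Would something like this be the right idea: only run the mv/tar steps once both .backup calls have returned successfully? (Rough sketch, not tested, and it assumes sqlite3 exits non-zero when a .backup given on the command line fails.)
#!/bin/sh
# Bail out if either .backup fails, so tar only ever runs against a good set of database copies.
for db in map players; do
    sqlite3 $db.sqlite ".timeout 1000" ".backup $db-backup_`date '+%m-%d-%Y'`.sqlite" || exit 1
done
# Only reached when both .backup calls succeeded.
mv *-backup_*.sqlite /home/user/backups
tar czf /home/user/backups/world_`date '+%m-%d-%Y'`.tar.gz --exclude='*.sqlite' *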
What you’re saying shouldn’t be possible since the sqlite process ended. Retries should be done in the same process if I’m not mistaken. But sqlite has always been a mess… so yeah I wouldn’t say it’s impossible.
I’m more thinking the timeout causes the backup to abort which in turn leads to other strange things. Another possible explanation is that you’re trying to backup the same database with the same script more than once at the same time. I’m on my phone so my reply is a bit short, but try and run the plain backup command in parallel to your current backup task (use a different filename and preferably a different time).
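For example, something like this against the map database (the output filename is just a throwaway placeholder):
# Run by hand while the normal backup script / cron job is going, so the two
# backups hit the database at the same time but can't collide on the output file.
sqlite3 map.sqlite ".timeout 1000" ".backup map-parallel-test.sqlite"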
That may be the issue I’ve been experiencing, but I’m not 100% sure, as I didn’t test it while the game server was ACTIVELY in use. A simple wait $BACK_PID appeared to make it work, at least while the game server is idling.
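Roughly this, for reference (from memory, so not my exact lines):
for db in map players; do
    # run the .backup in the background and remember its PID...
    sqlite3 $db.sqlite ".timeout 1000" ".backup $db-backup_`date '+%m-%d-%Y'`.sqlite" &
    BACK_PID=$!
    # ...then block until it has finished before moving on to the mv/tar steps
    wait $BACK_PID
done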
The only possible issue from here would be…
and if that’s the case then the script could actually break. But I really need to go to bed, so I don’t have time to auto-forward in game to find new lands and force the game to be “under active load” SQLite-wise.
I really do not get why it’s failing as a cron job even though I gave it way more time than needed per cycle (an hour, when the script only needs literal seconds to run, and there were no active players either that would “delay” it).
Instead the script just keeps doing the same weirdness it has been doing from the start. WHAT IS GOING ON HERE??
Everything works as intended, with no output to the command line after waiting for the script to finish (about 5 seconds or fewer, I noticed). The DBs are actually their respective sizes, so is the tar.gz, and everything that should be backed up gets backed up. Plus there aren’t any empty db files in the /home/user directory either.
Yep. Here’s what I think is happening: it’s trying to put a copy of the backup it’s currently creating (backups/world_11-11-25-2019.tar.gz) into the backup, and is killing itself as a result. Try saving the backup anywhere else on your fs besides that directory (e.g. /home/user/world_`date '+%I-%m-%d-%Y'`.tar.gz would be fine), then see what happens. Also, it’s best to put the absolute path instead of “*” to make sure tar is pointing at the right directory.
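Something along these lines, roughly (the world path here is just a placeholder for wherever your world files actually live):
# The archive is written outside the tree being tarred, and the source directory
# is named explicitly instead of relying on cron's working directory.
tar czf /home/user/world_`date '+%I-%m-%d-%Y'`.tar.gz --exclude='*.sqlite' -C /home/user/gameserver/world .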