I love Backups!
This one takes a daily dump of every database, rolling the backups over every 10 days.
Put the following into a file at /etc/cron.daily/postgres-backup and make it executable (chmod +x), or cron will skip it.
#!/bin/bash
# Daily PostgreSQL backup: dump every database, keep 10 days locally,
# and mirror the lot to an FTP host.

DIR=/backup/pgsql
FTPUSR=yourusername
FTPPASS=yourftppass
FTPHOST=yourftphost

# All databases, excluding the templates and psql's header/footer lines
LIST=$(su - postgres -c "/usr/bin/psql -lt" | /usr/bin/awk '{ print $1 }' | /bin/grep -vE '^-|:|^List|^Name|template[01]')

DATE=$(/bin/date '+%Y%m%d')
TENDAY=$(/bin/date -d '10 days ago' '+%Y%m%d')

# Today's backup directory, owned by postgres so pg_dump can write to it
/bin/mkdir -p "$DIR/$DATE/"
/bin/chown postgres:postgres "$DIR/$DATE/"

# Dump and compress each database
for d in $LIST
do
    su - postgres -c "/usr/bin/pg_dump $d | gzip -c > $DIR/$DATE/$d.sql.gz"
done

# Drop the backup taken 10 days ago
rm -rf "$DIR/$TENDAY/"

# Mirror the whole backup directory to the FTP host over FTPS
/usr/bin/lftp -u "$FTPUSR,$FTPPASS" "$FTPHOST" -e "set ftp:ssl-protect-data true; mirror --reverse --delete $DIR/ /; exit"
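For completeness, restoring one of these dumps is just the reverse of the dump line above; a minimal sketch, using a hypothetical database called mydb dumped on 20240101 (create the database first if it no longer exists):
# example date and database name - substitute your own
su - postgres -c "/usr/bin/createdb mydb"
gunzip -c /backup/pgsql/20240101/mydb.sql.gz | su - postgres -c "/usr/bin/psql mydb"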
I’m working on servers all day, every day, mostly over SSH. It can be annoying when you switch tabs, come back to one later, and find the session has timed out and is dead.
A nice quick shortcut to kill that session is to type '~.' on a new line, so it won't hurt to press Enter first. You will not see it echoed on screen, but it will kill the inactive/dead SSH session nicely.
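For reference, '~.' is one of OpenSSH's client-side escape sequences; they are only recognised at the start of a line, hence pressing Enter first. A few of the common ones:
~.        terminate the connection
~?        list the supported escape sequences
~ Ctrl-Z  suspend the ssh client (bring it back with fg)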
Before you log in again, if you are running Linux on your local machine, you can add the following to /etc/ssh/ssh_config
ServerAliveInterval 5
This will send a keep-alive packet to the server every 5 seconds, which is very handy. You can make the interval longer if you want to use less bandwidth and your servers are happy with a longer keep-alive.
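If you would rather not touch the system-wide client config, the same option can go in your per-user ~/.ssh/config and be scoped to particular hosts; a minimal sketch, using a hypothetical host called myserver:
Host myserver
    ServerAliveInterval 5
    ServerAliveCountMax 3
ServerAliveCountMax is the number of unanswered keep-alives the client will tolerate before dropping the connection (3 is the default).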
Once you have logged into your server you can edit /etc/ssh/sshd_config and add in the following
TCPKeepAlive yes
ClientAliveInterval 60
This will keep things ticking over from the server's point of view. Once you have added that in, restart the SSH daemon with /etc/init.d/ssh restart. This will not kill your current SSH session, though it pays to log out and back in again for it to take effect.
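Before restarting, it is worth letting sshd check the file for typos, since a broken sshd_config can stop the daemon from starting again and lock you out. Its test mode does this without touching running sessions (assuming the usual /usr/sbin/sshd location):
/usr/sbin/sshd -t && /etc/init.d/ssh restart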
Do remember security: you do not want to leave your local machine unattended and unlocked with a live SSH session logged in as root on a production server!
Today I had a person with an interesting problem. They were getting a 'disk is full' message despite having plenty of free space. Luckily for them, my first thought was 'inodes?'
I logged in and checked their inode usage:
root@askdev:# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 525312 524844 468 100% /
varrun 65579 27 65552 1% /var/run
varlock 65579 2 65577 1% /var/lock
udev 65579 2696 62883 5% /dev
devshm 65579 1 65578 1% /dev/shm
This shows that /dev/xvda1, the root filesystem, has used 100% of its inodes, even though df -h would still show free space.
I used the following one-liner to find where most of the inodes were being used:
root@askdev:/# for i in `ls -1A`; do echo "`find $i | sort -u | wc -l` $i"; done | sort -rn | head -5
468388 var
49844 usr
18741 proc
5187 sys
5026 root
I tracked it down to /var/lib/php5/ and all the session files in there.
I then used find to count how many of them were older than 10 days:
root@askdev:/var/lib/php5# find ./ -type f -mtime +10 | wc -l
111041
High inode usage is usually caused by a massive number of small files. Session files like these are normally short-lived and removed once they are no longer in use, so either there was a bug in the code that failed to clean them up, or it was simply a very high-traffic website.
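If it turns out to be PHP's own session garbage collection not keeping up, the knobs for it live in php.ini; a sketch with the usual stock defaults (note that on Debian/Ubuntu the packaged php5 cron job normally does this cleanup instead, and gc_probability is often set to 0 there):
session.gc_maxlifetime = 1440   ; seconds a session can sit idle before it is eligible for removal
session.gc_probability = 1      ; gc runs on roughly gc_probability out of every...
session.gc_divisor = 100        ; ...gc_divisor requests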
You can delete the files older than 10 days if you want with the following commands (using find's -print0 with xargs -0 so that awkward filenames are handled safely):
cd /dir/of/inodes
find ./ -type f -mtime +10 -print0 | xargs -0 rm
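If you are on GNU find, the -delete action does the same job without piping anything through xargs; drop -delete off the end first to dry-run it and see what would be removed:
find ./ -type f -mtime +10 -delete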