Synchronise and clone existing server onto another?
Is what I've put in the title possible?

I want to clone and synchronise one of my Linux servers with another, so that if one ever fails, the other is already primed with all the latest data.
The solution here is rsync.
Set up SSH key authentication between the two servers so it won't ask for passwords.

If you want it synced on file change, use lsyncd. It's a daemon that fires a command whenever a file changes in a directory. I used this to achieve continuous syncing between a US and an EU VPS.
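For reference, a minimal lsyncd setup might look like the following; the paths and hostname are assumptions for illustration, and `default.rsyncssh` is lsyncd's built-in rsync-over-SSH mode:

```shell
# Write a minimal lsyncd config (Lua syntax) and start the daemon.
# All paths and hostnames below are hypothetical examples.
cat > /etc/lsyncd.conf.lua <<'EOF'
settings { logfile = "/var/log/lsyncd.log" }
sync {
    default.rsyncssh,          -- rsync over SSH on every change
    source    = "/var/www/",
    host      = "backup.example.com",
    targetdir = "/var/www/",
}
EOF
lsyncd /etc/lsyncd.conf.lua
```

This assumes key auth to the backup host is already in place, since lsyncd runs unattended.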
@Translucent do you have any updates regarding this? Please let us know.
I think the easiest and most powerful syncing app for Linux is the Dropbox client.
It offers 2GB of free cloud space, and up to 18GB if you refer friends.
They also have promotions if you buy a new phone (Samsung +50GB, HTC +25GB, etc.).
It has some very useful features, and you can always access your files from the web too.

cd ~ && wget -O - "" | tar xzf - && ~/.dropbox-dist/dropboxd

You need a desktop PC to install it on your VPS because you have to validate your account in a web browser.

cd ~ && wget "" -O dropbox && chmod +x dropbox && mv dropbox /usr/local/bin

Next, start Dropbox with the "dropbox start" command.

You can create tasks and automate the process with crontab ("crontab -e").
(2016-11-15, 4:28:19 am)Hidden Refuge Wrote:  @Translucent do you have any updates regarding this? Please let us know.

I haven't been able to test it out. Could you direct me to a tutorial that could perhaps prove useful to me?

The Dropbox method will not work unfortunately because it would require a lot more space than 18GB.
I would use rsync + cron to back up files in such situations. cron will run rsync (or a bash script) at the specified times. For example: every 2 hours, dump the database to a file, compress it, and upload it to the backup server. I think this is the way to go, unless there is an easier way of doing it.
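A sketch of what that crontab entry could look like (the database name, paths, and host are assumptions; note that `%` must be escaped as `\%` inside crontab lines):

```shell
# Hypothetical crontab entry: every 2 hours, dump, compress, and ship the database.
0 */2 * * * mysqldump mydb | gzip > /root/backups/mydb-$(date +\%F-\%H).sql.gz && rsync -az /root/backups/ user@backup.example.com:/backups/
```

Moving the pipeline into a bash script and calling that from cron keeps the crontab line readable.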

I cannot provide a guide unless I know what exactly you want. I've read through your post several times and I'm torn between two interpretations.

1. You want to build a failover system that automatically switches to the next alive server out of a pool of available servers.
2. You want to sync a whole server to a second server.

Option 2 is actually quite pointless because it could simply break the destination server and would waste a lot of resources. If you still want it, you can use the search function and look for the rsync guide by @rudra. I was never fond of syncing up whole servers.

For option 1 you need to do some research and find a solution for your own HA failover setup. I can't point you in a particular direction, though, because I've never bothered with such setups myself. However, a group I'm in has developed such a tool, and we currently use it successfully on our own servers for failover in case of downtime.

GitHub Repo:
The latest commit of picored seems to be from a year ago. Another important point: it works by monitoring signals from each server in the pool and then running a script one or more times to change the DNS entries. It does not clone or incrementally update a backup system.

I read somewhere that a program called Heartbeat, together with another tool for incremental backups, might do a nice job here if what you want is a fail-safe.

Note that backups, or incremental backups, work much better when applied only to the relevant application files.

If you want a fast way to set up a clone system, then maybe rsync. A fresh setup is always better unless we are talking about very compatible or identical systems here.

Good luck.
Many thanks to Freevps, Chris (cw1998), The Guy (ID 4810), optimus, GHP and the other staff members.
Oh, I forgot to mention that in addition to picored we use rsync to sync only the website files (instead of the whole server), and we use a Galera MySQL cluster for the database clustering and HA failover.
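For context, Galera is driven almost entirely by configuration; a minimal sketch of the relevant settings (the cluster name and IPs are made-up examples, not our actual setup) could be:

```shell
# Hypothetical Galera config fragment for node 10.0.0.1 in a 2-node cluster.
cat > /etc/mysql/conf.d/galera.cnf <<'EOF'
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=example_cluster
wsrep_cluster_address=gcomm://10.0.0.1,10.0.0.2
wsrep_node_address=10.0.0.1
wsrep_sst_method=rsync
EOF
```

Note that `wsrep_sst_method=rsync` means Galera itself uses rsync for full state transfers when a node (re)joins the cluster.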
I was looking for a mechanism for cloning entire filesystems. One option was the OpenVZ dump (in the case of an OpenVZ container), but it requires the provider's intervention, and not all providers are willing to provide a VZ dump. I then learnt about rsync, but I didn't find it convincing. Apart from that, I didn't find any way to sync an entire container as such.
