We recently encountered a challenge with one of our internal apps: how do you reduce the size of a MySQL database on AWS RDS? Increasing the size of the db is easy enough: you go to the AWS console and modify the instance. But if you try to shrink storage, the AWS console won't allow it; storage can only go up. Because we had changed our application to use less storage, we wanted to reduce the size of our database to avoid the extra cost. The easiest approach would be a mysqldump, using that dump to recreate the db, but our db was big enough that this took too much time, and we couldn't afford the longer downtime. After some time spent on dba.stackexchange and stackoverflow, and a couple of attempts with different approaches, we managed to copy our db to a smaller instance.
In this post, I will guide you briefly through the plan we’ve created and will show some tricks we used to accomplish this task.
- Create a read replica of source database – we will use this replica to copy data to a new server.
- Create new RDS instance with desired parameters.
- Create an EC2 instance that you will use to run the mysqldump command from. If you already have an instance you can use, that's great, but make sure it can connect to both the source and destination databases; if not, change your database security permissions to allow this machine access.
Stop replication on the replica with this command:
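On RDS you don't have the SUPER privilege, so a plain STOP SLAVE won't work; Amazon provides a stored procedure for this instead. Run it on the read replica:

```sql
CALL mysql.rds_stop_replication;
```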
This ensures that the data we're copying across will not change, and it gives us the binlog position from which we will later need to synchronise the new server with the old one.
Create an empty database with the same name on the destination server, and recreate all the users and permissions you are going to need after switching to the new server (this can be done afterwards, but having everything ready now may save you time during the switch).
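For example, something along these lines on the destination server (the database name, user name, and password here are placeholders for your own):

```sql
CREATE DATABASE database_name;
CREATE USER 'app_user'@'%' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON database_name.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
```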
Run this command from your EC2 instance to dump data from replica and copy it to a new server at the same time:
mysqldump --compress --verbose -h <replica_endpoint> -u root -p --port 3306 --single-transaction -q database_name | pv -pterabc -N inbound | dd obs=16384K | dd obs=16384K | dd obs=16384K | dd obs=16384K | pv -pterabc -N outbound | mysql --compress -h <destination_endpoint> -u root -p --port 3306 -D database_name
This is essentially dumping the whole database and piping it to the mysql command, which takes everything and recreates it in the destination database. We could have used a bare pipe, but for larger databases that takes hours. With a few extra options, we managed to speed the process up from roughly 16 hours to under 1 hour for our db (around 40GB of data was copied):
- The --compress option on both the mysqldump and mysql commands, which ensures the traffic in each direction is compressed instead of sent as clear text.
- To avoid the copy process blocking while mysqldump reads from the source and mysql writes to the destination, we used the dd utility to create a buffer between them, which ensures that neither process waits too long for incoming data and always has something to work with.
- To get some insight into what's happening during execution, we used the pv utility (you can install it with apt-get on Ubuntu), which shows us what's coming in and going out.
On the source replica, run this command:
SHOW SLAVE STATUS;
You'll use the Relay_Master_Log_File and Exec_Master_Log_Pos values in the next step.
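The output contains many columns; the two you need look something like this (the file name and position will differ on your system):

```
Relay_Master_Log_File: mysql-bin-changelog.000031
  Exec_Master_Log_Pos: 107
```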
On the destination server, run the command below.
For the host details, use your source master db. In our case, we had to use our master instance's IP address to make it work, because for some reason it didn't work when we passed the AWS-format URL. For mysql_binary_log_file_name and mysql_binary_log_file_location, use the values from the previous step.
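On RDS, pointing an instance at an external master and starting replication is done with stored procedures; a sketch, where the host, credentials, and binlog values are placeholders you replace with your own:

```sql
CALL mysql.rds_set_external_master (
  '10.0.1.25',                   -- master host (we had to use the IP here)
  3306,                          -- master port
  'repl_user',                   -- replication user (placeholder)
  'repl_password',               -- replication password (placeholder)
  'mysql-bin-changelog.000031',  -- Relay_Master_Log_File from the previous step
  107,                           -- Exec_Master_Log_Pos from the previous step
  0                              -- ssl_encryption off
);
CALL mysql.rds_start_replication;
```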
You can use this command:
SHOW SLAVE STATUS;
to check whether the instance is up to date with the master.
Whenever you're ready, shut down your application and run the command below on your destination server to make it a new master, then switch your environment to use the new server. If you're happy with the new database, you can now safely decommission the source server and the replica you used for the migration.
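On RDS, detaching the destination instance from its external master is again done with a stored procedure; run this on the destination server once replication has caught up:

```sql
CALL mysql.rds_reset_external_master;
```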
CloudRanger provides the world’s easiest to use backup and recovery solution for Amazon Web Services. We make it easy to manage your backups & servers running on AWS cloud. Using CloudRanger, you can easily manage backups and retention of your RDS, EC2 and Redshift resources with snapshots and AMIs.
With an easy to use interface, managing your routine AWS tasks is simple and effective. CloudRanger saves your team time and hassle, making the day-to-day management of your AWS resources easier and more automated. CloudRanger can also help you save on your EC2 costs by starting/stopping non-production instances automatically when you need them.
Try CloudRanger for Free
Whether you are a small business, a medium business, or an enterprise user of Amazon Web Services, CloudRanger has the backup and recovery features that will meet your specific needs. Our easy-to-use AWS cloud management system is simple to install, which means your business can be up and running with CloudRanger in no time. And best of all, you can even try CloudRanger for free for 14 days. So why not experience all the cost-saving features and time-saving benefits CloudRanger has to offer?