I typically use a MySQL GUI like Sequel Pro when I do database dumps and imports working from my Mac. GUI apps often give checkboxes that allow you to choose whether to include the structure/content/drop table command for each table in an export.
When using mysqldump on the command line, though, it's not as simple. You can either do a full dump and exclude a few tables entirely (using --ignore-table), or dump the structure of just one set of tables (using the -d option). But you can't do both in one go with mysqldump.
However, you can use the power of redirection to do both commands at once to result in one dump file with all your tables, with structure only for the tables you specify:
mysqldump -u username -p -h hostname database \
--ignore-table=database.cache_form \
--ignore-table=database.cache_entity_node \
--ignore-table=database.cache_views_data \
--ignore-table=database.sessions \
--ignore-table=database.watchdog \
> ~/Desktop/database-dump.sql \
&& mysqldump -u username -p -h hostname -d database \
cache_form \
cache_entity_node \
cache_views_data \
sessions \
watchdog \
>> ~/Desktop/database-dump.sql
The \ at the end of each line (note the space before it) tells your terminal session to continue the command on the next line (for the sake of clarity, I don't like printing long commands on one super-long line!), and I'll explain what's happening line by line:
First, we use the mysqldump command, passing in the username (username), hostname (hostname), and database name (database), along with the -p argument, which means you'll be prompted for your password.
Next, we give this first command a list of tables to ignore completely—no data or structure for these tables will be included in the dump.
We direct mysqldump's output to a file named database-dump.sql
in our Desktop folder.
Then, we run a second mysqldump command (&& tells bash to run it only after the first command completes successfully), and pass in the same info, except this time we add -d, which tells mysqldump to exclude table data for the tables listed after the database name (database).
We pass in the list of tables (just plain old text strings with spaces between each one), and redirect the output with two greater than signs (>>
)—which appends rather than replaces the contents of a file—into the same database dump we started filling up in our first mysqldump command.
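The difference between > and >> is easy to see with a quick experiment (demo.txt is just a throwaway file name for illustration):

```shell
echo "first command's output" > demo.txt    # > creates or truncates the file
echo "second command's output" >> demo.txt  # >> appends to the existing file
cat demo.txt                                # shows both lines, in order
```

This is exactly why the second mysqldump's structure-only output ends up tacked onto the end of the first dump instead of wiping it out.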
If you're going to use this command a lot, you should probably put it in a shell script and pass in arguments ($1, $2, $3, etc.) for parameters like the username, host, database, and output file. You could also build the list of tables a little more cleanly, but that's outside the scope of this blog post.
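As a starting point, such a script could look something like this minimal sketch (the dump_cmds function name and its argument order are my own invention, not from the post). To stay side-effect-free it prints the two mysqldump commands rather than running them, so you can review them and then pipe the output to sh to execute:

```shell
#!/bin/sh
# Hypothetical helper: prints the two mysqldump commands for a
# structure-only dump of the listed tables.
# Usage: dump_cmds USERNAME HOSTNAME DATABASE OUTFILE TABLE [TABLE...]
dump_cmds() {
  user="$1"; host="$2"; db="$3"; out="$4"; shift 4
  # Build one --ignore-table flag per structure-only table.
  ignore=""
  for t in "$@"; do
    ignore="$ignore --ignore-table=$db.$t"
  done
  # Full dump, minus the structure-only tables...
  echo "mysqldump -u $user -p -h $host $db$ignore > $out"
  # ...then structure only for those tables, appended to the same file.
  echo "mysqldump -u $user -p -h $host -d $db $* >> $out"
}

dump_cmds username hostname database dump.sql cache_form sessions watchdog
```

To actually run the dump: `dump_cmds username hostname database dump.sql cache_form sessions watchdog | sh`.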
Note that, if you're using Drupal, you should use drush's skip-tables and structure-tables configuration to do the same thing with drush sql commands, but much more easily.
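For example, with a reasonably recent drush, something like the following should produce a comparable dump in one step (check `drush help sql-dump` for the exact option names available in your drush version):

```shell
# One command: full dump, but structure only for the listed tables.
drush sql-dump \
  --structure-tables-list=cache_form,cache_entity_node,cache_views_data,sessions,watchdog \
  --result-file=~/Desktop/database-dump.sql
```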
Some helpful hints leading to this successful command were gleaned from this SO post.
Comments
Thanks for the tip. That's very useful!
For big databases, one may consider piping the output to gzip. In that case, it would go like this:
First dump (with --ignore-table):
mysqldump etc.. | gzip -f > database-dump.sql.gz
Second dump (with -d):
mysqldump etc.. | gzip -f >> database-dump.sql.gz
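Appending a second gzip stream with >> works because gunzip decompresses concatenated gzip members as one continuous stream, which you can verify with a quick test (demo.sql.gz is a throwaway file name):

```shell
echo "first dump" | gzip > demo.sql.gz     # first gzip member
echo "second dump" | gzip >> demo.sql.gz   # appends a second gzip member
gunzip -c demo.sql.gz                      # prints both lines, in order
```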