clone.pl - The art of cloning a database easily and quickly

In database patch set 11.2.0.2, Oracle quietly rolled out the ability to create a clone of your database (relatively) quickly using dNFS (Direct NFS).  See DocID 1210656.1 for details (https://support.oracle.com/epmos/faces/DocumentDisplay?id=1210656.1).

The idea is familiar to anyone who's done any kind of data virtualisation before.  Starting from a base point (the read-only image copy), all reads come from the base and any changes are recorded in delta files (or change logs).  When the database reads data, it first checks whether there's a newer version in the delta files and, if it finds nothing, reads from the source file.

I'm not aiming to write up a how-to guide here, as there are several excellent ones already (the one I keep referring to is Tim Hall's Oracle Base article: http://www.oracle-base.com/articles/11g/clonedb-11gr2.php).

The basics are (roughly sketched in the snippet after the list):

  1. Take an image copy of your source database (RMAN or copy the datafiles using OS utils while the DB is shut down)
  2. Put the image copy somewhere that the clone DB can read
  3. Arrange for an NFS mount point that the clone can write to, for storing the changes to data files
  4. Set up the appropriate directory structures (adump, dpdump, data file directory) on your clone system
  5. Set up some environment variables (export ....=....)
  6. Use clone.pl to create 2 scripts that will generate a new control file and rename the files appropriately
  7. Run the 2 scripts that came out of clone.pl
  8. Fix up the temporary tablespace (the scripts don't do this correctly)
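
For my own reference, here's roughly what that looks like strung together on the command line.  Treat it as a sketch rather than a recipe: every hostname, path, SID and size below is a placeholder, and the exported variable names and script names are simply the ones used in the MOS note and Tim's article.

# A rough sketch of steps 1-8; hosts, paths, SIDs and sizes are all placeholders.
# dNFS needs to be enabled in the clone's ORACLE_HOME before any of this.

# 1 & 2. Image copy of the source database, written straight to somewhere the clone host can read
rman target / <<'EOF'
BACKUP AS COPY DATABASE FORMAT '/nfs/master_copy/%U';
EOF

# 3. Writable NFS mount on the clone host for the delta files
mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 \
      nfsserver:/export/clonedb /u02/clonedb

# 4. Directory structures for the clone instance
mkdir -p /u01/app/oracle/admin/clone1/adump /u01/app/oracle/admin/clone1/dpdump /u02/clonedb/clone1

# 5. The environment variables clone.pl reads
export MASTER_COPY_DIR=/nfs/master_copy            # where the image copy lives
export CLONE_FILE_CREATE_DEST=/u02/clonedb/clone1  # NFS directory for the sparse delta files
export CLONEDB_NAME=clone1

# 6. Generate the two scripts (create controlfile + dbms_dnfs rename of the datafiles)
perl clone.pl /nfs/master_copy/initPROD.ora crtdb.sql dbren.sql

# 7. Run them against the new instance
export ORACLE_SID=clone1
sqlplus "/ as sysdba" @crtdb.sql
sqlplus "/ as sysdba" @dbren.sql

# 8. Fix up the temporary tablespace by hand (the generated scripts don't)
sqlplus "/ as sysdba" <<'EOF'
ALTER TABLESPACE temp ADD TEMPFILE '/u02/clonedb/clone1/temp01.dbf' SIZE 512M AUTOEXTEND ON;
EOF
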
And that's it.  At least in theory.  There are some finer points to look out for, but nothing major.  The main problems I've had have been with recovery after starting the clone from a hot backup (the datafiles are inconsistent, so they need recovery).
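
Roughly, that recovery looks like this (a sketch only; the SID is a placeholder, and the archived redo generated during the hot backup has to be readable from the clone host):

# Apply whatever archived redo SQL*Plus can find, then open the clone.
# If a needed log isn't where the re-created controlfile expects it,
# this turns into an interactive session of supplying paths and CANCEL.
export ORACLE_SID=clone1
sqlplus "/ as sysdba" <<'EOF'
SET AUTORECOVERY ON
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
EOF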

My clones are all ~210MB, and about 200MB of that is the new redo logs!  Obviously doing some inserts and updates does increase the size, but it's going to take some experimenting to find out how quickly that grows in our environment.
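
One easy way to keep an eye on that is to compare allocated space against apparent size for the clone's sparse files (GNU du on Linux; the path is just a stand-in for the clone's data file directory):

# The clone's datafiles are sparse, so allocated vs apparent size shows how
# much has actually been written since the clone was created.
du -sh /u02/clonedb/clone1                    # blocks actually allocated (deltas + redo)
du -sh --apparent-size /u02/clonedb/clone1    # logical size of the files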

It's taken me about a day to get to the point where I feel I'm comfortable enough with the process to be able to explain, automate and propose it as a solution to the company's requirements for creating dev/test instances.  The main things I need to work on now are finding out how much memory I need per clone, and what kind of change rate we can expect in each clone.

I'm in the middle of scripting this as much as possible, though I'm not quite sure if it's going to be possible to script the whole process from end to end without doing a cold backup on the source database.

Though having said that, I've just thought of a way... hmmmm.

(Note to self: combine "touch", "find /archivelogsdir -newer start.txt ! -newer end.txt -name \*.arc", and a for loop, a bit like this..... 

for i in $(find . -newer start.txt ! -newer end.txt -name \*.arc)
do 
  echo "catalog /mnt/archivelog/${i#./};"
done

That should do something...)
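
If that pans out, I imagine the wrapper ends up looking something like this (entirely hypothetical for now; the directories and filenames are placeholders):

# Mark the start and end of the hot image copy, then catalog every archived
# log generated in between so the clone can be recovered.
cd /archivelogsdir
touch start.txt
# ... take the hot image copy of the source database here ...
touch end.txt

for i in $(find . -newer start.txt ! -newer end.txt -name \*.arc)
do
  echo "catalog archivelog '/mnt/archivelog/${i#./}';"
done > catalog_logs.rman

# Run the generated commands against the mounted clone
rman target / cmdfile=catalog_logs.rman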
