Last weekend we faced one of the larger challenges of my time at my current client: migrate multiple database systems, across multiple sites, all to new hardware (with a new endian format) and a new database version (12.1.0.2)…and of course, try to do it with as small a disruption to the service as possible.

We are not a GoldenGate customer, so pursuing a zero-downtime migration was not considered, and to be honest, even if we had such facilities at our disposal, I think we would still have taken a small outage just to allow a “quiet time” to do verification checks etc.

I had done a similar exercise many years ago, with a client moving from 9i to 10g.  In those days, migration with low downtime was a massively complex affair.  I remember countless SQL and shell scripts, all designed to do as much work concurrently as possible (multiple physical networks, multiple concurrent jobs pulling tables over db links, concurrent index builds, etc etc etc), all designed to keep that outage time to a minimum.  High risk days indeed.

In contrast, using cross-platform transportable tablespaces, this migration ran very smoothly indeed.  Below is a rough guide to what is needed to perform such a migration – I didn’t want to list precisely our steps, because people will mistakenly treat that as the definitive prescription for how to do it.  This is more of an overview, which will get you started, but hopefully you’ll be doing your own thorough planning and testing when the time comes!

Firstly, you create a fresh new database on your target system.  It will have just the usual tablespaces (SYSTEM, SYSAUX, etc), and it will be the recipient of all of the non-default tablespaces that will be transported over from the source system.

On the source system, first you’ll need a datapump of all the object definitions, without unloading any data.

expdp ... dumpfile=metadata_full.dmp full=y content=metadata_only exclude=user,role,role_grant,profile exclude=table_statistics exclude=index_statistics

and for each datafile, you need an image copy using RMAN, for example:

backup as copy tag 'my_tablespace' datafile 50,60,61 format '/backup_dest/file%f.bkp'; 

And I do this for all of the user tablespaces, ie, not those that would be created as part of a fresh db install.  Note: Ensure compression is turned off for all RMAN parts of this process.
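
If it helps, the list of candidate tablespaces and their datafile numbers can be pulled from the dictionary with something like the query below (just a sketch – adjust the filter to whatever your fresh target database actually contains), and a quick “show all” in RMAN will confirm no compressed backupset defaults are lurking in your configuration:

select f.tablespace_name, f.file_id, f.file_name 
from   dba_data_files f, dba_tablespaces t 
where  f.tablespace_name = t.tablespace_name 
and    t.contents = 'PERMANENT' 
and    t.tablespace_name not in ('SYSTEM','SYSAUX') 
order  by f.tablespace_name, f.file_id; 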

Notice I excluded statistics in the datapump.  Whilst datapump can also unload the optimizer stats for you, we found impdp to be very slow at bringing those stats back into the target system, so we took control of that ourselves:

exec dbms_stats.create_stat_table(user,'STATS'); 
exec dbms_stats.export_database_stats('STATS','WHOLE_DB',user); 

Now – so far, our source system is still running.  There has been no outage or disruption to service.  Those datafile copy backups we took now need to be converted to the endian format of the target system, once again using RMAN, with a convert command for each file.

convert from platform 'AIX-Based Systems (64-bit)' datafile '/backup_dest/file50.bkp'  format '/target_dest/file50.dbf'; 
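
If you are unsure of the exact platform string to use in the convert command, it can be looked up rather than guessed – not part of our runbook as such, just a handy reference:

select platform_name from v$database; 

select platform_id, platform_name, endian_format 
from   v$transportable_platform 
order  by platform_id; 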

So now we have a converted set of files, but of course, they are a “fuzzy” copy of the data since the copies were taken with the source system still active.  This is where the nice part of cross-platform migration comes in.  I can regularly take incrementals from my source system, and convert/apply them to the freshly converted datafile copies.  Thus I can keep my target system up to date with my source system, without taking my source system offline.

So I can find the base checkpoint# across all files, and take an incremental of each tablespace since that point in time:

select MIN(checkpoint_change#) from v$datafile; 

backup incremental from scn <mincheckpoint> tablespace 'MY_TSPACE' format  '/backup_dest/%U'; 
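
Incidentally, the speed of these incrementals – in particular the final one taken during the outage – is greatly helped by block change tracking, so it is worth confirming it is enabled on the source before you start. A sketch (the tracking file location below is just an example – put it wherever suits your environment):

select status, filename from v$block_change_tracking; 

alter database enable block change tracking using file '/source_dest/change_tracking.f'; 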

The incrementals can then be converted/applied to the target system using a little bit of PL/SQL:

DECLARE 
  devtype   varchar2(512); 
  done      boolean; 
  outhandle varchar2(512); 
  outtag    varchar2(30); 
  failover  boolean;

BEGIN 
  -- allocate a device for the restore conversation
  devtype := sys.dbms_backup_restore.deviceAllocate; 
  sys.dbms_backup_restore.applySetDataFile(check_logical=>false, cleanup=>false);

  -- nominate the converted datafile copy that the incremental will be applied to
  sys.dbms_backup_restore.applyDatafileTo( 
     dfnumber       => 50, 
     toname         => '/target_dest/file50.dbf', 
     fuzziness_hint => 0, 
     max_corrupt    => 0, 
     islevel0       => 0, 
     recid          => 0, 
     stamp          => 0);

  -- nominate the incremental backup piece to read from
  sys.dbms_backup_restore.restoreSetPiece( 
     handle   => '/backup_dest/incr12345', 
     tag      => null, 
     fromdisk => true, 
     recid    => 0, 
     stamp    => 0);

  -- perform the apply
  sys.dbms_backup_restore.restoreBackupPiece( 
    done      => done, 
    params    => null, 
    outhandle => outhandle, 
    outtag    => outtag, 
    failover  => failover);

  sys.dbms_backup_restore.restoreCancel(TRUE); 
  sys.dbms_backup_restore.deviceDeallocate;

END; 
/
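
You plug in the relevant dfnumber, toname and backup piece handle for each datafile/incremental you apply. If it helps, the dfnumber/toname pairs can be generated from the source dictionary – a rough sketch, assuming the file%f naming convention used for the image copies above:

select file_id, 
       '/target_dest/file'||file_id||'.dbf' as toname 
from   dba_data_files 
where  tablespace_name = 'MY_TSPACE' 
order  by file_id; 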

 

This process can be repeated indefinitely until you are ready to cut over to the new system.  When that time comes, the process is similar.

Your source system now must go read-only (and hence this most likely is the commencement of your outage)

begin 
for i in ( 
select tablespace_name from dba_tablespaces 
where tablespace_name not in ('SYSTEM','SYSAUX') and contents = 'PERMANENT' 
) 
loop 
  execute immediate 'alter tablespace '||i.tablespace_name||' read only'; 
end loop; 
end; 
/
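
A quick sanity check at this point (not part of the original steps, but cheap insurance) is to confirm that everything you expect to transport is indeed now read-only:

select tablespace_name, status 
from   dba_tablespaces 
where  contents = 'PERMANENT' 
and    tablespace_name not in ('SYSTEM','SYSAUX'); 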

You now take a datapump of the tablespaces you are going to transport over.  We are omitting the statistics, because we already have them.

expdp ... exclude=table_statistics exclude=index_statistics dumpfile=transport.dmp transport_full_check=no transport_tablespaces=tspace1,tspace2,... 
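
We ran with transport_full_check=no, but if you want some extra comfort that the tablespace set is self-contained before exporting, DBMS_TTS can check it for you – a sketch, substituting your own tablespace list:

exec dbms_tts.transport_set_check('TSPACE1,TSPACE2', true); 

select * from transport_set_violations; 

Any rows returned point at objects that would not transport cleanly.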

And you take a final incremental backup, and apply it to the target system.  Now all is in readiness.

Create all the required users by datapump-ing across the network

impdp ... network_link=source_db_link full=y include=user,role,role_grant,profile 

and then import the transportable tablespace definitions

impdp dumpfile=transport.dmp transport_datafiles='/target_dest/file10.dbf','/target_dest/file11.dbf','/target_dest/file12.dbf',... 

And then set your tablespaces to read-write

begin 
for i in ( 
select tablespace_name from dba_tablespaces 
where tablespace_name not in ('SYSTEM','SYSAUX') and contents = 'PERMANENT' and status = 'READ ONLY' 
) 
loop 
  execute immediate 'alter tablespace '||i.tablespace_name||' read write'; 
end loop; 
end; 
/

And bring in all of the other bits and pieces (views, PL/SQL, etc etc) using the metadata dump you took earlier

impdp ... dumpfile=metadata_full.dmp full=y exclude=table_statistics exclude=index_statistics

and voila…you will have a migrated database with minimal downtime.  In our case, the transportable datapump export/import, plus the last incremental (thanks to block change tracking), took around 30 mins, and the import of all of our PL/SQL etc took around 30 mins as well.  Not too bad for multiple multi-terabyte databases.

We then upgraded our stats table, used dbms_stats to import the statistics, and started verification testing.
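
For reference, the import side of the statistics handling was along these lines – a sketch mirroring the earlier export calls, since the STATS table itself arrives with the transported tablespaces:

exec dbms_stats.upgrade_stat_table(user,'STATS'); 
exec dbms_stats.import_database_stats('STATS','WHOLE_DB',user); 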

That covers the broad steps you’ll be taking.  There’s plenty of information on MOS and on OTN about the nitty gritty, but all in all, we were very happy with the outcome.

16 responses to “Cross-platform database migration”

  1. Good to hear everything is working.
    My question is: how have you handled sequences?
    GG

    1. Connor McDonald

      Good point – I should have elaborated on that. Sequence *definitions* will come in the datapump, but the next sequence value to be used is incorrect (since it’s basically just the DDL to recreate the sequence). So you need to do your own recreate script, which grabs the last_number+cache value as a starting point from dba_sequences.

      Cheers,
      Connor
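
      (For anyone interested, a rough sketch of the idea – generate the DDL from dba_sequences on the source, padding the start value by the cache size; adjust the schema filter to suit:)

      select 'create sequence '||sequence_owner||'.'||sequence_name||
             ' start with '||(last_number + cache_size)||
             ' increment by '||increment_by||
             decode(cache_size, 0, ' nocache', ' cache '||cache_size)||';' as ddl
      from   dba_sequences
      where  sequence_owner not in ('SYS','SYSTEM');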

  2. The incremental part looks the same as Doc ID 1389592.1.
    Any particular reason you did not use the rman-xttconvert_2.0 script and the note as a whole?
    Regards
    GG

    1. Connor McDonald

      It was our starting point (and I don’t want to understate the value of that), but its handling of concurrency wasn’t particularly flash, plus it had a few problems here and there when it came to datapump generation and the way it handled incrementals. It’s also the same reason for my blog post being a “guide” rather than a “prescription” – every customer/environment will have its particular nuances, and to blindly adopt any instructional list is a recipe for problems 🙂

      Cheers,
      Connor

  3. (..) from platform ‘AIX-Based Systems (64-bit)’… please, what was the rationale behind such a migration? Is it some wider industry trend, just cost cutting, a technology refresh?

  4. Transportable tablespaces are so useful for this type of work I don’t even want to remember what it was like ages ago when they were not available!
    One of the less run-of-the-mill uses I have for them is to preserve tablespaces in UAT and dev refreshes from production backups. Rather than data pump all required extra tablespaces from the targets into flat files and then reimport after the refresh, I prefer to go read-only and just transport out the developers’ tablespaces from the test and dev dbs, then rman restore from a prod backup, followed by adding the tablespaces back in after rman is done! That way I can very easily preserve developers’ work in progress and still refresh the data from a simple rman backup. Works a treat and saves me large amounts of time every month.

  5. Hello Connor,
    we’re facing a similar task, 10 databases with about 160 TB, cross-platform, so I’m interested in how many TB you moved overall.
    How big have your “multiple multi-terabyte databases” been together?
    Regards Dirk

    1. Hi Dirk,

      The size should be of minimal consequence, because the ‘outage’ will be defined by

      a- length of time of last incremental (which with block change tracking should be small)
      b- length of time to export/import the transportable tablespace definitions
      c- length of time to export/import the non-segment definitions (plsql etc etc)

      So (b) and (c) are where the challenges are – eg, if you had eBiz or similar, and you’re compiling tens of thousands of PL/SQL packages, then you’d be thinking about alternative strategies (eg a change freeze and pre-compiling beforehand etc). Similarly, a large number of partitions means longer import times, just for the dictionary maintenance and stats import. (If it’s any use – we totalled around 15,000 table partitions and 40,000 index partitions across the board.)

      But to answer your question, I haven’t looked too closely, but I’m guessing between 10 and 20 TB in total.

  6. Hi Connor,

    this “rough guide” would have been correct from 10g to 11g, wouldn’t it?

    Thanks

    1. Connor McDonald

      Hi Sergio,

      I *think* that one of the requirements is that you have to be 11.2.0.4 or higher in order to do cross-platform incrementals, which of course is the critical element in order to not suffer downtime proportional to database size.

      You’d probably need to check with Support to get a definitive answer.

      Cheers,
      Connor

  7. Hello,
    is the same procedure applicable from Windows 2008 11.2.0.4
    to Oracle on OEL6 12.1.0.2?
    Actually I am facing some issues in the conversion of the incremental backups:
    I can’t find the RMAN command that converts incremental backups,
    and the above procedure returned an error on ” failover=>failover);”

    Thanks

  8. Jorge Zorrilla

    Hello Connor,
    I don’t quite understand the incremental apply procedure.
    Do you have to run this procedure for every datafile on the target database?
    What happens if your incremental backup has more than one file?

    Regards.

    1. The incremental will be applied for each tablespace. But as I said in the post – “I didn’t want to list precisely our steps”.

      Start with MOS note 1389592.1 to build an initial workflow, then customize it to suit your needs.

  9. Thanks for the document.
    You are explaining a manual XTTS (+ incremental backup) migration method here, right?

    Anyway, if we implement this XTTS (+ incremental backup) method using xttdriver.pl as documented in “11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)”, then there is a requirement to have the source tablespaces in READ WRITE mode. (The set of tablespaces being moved must all be online, and contain no offline data files. Tablespaces must be “READ WRITE”.)

    My questions are:

    1) Why is there such a requirement for executing this XTTS method using xttdriver.pl?
    2) Does the manual method you explained here require the source tablespaces to be in READ WRITE mode as well?

    Thanks and Regards

  10. Thanks. Inspirational post – made me think how we can do this with 7 API calls for the file image as well as incrementals.
