Quick tip–identity columns

Let's say I've been reading about schema separation, so I am going to have a schema that owns all of my objects, which I'll call APP_OWNER and which will have no connection privilege, plus a separate schema called APP_ADMIN which will take care of all of the DDL tasks.

Here’s my setup:


SQL> create user app_owner identified by app_owner;
 
User created.
 
SQL> grant
  2    create session,
  3    create any table,
  4    create any sequence
  5  to app_admin identified by app_admin;
 
Grant succeeded.

I have granted APP_ADMIN the ability to create tables in any schema, and to create sequences in any schema. I did the latter because I know that if I use the IDENTITY clause for a column, then behind the scenes a sequence will be created to populate those ascending values. So it looks like I am ready to go and create my objects. Let's create that first table.


SQL> conn app_admin/app_admin
Connected.
 
SQL> create table app_owner.t(pk integer generated always as identity);
create table app_owner.t(pk integer generated always as identity)
*
ERROR at line 1:
ORA-01031: insufficient privileges

Hmmm… that is not what we were expecting. It turns out that to create an identity column in another schema you need more than just CREATE ANY SEQUENCE. You also need SELECT ANY SEQUENCE.


SQL> conn / as sysdba
Connected. 
SQL> grant
  2    create session,
  3    create any table,
  4    create any sequence,
  5    select any sequence
  6  to app_admin identified by app_admin;
 
Grant succeeded.
 
SQL> conn app_admin/app_admin
Connected.
 
SQL> create table app_owner.t(pk integer generated always as identity);
 
Table created.

And there we go!
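
As an aside, if you want to see the system-generated sequence that now backs the identity column, a query along these lines (run as a suitably privileged user) should show it. This is just a sketch; DBA_TAB_IDENTITY_COLS is the dictionary view that tracks identity columns in 12c.


select owner, table_name, column_name, generation_type, sequence_name
from   dba_tab_identity_cols
where  owner = 'APP_OWNER'
and    table_name = 'T';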

Footnote: If you've never seen the syntax "grant <privs> to <user> identified by <pass>", it is a quick shortcut to both create the user account and assign privileges in a single command.
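
For reference, the long-hand equivalent of that shortcut is simply a CREATE USER followed by a GRANT, along these lines:


create user app_admin identified by app_admin;

grant
  create session,
  create any table,
  create any sequence,
  select any sequence
to app_admin;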

Transportable Tablespace–part 2

I did a little demo of sharing a tablespace between two databases a few days back – you can see the details here, or by just scrolling down if you're on the home page.

To avoid clouding the demonstration I omitted something in the details, but I’ll share that now, because it could be critical depending on how you currently use transportable tablespaces.

Let me do the most basic of examples now, transporting a tablespace from one database to another:

First, we make our tablespace read only, and Datapump export the metadata:


SQL> alter tablespace DEMO read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

C:\>expdp transport_tablespaces=DEMO directory=TEMP dumpfile=tts.dmp

Export: Release 12.1.0.2.0 - Production on Tue Apr 18 14:16:06 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":  /******** AS SYSDBA transport_tablespaces=DEMO directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  C:\TEMP\TTS.DMP
******************************************************************************
Datafiles required for transportable tablespace DEMO:
  C:\ORACLE\ORADATA\NP12\DEMO.DBF
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Tue Apr 18 14:17:03 2017 elapsed 0 00:00:50

Then, I copy the datafile(s) to the target location and Datapump import the metadata.


C:\>copy C:\oracle\oradata\np12\DEMO.DBF C:\oracle\oradata\db122\DEMO.DBF
        1 file(s) copied.

C:\>impdp transport_datafiles=C:\oracle\oradata\db122\DEMO.DBF directory=TEMP dumpfile=tts.dmp

Import: Release 12.2.0.1.0 - Production on Tue Apr 18 14:17:27 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01":  /******** AS SYSDBA transport_datafiles=C:\oracle\oradata\db122\DEMO.DBF directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Tue Apr 18 14:17:46 2017 elapsed 0 00:00:15

And voila, there is my tablespace in the target database…


C:\>sql122

SQL*Plus: Release 12.2.0.1.0 Production on Tue Apr 18 14:19:08 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 18 2017 14:14:19 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production



SQL> select * from dba_tablespaces
  2  where tablespace_name = 'DEMO'
  3  @pr
==============================
TABLESPACE_NAME               : DEMO
BLOCK_SIZE                    : 8192
INITIAL_EXTENT                : 1048576
NEXT_EXTENT                   : 1048576
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
MAX_SIZE                      : 2147483645
PCT_INCREASE                  : 0
MIN_EXTLEN                    : 1048576
STATUS                        : READ ONLY
CONTENTS                      : PERMANENT
LOGGING                       : LOGGING
FORCE_LOGGING                 : NO
EXTENT_MANAGEMENT             : LOCAL
ALLOCATION_TYPE               : UNIFORM
PLUGGED_IN                    : YES
SEGMENT_SPACE_MANAGEMENT      : AUTO
DEF_TAB_COMPRESSION           : DISABLED
RETENTION                     : NOT APPLY
BIGFILE                       : NO
PREDICATE_EVALUATION          : HOST
ENCRYPTED                     : NO
COMPRESS_FOR                  :
DEF_INMEMORY                  : DISABLED
DEF_INMEMORY_PRIORITY         :
DEF_INMEMORY_DISTRIBUTE       :
DEF_INMEMORY_COMPRESSION      :
DEF_INMEMORY_DUPLICATE        :
SHARED                        : SHARED
DEF_INDEX_COMPRESSION         : DISABLED
INDEX_COMPRESS_FOR            :
DEF_CELLMEMORY                :
DEF_INMEMORY_SERVICE          :
DEF_INMEMORY_SERVICE_NAME     :
LOST_WRITE_PROTECT            : OFF
CHUNK_TABLESPACE              : N

PL/SQL procedure successfully completed.

“Gee thanks, Connor” I can hear you muttering. A demonstration of the flippin’ obvious!

But there is one thing that is not apparent from the export or import logs. Let’s take a squizz at the database alert log for the target, that is, the database we imported the tablespace into.


DW00 started with pid=69, OS id=632, wid=1, job SYS.SYS_IMPORT_TRANSPORTABLE_01
2017-04-18T14:17:34.208631+08:00
Plug in tablespace DEMO with datafile
  'C:\oracle\oradata\db122\DEMO.DBF'
2017-04-18T14:17:46.199645+08:00
ALTER TABLESPACE "DEMO" READ WRITE
Completed: ALTER TABLESPACE "DEMO" READ WRITE
2017-04-18T14:17:46.665512+08:00
ALTER TABLESPACE "DEMO" READ ONLY
Completed: ALTER TABLESPACE "DEMO" READ ONLY

That is a change in 12c. Whilst our imported tablespace ends up read only, as it always has, during the import process there was a small window where the tablespace was READ WRITE. This is needed to make some metadata corrections to the tablespace on the way in.

So if you do intend to share tablespaces between databases, that is, share a single copy of the file, make sure you take some precautions. On my Windows laptop, standard Windows file locking prohibited me from causing any damage to my source datafile, but on other platforms you might want to set those files to read only at the OS level just in case. Of course, you'll then see a warning during the Datapump import saying that the momentary change to read/write could not be done, but that is not a critical problem. The transport will still complete.
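
For example, on Windows something like the following would mark the shared datafile as read only at the OS level (the path is just the source datafile from the demo above; on Unix platforms a chmod 444 on the file achieves the same thing, and you would remove the attribute again before any planned maintenance):


C:\>attrib +R C:\oracle\oradata\np12\DEMO.DBF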

Sharing a tablespace between 2 databases

I was reading an interesting discussion today about multiple databases each containing large amounts of read-only data.  If that read-only data is common, then it would make sense to have a single copy of that data and have both databases share it.

Well, as long as you can isolate that data into its own tablespace, then you can do that easily with Oracle by transporting the metadata between two databases and leaving the files in place.

Here’s an example

Source database


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for 64-bit Windows: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL> create tablespace i_am_on_121 datafile 'C:\oracle\oradata\tts\my_tspace' size 50m;

Tablespace created.

SQL> create table t tablespace i_am_on_121 as select * from dba_objects;

Table created.

SQL> alter tablespace i_am_on_121 read only;

Tablespace altered.

C:\>expdp transport_tablespaces=i_am_on_121 directory=TEMP dumpfile=tts.dmp

Export: Release 12.1.0.2.0 - Production on Fri Apr 14 08:50:24 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Username: mcdonac/*****

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "MCDONAC"."SYS_EXPORT_TRANSPORTABLE_01":  mcdonac/******** transport_tablespaces=i_am_on_121 directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "MCDONAC"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MCDONAC.SYS_EXPORT_TRANSPORTABLE_01 is:
  C:\TEMP\TTS.DMP
******************************************************************************
Datafiles required for transportable tablespace I_AM_ON_121:
  C:\ORACLE\ORADATA\TTS\MY_TSPACE
Job "MCDONAC"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Fri Apr 14 08:51:16 2017 elapsed 0 00:00:47

and then we import it into a different database (and this one is even a different version!).

Target database


C:\Users\hamcdc>impdp transport_datafiles=C:\oracle\oradata\tts\my_tspace directory=TEMP dumpfile=tts.dmp

Import: Release 12.2.0.1.0 - Production on Fri Apr 14 08:51:28 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Username: mcdonac/*****

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "MCDONAC"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_TRANSPORTABLE_01":  mcdonac/******** transport_datafiles=C:\oracle\oradata\tts\my_tspace directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE    12.2.0.1.0      Production
TNS for 64-bit Windows: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 - Production

SQL> select count(*) from t;

  COUNT(*)
----------
     92934

SQL> select * from dba_tablespaces
  2  where tablespace_name = 'I_AM_ON_121'
  3  @pr
==============================
TABLESPACE_NAME               : I_AM_ON_121
BLOCK_SIZE                    : 8192
INITIAL_EXTENT                : 65536
NEXT_EXTENT                   :
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
MAX_SIZE                      : 2147483645
PCT_INCREASE                  :
MIN_EXTLEN                    : 65536
STATUS                        : READ ONLY
CONTENTS                      : PERMANENT
LOGGING                       : LOGGING
FORCE_LOGGING                 : NO
EXTENT_MANAGEMENT             : LOCAL
ALLOCATION_TYPE               : SYSTEM
PLUGGED_IN                    : YES
SEGMENT_SPACE_MANAGEMENT      : AUTO
DEF_TAB_COMPRESSION           : DISABLED
RETENTION                     : NOT APPLY
BIGFILE                       : NO
PREDICATE_EVALUATION          : HOST
ENCRYPTED                     : NO
COMPRESS_FOR                  :
DEF_INMEMORY                  : DISABLED
DEF_INMEMORY_PRIORITY         :
DEF_INMEMORY_DISTRIBUTE       :
DEF_INMEMORY_COMPRESSION      :
DEF_INMEMORY_DUPLICATE        :
SHARED                        : SHARED
DEF_INDEX_COMPRESSION         : DISABLED
INDEX_COMPRESS_FOR            :
DEF_CELLMEMORY                :
DEF_INMEMORY_SERVICE          :
DEF_INMEMORY_SERVICE_NAME     :
LOST_WRITE_PROTECT            : OFF
CHUNK_TABLESPACE              : N

PL/SQL procedure successfully completed.

Better to be safe than sorry…

I've always been worried about taking a script that is fine to run in my non-production environments (in particular a DROP script) and accidentally running it in a Production environment, shortly followed by the typing up of a fresh resume to look for a new job once the mistake is discovered.

A standard "least privilege required" model for Production access is a good step along the road to protecting the database. But ultimately, at one time or another, an administrator with appropriate privileges, good intentions, but (say) bad cut-and-paste skills might inadvertently run a drop command against a database they did not intend to. (A well publicised example of this happened quite recently: https://www.theregister.co.uk/2017/02/01/gitlab_data_loss/)

So it would be nice to have a 'double check' facility that guards against a DROP being run on a production system inadvertently. That way, even if you did run that drop script against Production, you would be "saved from yourself". Here's one possible approach you might want to consider: we'll use a context variable that has to be set in order for DROP commands to be processed.


--
-- Our context
--
SQL> create context DDL_CHECK using ddl_checker;

Context created.

--
-- A routine to set it
--
SQL> create or replace
  2  procedure ddl_checker(p_val varchar2) is
  3  begin
  4    dbms_session.set_context('DDL_CHECK','ENABLED',p_val);
  5  end;
  6  /

Procedure created.

--
-- If we are on the Production database, DROP DDLs only proceed if the context variable is set
--
SQL> create or replace
  2  trigger ddl_checker_before
  3  before drop on scott.schema
  4  begin
  5    if sys.database_name like '%PROD%' and sys_context('DDL_CHECK','ENABLED') is null
  6    then
  7      raise_application_error(-20000,'DDL confirmation context is not set');
  8    end if;
  9  end;
 10  /

Trigger created.

--
-- And after every DROP, the context is reset for the next use so it cannot be left on inadvertently
--
SQL> create or replace
  2  trigger ddl_checker_after
  3  after drop on scott.schema
  4  begin
  5    ddl_checker(null);
  6  end;
  7  /

Trigger created.

SQL>
SQL>
SQL> grant execute on ddl_checker to scott;

Grant succeeded.

SQL>

So now we can see what happens when SCOTT attempts some DDL in Production by mistake.


SQL> conn scott/tiger
Connected.

SQL> drop table t purge;
drop table t purge
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: DDL confirmation context is not set
ORA-06512: at line 4

--
-- We must explicitly express our intention to perform a DDL
--
SQL> exec admin.ddl_checker('true')

PL/SQL procedure successfully completed.

--
-- and only then are we allowed to run a DROP command
--
SQL> drop table t purge;

Table dropped.

--
-- and because the context is reset after each DROP, the permission does not linger
--
SQL> drop table t1 purge;
drop table t1 purge
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: DDL confirmation context is not set
ORA-06512: at line 4


SQL>

I stress – this is not about securing administrator access or replacing your existing processes for auditing or locking down the privileges on your production systems. This is just an extra check to stop your over-worked and/or over-tired administrators from committing a small mistake that becomes a catastrophic error.

DDL triggers – interesting results

This question came in on AskTom, yielding a very interesting result when it comes to DDL triggers. To set the scene, I’ll first create a table called T which is just a copy of SCOTT.EMP


SQL> create table scott.t as select * from scott.emp;

Table created.

SQL> desc scott.t
 Name                                                              Null?    Type
 ----------------------------------------------------------------- -------- -----------------
 EMPNO                                                             NOT NULL NUMBER(4)
 ENAME                                                                      VARCHAR2(10)
 JOB                                                                        VARCHAR2(9)
 MGR                                                                        NUMBER(4)
 HIREDATE                                                                   DATE
 SAL                                                                        NUMBER(7,2)
 COMM                                                                       NUMBER(7,2)
 DEPTNO                                                                     NUMBER(2)

Now let's say our requirement is to capture anything that relates to columns for tables in the SCOTT schema. So if we create a table, we want to see all of its columns, and if we alter an existing table, we want to see what the table looks like after columns have been added or dropped. So let's build a simple DDL trigger to do that. First, I'll create a procedure which lists all of the columns for any table name we pass it. We'll work exclusively in SCOTT for this example.


SQL> create or replace
  2  procedure create_audit_trigger(p_tab varchar2) is
  3  begin
  4    for i in ( select column_name from dba_tab_columns
  5               where table_name = p_tab
  6               and owner = 'SCOTT'
  7             )
  8    loop
  9      dbms_output.put_line(i.column_name);
 10    end loop;
 11  end;
 12  /

Procedure created.

And we'll give it a quick test to make sure it works.


SQL> set serverout on
SQL> exec create_audit_trigger('EMP')
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO

PL/SQL procedure successfully completed.

We'll now call that procedure from a DDL trigger that fires any time we perform an ALTER or CREATE operation on a table.


SQL> create or replace
  2  TRIGGER audit_trigger_update
  3  AFTER ALTER OR CREATE ON scott.SCHEMA
  4  WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
  5  BEGIN
  6    create_audit_trigger(ORA_DICT_OBJ_NAME);
  7  END;
  8  /

Trigger created.

SQL> conn scott/tiger
Connected.

Now we’ll create a table and see if it works.


SQL> set serverout on
SQL> create table t1 ( x int, y int );
X
Y

Table created.

And we’ll drop a column to test that


SQL> alter table t1 drop column y;
X

So far so good. Now let us add a column to the table.


SQL> alter table t add x int;
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO  -- ???? no X

Table altered.

This is an unexpected result. At the time the trigger queries the data dictionary to obtain the list of columns, the new column does not yet appear in the definition. Once the statement has completed, if we describe the table, we can see that the new column X is indeed there.


SQL> desc t
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- --------------
 EMPNO                                                                   NOT NULL NUMBER(4)
 ENAME                                                                            VARCHAR2(10)
 JOB                                                                              VARCHAR2(9)
 MGR                                                                              NUMBER(4)
 HIREDATE                                                                         DATE
 SAL                                                                              NUMBER(7,2)
 COMM                                                                             NUMBER(7,2)
 DEPTNO                                                                           NUMBER(2)
 X                                                                                NUMBER(38)

The only workaround I can currently think of is to do the work asynchronously, via DBMS_JOB. Rather than using DBMS_OUTPUT, I'll log the column names in a table so we can view them after the fact:


SQL> create table col_debug ( ts timestamp, col_name varchar2(50));

Table created.

SQL> create or replace
  2  procedure create_audit_trigger(p_tab varchar2) is
  3    l_now timestamp := systimestamp;
  4  begin
  5    for i in ( select column_name from dba_tab_columns
  6               where table_name = p_tab
  7               and owner = 'SCOTT'
  8             )
  9    loop
 10      insert into col_debug values (l_now,i.column_name);
 11    end loop;
 12    commit;
 13  end;
 14  /

Procedure created.

SQL> create or replace
  2  TRIGGER audit_trigger_update
  3  AFTER ALTER OR CREATE ON scott.SCHEMA
  4  WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
  5  declare
  6    j int;
  7  BEGIN
  8    dbms_job.submit(j,'create_audit_trigger('''||ORA_DICT_OBJ_NAME||''');');
  9  END;
 10  /

Trigger created.

Now I repeat the addition of a column, and then check the debugging table.


SQL> alter table t add x1 int;

Table altered.

SQL> select * from col_debug;

TS                                                                          COL_NAME
--------------------------------------------------------------------------- ---------------
24-FEB-17 03.02.36.553000 PM                                                EMPNO
24-FEB-17 03.02.36.553000 PM                                                ENAME
24-FEB-17 03.02.36.553000 PM                                                JOB
24-FEB-17 03.02.36.553000 PM                                                MGR
24-FEB-17 03.02.36.553000 PM                                                HIREDATE
24-FEB-17 03.02.36.553000 PM                                                SAL
24-FEB-17 03.02.36.553000 PM                                                COMM
24-FEB-17 03.02.36.553000 PM                                                DEPTNO
24-FEB-17 03.02.36.553000 PM                                                X
24-FEB-17 03.02.36.553000 PM                                                X1

10 rows selected.

So the trigger submits a job rather than doing the work itself, and by the time the job runs, the newly added column is visible in the data dictionary.

Why does an ADD COLUMN operation behave differently to DROP COLUMN and CREATE TABLE? That remains a mystery.

Duplicate constraints are impossible, right?

Here’s a very simple example of a table called PARENT being a (surprise surprise) parent in a referential integrity relationship to a (drum roll for my choice of name) CHILD table


SQL> create table parent ( p int, constraint PAR_PK primary key (p) );

Table created.

SQL> create table child ( c int,
  2        p int
  3      );

Table created.

SQL> alter table child add constraint fk1 foreign key ( p ) references parent ( p );

Table altered.

That is all as we would expect, and similarly, if I inadvertently try to add the same foreign key constraint, I’ll get an error


SQL> alter table child add constraint fk2 foreign key ( p ) references parent ( p );
alter table child add constraint fk2 foreign key ( p ) references parent ( p )
                                     *
ERROR at line 1:
ORA-02275: such a referential constraint already exists in the table

So you might be wondering, how on earth did I get myself into the following calamity on my database:


SQL> select c.constraint_name, cc.column_name, c.r_constraint_name
  2  from   user_constraints c,
  3         user_cons_columns cc
  4  where  c.table_name = 'CHILD'
  5  and    c.constraint_type = 'R'
  6  and    c.constraint_name = cc.constraint_name;

CONSTRAINT_NAME                COLUMN_NAME                    R_CONSTRAINT_NAME
------------------------------ ------------------------------ --------------------
FK1                            P                              PAR_PK
FK2                            P                              PAR_PK

Yes – that's TWO foreign key constraints implementing the identical check. How did I bypass the duplicate check we saw above?

It's just a little idiosyncrasy that existed in Oracle, from the inception of foreign key constraints right up until 12c. If you specify the constraints as part of the table creation DDL, the check for duplicates is skipped.


SQL> create table child ( c int,
  2    p int,
  3    constraint fk1 foreign key ( p ) references parent ( p ) ,
  4    constraint fk2 foreign key ( p ) references parent ( p )
  5  );

Table created.

It is worth looking for and rectifying this condition on your databases, because it can lead to confusing errors. For example, when you Datapump such a schema, the import will (correctly) fail on the second constraint, but you'll be scratching your head when you go and look at the imported result and see that the constraint is there!
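
Here is a sketch of the kind of query you could adapt to hunt for them. It groups referential constraints by table, referenced key and column list, and reports any combination that appears more than once (swap the DBA_ views for ALL_ or USER_ as appropriate):


select owner, table_name, r_constraint_name, cols, count(*) as duplicates
from (
  select c.owner, c.table_name, c.constraint_name, c.r_constraint_name,
         listagg(cc.column_name,',') within group (order by cc.position) as cols
  from   dba_constraints  c,
         dba_cons_columns cc
  where  c.constraint_type = 'R'
  and    c.owner           = cc.owner
  and    c.constraint_name = cc.constraint_name
  group by c.owner, c.table_name, c.constraint_name, c.r_constraint_name
)
group by owner, table_name, r_constraint_name, cols
having count(*) > 1;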

This is fixed in 12c onwards.


SQL> create table child ( c int,
  2    p int,
  3    constraint fk1 foreign key ( p ) references parent ( p ) ,
  4    constraint fk2 foreign key ( p ) references parent ( p )
  5  );
  constraint fk2 foreign key ( p ) references parent ( p )
                 *
ERROR at line 4:
ORA-02274: duplicate referential constraint specifications

Partition count for interval partitioned tables

When dealing with a RANGE partitioned table, the defined partitions dictate what data can be placed into the table. For example, if I have a SALES table as per below


SQL> create table SALES
  2    ( cal_year  date,
  3      txn_id    int,
         ...
         ...
 24    )
 25  partition by range ( cal_year )
 26  (
 27    partition p_low values less than ( date '2000-01-01' ),
 28    partition p2000 values less than ( date '2001-01-01' ),
         ...
         ...
 34    partition p2016 values less than ( date '2017-01-01' )
 35  );

Table created.

then the existing partitions define a natural upper bound on the value of CAL_YEAR that I can insert into the table. For example, if I attempt to add a row for the year 2018, I get the familiar ORA-14400 that has called out many a DBA at the stroke of midnight on New Year's Eve.


SQL> insert into SALES
  2  values ( date '2018-01-01', .... );

insert into SALES
            *
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition

As many will know, the resolution to this is either a maintenance task to ensure that there are sufficient partitions defined, or to use the INTERVAL partitioning method, which became available in 11g.


SQL> create table SALES
  2    ( cal_year  date,
  3      txn_id    int,
         ...
         ...
 23    )
 24  partition by range ( cal_year )
 25  INTERVAL( NUMTOYMINTERVAL(1,'YEAR'))
 26  (
 27    partition p_low values less than ( date '2000-01-01' )
 28  );

Table created.

And I can observe partitions being created as required as data is added to the table


SQL> select PARTITION_NAME, HIGH_VALUE
  2  from   user_tab_partitions
  3  where  table_name = 'SALES';

PARTITION_NAME            HIGH_VALUE
------------------------- --------------------------------
P00                       TIMESTAMP' 2000-01-01 00:00:00'

SQL> insert into SALES
  2  values ( to_date('12-DEC-2011'),....);

SQL> select PARTITION_NAME, HIGH_VALUE
  2  from   user_tab_partitions
  3  where  table_name = 'SALES';

PARTITION_NAME            HIGH_VALUE
------------------------- --------------------------------
P00                       TIMESTAMP' 2000-01-01 00:00:00'
SYS_P362                  TIMESTAMP' 2012-01-01 00:00:00'

But this isn’t a post about how interval partitioning is defined, because it’s a topic that is now well understood and well detailed in the documentation and on many blogs.

I wanted to touch on something more subtle that you might encounter when using interval partitioned tables. Let me do a query on the SALES table, which has been recreated (as INTERVAL partitioned) but is empty. Here is the execution plan when I query the table.


SQL> select * from SALES; --empty


-------------------------------------------------------------
| Id  | Operation           | Name  | Rows  | Pstart| Pstop |
-------------------------------------------------------------
|   0 | SELECT STATEMENT    |       |     1 |       |       |
|   1 |  PARTITION RANGE ALL|       |     1 |     1 |1048575|
|   2 |   TABLE ACCESS FULL | SALES |     1 |     1 |1048575|
-------------------------------------------------------------

Wow! One million partitions! That might seem odd, because we know that our table has been defined with only a single partition, and even that might not be instantiated yet depending on our setting of the "deferred_segment_creation" parameter on the database. But the explanation is relatively simple. The moment we define a table as interval partitioned, we in effect know "in advance" the definition of every single interval that will ever follow. The starting point for the intervals is known from the initial partition definition in the DDL, and the size/length of the interval maps out every possible future partition.


The maximum number of partitions allowed for a partitioned table is 1048575 (1024K - 1), and that upper limit is what is reflected in the execution plan.

You'll see similar information when you create an index on such a table. If the index is local, and hence follows the same partitioning scheme as the underlying table, then it too potentially has 1048575 partitions, all not yet in use but known in advance. So if you look at the PARTITION_COUNT column for such an index, you'll see that the database states it has a (very) high partition count.


SQL> create index sales_ix on sales ( some_col ) local;

Index created.

SQL> select TABLE_NAME,INDEX_NAME,PARTITION_COUNT from user_part_indexes;

TABLE_NAME                     INDEX_NAME                     PARTITION_COUNT
------------------------------ ------------------------------ ---------------
SALES                          SALES_IX                               1048575

1 row selected.

So if you see anything suggesting one million partitions, double check to see if you really have that many.
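
A sketch of such a check is below: it compares the "potential" partition count recorded in the dictionary with the partitions that have actually been created so far (USER_PART_TABLES versus USER_TAB_PARTITIONS, assuming the table is in your own schema).


select t.table_name,
       t.partition_count as potential_partitions,
       ( select count(*)
         from   user_tab_partitions p
         where  p.table_name = t.table_name ) as created_partitions
from   user_part_tables t
where  t.table_name = 'SALES';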
