DDL triggers – interesting results

This question came in on AskTom, yielding a very interesting result when it comes to DDL triggers. To set the scene, I’ll first create a table called T which is just a copy of SCOTT.EMP


SQL> create table scott.t as select * from scott.emp;

Table created.

SQL> desc scott.t
 Name                                                              Null?    Type
 ----------------------------------------------------------------- -------- -----------------
 EMPNO                                                             NOT NULL NUMBER(4)
 ENAME                                                                      VARCHAR2(10)
 JOB                                                                        VARCHAR2(9)
 MGR                                                                        NUMBER(4)
 HIREDATE                                                                   DATE
 SAL                                                                        NUMBER(7,2)
 COMM                                                                       NUMBER(7,2)
 DEPTNO                                                                     NUMBER(2)

Now let’s say our requirement is to capture anything that relates to the columns of tables in the SCOTT schema. If we create a table, we want to see all of its columns; and if we alter an existing table, we want to see what the table looks like after the columns have been added or dropped. So let’s build a simple DDL trigger to do that. First, I’ll create a procedure which will list all of the columns for any table name that we pass it. We’ll work exclusively in SCOTT for this example.


SQL> create or replace
  2  procedure create_audit_trigger(p_tab varchar2) is
  3  begin
  4    for i in ( select column_name from dba_tab_columns
  5               where table_name = p_tab
  6               and owner = 'SCOTT'
  7             )
  8    loop
  9      dbms_output.put_line(i.column_name);
 10    end loop;
 11  end;
 12  /

Procedure created.

And we’ll give it a quick test to make sure it works


SQL> set serverout on
SQL> exec create_audit_trigger('EMP')
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO

PL/SQL procedure successfully completed.

We’ll now call that procedure from a DDL trigger, which will fire any time we perform an ALTER or CREATE operation on a table.


SQL> create or replace
  2  TRIGGER audit_trigger_update
  3  AFTER ALTER OR CREATE ON scott.SCHEMA
  4  WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
  5  BEGIN
  6    create_audit_trigger(ORA_DICT_OBJ_NAME);
  7  END;
  8  /

Trigger created.

SQL> conn scott/tiger
Connected.

Now we’ll create a table and see if it works.


SQL> set serverout on
SQL> create table t1 ( x int, y int );
X
Y

Table created.

And we’ll drop a column to test that as well


SQL> alter table t1 drop column y;
X

So far so good. Now let us add a column to the table.


SQL> alter table t add x int;
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO  -- ???? no X

Table altered.

This is an unexpected result. At the time the trigger fires and we query the data dictionary for the list of columns, the newly added column does not yet appear in the table definition. Yet once the statement has completed, a describe of the table shows that the new column X is indeed there.


SQL> desc t
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- --------------
 EMPNO                                                                   NOT NULL NUMBER(4)
 ENAME                                                                            VARCHAR2(10)
 JOB                                                                              VARCHAR2(9)
 MGR                                                                              NUMBER(4)
 HIREDATE                                                                         DATE
 SAL                                                                              NUMBER(7,2)
 COMM                                                                             NUMBER(7,2)
 DEPTNO                                                                           NUMBER(2)
 X                                                                                NUMBER(38)

The only workaround I can currently think of is to do the work asynchronously, via DBMS_JOB. Rather than DBMS_OUTPUT, I’ll log the column names in a table so we can view them after the fact:


SQL> create table col_debug ( ts timestamp, col_name varchar2(50));

Table created.

SQL> create or replace
  2  procedure create_audit_trigger(p_tab varchar2) is
  3    l_now timestamp := systimestamp;
  4  begin
  5    for i in ( select column_name from dba_tab_columns
  6               where table_name = p_tab
  7               and owner = 'SCOTT'
  8             )
  9    loop
 10      insert into col_debug values (l_now,i.column_name);
 11    end loop;
 12    commit;
 13  end;
 14  /

Procedure created.

SQL> create or replace
  2  TRIGGER audit_trigger_update
  3  AFTER ALTER OR CREATE ON scott.SCHEMA
  4  WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
  5  declare
  6    j int;
  7  BEGIN
  8    dbms_job.submit(j,'create_audit_trigger('''||ORA_DICT_OBJ_NAME||''');');
  9  END;
 10  /

Trigger created.

Now I repeat the addition of a column, and then check out the debugging table


SQL> alter table t add x1 int;

Table altered.

SQL> select * from col_debug;

TS                                                                          COL_NAME
--------------------------------------------------------------------------- ---------------
24-FEB-17 03.02.36.553000 PM                                                EMPNO
24-FEB-17 03.02.36.553000 PM                                                ENAME
24-FEB-17 03.02.36.553000 PM                                                JOB
24-FEB-17 03.02.36.553000 PM                                                MGR
24-FEB-17 03.02.36.553000 PM                                                HIREDATE
24-FEB-17 03.02.36.553000 PM                                                SAL
24-FEB-17 03.02.36.553000 PM                                                COMM
24-FEB-17 03.02.36.553000 PM                                                DEPTNO
24-FEB-17 03.02.36.553000 PM                                                X
24-FEB-17 03.02.36.553000 PM                                                X1

10 rows selected.

So the trigger submits a job rather than doing the work itself, and by the time the job runs, the dictionary is up to date.
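As an aside, if you would rather use DBMS_SCHEDULER than DBMS_JOB for the asynchronous work, the same idea should carry across. The wrinkle is that DBMS_SCHEDULER.CREATE_JOB commits, which a trigger cannot do, so a sketch like the one below (untested here, and the job name prefix is my own) has to resort to an autonomous transaction – which is exactly why the transactional DBMS_JOB is arguably the better fit:

create or replace
TRIGGER audit_trigger_update
AFTER ALTER OR CREATE ON scott.SCHEMA
WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
declare
  -- CREATE_JOB commits, and a trigger cannot, hence the pragma
  pragma autonomous_transaction;
BEGIN
  dbms_scheduler.create_job(
    job_name   => dbms_scheduler.generate_job_name('COL_AUDIT'),
    job_type   => 'PLSQL_BLOCK',
    job_action => 'begin create_audit_trigger('''||ORA_DICT_OBJ_NAME||'''); end;',
    enabled    => true);
END;
/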

Why does an ADD COLUMN operation behave differently from DROP and CREATE? That remains a mystery 🙂
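In the meantime, if you need to see exactly what an ALTER did without trusting the data dictionary, you can capture the text of the triggering statement itself using the ORA_SQL_TXT event attribute function; a minimal sketch (the trigger name is my own):

create or replace
TRIGGER capture_ddl_text
AFTER ALTER ON scott.SCHEMA
WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
declare
  l_sql ora_name_list_t;
  l_n   pls_integer;
BEGIN
  -- ora_sql_txt fills the collection with the triggering DDL
  -- statement in 64-character chunks and returns the chunk count
  l_n := ora_sql_txt(l_sql);
  for i in 1 .. nvl(l_n, 0) loop
    dbms_output.put_line(l_sql(i));
  end loop;
END;
/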



FOLLOWS clause

In early Oracle releases, you could have only one trigger per triggering event per table (e.g. if you had a before-row-insert trigger, you could have only one). Nowadays there is no limit. But that raises the question – if I have “n” BEFORE ROW triggers on a table, in which order do they fire? Let’s look at the following example:


SQL> drop table t1 purge;

Table dropped.

SQL> create table T1 ( x int, y int, z int );

Table created.

SQL> create or replace
  2  trigger trg1
  3  before insert on t1
  4  for each row
  5  begin
  6    :new.y := :new.x;
  7  end;
  8  /

Trigger created.

SQL> create or replace
  2  trigger trg2
  3  before insert on t1
  4  for each row
  5  begin
  6    :new.z := :new.y;
  7  end;
  8  /

Trigger created.

Seems simple enough… copy ‘x’ into ‘y’, and then copy ‘y’ into ‘z’. So let’s see what happens



SQL> insert into t1 (x) values (1);

1 row created.

SQL> select * from t1;

         X          Y          Z
---------- ---------- ----------
         1          1


Where did “Z” go? What happened is that TRG2 fired first, and then TRG1. With no explicit ordering, the firing order is indeterminate.

To solve this, we can use the FOLLOWS clause to dictate the order in which triggers must fire.


SQL> create or replace
  2  trigger trg2
  3  before insert on t1
  4  for each row
  5  follows trg1
  6  begin
  7    :new.z := :new.y;
  8  end;
  9  /

Trigger created.

SQL> insert into t1 (x) values (1);

1 row created.

SQL> select * from t1;

         X          Y          Z
---------- ---------- ----------
         1          1
         1          1          1

SQL>
SQL>

So now you can see (if you have a lot of triggers) where FOLLOWS might come in useful.
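And if you inherit a schema and want to know what ordering dependencies are already in place, the dictionary exposes them – the *_TRIGGER_ORDERING views arrived in 11g along with FOLLOWS:

select trigger_name,
       referenced_trigger_name,
       ordering_type
from   user_trigger_ordering;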

Where did my triggers go?

You need to be careful when coding and using triggers when it comes to Datapump (or anything else that transposes triggers between schemas). A lot of people make assumptions about what will happen to their triggers, and often get a nasty shock when they see the reality. Let’s explore with an example.

I’m going to add three triggers to the standard EMP table in the SCOTT schema


SQL> connect scott/tiger
Connected.
SQL> create or replace
  2  trigger TRG1 before insert on scott.EMP
  3  for each row
  4  declare
  5    x int;
  6  begin
  7    select count(*) into x from scott.dept;
  8  end;
  9  /

Trigger created.

SQL>
SQL>
SQL> create or replace
  2  trigger TRG2 before insert on EMP
  3  for each row
  4  declare
  5    x int;
  6  begin
  7    select count(*) into x from scott.dept;
  8  end;
  9  /

Trigger created.

SQL>
SQL>
SQL> create or replace
  2  trigger TRG3 before insert on EMP
  3  for each row
  4  declare
  5    x int;
  6  begin
  7    select count(*) into x from dept;
  8  end;
  9  /

Trigger created.

Now we’ll unload the entire schema and copy it to a new schema called SCOTT2.


expdp directory=TEMP dumpfile=schema.dmp logfile=schema.log schemas=scott

Export: Release 12.1.0.2.0 - Production on Fri Nov 27 10:31:29 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: system/********

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** directory=TEMP dumpfile=schema.dmp logfile=schema.log schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "SCOTT"."ABC"                               5.898 KB       1 rows
. . exported "SCOTT"."DEPT"                              6.031 KB       4 rows
. . exported "SCOTT"."EMP"                               8.781 KB      14 rows
. . exported "SCOTT"."SALGRADE"                          5.960 KB       5 rows
. . exported "SCOTT"."BONUS"                                 0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  C:\TEMP\SCHEMA.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Fri Nov 27 10:31:56 2015 elapsed 0 00:00:24


impdp directory=TEMP dumpfile=schema.dmp logfile=schemaimp.log schemas=scott remap_schema=scott:scott2

Import: Release 12.1.0.2.0 - Production on Fri Nov 27 10:31:57 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: system/********

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** directory=TEMP dumpfile=schema.dmp logfile=schemaimp.log schemas=scott remap_schema=scott:scott2 Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT2"."ABC"                              5.898 KB       1 rows
. . imported "SCOTT2"."DEPT"                             6.031 KB       4 rows
. . imported "SCOTT2"."EMP"                              8.781 KB      14 rows
. . imported "SCOTT2"."SALGRADE"                         5.960 KB       5 rows
. . imported "SCOTT2"."BONUS"                                0 KB       0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
ORA-39082: Object type TRIGGER:"SCOTT2"."TRG1" created with compilation warnings
ORA-39082: Object type TRIGGER:"SCOTT2"."TRG2" created with compilation warnings
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at Fri Nov 27 10:32:16 2015 elapsed 0 00:00:14

SQL> select trigger_name, status
  2  from dba_triggers
  3  where owner = 'SCOTT2';

TRIGGER_NAME                   STATUS
------------------------------ --------
TRG1                           ENABLED
TRG2                           ENABLED
TRG3                           ENABLED

So at this point, everything seems to be ok. But we should look more closely…


SQL> select object_name, status
  2  from dba_objects
  3  where owner = 'SCOTT2'
  4  and object_name like 'TRG%';

OBJECT_NAME                              STATUS
---------------------------------------- -------
TRG3                                     VALID
TRG2                                     INVALID
TRG1                                     INVALID

Two of the triggers are invalid. And here’s where things really catch people out…


SQL> select trigger_name, table_owner, table_name
  2  from   dba_triggers
  3  where  owner = 'SCOTT2';

TRIGGER_NAME                   TABLE_OWNER          TABLE_NAME
------------------------------ -------------------- -----------
TRG1                           SCOTT                EMP
TRG2                           SCOTT2               EMP
TRG3                           SCOTT2               EMP

Ouch… we have a trigger owned by SCOTT2, but the triggering table is still SCOTT.EMP! Probably not what we intended.

Datapump does it this way deliberately, because people forget that triggers do not have to be in the same schema as the object they are triggering on. For example, I might have a schema called “SCOTT_TRIGGERS” where I define all of my triggers for tables owned by SCOTT. So even though an import might ask to remap a schema, Datapump cannot really assume that every reference to “SCOTT” in either the trigger definition or the trigger body can safely be changed. It can only remap the schema itself (hence the name of the parameter). In the case above, TRG2 is invalid because its body references SCOTT.DEPT, and SCOTT2 does not have any permissions on that table.
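If what you actually wanted was a fully self-contained SCOTT2, the cleanup is manual. A sketch of the repairs involved (assuming SCOTT2 is meant to reference its own copies of EMP and DEPT):

-- TRG1 still fires on SCOTT.EMP, so drop it and recreate it on SCOTT2.EMP
drop trigger scott2.trg1;

create or replace
trigger scott2.TRG1 before insert on scott2.EMP
for each row
declare
  x int;
begin
  select count(*) into x from scott2.dept;
end;
/

-- TRG2 already fires on SCOTT2.EMP, but its body references SCOTT.DEPT
-- without privileges; repoint it (or grant SELECT on SCOTT.DEPT instead)
create or replace
trigger scott2.TRG2 before insert on scott2.EMP
for each row
declare
  x int;
begin
  select count(*) into x from scott2.dept;
end;
/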

So be careful when transposing (either by datapump or any other means) triggers between schemas.
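A cheap sanity check after any such import is to look for triggers whose owner differs from the owner of the table they fire on:

select owner, trigger_name, table_owner, table_name
from   dba_triggers
where  owner <> table_owner;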

Avoiding public embarrassment with triggers

If you create a trigger that does not compile, any part of your application that refers to the table with the invalid trigger is basically dead until that trigger becomes valid. Even if a DML on that table is not going to affect any rows, you still won’t be able to issue it. For example,

SQL> delete from T where 1=0;
delete from T where 1=0
            *
ERROR at line 1:
ORA-04098: trigger 'TRG' is invalid and failed re-validation

That’s a bad bad place for your application… and a bad bad place for your career 🙂

In 11g and above, you can avoid this by using the DISABLE keyword every time you create a trigger.

All triggers should be created as:

create or replace
trigger MY_TRIGGER
before insert on MY_TABLE   -- whatever the triggering event happens to be
DISABLE
begin
   ...
end;

If the trigger is created and compiled successfully, then you can enable it:

alter trigger MY_TRIGGER enable;

If for some unforeseen reason the trigger does not compile, it remains disabled, and hence the failed compilation will not break your application.

Make it a coding standard for your database developers.
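And as a safety net, a quick dictionary query after each release will catch anything that slipped through invalid:

select owner, object_name
from   dba_objects
where  object_type = 'TRIGGER'
and    status = 'INVALID';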

Dumb triggers part 2 – session level control

In the previous post, I pontificated about triggers that “lock you in” to having them fire, which can create dramas when it comes to doing data patching.

Maybe you can design your application around this, but if you can’t, the last thing you want is to have to take an outage just so you can disable the trigger whilst you do your data maintenance. Ideally you want the trigger to fire as per normal everywhere except in your session.

And that’s actually pretty easy to do. A simple example is below.

SQL> create or replace
  2  package TRIGGER_CTL is
  3
  4  --
  5  -- Session level control of triggers for data patching etc
  6  --
  7
  8    -- add a trigger to NOT fire for this session
  9    procedure disable(p_trigger_name varchar2);
 10
 11    -- reinstate normal trigger operation for this session
 12    procedure enable(p_trigger_name varchar2);
 13
 14    -- reinstate all triggers for this session
 15    procedure enable_all;
 16
 17    -- return if trigger is active in this session (which of course is the default)
 18    function enabled_in_session(p_trigger_name varchar2) return boolean;
 19
 20  end;
 21  /

Package created.

SQL> create or replace
  2  package body TRIGGER_CTL is
  3
  4    type t_disabled_triggers is table of number
  5      index by varchar2(30);
  6
  7    g_disabled_triggers t_disabled_triggers;
  8
  9  procedure disable(p_trigger_name varchar2) is
 10  begin
 11    g_disabled_triggers(upper(p_trigger_name)) := 1;
 12  end;
 13
 14  procedure enable(p_trigger_name varchar2) is
 15  begin
 16    if g_disabled_triggers.exists(upper(p_trigger_name)) then
 17      g_disabled_triggers.delete(upper(p_trigger_name));
 18    end if;
 19  end;
 20
 21  procedure enable_all is
 22  begin
 23    g_disabled_triggers.delete;
 24  end;
 25
 26  function enabled_in_session(p_trigger_name varchar2) return boolean is
 27  begin
 28    return not g_disabled_triggers.exists(upper(p_trigger_name));
 29  end;
 30
 31  end;
 32  /

Package body created.

Once we’ve got that little utility coded up, it’s easy to get session-level control over triggers, simply by adding a check:

CREATE OR REPLACE TRIGGER MY_TRG
BEFORE INSERT OR UPDATE ON MY_TABLE
FOR EACH ROW
BEGIN
  if trigger_ctl.enabled_in_session('MY_TRG') then
    null;  -- your normal trigger logic goes here
  end if;
END;
/

And to control the trigger at session level, it’s then just a call to your API:

SQL> exec trigger_ctl.disable('MY_TRG')
SQL> -- then your data maintenance
SQL> exec trigger_ctl.enable('MY_TRG')
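One caveat worth stating: because this is just package state, the control is private to your session and vanishes when you disconnect – every other session keeps firing the trigger as normal. Even so, it is good manners to finish any patch script with a blanket reset:

SQL> exec trigger_ctl.enable_all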

Really dumb triggers

Some people hate triggers, some people love triggers…

I am not that opinionated on them in either direction, but one thing I do hate, whether it be a trigger or not, is dumb code.  And today’s post just happens to be about dumb code in a trigger.

Consider this simple trigger (you see these everywhere pre 12c):

CREATE OR REPLACE TRIGGER MY_TRG
  BEFORE INSERT ON MY_TABLE 
  FOR EACH ROW
BEGIN
   SELECT MY_SEQ.NEXTVAL INTO :NEW.MY_PK_COL FROM DUAL;
END;
/

Seems harmless enough…and I’m not talking about whether we should be using “:new.col := seq.nextval”, because under the covers it will do a select-from-dual anyway.

The reason the code is really dumb… is that you are totally screwed if you need to do some data maintenance…

Consider the following scenario:

Customer: “Last week, I accidentally deleted MY_PK_COL = 10, here’s what the row should look like, can you please put it back?”

Your response is simple… “No I can’t”.

Why? Because you can never, ever re-insert MY_PK_COL = 10 once the sequence has advanced past 10 – the trigger will always clobber your value with a fresh one. All of your options are nasty…

a) Reset the sequence value? What about other transactions taking place at the same time?

b) Disable the trigger? Ditto.
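To see it play out (with a hypothetical MY_DATA column added for illustration):

-- attempt to put the customer's row back
insert into MY_TABLE (my_pk_col, my_data)
values (10, 'the row the customer lost');

-- 1 row created ... but MY_TRG has silently replaced 10 with
-- MY_SEQ.NEXTVAL, so the restored row does not have key 10 at all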

Now people will get up on their soapbox and say “It’s a surrogate key, the value should be meaningless, it shouldn’t matter what it is”, etc, etc… and I admire your tenacious grip on the religious argument. But that’s like saying “All databases should have referential integrity constraints!”… Well, duh, but that’s not how the real world is 😦

It’s just a dumb way of coding. If you really need these kinds of triggers (hint: you don’t), then at least code them defensively:

CREATE OR REPLACE TRIGGER MY_TRG
  BEFORE INSERT ON MY_TABLE 
  FOR EACH ROW
  when ( new.MY_PK_COL is null )
BEGIN
   SELECT MY_SEQ.NEXTVAL INTO :NEW.MY_PK_COL FROM DUAL;
END;
/

so at least you’re not clobbering someone’s data.
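And for completeness: from 12c onwards the trigger can disappear entirely. An identity column gives you the same behaviour while still permitting an explicit value when you genuinely need one; a minimal sketch (MY_DATA is a made-up payload column):

create table MY_TABLE (
  my_pk_col number generated by default on null as identity,
  my_data   varchar2(50)
);

-- explicit keys are still honoured, so data patching just works
insert into MY_TABLE (my_pk_col, my_data) values (10, 'restored row');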