DDL triggers – interesting results

This question came in on AskTom, and it yielded a very interesting result when it comes to DDL triggers. To set the scene, I’ll first create a table called T which is just a copy of SCOTT.EMP.


SQL> create table scott.t as select * from scott.emp;

Table created.

SQL> desc scott.t
 Name                                                              Null?    Type
 ----------------------------------------------------------------- -------- -----------------
 EMPNO                                                             NOT NULL NUMBER(4)
 ENAME                                                                      VARCHAR2(10)
 JOB                                                                        VARCHAR2(9)
 MGR                                                                        NUMBER(4)
 HIREDATE                                                                   DATE
 SAL                                                                        NUMBER(7,2)
 COMM                                                                       NUMBER(7,2)
 DEPTNO                                                                     NUMBER(2)

Now let’s say our requirement is to capture anything that relates to columns for tables in the SCOTT schema. If we create a table, we want to see all of its columns, and if we alter an existing table, we want to see what the table looks like after the columns have been added or dropped. So let’s build a simple DDL trigger to do that. First, I’ll create a procedure which lists all of the columns for any table name that we pass it. We’ll work exclusively in SCOTT for this example.


SQL> create or replace
  2  procedure create_audit_trigger(p_tab varchar2) is
  3  begin
  4    for i in ( select column_name from dba_tab_columns
  5               where table_name = p_tab
  6               and owner = 'SCOTT'
  7             )
  8    loop
  9      dbms_output.put_line(i.column_name);
 10    end loop;
 11  end;
 12  /

Procedure created.

And we’ll give it a quick test to make sure it works.


SQL> set serverout on
SQL> exec create_audit_trigger('EMP')
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO

PL/SQL procedure successfully completed.

We’ll now call that procedure from a DDL trigger that fires any time we perform an ALTER or CREATE operation on a table.


SQL> create or replace
  2  TRIGGER audit_trigger_update
  3  AFTER ALTER OR CREATE ON scott.SCHEMA
  4  WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
  5  BEGIN
  6    create_audit_trigger(ORA_DICT_OBJ_NAME);
  7  END;
  8  /

Trigger created.

SQL> conn scott/tiger
Connected.

Now we’ll create a table and see if it works.


SQL> set serverout on
SQL> create table t1 ( x int, y int );
X
Y

Table created.

And we’ll drop a column to test that as well.


SQL> alter table t1 drop column y;
X

So far so good. Now let us add a column to the table.


SQL> alter table t add x int;
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO  -- ???? no X

Table altered.

This is an unexpected result. At the time the trigger fires and we query the data dictionary to obtain the list of columns, the new column does not yet appear in the definition. Yet now that the statement has completed, if we perform a describe on the table, we can see that the new column X is indeed there.


SQL> desc t
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- --------------
 EMPNO                                                                   NOT NULL NUMBER(4)
 ENAME                                                                            VARCHAR2(10)
 JOB                                                                              VARCHAR2(9)
 MGR                                                                              NUMBER(4)
 HIREDATE                                                                         DATE
 SAL                                                                              NUMBER(7,2)
 COMM                                                                             NUMBER(7,2)
 DEPTNO                                                                           NUMBER(2)
 X                                                                                NUMBER(38)

The only workaround I can currently think of is to do the work asynchronously, via DBMS_JOB. Rather than using DBMS_OUTPUT, I’ll log the column names in a table so we can view them after the fact:


SQL> create table col_debug ( ts timestamp, col_name varchar2(50));

Table created.

SQL> create or replace
  2  procedure create_audit_trigger(p_tab varchar2) is
  3    l_now timestamp := systimestamp;
  4  begin
  5    for i in ( select column_name from dba_tab_columns
  6               where table_name = p_tab
  7               and owner = 'SCOTT'
  8             )
  9    loop
 10      insert into col_debug values (l_now,i.column_name);
 11    end loop;
 12    commit;
 13  end;
 14  /

Procedure created.

SQL> create or replace
  2  TRIGGER audit_trigger_update
  3  AFTER ALTER OR CREATE ON scott.SCHEMA
  4  WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
  5  declare
  6    j int;
  7  BEGIN
  8    dbms_job.submit(j,'create_audit_trigger('''||ORA_DICT_OBJ_NAME||''');');
  9  END;
 10  /

Trigger created.

Now I repeat the addition of a column, and then check the debugging table.


SQL> alter table t add x1 int;

Table altered.

SQL> select * from col_debug;

TS                                                                          COL_NAME
--------------------------------------------------------------------------- ---------------
24-FEB-17 03.02.36.553000 PM                                                EMPNO
24-FEB-17 03.02.36.553000 PM                                                ENAME
24-FEB-17 03.02.36.553000 PM                                                JOB
24-FEB-17 03.02.36.553000 PM                                                MGR
24-FEB-17 03.02.36.553000 PM                                                HIREDATE
24-FEB-17 03.02.36.553000 PM                                                SAL
24-FEB-17 03.02.36.553000 PM                                                COMM
24-FEB-17 03.02.36.553000 PM                                                DEPTNO
24-FEB-17 03.02.36.553000 PM                                                X
24-FEB-17 03.02.36.553000 PM                                                X1

10 rows selected.

So the trigger submits a job rather than doing the work itself. Because DBMS_JOB is transactional, the job only becomes eligible to run once the DDL statement has completed and committed, and by that time the new column is visible in the dictionary. (This is also why DBMS_JOB is the right tool here rather than DBMS_SCHEDULER, whose CREATE_JOB call performs a commit and hence cannot be used inside a trigger.)

Why does an ADD COLUMN operation behave differently from DROP and CREATE? That remains a mystery :)



Duplicate constraints are impossible, right?

Here’s a very simple example of a table called PARENT being a (surprise, surprise) parent in a referential integrity relationship to a (drum roll for my choice of name) CHILD table.


SQL> create table parent ( p int, constraint PAR_PK primary key (p) );

Table created.

SQL> create table child ( c int,
  2        p int
  3      );

Table created.

SQL> alter table child add constraint fk1 foreign key ( p ) references parent ( p );

Table altered.

That is all as we would expect, and similarly, if I inadvertently try to add an identical foreign key constraint, I’ll get an error.


SQL> alter table child add constraint fk2 foreign key ( p ) references parent ( p );
alter table child add constraint fk2 foreign key ( p ) references parent ( p )
                                     *
ERROR at line 1:
ORA-02275: such a referential constraint already exists in the table

So you might be wondering how on earth I got myself into the following calamity on my database:


SQL> select c.constraint_name, cc.column_name, c.r_constraint_name
  2  from   user_constraints c,
  3         user_cons_columns cc
  4  where  c.table_name = 'CHILD'
  5  and    c.constraint_type = 'R'
  6  and    c.constraint_name = cc.constraint_name;

CONSTRAINT_NAME                COLUMN_NAME                    R_CONSTRAINT_NAME
------------------------------ ------------------------------ --------------------
FK1                            P                              PAR_PK
FK2                            P                              PAR_PK

Yes – that’s TWO foreign key constraints implementing the identical check. How did I bypass the duplicate check we saw above?

It’s just a little idiosyncrasy in all versions of Oracle since the inception of foreign key constraints. If you specify the constraints as part of the table creation DDL, the check for duplicates is skipped.


SQL> create table child ( c int,
  2    p int,
  3    constraint fk1 foreign key ( p ) references parent ( p ) ,
  4    constraint fk2 foreign key ( p ) references parent ( p )
  5  );

Table created.

It is worth looking for and rectifying this condition on your databases, because it can lead to confusing errors. For example, when you take a Data Pump export of such a schema, the import will (correctly) fail on the second constraint, but you’ll be scratching your head when you look at the imported result and see that the constraint is there!
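If you want to hunt for duplicates that may already be present, a dictionary query along these lines can flag them. This is just a sketch: it uses LISTAGG to build each referential constraint’s column list and then looks for repeats; adapt the views (USER_ versus DBA_) and filters to suit your environment.


select table_name, r_constraint_name, cols, count(*) copies
from (
  -- one row per foreign key constraint, with its ordered column list
  select c.constraint_name, c.table_name, c.r_constraint_name,
         listagg(cc.column_name,',') within group (order by cc.position) cols
  from   user_constraints  c,
         user_cons_columns cc
  where  c.constraint_type = 'R'
  and    c.constraint_name = cc.constraint_name
  group  by c.constraint_name, c.table_name, c.r_constraint_name
)
group  by table_name, r_constraint_name, cols
having count(*) > 1;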

This is fixed from 12c onwards.


SQL> create table child ( c int,
  2    p int,
  3    constraint fk1 foreign key ( p ) references parent ( p ) ,
  4    constraint fk2 foreign key ( p ) references parent ( p )
  5  );
  constraint fk2 foreign key ( p ) references parent ( p )
                 *
ERROR at line 4:
ORA-02274: duplicate referential constraint specifications

The real question is … why are you NOT blogging?

Colleague Jeff Smith published an interesting post the other day about his “rules and regulations” for blogging, but the overriding theme (Ed: this is my opinion, I’m not speaking for Jeff) was that the “what” he blogs about is anything he’s passionate about, and the “when” is whenever he feels inspired to do so.

That got me thinking about blogging in general.  I think it is safe to say

  • there are things in your working life that you feel passionate about

(Because not to do so means you’re in the wrong career)

  • they are not all negative :)

(same reason)

  • there’s a good chance that someone in your field of the IT industry has the same passions

(I got news for you…you’re not the only guy/girl running Oracle, or Microsoft, or SAP, or Java, or …)

  • most people in the IT industry know about this thing called the “internet” :)

(Don’t blame me…blame Netscape, Facebook and cat videos)

So with those things in mind, to not be blogging about the things in your career that you are passionate about is… well… sorta just plain rude. The script you wrote, the problem you solved, the frustration you had with the way something worked – all of these are items of potential benefit for someone out there in the community. Moreover, the act of putting it to “paper” might assist you if the topic at hand is a seemingly unsolvable problem. And of course, the more people that are blogging, the greater the chance that you will receive a reciprocating benefit, namely, that someone else’s blog post will assist you with some element of your working day.

You might be thinking: “Oh… I don’t have the necessary software to blog.” That doesn’t really cut it anymore, since all of the blogging platforms have reasonable editors sitting right there in the browser. I know this because I’m typing this entry into such an editor right now.

Or you might be thinking: “Oh… I don’t have the time to write one.” Yeah, that doesn’t really cut it either :) If you haven’t got the time to write it down, just do a video instead. And before you tell me “Oh… I don’t have the necessary equipment”, let me share with you what I use for a lot of my “away from home” filming:

[Photo: my makeshift filming rig]

In terms of a parts list, we have

  • standard compact camera (because they all do Full HD nowadays)
  • wind suppression device – yes, that’s the kitchen sponge sitting over the camera microphone slots
  • anti-vibration kit – that would be the paint roller swivelling on the piece of wood
  • zoom extension – that’s the broom handle to which the paint roller is attached
  • horizontal stabilisation lock – or “rubber bands” by their other name :)

Rest assured, Steven Spielberg I’m definitely not, but I’m not trying to be.  And what’s more, I don’t think the viewing audience is tremendously interested in cinematic wonder – they’re interested in content that might help them have an easier and more productive time at work.  I’m just sharing what interests me, on the off chance that it will benefit you.

So whether it is blogging or video or any other medium, don’t be shy.  Jump on board and try your hand.  You might surprise yourself at how easy it is to reach out to the greater community in your IT world.

Give it a go!

Partition count for interval partitioned tables

When dealing with a RANGE partitioned table, the defined partitions dictate all of the data that can be placed into the table. For example, if I have a SALES table as per below


SQL> create table SALES
  2    ( cal_year  date,
  3      txn_id    int,
         ...
         ...
 24    )
 25  partition by range ( cal_year )
 26  (
 27    partition p_low values less than ( date '2000-01-01' ),
 28    partition p2000 values less than ( date '2001-01-01' ),
         ...
         ...
 34    partition p2016 values less than ( date '2017-01-01' )
 35  );

Table created.

then the existing partitions define a natural upper bound on the value of CAL_YEAR that I can insert into the table. For example, if I attempt to add a row for the year 2018, I get the familiar ORA-14400, which has called out many a DBA at the stroke of midnight on New Year’s Eve :)


SQL> insert into SALES
  2  values ( date '2018-01-01', .... );

insert into SALES
            *
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition

As many will know, the resolution to this is either a maintenance task to ensure that there are always sufficient partitions defined, or to use the INTERVAL partitioning method, which became available in 11g.


SQL> create table SALES
  2    ( cal_year  date,
  3      txn_id    int,
         ...
         ...
 23    )
 24  partition by range ( cal_year )
 25  INTERVAL( NUMTOYMINTERVAL(1,'YEAR'))
 26  (
 27    partition p_low values less than ( date '2000-01-01' )
 28  );

Table created.

And I can observe partitions being created as required when data is added to the table.


SQL> select PARTITION_NAME, HIGH_VALUE
  2  from   user_tab_partitions
  3  where  table_name = 'SALES';

PARTITION_NAME            HIGH_VALUE
------------------------- --------------------------------
P_LOW                     TIMESTAMP' 2000-01-01 00:00:00'

SQL> insert into SALES
  2  values ( to_date('12-DEC-2011'),....);

SQL> select PARTITION_NAME, HIGH_VALUE
  2  from   user_tab_partitions
  3  where  table_name = 'SALES';

PARTITION_NAME            HIGH_VALUE
------------------------- --------------------------------
P_LOW                     TIMESTAMP' 2000-01-01 00:00:00'
SYS_P362                  TIMESTAMP' 2012-01-01 00:00:00'

But this isn’t a post about how interval partitioning is defined, because it’s a topic that is now well understood and well detailed in the documentation and on many blogs.

I wanted to touch on something more subtle that you might encounter when using interval partitioned tables. Let me run a query on the SALES table, which has been recreated (as INTERVAL partitioned) but is empty. Here is the execution plan when I query the table.


SQL> select * from SALES; --empty


-------------------------------------------------------------
| Id  | Operation           | Name  | Rows  | Pstart| Pstop |
-------------------------------------------------------------
|   0 | SELECT STATEMENT    |       |     1 |       |       |
|   1 |  PARTITION RANGE ALL|       |     1 |     1 |1048575|
|   2 |   TABLE ACCESS FULL | SALES |     1 |     1 |1048575|
-------------------------------------------------------------

Wow! One million partitions! That might seem odd, because we know that our table has been defined with only a single partition, and even that might not be instantiated yet, depending on our setting for the “deferred_segment_creation” parameter on the database. But the explanation is relatively simple. The moment we define a table as interval partitioned, we in effect know “in advance” the definition of every single interval that will ever follow. The starting point for the intervals is known due to the initial partition definition in the DDL, and the size/length of the interval maps out every possible future partition.


The maximum number of partitions for a partitioned table is 1048575 (that is, 1024K – 1), which is what is reflected in the execution plan.

You’ll see similar information when you create an index on such a table. If the index is local, and hence follows the same partitioning scheme as the underlying table, then it too has potentially 1048575 partitions, all not yet in use but known in advance. So if you look at the PARTITION_COUNT column in USER_PART_INDEXES for such an index, you’ll also see the database report a (very) high partition count:


SQL> create index sales_ix on sales ( some_col ) local;

Index created.

SQL> select TABLE_NAME,INDEX_NAME,PARTITION_COUNT from user_part_indexes;

TABLE_NAME                     INDEX_NAME                     PARTITION_COUNT
------------------------------ ------------------------------ ---------------
SALES                          SALES_IX                               1048575

1 row selected.

So if you see anything suggesting one million partitions, double-check whether you really have that many.
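A quick count against the dictionary shows how many partitions have actually been instantiated. This is just a sketch against my demo SALES table; substitute your own table name (or use DBA_TAB_PARTITIONS with an owner filter) as appropriate.


-- count the partitions that really exist, as opposed to the theoretical maximum
select count(*)
from   user_tab_partitions
where  table_name = 'SALES';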


AskTom TV

Sometimes at conferences I get asked – “How do you know all of the answers for the AskTom questions?”

I’d love to say “Because we’re super smart” :) but the reality is, we’re just like anyone else with a passion for database technology.  We like to research things, apply our skills and use our experience to solve problems.

So I’ve realised that even though we try to give as much detail as possible when we are helping the AskTom community, it might not always be immediately apparent what thought process we followed when tackling a problem.

To help with that, we now have “AskTom TV”.  It’s not really a TV show :) but what we’re doing is taking a sample AskTom question and talking about how we tackled the problem, rather than just jumping straight to the solution.

The first two episodes are here, and there will be more to come throughout 2017.

Enjoy.

The SQL Loader log file … as data

I had an interesting AskTom question recently where the poster was using SQL Loader to load tables, but wanted to be able to analyze the resultant log file after execution.  And of course, what better way to analyze… well… anything… than with a database and some SQL.

We need to be able to access the log file as a table, and an external table is perfect for that, so let’s start there.

Here’s a sample SQL Loader log file (with a little perturbation to preserve anonymity).  It’s quite complex because multiple tables were loaded as part of a single SQL Loader run.


Table SCOTT.T1: 222455 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  1229074 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Table SCOTT.T2:
  202547 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  1248982 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Table SCOTT.T3:
  952092 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  499437 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Table SCOTT.T4:
  74373 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  1377156 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Table SCOTT.T5:
  62 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  1451467 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Space allocated for bind array:               11920000 bytes(10000 rows)
Read   buffer bytes:60485760
Total logical records skipped:          0
Total logical records read:       1451529
Total logical records rejected:         0
Total logical records discarded:        0
Run began on Mon Jan 02 16:47:04 2017
Run ended on Mon Jan 02 16:47:42 2017
Elapsed time was:     00:00:38.17
Elapsed time was:     00:00:38.17
CPU time was:         00:00:32.61

Now obviously it’s a trivial exercise to expose that file as an external table with one row per file line.


SQL> CREATE TABLE sqlldr_log (
  2    msg varchar2(200)
  3  )
  4  ORGANIZATION EXTERNAL (
  5    TYPE ORACLE_LOADER
  6    DEFAULT DIRECTORY temp
  7    ACCESS PARAMETERS (
  8      RECORDS DELIMITED BY NEWLINE
  9      FIELDS
 10      (
 11        msg      position(1:512)
 12      )
 13    )
 14    LOCATION ('sqlldr.log')
 15  )
 16  REJECT LIMIT UNLIMITED;

Table created.

SQL> select msg from sqlldr_log;

MSG
-----------------------------------------------------------------------------------------
Table SCOTT.T1: 222455 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  1229074 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Table SCOTT.T2:
  202547 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  1248982 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Table SCOTT.T3:
  952092 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  499437 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Table SCOTT.T4:
  74373 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  1377156 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Table SCOTT.T5:
  62 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  1451467 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
Space allocated for bind array:               11920000 bytes(10000 rows)
Read   buffer bytes:60485760
Total logical records skipped:          0
Total logical records read:       1451529
Total logical records rejected:         0
Total logical records discarded:        0
Run began on Mon Jan 02 16:47:04 2017
Run ended on Mon Jan 02 16:47:42 2017
Elapsed time was:     00:00:38.17
Elapsed time was:     00:00:38.17
CPU time was:         00:00:32.61

35 rows selected.

SQL>

But “So what?” I hear you say. All I’ve done is turn a text file into text output from a query; I don’t seem to be much better off.

But now that I have the full power of SQL, I can start to do some simple parsing of the rows to, for example, extract the row load information.

Firstly, I’ll extract just those rows that have the text “Table” or “Rows” in them, and use CASE expressions to parse each “type” of row (successfully loaded, data errors, etc.) into individual columns.


SQL> select
  2     rownum r
  3    ,substr(msg,1,50)
  4    ,case when msg like 'Table%' then substr(msg,7,instr(msg,':')-7) end tname
  5    ,case when msg like '  %Rows%successfully%' then substr(msg,3,instr(msg,'Rows')-4) end loaded
  6    ,case when msg like '  %Rows%data errors%' then substr(msg,3,instr(msg,'Rows')-4) end errs_data
  7    ,case when msg like '  %Rows%all WHEN cla%' then substr(msg,3,instr(msg,'Rows')-4) end errs_when
  8    ,case when msg like '  %Rows%were null%' then substr(msg,3,instr(msg,'Rows')-4) end errs_null
  9  from sqlldr_log
 10  where msg like 'Table%' or msg like '  %Rows%';

         R SUBSTR(MSG,1,50)                                   TNAME                          LOADED       ERRS_DATA    ERRS_WHEN    ERRS_NULL
---------- -------------------------------------------------- ------------------------------ ------------ ------------ ------------ ------------
         1 Table SCOTT.T1: 222455 Rows successfully loaded.   SCOTT.T1
         2   0 Rows not loaded due to data errors.                                                        0
         3   1229074 Rows not loaded because all WHEN clauses                                                          1229074
         4   0 Rows not loaded because all fields were null.                                                                        0
         5 Table SCOTT.T2:                                    SCOTT.T2
         6   202547 Rows successfully loaded.                                                202547
         7   0 Rows not loaded due to data errors.                                                        0
         8   1248982 Rows not loaded because all WHEN clauses                                                          1248982
         9   0 Rows not loaded because all fields were null.                                                                        0
        10 Table SCOTT.T3:                                    SCOTT.T3
        11   952092 Rows successfully loaded.                                                952092
        12   0 Rows not loaded due to data errors.                                                        0
        13   499437 Rows not loaded because all WHEN clauses                                                           499437
        14   0 Rows not loaded because all fields were null.                                                                        0
        15 Table SCOTT.T4:                                    SCOTT.T4
        16   74373 Rows successfully loaded.                                                 74373
        17   0 Rows not loaded due to data errors.                                                        0
        18   1377156 Rows not loaded because all WHEN clauses                                                          1377156
        19   0 Rows not loaded because all fields were null.                                                                        0
        20 Table SCOTT.T5:                                    SCOTT.T5
        21   62 Rows successfully loaded.                                                    62
        22   0 Rows not loaded due to data errors.                                                        0
        23   1451467 Rows not loaded because all WHEN clauses                                                          1451467
        24   0 Rows not loaded because all fields were null.                                                                        0

24 rows selected.

Now, with a simple analytic function, I can “fill in the blanks” so that the table name appears in every row.


SQL> with base_data as (
  2  select
  3     rownum r
  4    ,substr(msg,1,50)
  5    ,case when msg like 'Table%' then substr(msg,7,instr(msg,':')-7) end tname
  6    ,case when msg like '  %Rows%successfully%' then substr(msg,3,instr(msg,'Rows')-4) end loaded
  7    ,case when msg like '  %Rows%data errors%' then substr(msg,3,instr(msg,'Rows')-4) end errs_data
  8    ,case when msg like '  %Rows%all WHEN cla%' then substr(msg,3,instr(msg,'Rows')-4) end errs_when
  9    ,case when msg like '  %Rows%were null%' then substr(msg,3,instr(msg,'Rows')-4) end errs_null
 10  from sqlldr_log
 11  where msg like 'Table%' or msg like '  %Rows%'
 12  )
 13  select
 14    last_value(tname ignore nulls) over ( order by r ) as tname,
 15    loaded,
 16    errs_data,
 17    errs_when,
 18    errs_null
 19  from base_data;

TNAME                          LOADED       ERRS_DATA    ERRS_WHEN    ERRS_NULL
------------------------------ ------------ ------------ ------------ ------------
SCOTT.T1
SCOTT.T1                                    0
SCOTT.T1                                                 1229074
SCOTT.T1                                                              0
SCOTT.T2
SCOTT.T2                       202547
SCOTT.T2                                    0
SCOTT.T2                                                 1248982
SCOTT.T2                                                              0
SCOTT.T3
SCOTT.T3                       952092
SCOTT.T3                                    0
SCOTT.T3                                                 499437
SCOTT.T3                                                              0
SCOTT.T4
SCOTT.T4                       74373
SCOTT.T4                                    0
SCOTT.T4                                                 1377156
SCOTT.T4                                                              0
SCOTT.T5
SCOTT.T5                       62
SCOTT.T5                                    0
SCOTT.T5                                                 1451467
SCOTT.T5                                                              0

24 rows selected.

And once I’ve got that, all I need to do is put that result within a GROUP BY, and voila! I now have a summary of my SQL Loader execution in a nice usable format.


SQL> with base_data as (
  2  select
  3     rownum r
  4    ,substr(msg,1,50)
  5    ,case when msg like 'Table%' then substr(msg,7,instr(msg,':')-7) end tname
  6    ,case when msg like '  %Rows%successfully%' then substr(msg,3,instr(msg,'Rows')-4) end loaded
  7    ,case when msg like '  %Rows%data errors%' then substr(msg,3,instr(msg,'Rows')-4) end errs_data
  8    ,case when msg like '  %Rows%all WHEN cla%' then substr(msg,3,instr(msg,'Rows')-4) end errs_when
  9    ,case when msg like '  %Rows%were null%' then substr(msg,3,instr(msg,'Rows')-4) end errs_null
 10  from sqlldr_log
 11  where msg like 'Table%' or msg like '  %Rows%'
 12  ), pad_table_name as
 13  (
 14  select
 15    last_value(tname ignore nulls) over ( order by r ) as tname,
 16    loaded,
 17    errs_data,
 18    errs_when,
 19    errs_null
 20  from base_data
 21  )
 22  select
 23    tname,
 24    max(loaded) loaded,
 25    max(errs_data) errs_data,
 26    max(errs_when) errs_when,
 27    max(errs_null) errs_null
 28  from   pad_table_name
 29  where  loaded is not null
 30    or errs_data  is not null
 31    or errs_when is not null
 32    or errs_null is not null
 33  group by tname
 34  order by 1;

TNAME                          LOADED       ERRS_DATA    ERRS_WHEN    ERRS_NULL
------------------------------ ------------ ------------ ------------ ------------
SCOTT.T1                                    0            1229074      0
SCOTT.T2                       202547       0            1248982      0
SCOTT.T3                       952092       0            499437       0
SCOTT.T4                       74373        0            1377156      0
SCOTT.T5                       62           0            1451467      0

5 rows selected.


Footnote: If the LAST_VALUE analytic function left you bamboozled, check out my video series on analytic functions here.

How do I get my trace files on a db-as-a-service

Yeah, sure, it would be cool to crank up some big, powerful VMs in the cloud and let rip, but the reality is, if you’re starting out on a cloud exploration, you probably want to (initially at least) just dip your toes in the water and start with something small.  For example, if I wanted to play with 12c Release 2, I can just sign up for an Exadata Express service and explore the new features without breaking the bank.

But whatever the need, when accessing a database as a service as opposed to a server, there’s often that fear of “handing over the reins” – that is, that I’ll not be able to do the things I want to do, especially when it comes to OS-level access.  And for a developer or DBA, perhaps the thing most likely to raise alarm bells is: “How will I access my trace files?”

Well, in 12c Release 2, there are two nifty new views to help out here.  You can access trace file information directly from the database.  Here’s a simple example.


SQL> desc V$DIAG_TRACE_FILE
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ADR_HOME                               VARCHAR2(444)
 TRACE_FILENAME                         VARCHAR2(68)
 CHANGE_TIME                            TIMESTAMP(3) WITH TIME ZONE
 MODIFY_TIME                            TIMESTAMP(3) WITH TIME ZONE
 CON_ID                                 NUMBER


SQL> select TRACE_FILENAME
  2  from   V$DIAG_TRACE_FILE
  3  order by 1;

TRACE_FILENAME
-----------------------------------------
db122_cjq0_32100.trc
db122_cjq0_3602.trc
db122_cjq0_3736.trc
db122_cjq0_6044.trc
db122_cjq0_8937.trc
db122_m001_20300.trc
db122_m001_21807.trc
db122_ora_10158.trc
db122_ora_5351.trc
db122_p000_32038.trc
db122_p000_3526.trc
db122_p000_3703.trc
db122_p000_4101.trc
db122_p000_5002.trc
db122_p000_6036.trc
db122_p000_8848.trc
...
...

And once I’ve identified a trace file that I’m interested in, I can grab its contents just as easily.



SQL> desc V$DIAG_TRACE_FILE_CONTENTS
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ADR_HOME                               VARCHAR2(444)
 TRACE_FILENAME                         VARCHAR2(68)
 RECORD_LEVEL                           NUMBER
 PARENT_LEVEL                           NUMBER
 RECORD_TYPE                            NUMBER
 TIMESTAMP                              TIMESTAMP(3) WITH TIME ZONE
 PAYLOAD                                VARCHAR2(4000)
 SECTION_ID                             NUMBER
 SECTION_NAME                           VARCHAR2(64)
 COMPONENT_NAME                         VARCHAR2(64)
 OPERATION_NAME                         VARCHAR2(64)
 FILE_NAME                              VARCHAR2(64)
 FUNCTION_NAME                          VARCHAR2(64)
 LINE_NUMBER                            NUMBER
 THREAD_ID                              VARCHAR2(64)
 SESSION_ID                             NUMBER
 SERIAL#                                NUMBER
 CON_UID                                NUMBER
 CONTAINER_NAME                         VARCHAR2(30)
 CON_ID                                 NUMBER




SQL> select PAYLOAD
  2  from   V$DIAG_TRACE_FILE_CONTENTS
  3  where  TRACE_FILENAME = 'orcl12c_ora_4163.trc'
  4  order by LINE_NUMBER;

PAYLOAD
----------------------------------------------------------------------------------------------------------------------------------
Trace file /u01/app/oracle/diag/rdbms/orcl12c/orcl12c/trace/orcl12c_ora_4163.trc
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Build label:    RDBMS_12.2.0.1.0_LINUX.X64_161003
ORACLE_HOME:    /u01/app/oracle/product/12.2/db_1
System name:    Linux
Node name:      vbgeneric
Release:        3.8.13-118.14.1.el7uek.x86_64
Version:        #2 SMP Mon Oct 31 17:32:03 PDT 2016
Machine:        x86_64
Instance name: orcl12c
Redo thread mounted by this instance: 1
Oracle process number: 49
Unix process pid: 4163, image: oracle@vbgeneric


*** 2017-01-22T23:53:13.661980-05:00 (ORCL(3))

*** SESSION ID:(48.50034) 2017-01-22T23:53:13.662038-05:00
*** CLIENT ID:() 2017-01-22T23:53:13.662050-05:00
*** SERVICE NAME:(orcl) 2017-01-22T23:53:13.662059-05:00
*** MODULE NAME:(SQL*Plus) 2017-01-22T23:53:13.662069-05:00
*** ACTION NAME:() 2017-01-22T23:53:13.662079-05:00
*** CLIENT DRIVER:(SQL*PLUS) 2017-01-22T23:53:13.662087-05:00
*** CONTAINER ID:(3) 2017-01-22T23:53:13.662097-05:00

WAIT #139958764906240: nam='PGA memory operation' ela= 102 p1=0 p2=0 p3=0 obj#=1376 tim=656737684
WAIT #139958764906240: nam='SQL*Net message to client' ela= 38 driver id=1413697536 #bytes=1 p3=0 obj#=1376 tim=656738633
WAIT #139958764906240: nam='SQL*Net message from client' ela= 2608 driver id=1413697536 #bytes=1 p3=0 obj#=1376 tim=656741328
CLOSE #139958764906240:c=0,e=110,dep=0,type=1,tim=656741672
=====================
PARSING IN CURSOR #139958764708168 len=36 dep=0 uid=117 oct=47 lid=117 tim=656745298 hv=4128301241 ad='7429cdd8' sqlid='5t10uu7v11s5t'

BEGIN DBMS_OUTPUT.ENABLE(NULL); END;
END OF STMT
PARSE #139958764708168:c=3000,e=3496,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=656745290
EXEC #139958764708168:c=0,e=147,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=0,tim=656745624
WAIT #139958764708168: nam='SQL*Net message to client' ela= 10 driver id=1413697536 #bytes=1 p3=0 obj#=1376 tim=656745713
WAIT #139958764708168: nam='SQL*Net message from client' ela= 5605 driver id=1413697536 #bytes=1 p3=0 obj#=1376 tim=656751369
CLOSE #139958764708168:c=0,e=64,dep=0,type=0,tim=656751853
=====================
PARSING IN CURSOR #139958859515432 len=332 dep=1 uid=0 oct=3 lid=0 tim=656752148 hv=2698389488 ad='614a4b08' sqlid='acmvv4fhdc9zh'


select obj#,type#,ctime,mtime,stime, status, dataobj#, flags, oid$, spare1, spare2, spare3, signature, spare7, spare8, spare9, nvl
(dflcollid, 16382), creappid, creverid, modappid, modverid, crepatchid, modpatchid from obj$ where owner#=:1 and name=:2 and names
pace=:3 and remoteowner is null and linkname is null and subname is null

END OF STMT
EXEC #139958859515432:c=1000,e=58,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=813480514,tim=656752148
FETCH #139958859515432:c=0,e=32,p=0,cr=3,cu=0,mis=0,r=0,dep=1,og=4,plh=813480514,tim=656752256
CLOSE #139958859515432:c=0,e=2,dep=1,type=3,tim=656752308
EXEC #139958859515432:c=0,e=38,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=813480514,tim=656752540
FETCH #139958859515432:c=0,e=36,p=0,cr=4,cu=0,mis=0,r=1,dep=1,og=4,plh=813480514,tim=656752629
CLOSE #139958859515432:c=0,e=7,dep=1,type=3,tim=656752675
=====================
PARSING IN CURSOR #139958765170592 len=868 dep=3 uid=0 oct=3 lid=0 tim=656755217 hv=3633507567 ad='60fdfec8' sqlid='121ffmrc95v7g'


select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags,i.property,i.pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i.di
stkey,i.lblkkey,i.dblkkey,i.clufac,i.cols,i.analyzetime,i.samplesize,i.dataobj#,nvl(i.degree,1),nvl(i.instances,1),i.rowcnt,mod(i.
pctthres$,256),i.indmethod#,i.trunccnt,nvl(c.unicols,0),nvl(c.deferrable#+c.valid#,0),nvl(i.spare1,i.intcols),i.spare4,i.spare2,i.
spare6,decode(i.pctthres$,null,null,mod(trunc(i.pctthres$/256),256)),nvl(i.evaledition#,1),nvl(i.unusablebefore#,0),nvl(i.unusable
beginning#,0), ist.cachedblk,ist.cachehit,ist.logicalread from ind$ i, ind_stats$ ist, (select enabled, min(intcols) unicols,min(t
o_number(bitand(defer,1))) deferrable#,min(to_number(bitand(defer,4))) valid# from cdef$ where obj#=:1 and enabled > 1 group by en
abled) c where i.obj#=c.enabled(+) and i.obj# = ist.obj#(+) and i.bo#=:1 order by i.obj#

END OF STMT
EXEC #139958765170592:c=0,e=137,p=0,cr=0,cu=0,mis=0,r=0,dep=3,og=4,plh=2542797530,tim=656755214
FETCH #139958765170592:c=1000,e=235,p=0,cr=3,cu=0,mis=0,r=0,dep=3,og=4,plh=2542797530,tim=656755601
CLOSE #139958765170592:c=0,e=2,dep=3,type=3,tim=656755692
=====================
PARSING IN CURSOR #139958765274984 len=552 dep=3 uid=0 oct=3 lid=0 tim=656755788 hv=1798149220 ad='6148cc98' sqlid='9t4vxdppkv534'
...
...
...

Easy as that!
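One closing aside: if it’s your own session’s trace file you’re after, you don’t even need to trawl through V$DIAG_TRACE_FILE to find its name. V$DIAG_INFO (which has been around since 11g) will tell you directly, and you can then feed that file name into V$DIAG_TRACE_FILE_CONTENTS as above.


-- the full path of the trace file for the current session
select value
from   v$diag_info
where  name = 'Default Trace File';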