Logwriter I/O

If you are on any version of the database past 10.2.0.4, then savvy DBAs may have noticed the following message popping up occasionally in their trace files:


Warning: log write time 540ms, size 444KB

In itself, that is quite a nice little addition – an informational message letting you know that perhaps your log writer performance is worth closer investigation.  MOS Note 601316.1 talks a little more about this message.

So let’s say you have seen this warning, and you are interested in picking up more information.  Well… you could start scanning trace files from time to time and parsing out the content, or perhaps do some analysis using Active Session History.  But given that these warnings are (by default) triggered at above 500ms, and ASH samples active sessions only once per second, there’s a chance you might miss them via ASH.

In 12c, this has become a lot simpler, because it looks like the same data is now exposed via a view: V$LGWRIO_OUTLIER.


SQL> select function_name,io_size,file_name,io_latency
  2  from   V$LGWRIO_OUTLIER;

FUNCTION_NAME IO_SIZE FILE_NAME                        IO_LATENCY
------------- ------- ------------------------------- -----------
LGWR               16 /u03/oradata/MYDB/control01.ctl         745
LGWR               16 /u03/oradata/MYDB/control01.ctl         506
LGWR               12 /u02/oradata/MYDB/redo09.dbf            705
LGWR                1 /u02/oradata/MYDB/redo08.dbf            502
LGWR                5 /u02/oradata/MYDB/redo07.dbf            538
LGWR              211 /u02/oradata/MYDB/redo10.dbf           1783
LGWR             1024 /u02/oradata/MYDB/redo10.dbf            597
LGWR                0 /u02/oradata/MYDB/redo10.dbf            597
LGWR               10 /u02/oradata/MYDB/redo10.dbf            597
LGWR                2 /u02/oradata/MYDB/redo10.dbf            775
LGWR                0 /u02/oradata/MYDB/redo07.dbf            683
LGWR                0 /u02/oradata/MYDB/redo09.dbf            704
LGWR                1 /u02/oradata/MYDB/redo10.dbf            504
LGWR                0 /u02/oradata/MYDB/redo10.dbf            690
LGWR               16 /u03/oradata/MYDB/control01.ctl         504
LGWR               58 /u02/oradata/MYDB/redo08.dbf            705 
LGWR                1 /u02/oradata/MYDB/redo08.dbf            704
...

Obviously the trace files are also tagged with the timestamp of occurrence, whereas the view does not expose this information.

But let’s just say two things here – (a) where there is a will, there is a way, and (b) like most V$ views, this one is based on an X$ object, which might contain even more information.

For other types of I/O in the database, you can also take a look at V$IO_OUTLIER.
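If you want to explore that view, here is a minimal sketch along the same lines as the query above (I am assuming the same style of columns as V$LGWRIO_OUTLIER; the exact column list may differ by version, so check the documentation for yours):

SQL> select function_name, io_size, file_name, io_latency
  2  from   V$IO_OUTLIER
  3  order  by io_latency desc;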

Extending Flashback Data Archive in 12c

Flashback Data Archive (previously called Total Recall) has been around for a long time now. For those unfamiliar with it (and by the way, if you are on Enterprise Edition, you should be familiar with it, because it’s a free feature), here is a very quick primer.

Create a tablespace to hold your history, and create a flashback archive using that space allocation.



SQL> create tablespace SPACE_FOR_ARCHIVE
  2  datafile 'C:\ORACLE\DB11\ARCH_SPACE.DBF' 
  3  size 100M;

Tablespace created.



SQL> CREATE FLASHBACK ARCHIVE longterm
  2  TABLESPACE space_for_archive
  3  RETENTION 1 YEAR;

Flashback archive created.

Then add your tables to the archive. Some supporting tables are created automatically, to record all of the changes to that table for as long as the archive retention specifies.


SQL> ALTER TABLE EMP FLASHBACK ARCHIVE LONGTERM;

Table altered.

SQL> select table_name
  2  from   user_tables
  3  /

TABLE_NAME
---------------------------
SYS_FBA_HIST_71036
SYS_FBA_TCRV_71036
SYS_FBA_DDL_COLMAP_71036
EMP

And then voila! We can track changes to our table over time, well beyond the typical undo_retention setting for the database.


SQL> select empno, ename, job, sal, comm,
  2      nvl(VERSIONS_STARTTIME,LAST_MOD) TS
  3     ,nvl(VERSIONS_OPERATION,'I') op
  4  from EMP
  5  versions between timestamp
  6    timestamp '2014-02-11 20:12:00' and
  7    systimestamp
  8  order by empno;


     EMPNO ENAME      JOB              SAL       COMM TS           O
---------- ---------- --------- ---------- ---------- ------------ -
      7369 SMITH      CLERK            806            08.10.51 PM  I
      7369 SMITH      SALES           8060       1000 08.12.10 PM  U
      7499 ALLEN      SALESMAN        1606  300000000 08.10.51 PM  I
      7521 WARD       SALESMAN        1256  500000000 08.10.51 PM  I
      7566 JONES      MANAGER         2981            08.10.51 PM  I
      ...
      7900 JAMES      CLERK            956            08.10.51 PM  I
      7902 FORD       ANALYST         3006            08.10.51 PM  I
      7934 MILLER     CLERK           1306            08.10.51 PM  I
      7934 MILLER     CLERK           1306            08.12.10 PM  D

But sometimes we want to know more about the history. We want to know who made the change, what machine they were on, and so forth. Auditors tend to be a little snippy about those kinds of things:

  • You: “I can confirm that the SALARY field was updated at 9:01am on January 25th”
  • Auditor: “And who did that change?”
  • You: “Er….um…..someone”

But it might not be practical to augment all of your tables with WHO, WHEN, WHERE etc columns to capture that information. That might involve a lot of application changes and testing.

If you’re on 12c, you might be in luck. Flashback Data Archive can capture additional information, assuming your application is taking advantage of the myriad instrumentation facilities offered by the database.
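If you have not seen those facilities before, here is a minimal sketch of what application instrumentation typically looks like, using the standard DBMS_APPLICATION_INFO and DBMS_SESSION calls (the module, action and identifier values are just made-up examples):

SQL> begin
  2    -- identify the application code path for this session
  3    dbms_application_info.set_module(
  4        module_name => 'PAYROLL',
  5        action_name => 'MONTHLY_REVIEW');
  6    -- identify the end user, e.g. behind a connection pool
  7    dbms_session.set_identifier('jsmith');
  8  end;
  9  /

PL/SQL procedure successfully completed.

Values set this way flow through to SYS_CONTEXT, which is exactly what Flashback Data Archive captures below.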

We simply turn on context tracking, which can capture information on a per-transaction basis:


SQL> begin
  2    dbms_flashback_archive.set_context_level(
  3        level=> 'ALL');
  4  end;
  5  /

PL/SQL procedure successfully completed.

Now when we perform changes to the table, we can look at the transaction ID (XID) for that change.


SQL> update EMP
  2  set    sal = sal*10
  3  where  empno = 7499;

1 row updated.

SQL> commit;


SQL> select XID from SYS_FBA_HIST_510592;

XID
----------------
09000B00C7080000

And using that XID, we can pick up the context information about that transaction


SQL> set serverout on
SQL> begin
  2    dbms_output.put_line(
  3      dbms_flashback_archive.get_sys_context (
  4         '09000B00C7080000', 'USERENV', 'SESSION_USER'));
  5
  6    dbms_output.put_line(
  7      dbms_flashback_archive.get_sys_context (
  8         '09000B00C7080000', 'USERENV', 'HOST'));
  9
 10    dbms_output.put_line(
 11      dbms_flashback_archive.get_sys_context (
 12         '09000B00C7080000', 'USERENV', 'MODULE'));
 13  end;
 14  /
SCOTT
WORKGROUP\XPS
SQL*Plus

How much more? Well, quite a lot actually!


SQL> desc SYS.SYS_FBA_CONTEXT_AUD

 Name                      Null?    Type
 ------------------------- -------- ------------------
 XID                                RAW(8)
 ACTION                             VARCHAR2(256)
 AUTHENTICATED_IDENTITY             VARCHAR2(256)
 CLIENT_IDENTIFIER                  VARCHAR2(256)
 CLIENT_INFO                        VARCHAR2(256)
 CURRENT_EDITION_NAME               VARCHAR2(256)
 CURRENT_SCHEMA                     VARCHAR2(256)
 CURRENT_USER                       VARCHAR2(256)
 DATABASE_ROLE                      VARCHAR2(256)
 DB_NAME                            VARCHAR2(256)
 GLOBAL_UID                         VARCHAR2(256)
 HOST                               VARCHAR2(256)
 IDENTIFICATION_TYPE                VARCHAR2(256)
 INSTANCE_NAME                      VARCHAR2(256)
 IP_ADDRESS                         VARCHAR2(256)
 MODULE                             VARCHAR2(256)
 OS_USER                            VARCHAR2(256)
 SERVER_HOST                        VARCHAR2(256)
 SERVICE_NAME                       VARCHAR2(256)
 SESSION_EDITION_NAME               VARCHAR2(256)
 SESSION_USER                       VARCHAR2(256)
 SESSION_USERID                     VARCHAR2(256)
 SESSIONID                          VARCHAR2(256)
 TERMINAL                           VARCHAR2(256)
 SPARE                              VARCHAR2(256)

And since we can use our flashback query syntax to pick up all of the transaction IDs…


SQL> select empno, ename, job, sal, comm,
  2      VERSIONS_XID
  3  from EMP
  4  versions between timestamp
  5    timestamp '2014-02-11 20:12:00' and
  6    systimestamp
  7  order by empno;

     EMPNO ENAME      JOB              SAL       COMM VERSIONS_XID
---------- ---------- --------- ---------- ---------- ----------------
      7369 SMITH      CLERK            806
      7369 SMITH      SALES           8060       1000 09001C00E04A0000
      7499 ALLEN      CLERK          16060  300000000 0A000A0024080000
      7499 ALLEN      SALESMAN       16060  300000000 09000B00C7080000
      7499 ALLEN      SALESMAN        1606  300000000
      7521 WARD       SALESMAN        1256  500000000

we can also take advantage of that new dictionary view SYS.SYS_FBA_CONTEXT_AUD to get all the context information for each transaction.


SQL> select ...
  2  from
  3  (  select
  4         empno, ename, job,
  5         sal, comm, VERSIONS_XID
  6     from EMP
  7     versions between timestamp
  8        timestamp '2014-02-11 20:12:00' and
  9        systimestamp
 10  ) e, SYS.SYS_FBA_CONTEXT_AUD  a
 11  where e.VERSIONS_XID = a.XID;

Flashback Data Archive is pretty nifty in 12c.

https://docs.oracle.com/database/121/ADFNS/adfns_flashback.htm#ADFNS01011

When local partitions….aren’t

Let’s say I’ve got a partitioned table, and because New Year’s Eve is coming around, I certainly don’t want to be called out at 12:01am because I forgot to add the required partition for the upcoming year.

Since 11g, I can sleep easy at night by using the INTERVAL partition scheme. Here’s my table



SQL> create table t ( x date, y int )
  2  partition by range ( x )
  3  interval ( numtoyminterval(1,'MONTH'))
  4  (
  5    partition p201412 values less than ( date '2015-01-01' )
  6  );

Table created.

SQL> insert into t
  2  select date '2015-01-01'+rownum, rownum
  3  from dual
  4  connect by level <= 330;

330 rows created.

SQL> commit;

Commit complete.

SQL> select partition_name from user_tab_partitions where table_name = 'T';

PARTITION_NAME
----------------------------------------------------------------------------
P201412
SYS_P20234
SYS_P20235
SYS_P20236
SYS_P20237
SYS_P20238
SYS_P20239
SYS_P20240
SYS_P20241
SYS_P20242
SYS_P20243
SYS_P20244

12 rows selected.

The problem is… man, do I hate those automatic names. I suppose I can fix them up later, but in the meantime I’ll create my local index on the table, and I’ll get the names right whilst I do.



SQL>
SQL> create index IX on T ( y  )
  2  local
  3  (
  4      partition P201412,
  5      partition P201501,
  6      partition P201502,
  7      partition P201503,
  8      partition P201504,
  9      partition P201505,
 10      partition P201506,
 11      partition P201507,
 12      partition P201508,
 13      partition P201509,
 14      partition P201510,
 15      partition P201511
 16  )
 17  ;

Index created.

So far so good… I created 12 partitions in my table, and I explicitly listed 12 partitions in my index. But you need to be careful with such an approach, because if you’ve got that script in (say) your source control system, then even though you’ve specified a LOCAL index, you have also (perhaps unknowingly) set a limit of 12 partitions on the index should you ever re-run that script. So if I were to drop that index and recreate it with the unchanged script (or, for example, you’ve done a Data Pump extraction of the DDL), then you might get yourself into a jam if the table data has changed.



SQL>
SQL>
SQL> drop index ix;

Index dropped.

SQL>
SQL> insert into t
  2  values (sysdate,2000);

1 row created.

SQL>
SQL>
SQL> create index IX on T ( y  )
  2  local
  3  (
  4      partition P201412,
  5      partition P201501,
  6      partition P201502,
  7      partition P201503,
  8      partition P201504,
  9      partition P201505,
 10      partition P201506,
 11      partition P201507,
 12      partition P201508,
 13      partition P201509,
 14      partition P201510,
 15      partition P201511
 16  )
 17  ;
create index IX on T ( y  )
                   *
ERROR at line 1:
ORA-14024: number of partitions of LOCAL index must equal that of the underlying table

Now my same DDL does not work, because there are 13 table partitions and I only listed 12 index partitions. The easy workaround is not to list the partitions at all.


SQL>
SQL> create index IX on T ( y  )
  2  local ;

Index created.

SQL>
SQL>

and perhaps have a renaming routine that can be applied after the fact, as sketched below.
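Such a routine might look something like the following sketch. It assumes a date-ranged, month-interval table like T above, and relies on the documented shape of the HIGH_VALUE column (a LONG containing a TO_DATE expression) to derive a P<yyyymm> name for each system-generated partition:

SQL> declare
  2    l_high varchar2(4000);
  3    l_last date;
  4  begin
  5    for p in ( select partition_name
  6               from   user_tab_partitions
  7               where  table_name = 'T'
  8               and    partition_name like 'SYS\_P%' escape '\' )
  9    loop
 10      -- HIGH_VALUE is of the form TO_DATE(' 2015-02-01 00:00:00', ...)
 11      select high_value into l_high
 12      from   user_tab_partitions
 13      where  table_name = 'T'
 14      and    partition_name = p.partition_name;
 15      -- the bound is exclusive, so step back a day to get the
 16      -- month this partition actually covers
 17      l_last := to_date(substr(l_high,11,10),'YYYY-MM-DD') - 1;
 18      execute immediate 'alter table T rename partition '||
 19        p.partition_name||' to P'||to_char(l_last,'YYYYMM');
 20    end loop;
 21  end;
 22  /

PL/SQL procedure successfully completed.

A similar loop using ALTER INDEX … RENAME PARTITION can bring the index partition names into line, since renaming a table partition does not rename the corresponding local index partition.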

A 12.2 treat for the festive season

We’re being asked to store more and more data, yet keep backup windows, query performance and the like unchanged, no matter how much we store. As a result, more and more database shops need to partition their data. The problem is that partitioning a table is a significant restructure of the data, which incurs a large outage, plus all of the accompanying planning and coordination.

Unless you’re on 12.2.

Here’s a demo where we can take an existing table and

  • convert it to a range-partitioned table, using intervals so we don’t need to worry about ongoing partition creation maintenance, and
  • take one of the existing indexes, and convert it to a local index to mirror the table partitioning scheme, and
  • take the other existing index, and globally partition it.

Oh… and of course, we will do the whole thing online, without service disruption.

Oh… and of course, we will do it with a single SQL command.

Ab…so…lute…ly awesome!



SQL*Plus: Release 12.1.0.2.0 Production on Thu Dec 22 09:53:37 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Mon Dec 19 2016 13:38:54 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.0.3 - 64bit Production


SQL> create table T as
  2  select d.*
  3  from dba_Objects d,
  4       ( select 1 from dual
  5         connect by level <= 20 )
  6  where d.object_id is not null;

Table created.

SQL> create index IX on t ( object_id );

Index created.

SQL> create index IX2 on t ( created, object_name );

Index created.

SQL> alter table T modify
  2  partition by range (object_id) interval (10000)
  3  (
  4    partition p1 values less than (20000)
  5  ) online
  6  update indexes
  7  ( ix  local,
  8    ix2 global partition by range (created)
  9    (
 10     partition ix2_p1 values less than (date '2016-08-01'),
 11     partition ix2_p2 values less than (maxvalue)
 12   )
 13  );

Table altered.

SQL> select partition_name
  2  from   user_tab_partitions
  3  where  table_name = 'T';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P1
SYS_P1145
SYS_P1146
SYS_P1147
SYS_P1148
SYS_P1149
SYS_P1150

7 rows selected.

SQL> select index_name, partition_name
  2  from   user_ind_partitions
  3  where  index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME
------------------------------ ------------------------------
IX                             P1
IX                             SYS_P1145
IX                             SYS_P1146
IX                             SYS_P1147
IX                             SYS_P1148
IX                             SYS_P1149
IX                             SYS_P1150
IX2                            IX2_P1
IX2                            IX2_P2

9 rows selected.

SQL>
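One thing worth verifying after a conversion like this is that the UPDATE INDEXES clause left everything usable. A quick sanity check (we would expect no rows back):

SQL> select index_name, partition_name, status
  2  from   user_ind_partitions
  3  where  index_name like 'IX%'
  4  and    status <> 'USABLE';

no rows selected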

max_enabled_roles – nice touch

Some people get very excited about roles, and quickly the number of roles proliferates to huge numbers… until this happens:

ORA-28031: maximum of 148 enabled roles exceeded

But in helping someone out on AskTom, I just found a nice touch in 11.2.0.4.  I had granted my account 200 roles, well above the 148 cap on the max_enabled_roles parameter, fully expecting to get the standard error when attempting to connect.  But I didn’t – all was fine.  You can see what is going on via the following demo.


SQL> begin
  2  for i in 1 .. 200 loop
  3    execute immediate 'create role rr'||i;
  4    execute immediate 'grant rr'||i||' to scott';
  5  end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> conn scott/tiger
Connected.

SQL> select count(*) from session_roles;

  COUNT(*)
----------
       148

So even though I granted 200 roles, when I queried SESSION_ROLES I did not see all 200 of them, only 148.
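The grants themselves are all still in place; only the number of enabled roles is capped. We can confirm that in the dictionary (a quick check, with the RR filter picking out just the roles created for this demo):

SQL> select count(*)
  2  from   dba_role_privs
  3  where  grantee = 'SCOTT'
  4  and    granted_role like 'RR%';

  COUNT(*)
----------
       200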

My alert.log shows this

Wed Nov 30 10:32:35 2016
Maximum of 148 enabled roles exceeded for user SCOTT. Not loading all the roles.

Some people might prefer an explicit error, but hopefully any good alert log monitoring would quickly pick this up.  And since there is no workaround if you exceed 148 (except to reduce the number of roles granted), I think avoiding the error is the better strategy.

Attribute clustering (part 3)

So in part 1 and part 2, we looked at creating tables with clustered data.  If you’re ready to climb aboard the attribute clustering train heading toward Fastville, you might want to take an existing table and cluster it.  In part 2 we saw how we had to be extra careful with syntax.  The same rule applies when altering a table to cluster it.  Let’s start with our SOURCE_DATA table, which was not clustered.



SQL> create table source_data  as
  2      select d.*
  3      from dba_objects d
  4      where object_id is not null
  5      order by dbms_random.random;

Table created.

SQL>
SQL> select object_id, owner from source_data where rownum < 50;

 OBJECT_ID OWNER
---------- ------------------------------
     37627 PUBLIC
     79081 ORDSYS
     26953 SYS
     74744 PUBLIC
     47975 PUBLIC
     94003 APEX_050000
     59014 SYS
     66409 PUBLIC
     36977 PUBLIC
     74886 PUBLIC
       565 SYS
       ...
       ...

As you see, the data has no particular ordering.  Let’s now run an ALTER command to cluster this table



SQL> alter table source_data clustering by linear order(object_id);

Table altered.

SQL> select object_id, owner from source_data where rownum < 50;

 OBJECT_ID OWNER
---------- ------------------------------
     37627 PUBLIC
     79081 ORDSYS
     26953 SYS
     74744 PUBLIC
     47975 PUBLIC
     94003 APEX_050000
     59014 SYS
     66409 PUBLIC
     36977 PUBLIC
     74886 PUBLIC
       565 SYS
     35456 SYS
     74656 SYS
     ...
     ...

As you can see, nothing seems to have happened. Now that is to be expected, because we have only set an attribute on the table, similar to (say) setting the compression attribute.

Let us now MOVE the data to “reload” the table data.



SQL> alter table source_data move;

Table altered.

SQL>
SQL> select object_id, owner from source_data where rownum < 50;

 OBJECT_ID OWNER
---------- ------------------------------
     37627 PUBLIC
     79081 ORDSYS
     26953 SYS
     74744 PUBLIC
     47975 PUBLIC
     94003 APEX_050000
     59014 SYS
     66409 PUBLIC
     36977 PUBLIC
     74886 PUBLIC
       565 SYS
     35456 SYS
     74656 SYS
     ...
     ...

And… Bzzzzt! Still nothing has happened. The issue here is not the MOVE command. We made a tiny error in our initial ALTER statement. Let’s try it again:



SQL> alter table source_data ADD clustering by linear order(object_id);

Table altered.

SQL>
SQL> alter table source_data move;

Table altered.

SQL>
SQL> select object_id, owner from source_data where rownum < 50;

 OBJECT_ID OWNER
---------- ------------------------------
         2 SYS
         3 SYS
         4 SYS
         5 SYS
         6 SYS
         7 SYS
         8 SYS
         9 SYS
        10 SYS
        11 SYS
        12 SYS
        13 SYS
        14 SYS
        15 SYS
        16 SYS
     ...
     ...

As you can see, the keyword ADD makes all the difference.  As I said in the previous posts, always validate that your clustering is happening as you expect.
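A handy way to validate the definition side of things is the clustering dictionary views. A quick sketch (I’m using USER_CLUSTERING_TABLES here; the exact set of columns it exposes may vary by version, hence the select *):

SQL> select *
  2  from   user_clustering_tables
  3  where  table_name = 'SOURCE_DATA';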

Attribute clustering (part 2)

In the previous post, I demonstrated attribute clustering by

  • creating a table of source data which contained data in randomised order, via

SQL> create table source_data  as
  2      select d.*
  3      from dba_objects d
  4      where object_id is not null
  5      order by dbms_random.random;



  • and then loading it into a table with attribute clustering defined, to cluster the data into the desired order.

The obvious question is then “Why not just cluster the data by combining the two operations?”, namely:


SQL> create table t1
  2  clustering by linear order(object_id)
  3  as select d.* from dba_objects d
  4  order by dbms_random.random;

There was a reason I did my first demo in the way I did.  Let’s look at the execution plan for that CREATE statement.  Now, bear with me: for simplicity’s sake, rather than show the execution plan for DBA_OBJECTS access (which is a huge, complex view), I’ll use SOURCE_DATA, both with and without ORDER BY clauses, as a representative.


SQL> create table t1
  2  clustering by linear order(object_id)
  3  as select * from source_data;

------------------------------------------------------------------------
| Id  | Operation                        | Name        | Rows  | Bytes |
------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |             | 99386 |    10M|
|   1 |  LOAD AS SELECT                  | T1          |       |       |
|   2 |   OPTIMIZER STATISTICS GATHERING |             | 99386 |    10M|
|   3 |    SORT ORDER BY                 |             | 99386 |    10M|
|   4 |     TABLE ACCESS FULL            | SOURCE_DATA | 99386 |    10M|
------------------------------------------------------------------------

SQL> create table t1
  2  clustering by linear order(object_id)
  3  as select * from source_data
  4  order by dbms_random.random;
  
------------------------------------------------------------------------
| Id  | Operation                        | Name        | Rows  | Bytes |
------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |             | 99386 |    10M|
|   1 |  LOAD AS SELECT                  | T1          |       |       |
|   2 |   OPTIMIZER STATISTICS GATHERING |             | 99386 |    10M|
|   3 |    SORT ORDER BY                 |             | 99386 |    10M|
|   4 |     TABLE ACCESS FULL            | SOURCE_DATA | 99386 |    10M|
------------------------------------------------------------------------

Notice that the execution plans appear the same.  We can see a SORT ORDER BY in the first CTAS, even though there is no ORDER BY in the SELECT part.  We can hence reasonably assume this is there to achieve the desired clustering.  But the second CTAS raises an interesting question.  If the data must be sorted for the clustering and there was also an ORDER BY clause in the query, which one of those, if any, does the SORT ORDER BY in the execution plan relate to?  Well, let’s run our CTAS command and find out.


SQL> create table t1
  2  clustering by linear order(object_id)
  3  as select * from source_data
  4  order by dbms_random.random;

Table created.

SQL>
SQL> select object_id, owner from t1 where rownum < 50;

 OBJECT_ID OWNER
---------- ------------------------------
     72256 SYS
     29101 PUBLIC
     26037 SYS
     47229 PUBLIC
     54002 SYS
     65203 PUBLIC
     49264 SYS
     71998 SYS
     91256 APEX_040200
     85311 PUBLIC
     72948 SYS
     68626 SYS
       337 SYS
     54564 SYS
     11256 SYS
     40694 SYS
     44193 PUBLIC
     31856 SYS
     26153 SYS
     25604 PUBLIC
  ...
  ...

Notice that the data is not clustered as per our expectation.  It looks like the final ORDER BY took precedence over our attempt to cluster the data.

So you need to be careful here when using attribute clustering. The database engine is quite particular about the syntax and the ordering of operations.

In the example above, simply in-lining the ORDER BY query resolves the issue:


SQL> create table t1
  2  clustering by linear order(object_id)
  3  as
  4  select *
  5  from
  6  ( select * from source_data order by dbms_random.random );

Table created.

SQL>
SQL> select object_id, owner from t1 where rownum < 50;

 OBJECT_ID OWNER
---------- ------------------------------
         2 SYS
         3 SYS
         4 SYS
         5 SYS
         6 SYS
         7 SYS
         8 SYS
         9 SYS
        10 SYS
        11 SYS
        12 SYS
        13 SYS
        14 SYS
        15 SYS
        16 SYS
        17 SYS
        ...
        ...

And just in case you are wondering, there was no change to the execution plan for the above statement. That is, you don’t see any view resolution or multiple SORT ORDER BY lines. It looks as before:


------------------------------------------------------------------------
| Id  | Operation                        | Name        | Rows  | Bytes |
------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |             | 99386 |    10M|
|   1 |  LOAD AS SELECT                  | T1          |       |       |
|   2 |   OPTIMIZER STATISTICS GATHERING |             | 99386 |    10M|
|   3 |    SORT ORDER BY                 |             | 99386 |    10M|
|   4 |     TABLE ACCESS FULL            | SOURCE_DATA | 99386 |    10M|
------------------------------------------------------------------------

But the one level of in-lining was enough to fix the issue.  So always validate that you are getting the clustering you are expecting.
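One objective way to do that validation is the optimizer’s clustering factor. For a table well clustered on OBJECT_ID, an index on that column should show a clustering factor close to the number of table blocks, whereas on randomly ordered data it tends toward the number of rows. A quick sketch (T1_IX is just a throwaway index created for the check):

SQL> create index T1_IX on T1 ( object_id );

Index created.

SQL> select i.clustering_factor, t.blocks, t.num_rows
  2  from   user_indexes i, user_tables t
  3  where  i.index_name = 'T1_IX'
  4  and    t.table_name = 'T1';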