How do I get my trace files on a db-as-a-service

Yeah, sure it would be cool to crank up some big, powerful VMs in the cloud and let rip, but the reality is – if you’re starting out on a cloud exploration, you probably want to (initially at least) just dip your toes in the water and start with something small.  For example, if I wanted to play with 12c Release 2, I could just sign up for an Exadata Express service and explore the new features without breaking the bank.

But whatever the need, when you access a database as a service rather than a server, there’s often that fear of “handing over the reins” – that is, the worry that you won’t be able to do the things you want to do, especially when it comes to OS-level access.  And for a developer or DBA, perhaps the thing most likely to raise alarm bells is: “How will I access my trace files?”

Well, in 12c Release 2, there are two nifty new views to help out here.  You can access trace file information directly from the database.  Here’s a simple example:

SQL> desc V$DIAG_TRACE_FILE
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ADR_HOME                               VARCHAR2(444)
 TRACE_FILENAME                         VARCHAR2(68)
 CHANGE_TIME                            TIMESTAMP(3) WITH TIME ZONE
 MODIFY_TIME                            TIMESTAMP(3) WITH TIME ZONE
 CON_ID                                 NUMBER

SQL> select *
  2  from   V$DIAG_TRACE_FILE
  3  order by 1;


And once I’ve identified a trace file that I’m interested in, I can grab its contents just as easily.

SQL> desc V$DIAG_TRACE_FILE_CONTENTS
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ADR_HOME                               VARCHAR2(444)
 TRACE_FILENAME                         VARCHAR2(68)
 RECORD_LEVEL                           NUMBER
 PARENT_LEVEL                           NUMBER
 RECORD_TYPE                            NUMBER
 TIMESTAMP                              TIMESTAMP(3) WITH TIME ZONE
 PAYLOAD                                VARCHAR2(4000)
 SECTION_ID                             NUMBER
 SECTION_NAME                           VARCHAR2(64)
 COMPONENT_NAME                         VARCHAR2(64)
 OPERATION_NAME                         VARCHAR2(64)
 FILE_NAME                              VARCHAR2(64)
 FUNCTION_NAME                          VARCHAR2(64)
 LINE_NUMBER                            NUMBER
 THREAD_ID                              VARCHAR2(64)
 SESSION_ID                             NUMBER
 SERIAL#                                NUMBER
 CON_UID                                NUMBER
 CONTAINER_NAME                         VARCHAR2(30)
 CON_ID                                 NUMBER

SQL> select PAYLOAD
  2  from   V$DIAG_TRACE_FILE_CONTENTS
  3  where  TRACE_FILENAME = 'orcl12c_ora_4163.trc'
  4  order by LINE_NUMBER;

Trace file /u01/app/oracle/diag/rdbms/orcl12c/orcl12c/trace/orcl12c_ora_4163.trc
Oracle Database 12c Enterprise Edition Release - 64bit Production
Build label:    RDBMS_12.
ORACLE_HOME:    /u01/app/oracle/product/12.2/db_1
System name:    Linux
Node name:      vbgeneric
Release:        3.8.13-118.14.1.el7uek.x86_64
Version:        #2 SMP Mon Oct 31 17:32:03 PDT 2016
Machine:        x86_64
Instance name: orcl12c
Redo thread mounted by this instance: 1
Oracle process number: 49
Unix process pid: 4163, image: oracle@vbgeneric

*** 2017-01-22T23:53:13.661980-05:00 (ORCL(3))

*** SESSION ID:(48.50034) 2017-01-22T23:53:13.662038-05:00
*** CLIENT ID:() 2017-01-22T23:53:13.662050-05:00
*** SERVICE NAME:(orcl) 2017-01-22T23:53:13.662059-05:00
*** MODULE NAME:(SQL*Plus) 2017-01-22T23:53:13.662069-05:00
*** ACTION NAME:() 2017-01-22T23:53:13.662079-05:00
*** CLIENT DRIVER:(SQL*PLUS) 2017-01-22T23:53:13.662087-05:00
*** CONTAINER ID:(3) 2017-01-22T23:53:13.662097-05:00

WAIT #139958764906240: nam='PGA memory operation' ela= 102 p1=0 p2=0 p3=0 obj#=1376 tim=656737684
WAIT #139958764906240: nam='SQL*Net message to client' ela= 38 driver id=1413697536 #bytes=1 p3=0 obj#=1376 tim=656738633
WAIT #139958764906240: nam='SQL*Net message from client' ela= 2608 driver id=1413697536 #bytes=1 p3=0 obj#=1376 tim=656741328
CLOSE #139958764906240:c=0,e=110,dep=0,type=1,tim=656741672
PARSING IN CURSOR #139958764708168 len=36 dep=0 uid=117 oct=47 lid=117 tim=656745298 hv=4128301241 ad='7429cdd8' sqlid='5t10uu7v11s5t'

PARSE #139958764708168:c=3000,e=3496,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=656745290
EXEC #139958764708168:c=0,e=147,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=0,tim=656745624
WAIT #139958764708168: nam='SQL*Net message to client' ela= 10 driver id=1413697536 #bytes=1 p3=0 obj#=1376 tim=656745713
WAIT #139958764708168: nam='SQL*Net message from client' ela= 5605 driver id=1413697536 #bytes=1 p3=0 obj#=1376 tim=656751369
CLOSE #139958764708168:c=0,e=64,dep=0,type=0,tim=656751853
PARSING IN CURSOR #139958859515432 len=332 dep=1 uid=0 oct=3 lid=0 tim=656752148 hv=2698389488 ad='614a4b08' sqlid='acmvv4fhdc9zh'

select obj#,type#,ctime,mtime,stime, status, dataobj#, flags, oid$, spare1, spare2, spare3, signature, spare7, spare8, spare9, nvl
(dflcollid, 16382), creappid, creverid, modappid, modverid, crepatchid, modpatchid from obj$ where owner#=:1 and name=:2 and names
pace=:3 and remoteowner is null and linkname is null and subname is null

EXEC #139958859515432:c=1000,e=58,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=813480514,tim=656752148
FETCH #139958859515432:c=0,e=32,p=0,cr=3,cu=0,mis=0,r=0,dep=1,og=4,plh=813480514,tim=656752256
CLOSE #139958859515432:c=0,e=2,dep=1,type=3,tim=656752308
EXEC #139958859515432:c=0,e=38,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=813480514,tim=656752540
FETCH #139958859515432:c=0,e=36,p=0,cr=4,cu=0,mis=0,r=1,dep=1,og=4,plh=813480514,tim=656752629
CLOSE #139958859515432:c=0,e=7,dep=1,type=3,tim=656752675
PARSING IN CURSOR #139958765170592 len=868 dep=3 uid=0 oct=3 lid=0 tim=656755217 hv=3633507567 ad='60fdfec8' sqlid='121ffmrc95v7g'

select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags,,i.pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i.di
beginning#,0), ist.cachedblk,ist.cachehit,ist.logicalread from ind$ i, ind_stats$ ist, (select enabled, min(intcols) unicols,min(t
o_number(bitand(defer,1))) deferrable#,min(to_number(bitand(defer,4))) valid# from cdef$ where obj#=:1 and enabled > 1 group by en
abled) c where i.obj#=c.enabled(+) and i.obj# = ist.obj#(+) and order by i.obj#

EXEC #139958765170592:c=0,e=137,p=0,cr=0,cu=0,mis=0,r=0,dep=3,og=4,plh=2542797530,tim=656755214
FETCH #139958765170592:c=1000,e=235,p=0,cr=3,cu=0,mis=0,r=0,dep=3,og=4,plh=2542797530,tim=656755601
CLOSE #139958765170592:c=0,e=2,dep=3,type=3,tim=656755692
PARSING IN CURSOR #139958765274984 len=552 dep=3 uid=0 oct=3 lid=0 tim=656755788 hv=1798149220 ad='6148cc98' sqlid='9t4vxdppkv534'

Easy as that!

Haversine PL/SQL

I didn’t see a PL/SQL version of the Haversine formula out there, so here’s one for anyone that needs it.

SQL> create or replace
  2  function p2p_distance(
  3              p_latitude1 number,
  4              p_longitude1 number,
  5              p_latitude2 number,
  6              p_longitude2 number) return number deterministic is
  7    earth_radius  number := 6371;
  8    pi_approx     number := 3.1415927/180;  -- degrees-to-radians factor
  9    lat_delta     number := (p_latitude2-p_latitude1)*pi_approx;
 10    lon_delta     number := (p_longitude2-p_longitude1)*pi_approx;
 11    arc           number := sin(lat_delta/2) * sin(lat_delta/2) +
 12                                 sin(lon_delta/2) * sin(lon_delta/2) * cos(p_latitude1*pi_approx) * cos(p_latitude2*pi_approx);
 13  begin
 14    return earth_radius * 2 * atan2(sqrt(arc), sqrt(1-arc));
 15  end;
 16  /

Function created.

SQL> select p2p_distance(36.12, -86.67, 33.94, -118.4) from dual;


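A couple of notes on the implementation: the 3.1415927/180 factor converts degrees to radians, and because the function is declared deterministic, it can be used in function-based indexes.  A hypothetical usage against a small table of coordinates (the table and data here are invented purely for illustration):

```sql
-- hypothetical data purely for illustration
create table cities ( name varchar2(30), lat number, lon number );

insert into cities values ('Nashville',    36.12,  -86.67);
insert into cities values ('Los Angeles',  33.94, -118.40);

-- pairwise distances in kilometres
select c1.name, c2.name,
       round(p2p_distance(c1.lat, c1.lon, c2.lat, c2.lon)) km
from   cities c1, cities c2
where  c1.name < c2.name;
```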
Logwriter I/O

If you are on a recent version of the database, then savvy DBAs may have noticed the following message popping up occasionally in their trace files.

Warning: log write time 540ms, size 444KB

In itself, that is quite a nice little addition – an informational message letting you know that your log writer performance might be worth closer investigation.  MOS Note 601316.1 talks a little more about this message.

So let’s say you have seen this warning, and you are interested in picking up more information.  Well… you could start scanning trace files from time to time and parsing out the content, or do some analysis using Active Session History, but given that these warnings are (by default) triggered above 500ms, there’s a chance you might miss them via ASH.
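For the record, an ASH-based approach might look something like the sketch below.  Remember that ASH samples only once per second and TIME_WAITED is in microseconds, so sub-second log writer waits can easily slip through the cracks:

```sql
-- sketch: hunting for slow log writer waits in ASH
select sample_time, session_id, time_waited/1000 wait_ms
from   V$ACTIVE_SESSION_HISTORY
where  event = 'log file parallel write'
and    time_waited > 500000       -- more than 500ms, expressed in microseconds
order  by sample_time;
```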

In 12c, this has become a lot simpler, because it looks like the same data is now exposed via a view: V$LGWRIO_OUTLIER.

SQL> select function_name, io_size, file_name, io_latency
  2  from   V$LGWRIO_OUTLIER;

FUNCTION_NAME IO_SIZE FILE_NAME                        IO_LATENCY
------------- ------- ------------------------------- -----------
LGWR               16 /u03/oradata/MYDB/control01.ctl         745
LGWR               16 /u03/oradata/MYDB/control01.ctl         506
LGWR               12 /u02/oradata/MYDB/redo09.dbf            705
LGWR                1 /u02/oradata/MYDB/redo08.dbf            502
LGWR                5 /u02/oradata/MYDB/redo07.dbf            538
LGWR              211 /u02/oradata/MYDB/redo10.dbf           1783
LGWR             1024 /u02/oradata/MYDB/redo10.dbf            597
LGWR                0 /u02/oradata/MYDB/redo10.dbf            597
LGWR               10 /u02/oradata/MYDB/redo10.dbf            597
LGWR                2 /u02/oradata/MYDB/redo10.dbf            775
LGWR                0 /u02/oradata/MYDB/redo07.dbf            683
LGWR                0 /u02/oradata/MYDB/redo09.dbf            704
LGWR                1 /u02/oradata/MYDB/redo10.dbf            504
LGWR                0 /u02/oradata/MYDB/redo10.dbf            690
LGWR               16 /u03/oradata/MYDB/control01.ctl         504
LGWR               58 /u02/oradata/MYDB/redo08.dbf            705 
LGWR                1 /u02/oradata/MYDB/redo08.dbf            704

Obviously the trace files are also tagged with the timestamp of occurrence, whereas the view does not expose this information.

But let’s just say two things here – (a) where there is a will there is a way, and (b) like most V$ views, it is based on an X$ object, which might contain more information.

For other types of I/O in the database, you can also take a look at V$IO_OUTLIER.
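And since it is just a view, a quick aggregation gives you a feel for which files are suffering the most.  A sketch, assuming the IO_LATENCY column is in milliseconds, consistent with the trace file warnings:

```sql
-- sketch: which files have the most (and worst) outliers?
select file_name,
       count(*)        outlier_count,
       max(io_latency) worst_latency
from   V$LGWRIO_OUTLIER
group  by file_name
order  by worst_latency desc;
```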

Extending Flashback Data Archive in 12c

Flashback Data Archive (previously called Total Recall) has been around for a long time now.  For those unfamiliar with it (and by the way, if you are on Enterprise Edition, you should be familiar with it, because it’s a free feature), here is a very quick primer.

Create a tablespace to hold your history, and create a flashback archive using that space allocation.

SQL> create tablespace SPACE_FOR_ARCHIVE
  2  datafile 'C:\ORACLE\DB11\ARCH_SPACE.DBF' 
  3  size 100M;

Tablespace created.

  2  TABLESPACE space_for_archive

Flashback archive created.

Then add your tables to the archive.  Supporting tables are created automatically to record all of the changes to each table for as long as the archive retention specifies.


Table altered.

SQL> select table_name
  2  from   user_tables
  3  /


And then voila! We can track changes to our table over time, well beyond the typical undo_retention setting for the database.

SQL> select empno, ename, job, sal, comm,
  3     ,nvl(VERSIONS_OPERATION,'I') op
  4  from EMP
  5  versions between timestamp
  6    timestamp '2014-02-11 20:12:00' and
  7    systimestamp
  8  order by empno;

     EMPNO ENAME      JOB              SAL       COMM TS           O
---------- ---------- --------- ---------- ---------- ------------ -
      7369 SMITH      CLERK            806            08.10.51 PM  I
      7369 SMITH      SALES           8060       1000 08.12.10 PM  U
      7499 ALLEN      SALESMAN        1606  300000000 08.10.51 PM  I
      7521 WARD       SALESMAN        1256  500000000 08.10.51 PM  I
      7566 JONES      MANAGER         2981            08.10.51 PM  I
      7900 JAMES      CLERK            956            08.10.51 PM  I
      7902 FORD       ANALYST         3006            08.10.51 PM  I
      7934 MILLER     CLERK           1306            08.10.51 PM  I
      7934 MILLER     CLERK           1306            08.12.10 PM  D

But sometimes we want to know more about the history.  We want to know who did the change, what machine they were on, and so forth.  Auditors tend to be a little snippy about those kinds of things.

  • You: “I can confirm that the SALARY field was updated at 9:01am on January 25th”
  • Auditor: “And who did that change?”
  • You: “Er….um…..someone”

But it might not be practical to augment all of your tables with WHO, WHEN, WHERE etc columns to capture that information. That might involve a lot of application changes and testing.

If you’re on 12c, you might be in luck.  Flashback Data Archive can capture additional information, assuming your application is taking advantage of the myriad instrumentation facilities offered by the database.
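Of course, the archive can only capture what your application tells the database.  A sketch of the sort of instrumentation calls that give the context tracking something useful to record (the module, action and identifier values here are invented for illustration):

```sql
begin
  -- identify the application module and current action for this session
  dbms_application_info.set_module(
     module_name => 'PAYROLL',
     action_name => 'MONTHLY_RUN');
  -- a hypothetical end-user identifier, e.g. passed in from a connection pool
  dbms_session.set_identifier('jsmith');
end;
/
```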

We simply turn on context tracking, which can capture information on a per-transaction basis.

SQL> begin
  2    dbms_flashback_archive.set_context_level(
  3        level=> 'ALL');
  4  end;
  5  /

PL/SQL procedure successfully completed.

Now when we perform changes to the table, we can look at the transaction ID (XID) for that change.

SQL> update EMP
  2  set    sal = sal*10
  3  where  empno = 7499;

1 row updated.

SQL> commit;

SQL> select XID from SYS_FBA_HIST_510592;


And using that XID, we can pick up the context information about that transaction

SQL> set serverout on
SQL> begin
  2    dbms_output.put_line(
  3      dbms_flashback_archive.get_sys_context (
  4         '09000B00C7080000', 'USERENV', 'SESSION_USER'));
  5    dbms_output.put_line(
  6      dbms_flashback_archive.get_sys_context (
  7         '09000B00C7080000', 'USERENV', 'HOST'));
  8    dbms_output.put_line(
  9      dbms_flashback_archive.get_sys_context (
 10         '09000B00C7080000', 'USERENV', 'MODULE'));
 11  end;
 12  /

How much more?  Well, quite a lot actually!


SQL> desc SYS.SYS_FBA_CONTEXT_AUD
 Name                      Null?    Type
 ------------------------- -------- ------------------
 XID                                RAW(8)
 ACTION                             VARCHAR2(256)
 CLIENT_IDENTIFIER                  VARCHAR2(256)
 CLIENT_INFO                        VARCHAR2(256)
 CURRENT_SCHEMA                     VARCHAR2(256)
 CURRENT_USER                       VARCHAR2(256)
 DATABASE_ROLE                      VARCHAR2(256)
 DB_NAME                            VARCHAR2(256)
 GLOBAL_UID                         VARCHAR2(256)
 HOST                               VARCHAR2(256)
 INSTANCE_NAME                      VARCHAR2(256)
 IP_ADDRESS                         VARCHAR2(256)
 MODULE                             VARCHAR2(256)
 OS_USER                            VARCHAR2(256)
 SERVER_HOST                        VARCHAR2(256)
 SERVICE_NAME                       VARCHAR2(256)
 SESSION_USER                       VARCHAR2(256)
 SESSION_USERID                     VARCHAR2(256)
 SESSIONID                          VARCHAR2(256)
 TERMINAL                           VARCHAR2(256)
 SPARE                              VARCHAR2(256)

And since we can use our flashback query syntax to pick up all of the transaction IDs…

SQL> select empno, ename, job, sal, comm,
  2         VERSIONS_XID
  3  from EMP
  4  versions between timestamp
  5    timestamp '2014-02-11 20:12:00' and
  6    systimestamp
  7  order by empno;

     EMPNO ENAME      JOB              SAL       COMM VERSIONS_XID
---------- ---------- --------- ---------- ---------- ----------------
      7369 SMITH      CLERK            806
      7369 SMITH      SALES           8060       1000 09001C00E04A0000
      7499 ALLEN      CLERK          16060  300000000 0A000A0024080000
      7499 ALLEN      SALESMAN       16060  300000000 09000B00C7080000
      7499 ALLEN      SALESMAN        1606  300000000
      7521 WARD       SALESMAN        1256  500000000

we can also take advantage of the new dictionary view SYS.SYS_FBA_CONTEXT_AUD to get all of the context information for each transaction.

SQL> select ...
  2  from
  3  (  select
  4         empno, ename, job,
  5         sal, comm, VERSIONS_XID
  6     from EMP
  7     versions between timestamp
  8        timestamp '2014-02-11 20:12:00' and
  9        systimestamp
 10  ) e, SYS.SYS_FBA_CONTEXT_AUD a
 11  where  e.VERSIONS_XID = a.XID;

Flashback Data Archive is pretty nifty in 12c.


I was reading an article today about how 10,000+ MongoDB installations that are (or were) openly accessible on the internet have now been hijacked by ransomware, with nearly 100,000 other instances potentially vulnerable to the same issue.

Now, since I’m an Oracle blogger, you may be inclined to think this post is going to jump on the “bash MongoDB” bandwagon, but it’s not.  I am going to bash something… but it’s not MongoDB.

I’m going to steal a quote from… of all things… Jurassic Park:

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”

In this age of DevOps, Agility, reactive-this, and responsive-that, there is an understandable desire to have a goal of “deliver fast and often”.  But I’m not attacking the various methodologies and processes out there that espouse the new age of fast and furious delivery.  Because, to my knowledge, none of them ever had a goal of “fast delivery no matter what”… they had a goal of fast and flexible delivery whilst adhering to the unchanging fundamental principle of delivering high quality, secure and robust software.  It may be functionally incomplete, or have compromises in (for example) the user experience, but we never meant to compromise on the core principles.

So when I see stories about MongoDB (or any software technology) being exposed for reasons of poor security, my lament is not for the technology but for the people: the developer, administrator, manager, or anyone else in the chain of processes who skipped an appropriate assessment of security, in effect playing a game of Russian roulette with their customers’ personal details.  I don’t think there is even a defence of “ignorance” here, because the moment anything is deployed in an organisation, surely the first question that must be asked, either before the deployment or in that awkward moment afterwards when you discover something was deployed without your knowledge, is: is it secured?  And if no one can answer that question, then surely that’s the immediate death knell for that application right there?

If, hand on heart, a team can claim that security measures were in place and were then circumvented, that’s defensible – perfection in security is such a moving target nowadays.  But it seems a stretch to think that 100,000 Mongo teams out there all took a diligent crack at securing their systems before getting all excited about having their application live.

So if you’re building applications with the latest Micro-agile-internet-of-things-reactive-responsive-Java-scripted-open-source-media-rich-blahblah approach…that’s super cool.  But that’s not a Leave Pass from being responsible when it comes to security.

AskTom takes another step forward

For over 16 years, AskTom has been one of the most valuable resources available to developers and database administrators working with the Oracle Database.  With over 20,000 questions tackled and answered, along with over 120,000 follow-ups to additional queries, it remains an outstanding knowledge base of Oracle assistance.

And today, AskTom just got a whole lot better!

We’re excited to announce a new member of the AskTom team… database evangelist Maria Colgan.  Many of you will know Maria for her work with the Optimizer and In-Memory, but in fact she brings decades of expertise across the entire database technology stack to assist you with your questions.  Maria also blogs regularly, so make sure you check in there as well.  With Maria on the team, AskTom keeps getting better and better in 2017!



What you "liked" last year…

Well… when I say “liked”, what I mean is “the stuff you all clicked on a lot” last year.  Whether you liked it or not will remain one of those great mysteries.

The top 6 posts from 2016 were:

Nice to see a 12c post in there – 2017 is the year I think most people will be moving to 12c if they have not done so already.

And of course, nice to see a SQL*Plus post in there….I love it when the old simple tools still add value for people.

Thanks for supporting the blog in 2016 … I’ll have stacks more content again for you this year.