Public / private cursors

As we all know (hopefully), we should always deploy named program units (procedures/functions) within packages. It's a great method of encapsulating logic.

However, not many are aware that it's not just procedures and functions that can have a public (package spec) and private (package body) component.

Cursors can also be defined in the same way as the following example shows:




CREATE PACKAGE my_pkg AS
   CURSOR c_recent_hires RETURN emp%ROWTYPE;
   ...
END my_pkg;
/

CREATE PACKAGE BODY my_pkg AS
   CURSOR c_recent_hires RETURN emp%ROWTYPE IS
      SELECT * FROM emp 
      WHERE hiredate > sysdate-30;
   ...
END my_pkg;
/
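
Code outside the package can then open and fetch from the public cursor without ever seeing the underlying query. A minimal sketch (assuming the emp table used above, with its standard ENAME column):

DECLARE
   l_emp emp%ROWTYPE;
BEGIN
   OPEN my_pkg.c_recent_hires;
   LOOP
      FETCH my_pkg.c_recent_hires INTO l_emp;
      EXIT WHEN my_pkg.c_recent_hires%NOTFOUND;
      dbms_output.put_line(l_emp.ename);  -- assumes the standard EMP column ENAME
   END LOOP;
   CLOSE my_pkg.c_recent_hires;
END;
/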


AskTom – some thoughts on the future

The structure of an AskTom question contains four elements:

  • The original question
  • Our answer
  • A review, which can be posted by anyone
  • And then we can opt to add a single Followup to any of those reviews

 

[image: the existing AskTom question/answer/review/followup layout]

We’re thinking of changing AskTom to allow greater flexibility, namely:

  • The original question (unchanged)
  • Our answer (unchanged)
  • A review, which can be posted by anyone (unchanged)
  • We will then allow anyone to add a Followup to a review, and any number of Followups can be attached to a single review.

Let us know your thoughts via Comment on this blog.  If you prefer a private response, you can also send your thoughts to: asktom_us@oracle.com

What do you think ?

Bitwise operations

The long-existing BITAND function is now covered in the documentation. It lets you perform a bitwise AND on two numbers, and it is also available from PL/SQL.

[image: the BITAND entry in the documentation]

 

If you need other bit operations, a little Boolean math should suffice :-) Just make sure you stay within the limits of BINARY_INTEGER.


CREATE OR replace FUNCTION bitor( x IN binary_integer, y IN binary_integer ) RETURN binary_integer  AS
BEGIN
    RETURN x - bitand(x,y) + y;
END;
/

CREATE OR replace FUNCTION bitxor( x IN binary_integer, y IN binary_integer ) RETURN binary_integer  AS
BEGIN
    RETURN bitor(x,y) - bitand(x,y);
END;
/
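
For example, with 12 (binary 1100) and 10 (binary 1010):

select bitand(12,10) b_and,   -- 1100 AND 1010 = 1000 =  8
       bitor(12,10)  b_or,    -- 1100 OR  1010 = 1110 = 14
       bitxor(12,10) b_xor    -- 1100 XOR 1010 = 0110 =  6
from   dual;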



Auto-backups of PL/SQL source

I saw this on an ideas forum today

[image: the suggestion as posted on the ideas forum]

and whilst most people would take care of this with a source code control system, it's also true that people might do several compilations and tests of their PL/SQL source before officially checking it in to their source repository.

So the idea has some merit… and maybe we can roll our own without too much fuss. I make no claims that this is a ‘complete’ solution, but it should get you going :-)



SQL> set timing off
SQL> drop trigger plsql_trigger;

Trigger dropped.

SQL>
SQL> drop table plsql_params purge;

Table dropped.

SQL> drop table plsql_history cascade constraints purge;

Table dropped.

SQL> drop table plsql_history_source cascade constraints  purge;

Table dropped.

SQL> drop table plsql_log cascade constraints  purge;

Table dropped.

SQL>
SQL>
SQL> create table plsql_params
  2  as select 3 versions_kept from dual;

Table created.

SQL>
SQL> create table plsql_history (
  2     TSTAMP     TIMESTAMP
  3    ,OWNER      VARCHAR2(128)
  4    ,NAME       VARCHAR2(128)
  5    ,TYPE       VARCHAR2(12)
  6    ,constraint plsql_history_pk primary key ( tstamp,owner,name,type)
  7  )
  8  organization index;

Table created.

SQL>
SQL> create table plsql_history_source (
  2     TSTAMP     TIMESTAMP
  3    ,OWNER      VARCHAR2(128)
  4    ,NAME       VARCHAR2(128)
  5    ,TYPE       VARCHAR2(12)
  6    ,LINE       NUMBER
  7    ,TEXT       VARCHAR2(4000)
  8    ,constraint plsql_history_source_pk primary key ( tstamp,owner,name,type,line)
  9    ,constraint plsql_history_source_fk foreign key ( tstamp,owner,name,type) references plsql_history ( tstamp,owner,name,type )
 10  );

Table created.

SQL>
SQL> create table plsql_log
  2  (
  3     TSTAMP     TIMESTAMP
  4    ,MSG        varchar2(1000)
  5  );

Table created.

SQL>
SQL>
SQL> create or replace trigger plsql_trigger
  2  before create on SCHEMA
  3  declare
  4    l_owner   varchar2(128) := ora_dict_obj_owner;
  5    l_name    varchar2(128) := ora_dict_obj_name;
  6    l_type    varchar2(128) := ora_dict_obj_type;
  7    l_archived timestamp := systimestamp;
  8    l_tstamp_to_clear timestamp;
  9
 10    procedure logger(m varchar2) is
 11      pragma autonomous_transaction;
 12    begin
 13      insert into plsql_log values (systimestamp,m);
 14      commit;
 15    end;
 16  begin
 17    if l_type in ('FUNCTION','PROCEDURE','PACKAGE','PACKAGE BODY') then
 18       insert into plsql_history values (l_archived, l_owner, l_name, l_type);
 19
 20       insert into plsql_history_source
 21       select l_archived, owner, name, type, line, text
 22       from   dba_source
 23       where  owner = l_owner
 24       and    name = l_name
 25       and    type = l_type;
 26
 27       logger('Archived '||l_type||'-'||l_owner||'.'||l_name);
 28
 29       select max(case when tot > versions_kept and seq = tot - versions_kept then tstamp end)
 30       into   l_tstamp_to_clear
 31       from   plsql_params,
 32              ( select ph.*,
 33                       row_number() over ( order by tstamp ) as seq,
 34                       count(*) over () as tot
 35                from plsql_history ph
 36              )
 37       where  owner = l_owner
 38       and    name = l_name
 39       and    type = l_type;
 40
 41       if l_tstamp_to_clear is not null then
 42         logger('Clearance timestamp for '||l_type||'-'||l_owner||'.'||l_name||' is '||l_tstamp_to_clear);
 43
 44         delete from plsql_history_source where tstamp <= l_tstamp_to_clear;
 45         delete from plsql_history        where tstamp <= l_tstamp_to_clear;
 46         logger('Cleared '||sql%rowcount||' versions for '||l_type||'-'||l_owner||'.'||l_name);
 47       end if;
 48    end if;
 49  end;
 50  /

Trigger created.

SQL>
SQL> create or replace procedure P_TEST is
  2  begin
  3    null; -- version 1
  4  end;
  5  /

Procedure created.

SQL>
SQL> create or replace procedure P_TEST is
  2  begin
  3    null; -- version 2
  4  end;
  5  /

Procedure created.

SQL>
SQL>
SQL> create or replace procedure P_TEST is
  2  begin
  3    null; -- version 3
  4  end;
  5  /

Procedure created.

SQL>
SQL>
SQL> create or replace procedure P_TEST is
  2  begin
  3    null; -- version 4
  4  end;
  5  /

Procedure created.

SQL>
SQL>
SQL> create or replace procedure P_TEST is
  2  begin
  3    null; -- version 5
  4  end;
  5  /

Procedure created.

SQL> select * from plsql_history;

TSTAMP                             OWNER        NAME         TYPE
---------------------------------- ------------ ------------ ---------------
02-FEB-16 12.12.42.280000 PM       MCDONAC      P_TEST       PROCEDURE
02-FEB-16 12.12.42.306000 PM       MCDONAC      P_TEST       PROCEDURE
02-FEB-16 12.12.42.339000 PM       MCDONAC      P_TEST       PROCEDURE

SQL>
SQL> select tstamp, text from plsql_history_source;

TSTAMP                             TEXT
---------------------------------- ------------------------------------------------------------
02-FEB-16 12.12.42.280000 PM       procedure P_TEST is
02-FEB-16 12.12.42.280000 PM       begin
02-FEB-16 12.12.42.280000 PM         null; -- version 2
02-FEB-16 12.12.42.280000 PM       end;
02-FEB-16 12.12.42.306000 PM       procedure P_TEST is
02-FEB-16 12.12.42.306000 PM       begin
02-FEB-16 12.12.42.306000 PM         null; -- version 3
02-FEB-16 12.12.42.306000 PM       end;
02-FEB-16 12.12.42.339000 PM       procedure P_TEST is
02-FEB-16 12.12.42.339000 PM       begin
02-FEB-16 12.12.42.339000 PM         null; -- version 4
02-FEB-16 12.12.42.339000 PM       end;

12 rows selected.

SQL>
SQL> select * from plsql_log;

TSTAMP                             MSG
---------------------------------- --------------------------------------------------------------------------------
02-FEB-16 12.12.42.221000 PM       Archived PROCEDURE-MCDONAC.P_TEST
02-FEB-16 12.12.42.255000 PM       Archived PROCEDURE-MCDONAC.P_TEST
02-FEB-16 12.12.42.281000 PM       Archived PROCEDURE-MCDONAC.P_TEST
02-FEB-16 12.12.42.307000 PM       Archived PROCEDURE-MCDONAC.P_TEST
02-FEB-16 12.12.42.307000 PM       Clearance timestamp for PROCEDURE-MCDONAC.P_TEST is 02-FEB-16 12.12.42.181000 PM
02-FEB-16 12.12.42.313000 PM       Cleared 1 versions for PROCEDURE-MCDONAC.P_TEST
02-FEB-16 12.12.42.340000 PM       Archived PROCEDURE-MCDONAC.P_TEST
02-FEB-16 12.12.42.340000 PM       Clearance timestamp for PROCEDURE-MCDONAC.P_TEST is 02-FEB-16 12.12.42.254000 PM
02-FEB-16 12.12.42.340000 PM       Cleared 1 versions for PROCEDURE-MCDONAC.P_TEST

9 rows selected.
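
Retrieving a particular archived version is then just a query against these tables, ordered by line, for example using one of the timestamps captured above:

select text
from   plsql_history_source
where  owner  = 'MCDONAC'
and    name   = 'P_TEST'
and    type   = 'PROCEDURE'
and    tstamp = timestamp '2016-02-02 12:12:42.280000'
order  by line;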

Forever is a long time

It’s interesting that we still see lots of code out “in the wild” that lends itself to taking forever. Now I’m not talking here about poorly performing SQL etc, because (at least theoretically) no matter how bad the SQL is, presumably it will finish one day, even if that day is years into the future. What I’m referring to here is the default mechanism for locking in Oracle.

If you try to access a row that is locked by someone else, you will wait. Don’t get me wrong … that’s a very good thing. The alternative, letting two people change the same data at the same time, is equivalent to saying “I don’t care about my data at all”.

And how long will we wait ? Well… we’ll wait forever. We will never stop waiting, until that lock has been released.

I don’t know about you … but for me… forever is a long time. A really long time. I cannot recall ever seeing any kind of Service Level Agreement for an application’s response times that states “Yup…forever is cool. Take as long as you want” :-)

So perhaps consider that when you’re building your applications in Oracle. Yes, the default is to wait forever, but a little code (which has been available since version 9) can really go a long way.

Rather than just coding:

select * from T where … for update

or

delete from T where …

You can opt to wait for a nominated amount of time to attempt to get that lock (and then report something sensible back to the user).


SQL> select * from T for update wait 60;

(60 seconds pass)

select * from T for update wait 60
*
ERROR at line 1:
ORA-30006: resource busy; acquire with WAIT timeout expired

Note that the error code is not the same as the error you could get with a NOWAIT clause, which is:

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
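
For DML such as the DELETE above there is no WAIT clause on the statement itself, but you can get the same protection by taking the row locks first with SELECT … FOR UPDATE WAIT. A sketch of the idea (the predicate x < 100 is just a placeholder for your real criteria):

declare
  resource_busy exception;
  pragma exception_init(resource_busy, -30006);  -- "WAIT timeout expired"
  cursor c_lock is
    select rowid
    from   t
    where  x < 100           -- placeholder predicate
    for update wait 60;      -- wait at most 60 seconds for the row locks
begin
  open c_lock;               -- the row locks are taken here (or ORA-30006 is raised)
  close c_lock;              -- the locks persist until commit/rollback
  delete from t where x < 100;
  commit;
exception
  when resource_busy then
    raise_application_error(-20000,
      'Rows are locked by another session - please try again shortly');
end;
/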

But you don’t even need to stop there. Why not catch that error with a SERVERERROR trigger, and then do some session-level probing to get more information? You could even empower the users to take their own action :-)


SQL> create or replace trigger lock_expiry
  2  after servererror on database
  3  declare
  4    l_err varchar2(4000);
  5  begin
  6    if ( is_servererror(30006) ) then
  7      for i in (
  8      select /*+ leading(lk sess) */ sess.sid, sess.username, sess.module
  9      from   v$lock lk,
 10             v$session sess,
 11             v$session s
 12      where  lk.id1 = s.row_wait_obj#
 13      and    lk.type = 'TM'
 14      and    lk.sid = sess.sid
 15      and    s.sid = sys_context('USERENV','SID')
 16      )
 17      loop
 18        l_err := l_err || 'Session '||i.sid||' who is '||i.username||' using '||i.module || chr(10);
 19      end loop;
 20      raise_application_error(-20000,l_err);
 21    end if;
 22  end;
 23  /

Trigger created.

SQL> select * from t for update wait 60;
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: Session 717 who is MCDONAC using SQL*Plus
ORA-06512: at line 18
ORA-30006: resource busy; acquire with WAIT timeout expired

Just for completeness, note that if you have a locking issue, V$SESSION gives you direct access to find out who is blocking you.


SQL> select sid, last_call_et,
  2         nvl2(lockwait,'BLOCKED',status) status,
  3         blocking_session
  4  from   v$session s;

       SID LAST_CALL_ET STATUS     BLOCKING_SESSION 
---------- ------------ ---------- ---------------- 
        39          376 INACTIVE                    
        40          412 INACTIVE
        41          412 INACTIVE
        44          421 INACTIVE
        46          340 BLOCKED                  39 
        49            4 ACTIVE
        50          453 INACTIVE
        51          453 INACTIVE


Common GATHER_PLAN_STATISTICS confusion

Most people already know about the very cool GATHER_PLAN_STATISTICS hint. If not, you can see an example here.

But here’s a common cause of confusion when using it in SQL*Plus:


SQL> select /*+ gather_plan_statistics*/ count(p) from t where x > sysdate - 30;

  COUNT(P)
----------
         0

SQL> select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------
--------------------
SQL_ID  9babjv8yq8ru3, child number 0

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;

NOTE: cannot fetch plan for SQL_ID: 9babjv8yq8ru3, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)



So what has happened ? Why didn’t I get my nicely formatted output ?

Well, if you have “SERVEROUTPUT” set to “on” in SQL*Plus, then when your SQL statement has completed, SQL*Plus makes an additional call to the database to pick up any data in the DBMS_OUTPUT buffer.

Hence when you ask for statistics on the last call made to the database, it is not the one you were expecting.

Moral of the story: turn off SERVEROUTPUT before using GATHER_PLAN_STATISTICS in SQL*Plus.
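
In SQL*Plus that is simply:

set serveroutput off
select /*+ gather_plan_statistics */ count(p) from t where x > sysdate - 30;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));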

FIRST_ROWS vs FIRST_ROWS_n

You might be thinking that FIRST_ROWS is pretty much the same as FIRST_ROWS_1, since they are both about getting the first row back quickly. However, there are some important distinctions to make here. The traditional ‘FIRST_ROWS’ setting is not a genuine cost-based option; it is more a hybrid between rule-based and cost-based optimization. This can be observed via the following example:



SQL> create table T
  2  as select rownum x, rpad(rownum,100,'x') padding
  3  from dual
  4  connect by level <= 2000
  5  order by dbms_random.value;

Table created.

SQL>
SQL> create index IX on T ( x ) ;

Index created.

SQL> set autotrace traceonly explain
SQL>
SQL> select * from T where x < 100;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    99 | 10395 |    12   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    99 | 10395 |    12   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("X"<100)

SQL> alter session set optimizer_mode = first_rows;

Session altered.

SQL>
SQL> set autotrace traceonly explain
SQL> select * from T where x < 100;

Execution Plan
----------------------------------------------------------
Plan hash value: 3947747388

--------------------------------------------------------------------------------------------
| Id  | Operation                           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |      |    99 | 10395 |    99   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T    |    99 | 10395 |    99   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | IX   |    99 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("X"<100)

Notice that the optimizer has chosen a path with a HIGHER cost (99 versus 12 for the full scan), so in the case of FIRST_ROWS it is not solely the cost that acts as the determining factor. There are some heuristics in place that bias the optimizer toward index access.

You could quite naturally assume that FIRST_ROWS_1 is the same as FIRST_ROWS, but this is NOT the case. The optimizer uses the cost calculation and no other heuristics. You can also see that the ‘n’ in FIRST_ROWS_n is an important driver in determining cardinalities for the query result sets.



SQL> alter session set optimizer_mode = first_rows_1;

Session altered.

SQL> select * from T where x < 100;

Execution Plan
----------------------------------------------------------
Plan hash value: 3947747388

--------------------------------------------------------------------------------------------
| Id  | Operation                           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |      |     2 |   210 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T    |     2 |   210 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | IX   |       |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("X"<100)

SQL> alter session set optimizer_mode = first_rows_10;

Session altered.

SQL> select * from T where x < 100;

Execution Plan
----------------------------------------------------------
Plan hash value: 3947747388

--------------------------------------------------------------------------------------------
| Id  | Operation                           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |      |    11 |  1155 |    13   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T    |    11 |  1155 |    13   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | IX   |       |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("X"<100)

SQL>
SQL> alter session set optimizer_mode = first_rows_100;

Session altered.

SQL> select * from T where x < 100;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    99 | 10395 |    12   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    99 | 10395 |    12   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("X"<100)

SQL>
SQL>


So we can start to see the methodology at work here. The optimizer is “taking our word” that, independent of what the table statistics imply, we will only need the first ‘n’ rows from the relevant row source. It is a true cost adjustment, rather than a change to the optimization scheme (as the original FIRST_ROWS is).
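
The same behaviour is also available at the statement level via the FIRST_ROWS(n) hint, so you do not have to change the optimizer mode for the whole session, for example:

select /*+ first_rows(10) */ *
from   T
where  x < 100;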