Learning is not a spectator sport

July 14, 2014

Associative arrays and Nested tables

Filed under: Uncategorized — connormcdonald @ 7:52 pm

A common criticism of PL/SQL is that the "original" array datatype, now known as the associative array, is perfect for passing data back and forth to 3GL environments (for example .NET), but cannot be used natively within SQL, for example:

SQL> create or replace
  2  package BLAH is
  3    type num_list is table of number index by pls_integer;
  4    type str_list is table of varchar2(30) index by pls_integer;
  5
  6  procedure ODP_PROC(n out num_list, s out str_list);
  7
  8  procedure USE_IN_QUERY(n num_list);
  9
 10  end;
 11  /

Package created.

SQL> create or replace
  2  package body BLAH is
  3
  4  procedure ODP_PROC(n out num_list, s out str_list) is
  5  begin
  6    select rownum, 'mystring'||rownum
  7    bulk collect into n,s
  8    from dual
  9    connect by level < 100;
 10  end;
 11
 12  procedure USE_IN_QUERY(n num_list) is
 13    x int;
 14  begin
 15    select count(*)
 16    into   x
 17    from   user_objects
 18    where object_id in (
 19       select column_value
 20       from   table(n)
 21       );
 22  end;
 23
 24  end;
 25  /

Warning: Package Body created with compilation errors.

SQL> sho err
Errors for PACKAGE BODY BLAH:
LINE/COL ERROR
-------- -----------------------------------------------------------------
14/3     PL/SQL: SQL Statement ignored
19/13    PL/SQL: ORA-22905: cannot access rows from a non-nested table
         item
19/19    PLS-00382: expression is of wrong type

So we then look at using nested tables, but of course, a lot of 3GLs will not understand such a type – sometimes needing some sort of type translation.  Hence people complain about the incompatibility and so on.  But it’s not all that hard – why not use both and get the best of both worlds? Use associative arrays to pass data externally to 3GLs, and convert to a nested table when you want to use it within SQL, e.g.

SQL> create or replace
  2  type  nt_num_list is table of number;
  3  /

Type created.

SQL> create or replace
  2  package BLAH is
  3    type num_list is table of number index by pls_integer;
  4    type str_list is table of varchar2(30) index by pls_integer;
  5
  6  procedure ODP_PROC(n out num_list, s out str_list);
  7
  8  procedure USE_IN_QUERY(n num_list);
  9
 10  end;
 11  /

Package created.

SQL> create or replace
  2  package body BLAH is
  3
  4  procedure ODP_PROC(n out num_list, s out str_list) is
  5  begin
  6    select rownum, 'mystring'||rownum
  7    bulk collect into n,s
  8    from dual
  9    connect by level < 100;
 10  end;
 11
 12  procedure USE_IN_QUERY(n num_list) is
 13    x int;
 14
 15    local_copy nt_num_list := nt_num_list();
 16  begin
 17    local_copy.extend(n.count);
 18    for i in 1 .. n.count loop
 19       local_copy(i) := n(i);
 20    end loop;
 21
 22    select count(*)
 23    into   x
 24    from   user_objects
 25    where object_id in (
 26       select column_value
 27       from   table(local_copy)
 28       );
 29  end;
 30
 31  end;
 32  /

Package body created.
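
For completeness, here’s roughly what a caller of this looks like (the anonymous block below is mine, not part of the original example): the associative arrays come back from ODP_PROC ready to hand off to a 3GL, and the very same array can then be passed to USE_IN_QUERY, which copies it into the nested table type before using it in SQL.

declare
  n blah.num_list;
  s blah.str_list;
begin
  blah.odp_proc(n, s);
  blah.use_in_query(n);
end;
/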

That wasn’t that hard, was it? :-)

July 6, 2014

No Hakan factor for IOT

Filed under: Uncategorized — connormcdonald @ 5:24 pm

Sadly there seems to be no concept of the Hakan factor for an IOT.

I have an application which merges into an IOT, with the merge incrementally populating a swag of initially null columns and hence growing the rows in size.  Some simple benchmarking shows the overhead of this versus merging into a table with pre-populated values:

SQL> create table T1
  2   ( x int primary key,
  3     y1 number(10),
  4     y2 number(10),
  5     y3 number(10),
  6     y4 number(10),
  7     y5 number(10),
  8     y6 number(10),
  9     y7 number(10),
 10     y8 number(10),
 11     y9 number(10),
 12     y0 number(10)
 13  )
 14  organization index
 15  /

Table created.

SQL> create table T2
  2   ( x int primary key,
  3     y1 number(10),
  4     y2 number(10),
  5     y3 number(10),
  6     y4 number(10),
  7     y5 number(10),
  8     y6 number(10),
  9     y7 number(10),
 10     y8 number(10),
 11     y9 number(10),
 12     y0 number(10)
 13  )
 14  organization index
 15  /

Table created.

SQL> insert into t1 (x) select rownum from dual connect by level <= 100000;

100000 rows created.

SQL> insert into t2 select rownum,123,123,123,123,123,123,123,123,123,123 from dual connect by level <= 100000;

100000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(user,'T1')

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user,'T2')

PL/SQL procedure successfully completed.

SQL> alter session set events = '10046 trace name context forever, level 8';

Session altered.

SQL> merge into t1
  2  using ( select rownum x,
  3                        12 y1,
  4                        23 y2,
  5                        34 y3,
  6                        45 y4,
  7                        56 y5,
  8                        67 y6,
  9                        78 y7,
 10                        89 y8,
 11                        90 y9,
 12                        100 y0
 13          from dual
 14          connect by level <= 20000
 15        ) m
 16  on ( t1.x = m.x )
 17  when matched then
 18  update
 19  set
 20  t1.y1 = m.y1,
 21  t1.y2 = m.y2,
 22  t1.y3 = m.y3,
 23  t1.y4 = m.y4,
 24  t1.y5 = m.y5,
 25  t1.y6 = m.y6,
 26  t1.y7 = m.y7,
 27  t1.y8 = m.y8,
 28  t1.y9 = m.y9,
 29  t1.y0 = m.y0;

20000 rows merged.

SQL> merge into t2
  2  using ( select rownum x,
  3                        12 y1,
  4                        23 y2,
  5                        34 y3,
  6                        45 y4,
  7                        56 y5,
  8                        67 y6,
  9                        78 y7,
 10                        89 y8,
 11                        90 y9,
 12                        100 y0
 13          from dual
 14          connect by level <= 20000
 15        ) m
 16  on ( t2.x = m.x )
 17  when matched then
 18  update
 19  set
 20  t2.y1 = m.y1,
 21  t2.y2 = m.y2,
 22  t2.y3 = m.y3,
 23  t2.y4 = m.y4,
 24  t2.y5 = m.y5,
 25  t2.y6 = m.y6,
 26  t2.y7 = m.y7,
 27  t2.y8 = m.y8,
 28  t2.y9 = m.y9,
 29  t2.y0 = m.y0;

20000 rows merged.

SQL> disc

So T1 was the table with mainly null columns. The trace file for its merge shows this result:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.57       0.57          0        770      56302       20000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.57       0.58          0        770      56302       20000

whereas when you compare it to T2 with the pre-populated "junk":

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.49       0.49          0       1046      20884       20000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.49       0.49          0       1046      20884       20000

Now of course you can get those better results with T1 by rebuilding the IOT with plenty of free space to hold the row expansion. Repeating the merge after an "alter table move pctfree 90" gives a trace result of:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.51       0.54        367       6396      20418       20000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.53       0.56        367       6396      20418       20000

but of course, I’ve also just applied that PCTFREE of 90 to all of the already merged (or "filled") rows, which no longer need it.

The Hakan factor would be a nice fix… but alas:

SQL> alter table T1 minimize records_per_block;
alter table T1 minimize records_per_block
*
ERROR at line 1:
ORA-28601: invalid [no]MINIMIZE option 

(Tested on v10, 11 and 12)
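
For contrast, the same command is accepted on a heap-organized table (where the Hakan factor normally lives), so the restriction does seem to be IOT-specific. A quick sketch, with a made-up table name, and noting that MINIMIZE needs some rows in the table to work with:

create table t_heap ( x int primary key, y number(10) );

insert into t_heap
select rownum, null
from dual
connect by level <= 1000;

commit;

-- on a heap table this sets the Hakan factor without complaint
alter table t_heap minimize records_per_block;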

June 29, 2014

12c nasty with remote query optimization

Filed under: Uncategorized — connormcdonald @ 6:01 pm

We have a fairly common query process, where we run a MERGE command to compare a remote table to a local copy of it, as a "poor man’s" Golden Gate to bring that table up to date on a regular basis.  [Editor’s note: writing MERGEs is more complicated, but a lot cheaper than Golden Gate :-)]

After an upgrade to 12c, the performance of some of the MERGEs went very bad… and you can see what happened in the (sanitised) example below:

The key part is the hinted query in the USING clause: a join that we’ll be executing on the remote database (mydb), hinted to run in a particular way.

SQL>   explain plan
  2    into sys.plan_table$
  3    for
  4    merge 
  5    into local_copy.loc_t1 w
  6    using ( select /*+ leading(t1) index(t2 t2_pk) use_nl_with_index(t2)*/
  7               t1.col1
  8              ,t1.col2
  9              ,t1.col3
        ...
 27            from scott.t1@mydb t1,
 28                 scott.t2@mydb t2
 29            where t1.seq = t2.seq
 30            and ...
 31            ) p
 32     on (  p.seq = w.seq
 33    )
 34    when matched then
        ..
 75    when not matched then
        ..

Explained.

Now we can’t see directly from the explain plan how the query will be run on the remote database – we just get a "REMOTE" line in the plan. However, the additional data in the plan reveals a problem:

------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Inst   |IN-OUT|
------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT      |             |  1068K|   696M|       |  2492K  (1)| 00:01:38 |        |      |
|   1 |  MERGE               | LOC_T1      |       |       |       |            |          |        |      |
|   2 |   VIEW               |             |       |       |       |            |          |        |      |
|*  3 |    HASH JOIN OUTER   |             |  1068K|   298M|   210M|  2492K  (1)| 00:01:38 |        |      |
|   4 |     REMOTE           |             |  1068K|    90M|       | 50193   (2)| 00:00:02 | PORAI~ | R->S |
|   5 |     TABLE ACCESS FULL| T1          |    38M|  3625M|       | 91205   (2)| 00:00:04 |        |      |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("P"."SEQ"="W"."SEQ"(+))

Look very closely at the query Oracle will be throwing at the remote database:

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT /*+ NO_MERGE LEADING ("T1") USE_NL_WITH_INDEX ("A2") INDEX ("A2" "T2_PK") */
       ...
       FROM "SCOTT"."T1" "A1","SCOTT"."T2" "A2" 
       WHERE "A1"."SEQ"="A2"."SEQ"

The two tables have been re-aliased as A1 and A2, but notice that one of the hints did NOT get corrected.  The lack of a (valid) LEADING hint led to a bad plan on the remote database, and performance problems as a result.  In our case, the solution was to explicitly add a NO_MERGE hint into the original statement:

SQL>   explain plan
  2    into sys.plan_table$
  3    for
  4    merge 
  5    into local_copy.loc_t1 w
  6    using ( select /*+ no_merge leading(t1) index(t2 t2_pk) use_nl_with_index(t2)*/
  7               t1.col1
  8              ,t1.col2
  9              ,t1.col3
        ...
 27            from scott.t1@mydb t1,
 28                 scott.t2@mydb t2
 29            where t1.seq = t2.seq
 30            and ...
 31            ) p
 32     on (  p.seq = w.seq
 33    )
 34    when matched then
        ..
 75    when not matched then
        ..

Explained.

which yielded correctly hinted SQL on the remote database:

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT /*+ NO_MERGE LEADING ("A1") USE_NL_WITH_INDEX ("A2") INDEX ("A2" "T2_PK") */
       ...
       FROM "SCOTT"."T1" "A1","SCOTT"."T2" "A2" 
       WHERE "A1"."SEQ"="A2"."SEQ"

This has only come to light on 12c – the previous version we were on (11.2.0.3) was unaffected.

Testing…the surgeon’s approach

Filed under: Uncategorized — connormcdonald @ 5:24 pm

I played a lot of volleyball in a bygone life :-) and subsequently ruined my knees to the extent that I needed surgery. I got a shock when the surgeon (after a series of x-rays and checks) said to me: "Of course, we’ll only know once we’re in there".

So here’s a body part (a knee) that’s had hundreds of thousands of years to evolve, so you’d expect knees to be pretty much the same worldwide, yet an experienced and qualified surgeon still puts the "we can’t be 100% sure" caveat before chopping me open.

I wish we could apply the same process to the testing of IT software.  I remember reading the mantra "it’s harder to fix bugs once they’re in production", and that’s certainly true.  However, somewhere along the way, that became the justification for test cycles being incredibly long and mind-bogglingly detailed.  If my Finance software can’t balance the books, then yes, that’s a big drama.  But if the "Save" button is a shade of blue that didn’t match the design screenshots – is it really worth holding back the production implementation? There are two problems with a mantra of "we will find every possible defect":

1) You can find defects in software almost ad infinitum.  They just get less and less severe, and your testing cycle bottlenecks your entire IT department.

2) You create a false confidence in the testing.  "Hell, if we spent 12 months in testing, then we’ll never find any bugs once we go to Production." 

So I say – why not take the surgeon’s approach?  Sure, we’ll test the product to a reasonable level of satisfaction, but we’ll readily accept the fact that the ultimate testing ground is only Production.  "We’ll only know when it’s in there".

Upgrade to 12c … credentials

Filed under: Uncategorized — connormcdonald @ 3:56 pm

We did a "real" upgrade to 12c this weekend, where "real" means a production system, as opposed to my laptop, a play VM, etc. :-)

It all went relatively smoothly, except for one interesting thing that I can’t say with 100% certainty was caused by the upgrade, but it would appear to be the case.

After the upgrade, our scheduler external jobs started failing.  A quick look in the alert log revealed:

Sun Jun 29 09:26:11 2014
ORA-12012: error on auto execute of job "FIRE_SCHED_DAILY"
ORA-27496: credential "LOCAL_OS_ACCT" is disabled

So it’s possible (not proven) that upgrading to 12c might disable credentials. In this particular case, the database went from standalone to a pluggable database.

The remedy was to simply drop and recreate the credential.
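
In 12c that drop and recreate amounts to a couple of DBMS_CREDENTIAL calls, something along these lines (the credential name is taken from the error above; the username and password are obviously placeholders for the real OS account):

begin
  dbms_credential.drop_credential('LOCAL_OS_ACCT', force => true);
  dbms_credential.create_credential(
        credential_name => 'LOCAL_OS_ACCT',
        username        => 'some_os_user',      -- placeholder
        password        => 'some_os_password'   -- placeholder
  );
end;
/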

April 26, 2014

Dumb triggers part 2 – session level control

Filed under: Uncategorized — connormcdonald @ 12:10 pm

In the previous post, I pontificated about triggers that "lock you in" to having them fire, which can create dramas when it comes to doing data patching.

Maybe you can design your application around this, but if you can’t, the last thing you want to be doing is taking an outage so that you can disable the trigger whilst you do your data maintenance.  Ideally you want the trigger to fire as per normal, except in your session.

And that’s actually pretty easy to do.  A simple example is below:

SQL> create or replace
  2  package TRIGGER_CTL is
  3
  4  --
  5  -- Session level control of triggers for data patching etc
  6  --
  7
  8    -- add a trigger to NOT fire for this session
  9    procedure disable(p_trigger_name varchar2);
 10
 11    -- reinstate normal trigger operation for this session
 12    procedure enable(p_trigger_name varchar2);
 13
 14    -- reinstate all triggers for this session
 15    procedure enable_all;
 16
 17    -- return if trigger is active in this session (which of course is the default)
 18    function enabled_in_session(p_trigger_name varchar2) return boolean;
 19
 20  end;
 21  /

Package created.

SQL> create or replace
  2  package body TRIGGER_CTL is
  3
  4    type t_disabled_triggers is table of number
  5      index by varchar2(30);
  6
  7    g_disabled_triggers t_disabled_triggers;
  8
  9  procedure disable(p_trigger_name varchar2) is
 10  begin
 11    g_disabled_triggers(upper(p_trigger_name)) := 1;
 12  end;
 13
 14  procedure enable(p_trigger_name varchar2) is
 15  begin
 16    if g_disabled_triggers.exists(upper(p_trigger_name)) then
 17      g_disabled_triggers.delete(upper(p_trigger_name));
 18    end if;
 19  end;
 20
 21  procedure enable_all is
 22  begin
 23    g_disabled_triggers.delete;
 24  end;
 25
 26  function enabled_in_session(p_trigger_name varchar2) return boolean is
 27  begin
 28    return not g_disabled_triggers.exists(upper(p_trigger_name));
 29  end;
 30
 31  end;
 32  /

Package body created.

Once we’ve got that little utility coded up, it’s easy to get session-level control over triggers, simply by adding a check:

CREATE OR REPLACE TRIGGER MY_TRG
  BEFORE INSERT OR UPDATE ON MY_TABLE
  FOR EACH ROW
BEGIN
  if trigger_ctl.enabled_in_session('MY_TRG') then
     <my original trigger body code>
  end if;
END;
/

And to control the trigger at session level, it’s then just a call to your API:

SQL> exec trigger_ctl.disable('MY_TRG')
SQL> -- then your data maintenance
SQL> exec trigger_ctl.enable('MY_TRG')

April 24, 2014

Really dumb triggers

Filed under: Uncategorized — connormcdonald @ 1:36 pm

Some people hate triggers, some people love triggers…

I am not that opinionated on them in either direction, but one thing I do hate, whether it be a trigger or not, is dumb code.  And today’s post just happens to be about dumb code in a trigger.

Consider this simple trigger (you see these everywhere pre 12c):

CREATE OR REPLACE TRIGGER MY_TRG
  BEFORE INSERT ON MY_TABLE 
  FOR EACH ROW
BEGIN
   SELECT MY_SEQ.NEXTVAL INTO :NEW.MY_PK_COL FROM DUAL;
END;
/

Seems harmless enough…and I’m not talking about whether we should be using ":new.col := seq.nextval", because under the covers it will do a select-from-dual anyway.

The reason the code is really dumb… is that you are totally screwed if you need to do some data maintenance…

Consider the following scenario:

Customer: "Last week, I accidentally deleted MY_PK_COL = 10, here’s what the row should look like, can you please put it back?"

Your response is simple: "No, I can’t".

Why? Because you can never, ever re-insert MY_PK_COL = 10 if the sequence has advanced past 10.  All of your options are nasty…

a) Reset the sequence value? What about other transactions taking place?

b) Disable the trigger? Ditto.

Now people will get up on their soapbox and say "It’s a surrogate key, the value should be meaningless, it shouldn’t matter what it is" and so on… and I admire your tenacious grip on the religious argument.  But that’s like saying "All databases should have referential integrity constraints!"… Well, duh, but that’s not how the real world is :-(

It’s just a dumb way of coding.  If you really need these kinds of triggers (hint: you don’t), then at least code them defensively:

CREATE OR REPLACE TRIGGER MY_TRG
  BEFORE INSERT ON MY_TABLE 
  FOR EACH ROW
  when ( new.MY_PK_COL is null )
BEGIN
   SELECT MY_SEQ.NEXTVAL INTO :NEW.MY_PK_COL FROM DUAL;
END;
/

so at least you’re not clobbering someone’s data. 
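
And given the "pre 12c" comment above, it’s worth noting that from 12c onwards you can usually skip this style of trigger entirely and let an identity column do the work. A sketch (the table and column names are mine): because the column is GENERATED BY DEFAULT, an explicit value is still accepted, so re-inserting that deleted MY_PK_COL = 10 row is no drama (though the backing sequence won’t know about values you supply yourself).

create table my_table_12c (
  my_pk_col  number generated by default as identity,
  payload    varchar2(30)
);

-- normal case: omit the key and the identity column supplies it
insert into my_table_12c (payload) values ('auto key');

-- data maintenance case: an explicit key is still allowed
insert into my_table_12c (my_pk_col, payload) values (10, 'restored row');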

April 12, 2014

An accusatory error message

Filed under: Uncategorized — connormcdonald @ 10:40 pm

I found this hilarious

SQL> startup
ORACLE instance started.

Total System Global Area 1469792256 bytes
Fixed Size                  2402776 bytes
Variable Size             536872488 bytes
Database Buffers          922746880 bytes
Redo Buffers                7770112 bytes
Database mounted.
ORA-19821: an intentionally corrupt log file was found

Really? I intentionally corrupted my log file? I don’t think so!

:-)

SQL injection

Filed under: Uncategorized — connormcdonald @ 10:27 pm

Another big public username and password leak…

http://o.canada.com/technology/bell-canada-security-breach-391451/

Some good reading on how it was done, and thus on making sure your code isn’t prone to SQL injection, is here:

http://www.troyhunt.com/2014/02/heres-how-bell-was-hacked-sql-injection.html

Pluggable database and restricted sessions

Filed under: Uncategorized — connormcdonald @ 10:25 pm

Once you get into pluggable database territory, you might need to check your usage of "alter system enable restricted session", because unless you’ve patched, there’s a little bug which lets you enable restricted session but won’t let you get out of it! :-)

For example:

SQL> alter session set container = pdb12;
 
Session altered.
 
SQL> alter system enable restricted session;
 
System altered.
 
SQL> select logins from v$instance;
 
LOGINS
----------
RESTRICTED
 
SQL> alter system disable restricted session;
alter system disable restricted session
*
ERROR at line 1:
ORA-65144: ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted
 

You can get out of the predicament by force-opening the pluggable database as shown below, but it’s probably best to look at the latest 12c PSU, which apparently contains a fix (unverified):

 
SQL> conn / as sysdba
Connected.
 
SQL> alter pluggable database pdb12 open force;
 
Pluggable database altered.
 
SQL> alter session set container = pdb12;
 
Session altered.
 
SQL> select logins from v$instance;
 
LOGINS
----------
ALLOWED
 