Learning is not a spectator sport

September 6, 2014

Openworld bound…

Filed under: Uncategorized — connormcdonald @ 8:18 pm

I’ll be at Openworld again this year, and have a couple of speaking slots…

I’ll be talking about new features for developers in 12c on Sunday, Sep 28 at 2:30, Moscone South 303:

https://oracleus.activeevents.com/2014/connect/sessionDetail.ww?SESSION_ID=2244

and of course, I’ll also be at the awesome Oaktable World…

http://www.oraclerealworld.com/oaktable-world/agenda/

Drop in, learn some cool things, or just pop up and say Hello!

August 17, 2014

MERGE and IOTs … unhappy bedfellows

Filed under: Uncategorized — connormcdonald @ 5:32 pm

Anyone who has used Oracle for a while will be familiar with the parent/child locking "issue" when it comes to tables and indexes on foreign keys.  For many years you’d hear people crying "bug", but thankfully most now know the reason, and accept it as sensible behaviour.
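For anyone who hasn’t bumped into it, the classic scenario goes something like this (just a sketch – PARENT and CHILD are hypothetical tables here, and CHILD.PARENT_ID has no index):

-- session 1: any uncommitted DML on the child table
insert into CHILD (child_id, parent_id) values (1, 100);

-- session 2: deleting a parent row now needs a share lock on the
-- *whole* CHILD table to police the foreign key, so it blocks
-- until session 1 commits or rolls back
delete from PARENT where parent_id = 100;

-- the well-known fix: index the foreign key column
create index CHILD_IX on CHILD (parent_id);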

But let’s take a look at a slight variation on that theme.

Let’s start with a table called "LOC", which will be our parent table in this example. Note that it is an IOT. We’ll also have a child table, "LOC_CHILD", which is also an IOT.

SQL> CREATE TABLE LOC
  2  (
  3    LOC_ID              NUMBER(4)            NOT NULL,
  4    DATA                     NUMBER(6),
  5    CONSTRAINT LOC_PK
  6    PRIMARY KEY
  7    ( LOC_ID)
  8    ENABLE VALIDATE
  9  )
 10  ORGANIZATION INDEX
 11  /

Table created.

SQL> CREATE TABLE LOC_CHILD
  2  (
  3    CHILD_SEQ                  NUMBER(12)          NOT NULL,
  4    LOC_ID               NUMBER(4)           NOT NULL,
  5    CHILD_DATA  NUMBER(15,6),
  6    CONSTRAINT LOC_CHILD_PK
  7    PRIMARY KEY
  8    (CHILD_SEQ, LOC_ID)
  9    ENABLE VALIDATE
 10  )
 11  ORGANIZATION INDEX
 12  /

Table created.

SQL> insert into LOC
  2  select rownum,50
  3  from dual
  4  connect by level <= 5
  5  /

5 rows created.

Now, being a good DBA :-), we’ve read all the "gloom and doom" nonsense about unindexed foreign keys, so just to be careful, we’ll add that index onto our child table before adding our foreign key back to LOC.

SQL> CREATE INDEX LOC_CHILD_IX ON LOC_CHILD
  2  (LOC_ID)
  3  /

Index created.

SQL> ALTER TABLE LOC_CHILD ADD (
  2    CONSTRAINT LOC_CHILD_FK
  3    FOREIGN KEY ( LOC_ID)
  4    REFERENCES LOC (LOC_ID)
  5    ENABLE VALIDATE)
  6  /

Table altered.

SQL> insert into LOC_CHILD
  2  select rownum,mod(rownum,5)+1,dbms_random.value(1000,5000)
  3  from dual
  4  connect by level <= 2000
  5  /

2000 rows created.

SQL> commit;

Commit complete.

So the scene is set… we’ve got our tables seeded with some data, and we’re ready to go.

Let’s update a row in the parent table LOC:

SQL> UPDATE loc
  2  SET    DATA = 99
  3  WHERE  LOC_ID = 2;

1 row updated.

Now we’ll pop into a new session and run a MERGE against the child table LOC_CHILD:

SQL> MERGE 
  2       INTO  LOC_CHILD
  3       USING (SELECT 500 CHILD_SEQ,
  4                     2 LOC_ID,
  5                     1000 CHILD_DATA
  6                FROM DUAL) M
  7          ON (    LOC_CHILD.CHILD_SEQ = M.CHILD_SEQ
  8              AND LOC_CHILD.LOC_ID = M.LOC_ID)
  9  WHEN MATCHED
 10  THEN
 11     UPDATE SET
 12        LOC_CHILD.CHILD_DATA =  NVL (LOC_CHILD.CHILD_DATA, 0) + M.CHILD_DATA
 13  WHEN NOT MATCHED
 14  THEN
 15     INSERT     (CHILD_SEQ,
 16                 LOC_ID,
 17                 CHILD_DATA)
 18         VALUES (M.CHILD_SEQ,
 19                 M.LOC_ID,
 20                 M.CHILD_DATA);

[stuck]

And splat… we’re stuck.  Without any evidence to support it, I’d hypothesize it’s due to the flexibility of the MERGE command.  A single MERGE can insert, update and even delete rows, so I’m guessing that to handle this flexibility (in particular, the DELETE option) the locking errs on the side of safety.
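One way to test that hypothesis (a sketch, not from the original session) is to retry the same logical change from the second session as a plain UPDATE – something that can never turn into an insert or a delete – and see whether it sails through where the MERGE got stuck:

-- session 2: the equivalent plain UPDATE of the child row
UPDATE loc_child
SET    child_data = nvl(child_data,0) + 1000
WHERE  child_seq = 500
AND    loc_id = 2;

-- and whilst the MERGE is stuck, the lock it is waiting on
-- should be visible via:
select sid, type, id1, lmode, request
from   v$lock
where  request > 0;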

August 9, 2014

12c – Nested tables vs Associative arrays

Filed under: Uncategorized — connormcdonald @ 9:31 am

This was going to be the immediate follow-up to my previous post, but 12.1.0.2 came out, I got all excited about that, and forgot to post this one :-)

Anyway, the previous post showed how easy it is to convert between nested tables and associative arrays.  The nice thing in 12c is that this is no longer needed – you can query an associative array directly:

SQL> create or replace package PKG as
  2
  3    type num_list is table of number index by pls_integer;
  4
  5  end;
  6  /

Package created.

SQL>
SQL> declare
  2    v pkg.num_list;
  3  begin
  4    v(1) := 10;
  5    v(2) := 20;
  6
  7    for i in ( select * from table(v) ) loop
  8      dbms_output.put_line(i.column_value);
  9    end loop;
 10  end;
 11  /
10
20

PL/SQL procedure successfully completed.

12.1.0.2 security grrr…

Filed under: Uncategorized — connormcdonald @ 9:25 am

One of my favourite security "tricks" used to be the following:

SQL> [create|alter] user MY_USER identified by values 'impossible';

It looks odd, but setting the encrypted value of an account’s password to something that no real password could ever encrypt to means you’ll never be able to connect as that account.  (Think schemas that own objects, etc.)

I hear you ask: "Why not just lock the account?"

Well… in my opinion, that’s a security hole.  Let’s say Oracle publishes a security bug concerning (say) the MDSYS schema.  As a hacker, I’d like to know if a database has the MDSYS schema.  All I need to do is:

SQL> connect MDSYS/nonsense

Why is that a security hole?  Because I won’t get "invalid username or password".  I’ll get "ORA-28000: the account is locked" and voila… now I know that the MDSYS user is present in that database.

Setting a user’s password to an impossible encrypted value means a hacker can never tell whether the account exists at all, because the error returned is the familiar "ORA-01017: invalid username/password; logon denied".
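Putting that together, the recipe on a pre-12.1.0.2 database was simply (a sketch, re-using MDSYS as the example from above):

-- make the account impossible to log on to...
alter user MDSYS identified by values 'impossible';

-- ...so a probing hacker now gets only the generic failure:
connect MDSYS/nonsense
-- ORA-01017: invalid username/password; logon denied
-- rather than the tell-tale "ORA-28000: the account is locked"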

But look what’s happened in 12.1.0.2 :-(

SQL> create user MY_USER identified by values 'impossible';
create user MY_USER identified by values 'impossible'
*
ERROR at line 1:
ORA-02153: invalid VALUES password string
 

A backward step in my opinion…

July 14, 2014

Associative arrays and Nested tables

Filed under: Uncategorized — connormcdonald @ 7:52 pm

A common criticism of PL/SQL is that the "original" array datatype, now called an associative array, is perfect for passing data back and forth to 3GL environments (for example .Net), but cannot be used natively within SQL.  For example:

SQL> create or replace
  2  package BLAH is
  3    type num_list is table of number index by pls_integer;
  4    type str_list is table of varchar2(30) index by pls_integer;
  5
  6  procedure ODP_PROC(n out num_list, s out str_list);
  7
  8  procedure USE_IN_QUERY(n num_list);
  9
 10  end;
 11  /

Package created.

SQL> create or replace
  2  package body BLAH is
  3
  4  procedure ODP_PROC(n out num_list, s out str_list) is
  5  begin
  6    select rownum, 'mystring'||rownum
  7    bulk collect into n,s
  8    from dual
  9    connect by level < 100;
 10  end;
 11
 12  procedure USE_IN_QUERY(n num_list) is
 13    x int;
 14  begin
 15    select count(*)
 16    into   x
 17    from   user_objects
 18    where object_id in (
 19       select column_value
 20       from   table(n)
 21       );
 22  end;
 23
 24  end;
 25  /

Warning: Package Body created with compilation errors.

SQL> sho err
Errors for PACKAGE BODY BLAH:
LINE/COL ERROR
-------- -----------------------------------------------------------------
14/3     PL/SQL: SQL Statement ignored
19/13    PL/SQL: ORA-22905: cannot access rows from a non-nested table
         item
19/19    PLS-00382: expression is of wrong type

So we then look at using nested tables, but of course a lot of 3GLs will not understand such a type, sometimes needing some sort of type translation.  Hence people complain about the incompatibility, etc.  But it’s not all that hard – why not use both and get the best of both worlds?  Use associative arrays to pass data externally to 3GLs, and convert to a nested table when you want to use the data within SQL, e.g.

SQL> create or replace
  2  type  nt_num_list is table of number;
  3  /

Type created.

SQL> create or replace
  2  package BLAH is
  3    type num_list is table of number index by pls_integer;
  4    type str_list is table of varchar2(30) index by pls_integer;
  5
  6  procedure ODP_PROC(n out num_list, s out str_list);
  7
  8  procedure USE_IN_QUERY(n num_list);
  9
 10  end;
 11  /

Package created.

SQL> create or replace
  2  package body BLAH is
  3
  4  procedure ODP_PROC(n out num_list, s out str_list) is
  5  begin
  6    select rownum, 'mystring'||rownum
  7    bulk collect into n,s
  8    from dual
  9    connect by level < 100;
 10  end;
 11
 12  procedure USE_IN_QUERY(n num_list) is
 13    x int;
 14
 15    local_copy nt_num_list := nt_num_list();
 16  begin
 17    local_copy.extend(n.count);
 18    for i in 1 .. n.count loop
 19       local_copy(i) := n(i);
 20    end loop;
 21
 22    select count(*)
 23    into   x
 24    from   user_objects
 25    where object_id in (
 26       select column_value
 27       from   table(local_copy)
 28       );
 29  end;
 30
 31  end;
 32  /

Package body created.

That wasn’t that hard, was it? :-)
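And just to complete the picture, calling the package is exactly as you’d hope – a quick sketch using the package above:

declare
  n blah.num_list;
  s blah.str_list;
begin
  -- populate the associative arrays, just as a 3GL caller would receive them
  blah.odp_proc(n, s);

  -- and pass the same array straight back to be used within SQL
  blah.use_in_query(n);
end;
/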

July 6, 2014

No Hakan factor for IOT

Filed under: Uncategorized — connormcdonald @ 5:24 pm

Sadly there seems to be no concept of the Hakan factor for an IOT.

I have an application which merges into an IOT, with the merge incrementally populating a swag of initially-null columns, hence growing the rows in size.  Some simple benchmarking shows the overhead of this versus merging into a table with pre-populated values:

SQL> create table T1
  2   ( x int primary key,
  3     y1 number(10),
  4     y2 number(10),
  5     y3 number(10),
  6     y4 number(10),
  7     y5 number(10),
  8     y6 number(10),
  9     y7 number(10),
 10     y8 number(10),
 11     y9 number(10),
 12     y0 number(10)
 13  )
 14  organization index
 15  /

Table created.

SQL> create table T2
  2   ( x int primary key,
  3     y1 number(10),
  4     y2 number(10),
  5     y3 number(10),
  6     y4 number(10),
  7     y5 number(10),
  8     y6 number(10),
  9     y7 number(10),
 10     y8 number(10),
 11     y9 number(10),
 12     y0 number(10)
 13  )
 14  organization index
 15  /

Table created.

SQL> insert into t1 (x) select rownum from dual connect by level <= 100000;

100000 rows created.

SQL> insert into t2 select rownum,123,123,123,123,123,123,123,123,123,123 from dual connect by level <= 100000;

100000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(user,'T1')

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user,'T2')

PL/SQL procedure successfully completed.

SQL> alter session set events = '10046 trace name context forever, level 8';

Session altered.

SQL> merge into t1
  2  using ( select rownum x,
  3                        12 y1,
  4                        23 y2,
  5                        34 y3,
  6                        45 y4,
  7                        56 y5,
  8                        67 y6,
  9                        78 y7,
 10                        89 y8,
 11                        90 y9,
 12                        100 y0
 13          from dual
 14          connect by level <= 20000
 15        ) m
 16  on ( t1.x = m.x )
 17  when matched then
 18  update
 19  set
 20  t1.y1 = m.y1,
 21  t1.y2 = m.y2,
 22  t1.y3 = m.y3,
 23  t1.y4 = m.y4,
 24  t1.y5 = m.y5,
 25  t1.y6 = m.y6,
 26  t1.y7 = m.y7,
 27  t1.y8 = m.y8,
 28  t1.y9 = m.y9,
 29  t1.y0 = m.y0;

20000 rows merged.

SQL> merge into t2
  2  using ( select rownum x,
  3                        12 y1,
  4                        23 y2,
  5                        34 y3,
  6                        45 y4,
  7                        56 y5,
  8                        67 y6,
  9                        78 y7,
 10                        89 y8,
 11                        90 y9,
 12                        100 y0
 13          from dual
 14          connect by level <= 20000
 15        ) m
 16  on ( t2.x = m.x )
 17  when matched then
 18  update
 19  set
 20  t2.y1 = m.y1,
 21  t2.y2 = m.y2,
 22  t2.y3 = m.y3,
 23  t2.y4 = m.y4,
 24  t2.y5 = m.y5,
 25  t2.y6 = m.y6,
 26  t2.y7 = m.y7,
 27  t2.y8 = m.y8,
 28  t2.y9 = m.y9,
 29  t2.y0 = m.y0;

20000 rows merged.

SQL> disc

So T1 was the table with mainly null columns. The trace file shows this result:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.57       0.57          0        770      56302       20000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.57       0.58          0        770      56302       20000

whereas when you compare it to T2, with the pre-populated "junk":

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.49       0.49          0       1046      20884       20000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.49       0.49          0       1046      20884       20000

Now of course you can get those better results with T1 by rebuilding the IOT with plenty of free space to hold the row expansion. Repeating the merge after an "alter table move pctfree 90" gives a trace result of:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.51       0.54        367       6396      20418       20000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.53       0.56        367       6396      20418       20000

but of course, I’ve also just imposed that PCTFREE 90 overhead on all of the already-merged (or "filled") rows.
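For reference, the rebuild between the two runs was nothing more exotic than:

-- rebuild the IOT, leaving 90% of each block free for row growth
alter table T1 move pctfree 90;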

The Hakan factor would be a nice fix… but alas:

SQL> alter table T1 minimize records_per_block;
alter table T1 minimize records_per_block
*
ERROR at line 1:
ORA-28601: invalid [no]MINIMIZE option 

(Tested on v10, 11 and 12)

June 29, 2014

12c nasty with remote query optimization

Filed under: Uncategorized — connormcdonald @ 6:01 pm

We have a fairly common query process where we run a MERGE command to compare a remote table to a local copy of it, as "poor man’s" Golden Gate to bring that table up to date on a regular basis.  [Editor’s note: writing MERGEs is more complicated but a lot cheaper than Golden Gate :-)]

After an upgrade to 12c, the performance of some of the MERGEs went very bad… and you can see what happened in the (sanitised) example below.

The join inside the USING clause is what will be executed on the remote database (mydb), and it’s been hinted to run in a particular way.

SQL>   explain plan
  2    into sys.plan_table$
  3    for
  4    merge 
  5    into local_copy.loc_t1 w
  6    using ( select /*+ leading(t1) index(t2 t2_pk) use_nl_with_index(t2)*/
  7               t1.col1
  8              ,t1.col2
  9              ,t1.col3
        ...
 27            from scott.t1@mydb t1,
 28                 scott.t2@mydb t2
 29            where t1.seq = t2.seq
 30            and ...
 31            ) p
 32     on (  p.seq = w.seq
 33    )
 34    when matched then
        ..
 75    when not matched then
        ..

Explained.

Now we can’t see directly from the explain plan how the query will be run on the remote database – we just get a "REMOTE" line in the plan. However, the additional data in the plan reveals a problem:

------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Inst   |IN-OUT|
------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT      |             |  1068K|   696M|       |  2492K  (1)| 00:01:38 |        |      |
|   1 |  MERGE               | LOC_T1      |       |       |       |            |          |        |      |
|   2 |   VIEW               |             |       |       |       |            |          |        |      |
|*  3 |    HASH JOIN OUTER   |             |  1068K|   298M|   210M|  2492K  (1)| 00:01:38 |        |      |
|   4 |     REMOTE           |             |  1068K|    90M|       | 50193   (2)| 00:00:02 | PORAI~ | R->S |
|   5 |     TABLE ACCESS FULL| T1          |    38M|  3625M|       | 91205   (2)| 00:00:04 |        |      |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("P"."SEQ"="W"."SEQ"(+))

Look very closely at the query Oracle will be throwing at the remote database:

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT /*+ NO_MERGE LEADING ("T1") USE_NL_WITH_INDEX ("A2") INDEX ("A2" "T2_PK") */
       ...
       FROM "SCOTT"."T1" "A1","SCOTT"."T2" "A2" 
       WHERE "A1"."SEQ"="A2"."SEQ"

The two tables have been re-aliased as A1 and A2, but notice that one of the hints did NOT get corrected – the LEADING hint still refers to "T1".  The lack of a valid LEADING hint led to a bad plan on the remote database, and performance problems as a result.  For our case, the solution was to explicitly add a NO_MERGE hint to the original statement:

SQL>   explain plan
  2    into sys.plan_table$
  3    for
  4    merge 
  5    into local_copy.loc_t1 w
  6    using ( select /*+ no_merge leading(t1) index(t2 t2_pk) use_nl_with_index(t2)*/
  7               t1.col1
  8              ,t1.col2
  9              ,t1.col3
        ...
 27            from scott.t1@mydb t1,
 28                 scott.t2@mydb t2
 29            where t1.seq = t2.seq
 30            and ...
 31            ) p
 32     on (  p.seq = w.seq
 33    )
 34    when matched then
        ..
 75    when not matched then
        ..

Explained.

which yielded a correctly hinted SQL statement on the remote database:

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT /*+ NO_MERGE LEADING ("A1") USE_NL_WITH_INDEX ("A2") INDEX ("A2" "T2_PK") */
       ...
       FROM "SCOTT"."T1" "A1","SCOTT"."T2" "A2" 
       WHERE "A1"."SEQ"="A2"."SEQ"

This has only come to light on 12c – the previous version we were on (11.2.0.3) was unaffected.

Testing…the surgeon’s approach

Filed under: Uncategorized — connormcdonald @ 5:24 pm

I played a lot of volleyball in a bygone life :-) and subsequently ruined my knees to the extent that I needed surgery. I got a shock when the surgeon (after a series of x-rays and checks) said to me: "Of course, we’ll only know once we’re in there".

So here’s a body part (a knee) that’s had hundreds of thousands of years to evolve, so you’d expect that knees are pretty much the same worldwide, yet an experienced and qualified surgeon puts the "we can’t be 100% sure" caveat out there before chopping me open.

I wish we could apply the same process to the testing of IT software.  I remember reading the mantra "it’s harder to fix bugs once they’re in production", and that’s certainly true.  However, somewhere along the way, that became the justification for test cycles being incredibly long and mind-bogglingly detailed.  If my Finance software can’t balance the books, then yes, that’s a big drama.  But if the "Save" button is a shade of blue that didn’t match the design screenshots – is it really worth holding back the production implementation?  There are two problems with a mantra of "we will find every possible defect":

1) You can find defects in software almost ad infinitum.  They just get less and less severe, and your testing cycle bottlenecks your entire IT department.

2) You create a false confidence in the testing.  "Hell, if we spent 12 months in testing, then we’ll never find any bugs once we go to Production." 

So I say – why not take the surgeon’s approach?  Sure, we’ll test the product to a reasonable level of satisfaction, but we’ll readily accept the fact that the ultimate test ground is Production itself.  "We’ll only know once it’s in there."

Upgrade to 12c … credentials

Filed under: Uncategorized — connormcdonald @ 3:56 pm

We did a "real" upgrade to 12c this weekend, where "real" means a production system, as opposed to my laptop, a play VM etc etc :-)

It all went relatively smoothly, except for one interesting thing that I can’t say with 100% certainty was caused by the upgrade, but it would appear to be the case.

After the upgrade, our scheduler external jobs started failing.  A quick look in the alert log revealed:

Sun Jun 29 09:26:11 2014
ORA-12012: error on auto execute of job "FIRE_SCHED_DAILY"
ORA-27496: credential "LOCAL_OS_ACCT" is disabled

So it’s possible (not proven) that upgrading to 12c might disable credentials. In this particular case, the database went from standalone to a pluggable database.

The remedy was to simply drop and recreate the credential.
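For the record, that’s a couple of DBMS_CREDENTIAL calls in 12c (a sketch – the username and password below are placeholders, not our real ones):

exec dbms_credential.drop_credential('LOCAL_OS_ACCT')

begin
  dbms_credential.create_credential(
    credential_name => 'LOCAL_OS_ACCT',
    username        => 'local_os_user',      -- placeholder
    password        => 'not_the_real_one');  -- placeholder
end;
/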

April 26, 2014

Dumb triggers part 2 – session level control

Filed under: Uncategorized — connormcdonald @ 12:10 pm

In the previous post, I pontificated about triggers that "lock you in" to having them fire, which can create dramas when it comes to doing data patching.

Maybe you can design your application around this, but if you can’t, the last thing you want is to have to take an outage so that you can disable the trigger whilst you do your data maintenance.  Ideally you want the trigger to fire as per normal, except in your session.

And that’s actually pretty easy to do.  A simple example is below:

SQL> create or replace
  2  package TRIGGER_CTL is
  3
  4  --
  5  -- Session level control of triggers for data patching etc
  6  --
  7
  8    -- add a trigger to NOT fire for this session
  9    procedure disable(p_trigger_name varchar2);
 10
 11    -- reinstate normal trigger operation for this session
 12    procedure enable(p_trigger_name varchar2);
 13
 14    -- reinstate all triggers for this session
 15    procedure enable_all;
 16
 17    -- return if trigger is active in this session (which of course is the default)
 18    function enabled_in_session(p_trigger_name varchar2) return boolean;
 19
 20  end;
 21  /

Package created.

SQL> create or replace
  2  package body TRIGGER_CTL is
  3
  4    type t_disabled_triggers is table of number
  5      index by varchar2(30);
  6
  7    g_disabled_triggers t_disabled_triggers;
  8
  9  procedure disable(p_trigger_name varchar2) is
 10  begin
 11    g_disabled_triggers(upper(p_trigger_name)) := 1;
 12  end;
 13
 14  procedure enable(p_trigger_name varchar2) is
 15  begin
 16    if g_disabled_triggers.exists(upper(p_trigger_name)) then
 17      g_disabled_triggers.delete(upper(p_trigger_name));
 18    end if;
 19  end;
 20
 21  procedure enable_all is
 22  begin
 23    g_disabled_triggers.delete;
 24  end;
 25
 26  function enabled_in_session(p_trigger_name varchar2) return boolean is
 27  begin
 28    return not g_disabled_triggers.exists(upper(p_trigger_name));
 29  end;
 30
 31  end;
 32  /

Package body created.

Once we’ve got that little utility coded up, it’s easy to get session-level control over triggers, simply by adding a check:

CREATE OR REPLACE TRIGGER MY_TRG
BEFORE INSERT OR UPDATE ON MY_TABLE
FOR EACH ROW
BEGIN
  if trigger_ctl.enabled_in_session('MY_TRG') then
    <my original trigger body code>
  end if;
END;
/

And to control the trigger at session level, it’s then just a call to your API:

SQL> exec trigger_ctl.disable('MY_TRG')
SQL> -- then your data maintenance
SQL> exec trigger_ctl.enable('MY_TRG')
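And because the "disabled list" is nothing more than package state, which is private to each session, everyone else’s triggers keep firing as normal.  You can sanity-check the state at any time – a quick sketch:

set serveroutput on
begin
  -- reports on this session only; other sessions are unaffected
  if trigger_ctl.enabled_in_session('MY_TRG') then
    dbms_output.put_line('MY_TRG will fire in this session');
  else
    dbms_output.put_line('MY_TRG is suppressed in this session');
  end if;
end;
/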