Learning is not a spectator sport

October 5, 2014

Openworld

Filed under: Uncategorized — connormcdonald @ 9:20 pm

Well, the annual spectacle of enormous proportions has come to a conclusion again. And that's probably the first reason I'd recommend OpenWorld to anyone who works with Oracle and has never been to it. It's a jaw-dropping moment just to see the scale of the event, and how impressively it's organised in terms of facilities, lunches, registration and the like.

But onto the technical elements; here are my impressions of this year's conference:

1) Big data, Big data, Big data, Big data

Somewhere in the conference I think there might have been some coverage of database, middleware and cloud :-) but it was dwarfed by Big Data (no pun intended). From an Australian perspective, I found this quite interesting, because whilst there is a groundswell of interest in the philosophies and tools relating to big data, I'd hardly say it's taken the nation by storm. Maybe things will change in the coming years.

Having said that, one thing I was pleased to see (as an Oracle dinosaur) was a realisation that SQL still persists as the dominant query language, and hence the clamour by vendors to provide a good SQL interface layer on top of their respective big data products. The promise of running SQL over some, any or all data sources, whether they be RDBMS or any other data structure, sounds very cool.

2) In-memory

Lots and lots of presentations on the new in-memory database option, and it's very hard to find anything bad to say about the feature. It just looks wonderful, although you have to pay for all that wonderment :-) My only disappointment from the conference was that each session was done "standalone", which meant the first 15 minutes of each precious 45-minute slot were the same 10 slides describing the fundamentals. I would have preferred a single intro session, with the other sessions then going deeper into the nitty-gritty.

3) Passion matters

A number of presentations on topics close to my heart did little to inspire me, whereas others on topics that (I thought) held no interest for me were riveting, and it all comes down to the enthusiasm of the presenter for the technology they were talking about.

4) The app

Sadly… the OpenWorld app was a PR disaster for Oracle, based on the tweets I saw about it (some of which were my own). I'm mentioning this not to unload a fresh bucket of invective in the public arena, but to encourage people to provide feedback to the conference organisers. I was as guilty as anyone else, getting more and more emotive with my tweets as I got more and more frustrated with the app as the week went on. But I also tabulated my issues and sent them off to the organisers, trying to be as objective as possible and providing suggested fixes. I recommend you do the same.

All up, another great week, with lots of cool new things to explore and blog about :-) Now onto downloading all the slides…

October 1, 2014

Slides from OakTableWorld and OpenWorld

Filed under: Uncategorized — connormcdonald @ 8:13 am

Thanks to everyone who came along to my sessions at OpenWorld.

You can review the slides via the links below:

http://www.slideshare.net/hamcdc/oaktable-world-sep14-clonedb

http://www.slideshare.net/hamcdc/openworld-sep14-12c-fordevelopers


September 26, 2014

Openworld day minus 3

Filed under: Uncategorized — connormcdonald @ 6:50 am

It's day minus 3, or day +1 depending on your point of view, at Openworld 2014.

I’m here a few days early for the Ace Director briefings, where a selection of product managers provide an insight into what is coming in the Oracle world, either at Openworld or in the coming year.

I must admit, I arrived at the briefings somewhat sceptical, because a couple of years ago, when I was last at the briefings, the managers seemed very hesitant to share anything with us, generally waving us off with "you'll need to wait until the conference", which of course defeats the entire purpose.

However, now that I'm at the conclusion of day 1, I've been very impressed so far. The product managers were open and honest, happy to tell us about the things they're very proud of, and also the things that still need more work, or have not panned out as well as they expected. Nice and genuine. And perhaps even more important, there was a good amount of passion for the products this year. People who are excited about their products are much more interesting to listen to.

Today has mainly been about middleware and client, which aren't my area, but even so, it's been good to see what's been happening there. Obviously, it's all about smartphones and tablets, but there was some very impressive stuff on the UI front. Apologies: non-disclosure agreements prohibit us from sharing too much content, but you'll be able to see a lot of it at Openworld, or in the presentations that come later.

Thomas Kurian will be giving us an address shortly – it will be interesting to hear what's floating his boat (so to speak).

[Disclosure: The Oracle Ace Director program has paid for my travel to Openworld]

September 6, 2014

Openworld bound..

Filed under: Uncategorized — connormcdonald @ 8:18 pm

I'll be at Openworld again this year, and have a couple of speaking slots…

I'll be talking about new features for developers in 12c on Sunday, Sep 28 at 2:30, Moscone South 303:

https://oracleus.activeevents.com/2014/connect/sessionDetail.ww?SESSION_ID=2244

and of course, at the awesome Oaktable world..

http://www.oraclerealworld.com/oaktable-world/agenda/

Drop in, learn some cool things, or just pop up and say Hello!

August 17, 2014

MERGE and IOT’s ….. unhappy bedfellows

Filed under: Uncategorized — connormcdonald @ 5:32 pm

Anyone who has used Oracle for a while will be familiar with the parent/child locking "issue" that arises with foreign keys and the indexes (or lack thereof) on them. For many years you'd hear people crying "bug", etc., but thankfully most now know the reason, and accept it as sensible behaviour.
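For anyone who hasn't seen it, the classic scenario runs along these lines (a minimal sketch of my own, assuming an unindexed foreign key on the child table and two concurrent sessions; the table names are illustrative):

-- Session 1: uncommitted DML on the child table
insert into child_table (id, parent_id) values (100, 1);

-- Session 2: deleting a parent row needs a table-level share lock
-- on CHILD_TABLE because its foreign key column is unindexed, so
-- this statement blocks until session 1 commits or rolls back
delete from parent_table where id = 2;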

But let's take a look at a slight variation on that theme.

Let's start with a table called "LOC", which will be our parent table in this example. Note that it is an IOT. We'll also have a child table, "LOC_CHILD", which is also an IOT.

SQL> CREATE TABLE LOC
  2  (
  3    LOC_ID              NUMBER(4)            NOT NULL,
  4    DATA                     NUMBER(6),
  5    CONSTRAINT LOC_PK
  6    PRIMARY KEY
  7    ( LOC_ID)
  8    ENABLE VALIDATE
  9  )
 10  ORGANIZATION INDEX
 11  /

Table created.

SQL> CREATE TABLE LOC_CHILD
  2  (
  3    CHILD_SEQ                  NUMBER(12)          NOT NULL,
  4    LOC_ID               NUMBER(4)           NOT NULL,
  5    CHILD_DATA  NUMBER(15,6),
  6    CONSTRAINT LOC_CHILD_PK
  7    PRIMARY KEY
  8    (CHILD_SEQ, LOC_ID)
  9    ENABLE VALIDATE
 10  )
 11  ORGANIZATION INDEX
 12  /

Table created.

SQL> insert into LOC
  2  select rownum,50
  3  from dual
  4  connect by level <= 5
  5  /

5 rows created.

Now, being a good DBA :-) we've read all the "gloom and doom" nonsense about unindexed foreign keys, so just to be careful, we'll add that index onto our child table before adding our foreign key back to LOC.

SQL> CREATE INDEX LOC_CHILD_IX ON LOC_CHILD
  2  (LOC_ID)
  3  /

Index created.

SQL> ALTER TABLE LOC_CHILD ADD (
  2    CONSTRAINT LOC_CHILD_FK
  3    FOREIGN KEY ( LOC_ID)
  4    REFERENCES LOC (LOC_ID)
  5    ENABLE VALIDATE)
  6  /

Table altered.

SQL> insert into LOC_CHILD
  2  select rownum,mod(rownum,5)+1,dbms_random.value(1000,5000)
  3  from dual
  4  connect by level <= 2000
  5  /

2000 rows created.

SQL> commit;

Commit complete.

So the scene is set… we've got our tables seeded with some data, and we're ready to go.

Let's update a row in the parent table LOC:

SQL> UPDATE loc
  2  SET    DATA = 99
  3  WHERE  LOC_ID = 2;

1 row updated.

Now we’ll pop into a new session and update the child table LOC_CHILD:

SQL> MERGE 
  2       INTO  LOC_CHILD
  3       USING (SELECT 500 CHILD_SEQ,
  4                     2 LOC_ID,
  5                     1000 CHILD_DATA
  6                FROM DUAL) M
  7          ON (    LOC_CHILD.CHILD_SEQ = M.CHILD_SEQ
  8              AND LOC_CHILD.LOC_ID = M.LOC_ID)
  9  WHEN MATCHED
 10  THEN
 11     UPDATE SET
 12        LOC_CHILD.CHILD_DATA =  NVL (LOC_CHILD.CHILD_DATA, 0) + M.CHILD_DATA
 13  WHEN NOT MATCHED
 14  THEN
 15     INSERT     (CHILD_SEQ,
 16                 LOC_ID,
 17                 CHILD_DATA)
 18         VALUES (M.CHILD_SEQ,
 19                 M.LOC_ID,
 20                 M.CHILD_DATA);

[stuck]

And splat… we're stuck. (Without evidence to support it) I'd hypothesize it's due to the flexibility of the MERGE command. A single merge can insert, update and even delete rows, so I'm guessing that to handle this flexibility (in particular, the DELETE option), the locking errs on the side of safety.
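If you ever need to confirm where you're stuck, a quick look from a third session should show the merge session waiting on a TM enqueue (a sketch, assuming you have access to v$session):

select sid, event, blocking_session
from   v$session
where  blocking_session is not null;

-- expect the stuck session to be waiting on something like
-- "enq: TM - contention", blocked by the parent-update session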

August 9, 2014

12c – Nested tables vs Associative arrays

Filed under: Uncategorized — connormcdonald @ 9:31 am

This was going to be the immediate follow-up to my previous post, but 12.1.0.2 came out and I got all excited about that and forgot to post this one :-)

Anyway, the previous post showed how easy it is to convert between nested tables and associative arrays. The nice thing in 12c is that this is no longer needed – you can query the associative arrays directly:

SQL> create or replace package PKG as
  2
  3    type num_list is table of number index by pls_integer;
  4
  5  end;
  6  /

Package created.

SQL>
SQL> declare
  2    v pkg.num_list;
  3  begin
  4    v(1) := 10;
  5    v(2) := 20;
  6
  7    for i in ( select * from table(v) ) loop
  8      dbms_output.put_line(i.column_value);
  9    end loop;
 10  end;
 11  /
10
20

PL/SQL procedure successfully completed.

12.1.0.2 security grrr…

Filed under: Uncategorized — connormcdonald @ 9:25 am

One of my favourite security "tricks" used to be the following:

SQL> [create|alter] user MY_USER identified by values 'impossible';

Looks odd, but setting the stored (encrypted) value of someone's password to a string that nothing could ever encrypt to means you'll never be able to connect as that account. (Think schemas owning objects, etc.)

I hear you ask: "Why not just lock the account?"

Well…in my opinion, that’s a security hole.  Let’s say Oracle publishes a security bug concerning (say) the MDSYS schema.  As a hacker, I’d like to know if a database has the MDSYS schema.  All I need do is:

SQL> connect MDSYS/nonsense

Why is that a security hole? Because I won't get "invalid username/password". I'll get "ORA-28000: the account is locked", and voila… now I know that the MDSYS user is present in that database.

Setting a user's password to an impossible encrypted value means a hacker will never know whether the account exists at all, because the error returned is the familiar "ORA-01017: invalid username/password; logon denied".
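To see the contrast side by side, here's a minimal sketch (user names are illustrative, and it assumes a pre-12.1.0.2 database, as discussed below):

-- a locked account leaks its existence...
alter user mdsys account lock;
connect mdsys/nonsense
-- ORA-28000: the account is locked

-- ...whereas an impossible password hash does not
alter user my_user identified by values 'impossible';
connect my_user/anything
-- ORA-01017: invalid username/password; logon denied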

But look what's happened in 12.1.0.2 :-(

SQL> create user MY_USER identified by values 'impossible';
create user MY_USER identified by values 'impossible'
*
ERROR at line 1:
ORA-02153: invalid VALUES password string

A backward step in my opinion…

July 14, 2014

Associative arrays and Nested tables

Filed under: Uncategorized — connormcdonald @ 7:52 pm

A common criticism of PL/SQL is that the "original" array datatype, now called an associative array, is perfect for passing data back and forth to 3GL environments (for example, .Net), but cannot be used natively within SQL. For example:

SQL> create or replace
  2  package BLAH is
  3    type num_list is table of number index by pls_integer;
  4    type str_list is table of varchar2(30) index by pls_integer;
  5
  6  procedure ODP_PROC(n out num_list, s out str_list);
  7
  8  procedure USE_IN_QUERY(n num_list);
  9
 10  end;
 11  /

Package created.

SQL> create or replace
  2  package body BLAH is
  3
  4  procedure ODP_PROC(n out num_list, s out str_list) is
  5  begin
  6    select rownum, 'mystring'||rownum
  7    bulk collect into n,s
  8    from dual
  9    connect by level < 100;
 10  end;
 11
 12  procedure USE_IN_QUERY(n num_list) is
 13    x int;
 14  begin
 15    select count(*)
 16    into   x
 17    from   user_objects
 18    where object_id in (
 19       select column_value
 20       from   table(n)
 21       );
 22  end;
 23
 24  end;
 25  /

Warning: Package Body created with compilation errors.

SQL> sho err
Errors for PACKAGE BODY BLAH:
LINE/COL ERROR
-------- -----------------------------------------------------------------
14/3     PL/SQL: SQL Statement ignored
19/13    PL/SQL: ORA-22905: cannot access rows from a non-nested table
         item
19/19    PLS-00382: expression is of wrong type

So we then look at using nested tables, but of course, a lot of 3GLs will not understand such a type, sometimes needing some sort of type translation. Hence people complain about the incompatibility, etc. But it's not all that hard to get the best of both worlds… use associative arrays to pass data externally to 3GLs, and convert to a nested table if you want to use the data within SQL, e.g.

SQL> create or replace
  2  type  nt_num_list is table of number;
  3  /

Type created.

SQL> create or replace
  2  package BLAH is
  3    type num_list is table of number index by pls_integer;
  4    type str_list is table of varchar2(30) index by pls_integer;
  5
  6  procedure ODP_PROC(n out num_list, s out str_list);
  7
  8  procedure USE_IN_QUERY(n num_list);
  9
 10  end;
 11  /

Package created.

SQL> create or replace
  2  package body BLAH is
  3
  4  procedure ODP_PROC(n out num_list, s out str_list) is
  5  begin
  6    select rownum, 'mystring'||rownum
  7    bulk collect into n,s
  8    from dual
  9    connect by level < 100;
 10  end;
 11
 12  procedure USE_IN_QUERY(n num_list) is
 13    x int;
 14
 15    local_copy nt_num_list := nt_num_list();
 16  begin
 17    local_copy.extend(n.count);
 18    for i in 1 .. n.count loop
 19       local_copy(i) := n(i);
 20    end loop;
 21
 22    select count(*)
 23    into   x
 24    from   user_objects
 25    where object_id in (
 26       select column_value
 27       from   table(local_copy)
 28       );
 29  end;
 30
 31  end;
 32  /

Package body created.

That wasn't that hard, was it? :-)
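And a quick anonymous block to exercise it (a sketch):

declare
  n blah.num_list;
  s blah.str_list;
begin
  blah.odp_proc(n, s);    -- associative arrays, ready for the 3GL
  blah.use_in_query(n);   -- converted to a nested table internally
end;
/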

July 6, 2014

No Hakan factor for IOT

Filed under: Uncategorized — connormcdonald @ 5:24 pm

Sadly, there seems to be no concept of the Hakan factor for an IOT.

I have an application which merges into an IOT, the merge incrementally populating a swag of initially null columns, hence growing the rows in size.  Some simple benchmarking shows the overhead of this versus merging into a table with pre-populated values:

SQL> create table T1
  2   ( x int primary key,
  3     y1 number(10),
  4     y2 number(10),
  5     y3 number(10),
  6     y4 number(10),
  7     y5 number(10),
  8     y6 number(10),
  9     y7 number(10),
 10     y8 number(10),
 11     y9 number(10),
 12     y0 number(10)
 13  )
 14  organization index
 15  /

Table created.

SQL> create table T2
  2   ( x int primary key,
  3     y1 number(10),
  4     y2 number(10),
  5     y3 number(10),
  6     y4 number(10),
  7     y5 number(10),
  8     y6 number(10),
  9     y7 number(10),
 10     y8 number(10),
 11     y9 number(10),
 12     y0 number(10)
 13  )
 14  organization index
 15  /

Table created.

SQL> insert into t1 (x) select rownum from dual connect by level <= 100000;

100000 rows created.

SQL> insert into t2 select rownum,123,123,123,123,123,123,123,123,123,123 from dual connect by level <= 100000;

100000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(user,'T1')

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user,'T2')

PL/SQL procedure successfully completed.

SQL> alter session set events = '10046 trace name context forever, level 8';

Session altered.

SQL> merge into t1
  2  using ( select rownum x,
  3                        12 y1,
  4                        23 y2,
  5                        34 y3,
  6                        45 y4,
  7                        56 y5,
  8                        67 y6,
  9                        78 y7,
 10                        89 y8,
 11                        90 y9,
 12                        100 y0
 13          from dual
 14          connect by level <= 20000
 15        ) m
 16  on ( t1.x = m.x )
 17  when matched then
 18  update
 19  set
 20  t1.y1 = m.y1,
 21  t1.y2 = m.y2,
 22  t1.y3 = m.y3,
 23  t1.y4 = m.y4,
 24  t1.y5 = m.y5,
 25  t1.y6 = m.y6,
 26  t1.y7 = m.y7,
 27  t1.y8 = m.y8,
 28  t1.y9 = m.y9,
 29  t1.y0 = m.y0;

20000 rows merged.

SQL> merge into t2
  2  using ( select rownum x,
  3                        12 y1,
  4                        23 y2,
  5                        34 y3,
  6                        45 y4,
  7                        56 y5,
  8                        67 y6,
  9                        78 y7,
 10                        89 y8,
 11                        90 y9,
 12                        100 y0
 13          from dual
 14          connect by level <= 20000
 15        ) m
 16  on ( t2.x = m.x )
 17  when matched then
 18  update
 19  set
 20  t2.y1 = m.y1,
 21  t2.y2 = m.y2,
 22  t2.y3 = m.y3,
 23  t2.y4 = m.y4,
 24  t2.y5 = m.y5,
 25  t2.y6 = m.y6,
 26  t2.y7 = m.y7,
 27  t2.y8 = m.y8,
 28  t2.y9 = m.y9,
 29  t2.y0 = m.y0;

20000 rows merged.

SQL> disc

So T1 was the table with mainly null columns. The trace file shows this result:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.57       0.57          0        770      56302       20000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.57       0.58          0        770      56302       20000

whereas comparing it to T2 with the pre-populated "junk":

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.49       0.49          0       1046      20884       20000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.49       0.49          0       1046      20884       20000

Now of course you can get those better results with T1 by rebuilding the IOT with plenty of free space to hold the row expansion. Repeating the merge after an "alter table move pctfree 90" gives a trace result of:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.51       0.54        367       6396      20418       20000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.53       0.56        367       6396      20418       20000

but of course, I've also just baked that PCTFREE 90 free space into all of the already merged (or "filled") rows.

The Hakan factor would be a nice fix… but alas:

SQL> alter table T1 minimize records_per_block;
alter table T1 minimize records_per_block
*
ERROR at line 1:
ORA-28601: invalid [no]MINIMIZE option 

(Tested on v10, 11 and 12)
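For contrast, the same command is perfectly happy with a heap table; a minimal sketch (T3 is just an illustrative table, and note that the table must contain some rows before you can minimize):

create table t3 ( x int, y varchar2(100) );

insert into t3 select rownum, rpad('x',100,'x')
from dual connect by level <= 5;

commit;

-- caps future inserts at the current maximum rows per block
alter table t3 minimize records_per_block;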

June 29, 2014

12c nasty with remote query optimization

Filed under: Uncategorized — connormcdonald @ 6:01 pm

We have a fairly common query process, where we run a MERGE command to compare a remote table to a local copy of it, as a "poor man's" Golden Gate to bring that table up to date on a regular basis. [Editor's note: writing MERGEs is more complicated, but a lot cheaper than Golden Gate :-)]

After an upgrade to 12c, the performance of some of the MERGEs went very bad… and you can see what happened in the (sanitised) example below.

The USING clause below contains a join that we'll be executing on the remote database (mydb). It's been hinted to run in a particular way.

SQL>   explain plan
  2    into sys.plan_table$
  3    for
  4    merge 
  5    into local_copy.loc_t1 w
  6    using ( select /*+ leading(t1) index(t2 t2_pk) use_nl_with_index(t2)*/
  7               t1.col1
  8              ,t1.col2
  9              ,t1.col3
        ...
 27            from scott.t1@mydb t1,
 28                 scott.t2@mydb t2
 29            where t1.seq = t2.seq
 30            and ...
 31            ) p
 32     on (  p.seq = w.seq
 33    )
 34    when matched then
        ..
 75    when not matched then
        ..

Explained.
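(For reference, the plan output below came via DBMS_XPLAN, along the lines of this sketch; the 'ALL' format level includes the Remote SQL section:)

select * from table(dbms_xplan.display(null, null, 'ALL'));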

Now, we can't see directly from the explain plan how the query will be run on the remote database – we just get a "REMOTE" line in the plan. However, the additional data in the plan reveals a problem:

------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Inst   |IN-OUT|
------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT      |             |  1068K|   696M|       |  2492K  (1)| 00:01:38 |        |      |
|   1 |  MERGE               | LOC_T1      |       |       |       |            |          |        |      |
|   2 |   VIEW               |             |       |       |       |            |          |        |      |
|*  3 |    HASH JOIN OUTER   |             |  1068K|   298M|   210M|  2492K  (1)| 00:01:38 |        |      |
|   4 |     REMOTE           |             |  1068K|    90M|       | 50193   (2)| 00:00:02 | PORAI~ | R->S |
|   5 |     TABLE ACCESS FULL| T1          |    38M|  3625M|       | 91205   (2)| 00:00:04 |        |      |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("P"."SEQ"="W"."SEQ"(+))

Look very closely at what query Oracle will be throwing at the remote database

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT /*+ NO_MERGE LEADING ("T1") USE_NL_WITH_INDEX ("A2") INDEX ("A2" "T2_PK") */
       ...
       FROM "SCOTT"."T1" "A1","SCOTT"."T2" "A2" 
       WHERE "A1"."SEQ"="A2"."SEQ"

The two tables have been re-aliased as A1 and A2, but notice that the LEADING hint did NOT get corrected; it still refers to "T1". The lack of a valid LEADING hint led to a bad plan on the remote database, and performance problems as a result. For our case, the solution was to explicitly add a NO_MERGE hint to the original statement:

SQL>   explain plan
  2    into sys.plan_table$
  3    for
  4    merge 
  5    into local_copy.loc_t1 w
  6    using ( select /*+ no_merge leading(t1) index(t2 t2_pk) use_nl_with_index(t2)*/
  7               t1.col1
  8              ,t1.col2
  9              ,t1.col3
        ...
 27            from scott.t1@mydb t1,
 28                 scott.t2@mydb t2
 29            where t1.seq = t2.seq
 30            and ...
 31            ) p
 32     on (  p.seq = w.seq
 33    )
 34    when matched then
        ..
 75    when not matched then
        ..

Explained.

which yielded a correctly hinted SQL on the remote database

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT /*+ NO_MERGE LEADING ("A1") USE_NL_WITH_INDEX ("A2") INDEX ("A2" "T2_PK") */
       ...
       FROM "SCOTT"."T1" "A1","SCOTT"."T2" "A2" 
       WHERE "A1"."SEQ"="A2"."SEQ"

This has only come to light on 12c – the previous version we were on (11.2.0.3) was unaffected.
