UKOUG 2016

Just a little video montage of the fun and learning from UKOUG.  A great conference every year.

I was fortunate enough to receive a Best Speaker award (for a 12c features talk given at the 2015 conference). I gave three talks, one of which was the Database Keynote for 2016 – I felt very privileged to be invited to do so.  The slides for the talks can be found on the AskTom site under the Resources tab.

But for me, there were two highlights – one professional and one personal.

  • The professional one was the same as at most conferences – the chance to catch up face to face with IT professionals from both inside Oracle Corporation and the greater Oracle community.  Whilst we all “blog”, and “tweet”, and “discuss” in some way, shape or form, you still can’t beat banter over a beer 🙂 I love how casual conversation often leads to insights into technical areas that you hadn’t even planned on chatting about.
  • The personal one was a little surprise my partner Genevieve sprang on me.  Secretly she had booked flights to align with mine, and so there she was at the airport at 3am, ready to jump on board the flight to the UK with me.  It was an awesome early Christmas gift!

If I don’t get a chance to blog again in the next 48 hours, have a great New Year’s Eve celebration wherever you are in the world.

Multi-table insert

An interesting question came through on AskTom recently.  The requirement was to perform a single pass through a source table, and load the data into three target tables.

Now that’s trivially achieved with a multi-table insert, but there was a subtle “twist” on this requirement.  Each of the three target tables may already contain some, none or all of the rows from the source table.  Hence the requirement was to “fill in the blanks”.

So here’s a little demo of one way we could achieve this.

First, here is our source table with 10 rows (1 through 10):


SQL> create table t_source as select rownum s from dual connect by level <= 10;

Table created.

SQL>
SQL> select * from t_source;

         S
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

And here are our three target tables, T1, T2 and T3, each already containing a subset of the rows:


SQL> create table t1 as select rownum x from dual connect by level <= 5;

Table created.

SQL> create table t2 as select rownum y from dual connect by level <= 3;

Table created.

SQL> create table t3 as select rownum z from dual connect by level <= 6;

Table created.

SQL> select * from t1 order by 1;

         X
----------
         1
         2
         3
         4
         5

5 rows selected.

SQL> select * from t2 order by 1;

         Y
----------
         1
         2
         3

3 rows selected.

SQL> select * from t3 order by 1;

         Z
----------
         1
         2
         3
         4
         5
         6

6 rows selected.

Now obviously we could perform a simple insert-select-where-not-exists style operation against each target table in turn, but that means multiple passes over the source, and we need to meet our poster’s requirement of a single pass through the source table.
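
Just to make that alternative concrete, here is a minimal sketch of the multi-pass approach (same T_SOURCE, T1, T2 and T3 tables as above; only the T1 statement shown, with T2 and T3 needing the same again):

insert into t1 (x)
select s
from   t_source
where  not exists
  ( select null
    from   t1
    where  t1.x = t_source.s );

So instead we will take advantage of an outer join to pick up just those rows that do not already have a match, and feed them all into a single multi-table insert.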



SQL> insert all
  2    when in_tab1 is null then
  3      into t1 (x ) values (s )
  4    when in_tab2 is null then
  5      into t2 (y ) values (s )
  6    when in_tab3 is null then
  7      into t3 (z ) values (s )
  8  select
  9    t_source.s,
 10    t1.x in_tab1,
 11    t2.y in_tab2,
 12    t3.z in_tab3
 13  from t_source, t1, t2, t3
 14  where t_source.s = t1.x(+)
 15  and t_source.s = t2.y(+)
 16  and t_source.s = t3.z(+)
 17  /

16 rows created.

SQL>
SQL> select * from t1 order by 1;

         X
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

SQL> select * from t2 order by 1;

         Y
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

SQL> select * from t3 order by 1;

         Z
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

SQL>

And the job is done – 16 rows in total, being the 5 missing from T1 (6 through 10), the 7 missing from T2 (4 through 10), and the 4 missing from T3 (7 through 10). Our poster never really elaborated on why a single pass was necessary – but let’s assume it was due to the source table being large. If we look at the execution plan, we see a swag of cascading hash joins, so whilst a single pass of the source table has been achieved, there is no guarantee that we won’t end up with other issues in processing all of those “concurrent” joins.



---------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |    10 |   120 |     8   (0)| 00:00:01 |
|*  1 |  HASH JOIN OUTER     |          |    10 |   120 |     8   (0)| 00:00:01 |
|*  2 |   HASH JOIN OUTER    |          |    10 |    90 |     6   (0)| 00:00:01 |
|*  3 |    HASH JOIN OUTER   |          |    10 |    60 |     4   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| T_SOURCE |    10 |    30 |     2   (0)| 00:00:01 |
|   5 |     TABLE ACCESS FULL| T2       |     3 |     9 |     2   (0)| 00:00:01 |
|   6 |    TABLE ACCESS FULL | T1       |     5 |    15 |     2   (0)| 00:00:01 |
|   7 |   TABLE ACCESS FULL  | T3       |     6 |    18 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T_SOURCE"."S"="T3"."Z"(+))
   2 - access("T_SOURCE"."S"="T1"."X"(+))
   3 - access("T_SOURCE"."S"="T2"."Y"(+))

But that’s often life on AskTom. We only get half the story 🙂

A 12.2 treat for the festive season

We’re being asked to store more and more data, yet keep backup windows, query performance and the like unchanged, no matter how much we store. As a result, more and more database shops need to partition their data. The problem is that partitioning a table is a significant restructure of the data, which incurs a large outage and all of the accompanying planning and coordination.

Unless you’re on 12.2.

Here’s a demo where we can take an existing table and

  • convert it to range partitioning, using intervals so we don’t need to worry about ongoing partition creation maintenance, and
  • take one of the existing indexes, and convert it to a local index to mirror the table partitioning scheme, and
  • take the other existing index, and globally partition it.

Oh…and of course, we will do the whole thing online, without service disruption.

Oh…and of course, we will do it with a single SQL command.

Ab…So…Lute…Ly awesome!



SQL*Plus: Release 12.1.0.2.0 Production on Thu Dec 22 09:53:37 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Mon Dec 19 2016 13:38:54 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.0.3 - 64bit Production


SQL> create table T as
  2  select d.*
  3  from dba_Objects d,
  4       ( select 1 from dual
  5         connect by level <= 20 )
  6  where d.object_id is not null;

Table created.

SQL> create index IX on t ( object_id );

Index created.

SQL> create index IX2 on t ( created, object_name );

Index created.

SQL> alter table T modify
  2  partition by range (object_id) interval (10000)
  3  (
  4    partition p1 values less than (20000)
  5  ) online
  6  update indexes
  7  ( ix  local,
  8    ix2 global partition by range (created)
  9    (
 10     partition ix2_p1 values less than (date '2016-08-01'),
 11     partition ix2_p2 values less than (maxvalue)
 12   )
 13  );

Table altered.

SQL> select partition_name
  2  from   user_tab_partitions
  3  where  table_name = 'T';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P1
SYS_P1145
SYS_P1146
SYS_P1147
SYS_P1148
SYS_P1149
SYS_P1150

7 rows selected.

SQL> select index_name, partition_name
  2  from   user_ind_partitions
  3  where  index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME
------------------------------ ------------------------------
IX                             P1
IX                             SYS_P1145
IX                             SYS_P1146
IX                             SYS_P1147
IX                             SYS_P1148
IX                             SYS_P1149
IX                             SYS_P1150
IX2                            IX2_P1
IX2                            IX2_P2

9 rows selected.

SQL>
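
As a final sanity check, the dictionary should now reflect the new scheme. Here is a sketch of the sort of queries you could run (your partition counts and names will obviously differ):

select table_name, partitioning_type, interval
from   user_part_tables
where  table_name = 'T';

select index_name, locality, partitioning_type
from   user_part_indexes
where  index_name like 'IX%';

We would expect T to show RANGE partitioning with an interval of 10000, IX to report LOCAL, and IX2 to report GLOBAL.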

Advanced Replication

Long before Streams, and long before GoldenGate, if you wanted to keep data between sites synchronised in some fashion, or even allow sites to independently update their data, there was the Advanced Replication facility in Oracle.  An “extension” of the concept of simple materialized views (or snapshots, as they were called then), it let you design complete replicated environments across Oracle databases.

But it was a non-trivial exercise to do this.  You had to be familiar with replication groups, replication objects, replication sites, master groups, master sites, master definition sites, deferred transactions, quiescing, updatable materialized views, replication catalogs, conflict resolution…the list goes on and on.

Much (all?) of this has been obsoleted in one way or another by GoldenGate, Active Data Guard, and the other simpler, more powerful technologies that the Oracle database offers today. It is probably for this reason that Advanced Replication has come to a natural end of life, as noted in the 12.2 documentation:

“Desupport of Advanced Replication

Starting in Oracle Database 12c release 1 (12.1) the Advanced Replication feature of Oracle Database is desupported.

The Oracle Database Advanced Replication feature is desupported in its entirety. The desupport of this feature includes all functionality associated with this feature: multimaster replication, updateable materialized views, hierarchical materialized views, and deployment templates. Read-only materialized views are still supported with basic replication.

Oracle recommends that you replace your use of Advanced Replication with Oracle GoldenGate.”

So if you are still hanging on to all of those old complicated Advanced Replication mechanisms…it’s time to let them go.

max_enabled_roles – nice touch

Some people get very excited about roles, and quickly the number of roles proliferates to huge numbers…until this happens:

ORA-28031: maximum of 148 enabled roles exceeded
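
That limit is governed by the max_enabled_roles parameter. If you want to check where your own environment sits, a quick sketch (output will obviously vary):

select name, value, isdefault
from   v$parameter
where  name = 'max_enabled_roles';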

But in helping someone out on AskTom, I just found a nice touch in 11.2.0.4.  I had granted my account 200 roles (since the max_enabled_roles parameter is capped at 148), fully expecting to get the standard error when attempting to connect.  But I didn’t – all was fine.  You can see what is going on via the following demo.


SQL> begin
  2  for i in 1 .. 200 loop
  3    execute immediate 'create role rr'||i;
  4    execute immediate 'grant rr'||i||' to scott';
  5  end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> conn scott/tiger
Connected.

SQL> select count(*) from session_roles;

  COUNT(*)
----------
       148

So even though I granted 200 roles, when I queried SESSION_ROLES I did not see all 200, only 148.

My alert.log shows this:

Wed Nov 30 10:32:35 2016
Maximum of 148 enabled roles exceeded for user SCOTT. Not loading all the roles.

Some people might prefer an explicit error, but hopefully any good alert log monitoring would quickly pick this up.  And since there is no workaround once you exceed 148 (other than reducing the number of roles granted), I think avoiding the error is the better strategy.
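
And if you’re curious which of the granted roles missed out for a session, a quick sketch is to diff the grants against what actually got enabled (run as the affected user, SCOTT in our demo):

select granted_role
from   user_role_privs
minus
select role
from   session_roles;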

Simple demo of message propagation

If you’re using AQ, then it’s simple to set up enqueue and dequeue facilities on your local database to provide all sorts of asynchronous-style processing in your applications.  As long as your applications are designed and built to handle it, the “fire and forget” model – keeping user applications responsive while all of the “heavy lifting” is done in the background – is a very attractive one.

You can also use AQ to achieve the same concept across multiple databases, and the database will take care of propagating the messages from one database to the other.  Here’s a simple demo of that.

Database 1




SQL> connect / as sysdba
Connected.

--
-- A user to hold all of our AQ stuff
--
SQL> create user aqtest identified by aqtest;

User created.

SQL> grant connect, resource, aq_administrator_role to aqtest;

Grant succeeded.

SQL> grant execute on dbms_aq to aqtest;

Grant succeeded.

SQL> grant execute on dbms_aqadm to aqtest;

Grant succeeded.

SQL> alter user aqtest quota unlimited on users;

User altered.

SQL> grant create database link to aqtest;

Grant succeeded.

SQL> begin
  2    dbms_aqadm.grant_system_privilege('ENQUEUE_ANY','AQTEST',FALSE);
  3    dbms_aqadm.grant_system_privilege('DEQUEUE_ANY','AQTEST',FALSE);
  4  end;
  5  /

PL/SQL procedure successfully completed.

--
-- Now we connect as AQTEST and construct our message payload types and our local queues
--

SQL> connect aqtest/aqtest
Connected.

SQL> create type aqtest.message_typ as object(subject varchar2(30), text varchar2(80));
  2  /

Type created.

SQL> begin
  2    dbms_aqadm.create_queue_table(queue_table => 'aqtest.messages_qtab',
  3                                  queue_payload_type =>  'aqtest.Message_typ',
  4                                  multiple_consumers => TRUE);
  5    dbms_aqadm.create_queue(queue_name => 'MSG_QUEUE',
  6                            queue_table => 'aqtest.messages_qtab');
  7    dbms_aqadm.start_queue(queue_name => 'MSG_QUEUE');
  8  end;
  9  /

PL/SQL procedure successfully completed.

--
-- And here is a basic enqueue routine.  If a remote address is specified, then we will propagate
-- the message to that address.  Otherwise the message will stay in the local queue.
--
SQL> create or replace procedure enqueue_msg(p_msg in varchar2,
  2                                          p_remote_address in varchar2 default null)
  3  as
  4    l_enqueue_options    dbms_aq.enqueue_options_t;
  5    l_message_properties dbms_aq.message_properties_t;
  6    l_message_handle     raw(16);
  7    l_message            aqtest.message_typ;
  8    l_recipients         dbms_aq.aq$_recipient_list_t;
  9  BEGIN
 10    l_recipients(1) := SYS.aq$_agent('RECIPIENT', p_remote_address, null);
 11    l_message_properties.recipient_list := l_recipients;
 12
 13    l_message := message_typ('NORMAL MESSAGE',  p_msg );
 14    dbms_aq.enqueue(queue_name => 'msg_queue',
 15                    enqueue_options => l_enqueue_options,
 16                    message_properties => l_message_properties,
 17                    payload => l_message,
 18                    msgid => l_message_handle);
 19  end;
 20  /

Procedure created.

On Database 2 we do almost the same setup – the only difference being that the queue and queue table get “_OTHER” names, so we can tell the two sites apart later in the demo



SQL> connect / as sysdba
Connected.

--
-- A user to hold all of our AQ stuff
--
SQL> create user aqtest identified by aqtest;

User created.

SQL> grant connect, resource, aq_administrator_role to aqtest;

Grant succeeded.

SQL> grant execute on dbms_aq to aqtest;

Grant succeeded.

SQL> grant execute on dbms_aqadm to aqtest;

Grant succeeded.

SQL> alter user aqtest quota unlimited on users;

User altered.

SQL> grant create database link to aqtest;

Grant succeeded.

SQL> begin
  2    dbms_aqadm.grant_system_privilege('ENQUEUE_ANY','AQTEST',FALSE);
  3    dbms_aqadm.grant_system_privilege('DEQUEUE_ANY','AQTEST',FALSE);
  4  end;
  5  /

PL/SQL procedure successfully completed.

--
-- Now we connect as AQTEST and construct our message payload types and our local queues
--

SQL> connect aqtest/aqtest
Connected.

SQL> create type aqtest.message_typ as object(subject varchar2(30), text varchar2(80));
  2  /

Type created.

SQL> begin
  2    dbms_aqadm.create_queue_table(queue_table => 'aqtest.messages_qtab_other',
  3                                  queue_payload_type =>  'aqtest.Message_typ',
  4                                  multiple_consumers => TRUE);
  5    dbms_aqadm.create_queue(queue_name => 'MSG_QUEUE_OTHER',
  6                            queue_table => 'aqtest.messages_qtab_other');
  7    dbms_aqadm.start_queue(queue_name => 'MSG_QUEUE_OTHER');
  8  end;
  9  /

PL/SQL procedure successfully completed.

Back to Database 1


--
-- We need a database link to the remote database, plus a quick query to test that it's working
--

SQL> create database link remote_db connect to aqtest identified by aqtest using 'db11';

Database link created.

SQL> select * from tab@remote_db;

no rows selected


--
-- We get our propagation schedule running, and we're ready to go.   
--

SQL> begin
  2    dbms_aqadm.schedule_propagation(queue_name  => 'MSG_QUEUE',
  3                                        destination => 'remote_db',
  4                                        start_time  => sysdate,
  5                                        latency     => 0);
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select * from user_queue_schedules
  2  @pr
==============================
QNAME                         : MSG_QUEUE
DESTINATION                   : REMOTE_DB
START_DATE                    :
START_TIME                    : 14:16:01
PROPAGATION_WINDOW            :
NEXT_TIME                     :
LATENCY                       : 0
SCHEDULE_DISABLED             : N
PROCESS_NAME                  : J000
SESSION_ID                    : 400, 38936
INSTANCE                      : 1
LAST_RUN_DATE                 : 28-NOV-16 02.16.01.283000 PM +08:00
LAST_RUN_TIME                 : 14:16:01
CURRENT_START_DATE            : 28-NOV-16 02.16.01.283000 PM +08:00
CURRENT_START_TIME            : 14:16:01
NEXT_RUN_DATE                 : 28-NOV-16 02.16.01.280000 PM +08:00
NEXT_RUN_TIME                 : 14:16:01
TOTAL_TIME                    : 0
TOTAL_NUMBER                  : 0
TOTAL_BYTES                   : 0
MAX_NUMBER                    : 0
MAX_BYTES                     : 0
AVG_NUMBER                    : 0
AVG_SIZE                      : 0
AVG_TIME                      : 0
FAILURES                      : 0
LAST_ERROR_DATE               :
LAST_ERROR_TIME               :
LAST_ERROR_MSG                :
MESSAGE_DELIVERY_MODE         : PERSISTENT
ELAPSED_DEQUEUE_TIME          :
ELAPSED_PICKLE_TIME           :
JOB_NAME                      : AQ_JOB$_6438

PL/SQL procedure successfully completed.

--
-- a message that will NOT be propagated, because no remote recipient is specified
--
SQL> begin
  2    enqueue_msg('This message will stay local');
  3    commit;
  4  end;
  5  /

PL/SQL procedure successfully completed.

--
-- a message that WILL be propagated, because remote recipient is specified
--
SQL> begin
  2    enqueue_msg('This message will be propagated.',
  3                 'aqtest.msg_queue_other@remote_db');
  4    commit;
  5  end;
  6  /

PL/SQL procedure successfully completed.

--
-- So if everything is working correctly, we have 2 messages on the local queue, and 1 message on the remote queue
--

SQL>
SQL> select t1.cnt,
  2         t2.cnt
  3  from (select count(*) cnt from messages_qtab) t1,
  4       (select count(*) cnt from messages_qtab_other@remote_db) t2
  5  /

       CNT        CNT
---------- ----------
         2          1

1 row selected.

SQL>
SQL>

And there you go.  Messages between databases using the in-built propagation mechanisms.
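
And if you then want to consume that propagated message on Database 2, here is a minimal dequeue sketch (connected as AQTEST on Database 2, with serveroutput on; note the consumer name must match the “RECIPIENT” agent we used in enqueue_msg):

set serveroutput on

declare
  l_dequeue_options    dbms_aq.dequeue_options_t;
  l_message_properties dbms_aq.message_properties_t;
  l_message_handle     raw(16);
  l_message            aqtest.message_typ;
begin
  -- the consumer name matches the aq$_agent name from enqueue_msg
  l_dequeue_options.consumer_name := 'RECIPIENT';
  l_dequeue_options.wait          := dbms_aq.no_wait;
  dbms_aq.dequeue(queue_name         => 'msg_queue_other',
                  dequeue_options    => l_dequeue_options,
                  message_properties => l_message_properties,
                  payload            => l_message,
                  msgid              => l_message_handle);
  dbms_output.put_line(l_message.subject || ': ' || l_message.text);
  commit;
end;
/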

Just one more week


I’m off to UKOUG again this year.  It’s an awesome conference, and I’ll be doing three talks there.

On Monday at 3:30 it’s my first keynote talk 🙂 – “12 Things You’ll Love About the Oracle Database 12.2”, followed at 6pm by “AskTom – One Year On”.

On Tuesday at 3:30, I’ll be doing a talk for anyone wanting to come up to speed on all of the partitioning features, with “Partitioning 101”.

A couple of the talks are in Hall 1, which is always nice for the attendees, but as a speaker you can hardly see the audience, since they are shrouded in darkness 🙂


Head on over to http://tech16.ukoug.org/ for all of the details.  If you’re coming to the conference, feel free to pop over and say “Hi”.