End of an era …

Four years ago I wrote about a little volunteer project that my partner did.  A small association that provided outdoor experiences and facilities for kids with physical impairments needed a system to record member and volunteer details, plus a few other bits and pieces.  We built an Apex solution running on XE.  This week, they became part of a larger government initiative, and thus their Apex application was no longer needed and the information was migrated to a centralised service.  There was a tinge of sadness about that, but I was also pleased with the outcomes of this “project”, namely:

  • It ran for 4 years with virtually no outages besides those related to power and the like.  (After all, their “server” was just a PC in the secretary’s office.)
  • Their PCs went through several iterations of patching, upgrades and replacements, and the system was unaffected because it ran entirely in the browser
  • We never had a single issue with the database
  • Minimal maintenance was needed.  In fact, the only “serious” bit of work needed after go-live was when we discovered that their external drive (where we stored our database backups) was removed from time to time to be used for offsite file transfers, and when it was re-attached it would be assigned a new drive letter.  So we adjusted our backup script to cycle through drive letters to “find” the disk and adjust the RMAN backup details accordingly.

That’s one of the great things with Apex, and a database-centric model.  It is just so well insulated from all the things that change most frequently, that is, those elements closest to the client.

So yesterday I took a final Data Pump export of the system as a “just in case” measure, and uninstalled the application and its dependencies from the PC.  But for four years, they had a successful application that met all of their data entry needs and all of their reporting needs, and besides checking an occasional email to ensure the backups were working ok, it took very little of my time. And surely that’s what all IT applications “aspire” to be – stuff that just plain works.

It never let them down and never cost them a cent.  You can’t tick any more boxes than that.

12c Release 2, set feedback enhancement in SQL*Plus

There’s a nice little touch to the “set feedback” command in SQL*Plus in 12.2. There is a new “only” clause in the SET FEEDBACK command, so you can run queries but the resulting rows are not shown in the display. They are still queried, fetched and “returned”, just not rendered to the screen. Here’s an example:


$ sqlplus hr/hr

SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 14 22:59:15 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Sat Mar 11 2017 01:59:20 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select * from regions;

 REGION_ID REGION_NAME
---------- -------------------------
         1 Europe
         2 Americas
         3 Asia
         4 Middle East and Africa

So that’s normal behaviour. Let’s now use the new ONLY option.


SQL> set feedback only 
SQL> select * from regions;

4 rows selected.

So why would you want that? Well, sometimes you just want to run the query so that you can use a subsequent DBMS_XPLAN.DISPLAY_CURSOR call to see the true execution plan. Or perhaps you just want to examine some options with regard to fetch performance. For example, here’s a demo of fetching from a large table called EMP:
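For the execution plan case, a minimal sketch of the idea might look like this, reusing the REGIONS table from earlier (the gather_plan_statistics hint is just there so that row-level statistics are available to the plan display):

```sql
SQL> set feedback only
SQL> select /*+ gather_plan_statistics */ * from regions;

4 rows selected.

SQL> set feedback on
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));
```

The query runs to completion and populates the cursor cache, but no rows clutter the screen before you look at the plan.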


SQL> show arraysize
arraysize 10

SQL> set timing on
SQL> select * from emp;

1753088 rows selected.

Elapsed: 00:00:26.27

So that took 26 seconds, with an arraysize of 10. Let’s see if we can do better than that – we’ll bump up the arraysize to 200


SQL> set arraysize 200
SQL> select * from emp;

1753088 rows selected.

Elapsed: 00:00:04.65

Wow, that’s pretty cool. Six times faster just by tweaking the batch size of the fetch. Surely then we can just keep bumping it up.


SQL> set arraysize 5000
SQL> select * from emp;

1753088 rows selected.

Elapsed: 00:00:04.43

Apparently not. There is a “sweet spot” for arraysize, with diminishing returns on performance as you go higher and higher (at the cost of consuming memory and resources on your client machine to drag all those rows down at once). But this post isn’t about arraysize; it’s merely a conduit for the nice new feature SET FEEDBACK ONLY, which lets us test out such things without having to see all the rows presented back.

Pi Day, March 14

Geeks around the world will be celebrating Pi day, a shameless excuse to put the month before the day like the Americans do so we can talk mathematics on “3.14” day.

So what better way to show how cool SQL can be than with some expressions that approximate Pi using infinite series?

All we really need to know by way of background is the CONNECT BY LEVEL trick to generate arbitrary sequences of rows, e.g.


SQL> select level from dual connect by level <= 5;

     LEVEL
----------
         1
         2
         3
         4
         5

Now we can get some approximations to Pi using some of the standard SQL functions SQRT and POWER:



SQL> select sqrt(8*sum(1/power((level*2-1),2))) pi
  2  from dual
  3  connect by level <= 100000
  4  order by 1 desc;

        PI
----------
3.14158947

1 row selected.

SQL>
SQL>
SQL> select sqrt(6*sum(1/(level*level))) pi
  2  from dual
  3  connect by level <= 100000
  4  order by 1 desc;

        PI
----------
 3.1415831

1 row selected.

SQL>
SQL> select 8*sum(1/((4*(level-1)+1)*(4*(level-1)+3))) pi
  2  from dual
  3  connect by level <= 100000;

        PI
----------
3.14158765

1 row selected.

SQL>
SQL>
SQL> select
  2   4*sum(
  3     power(-1,level-1)/(level*2-1)*
  4        ( 12*power(1/18,level*2-1)+
  5           8*power(1/57,level*2-1)-
  6           5*power(1/239,level*2-1))) pi
  7  from dual
  8  connect by level <= 100;

        PI
----------
3.14159265

1 row selected.

SQL>

Some methods to calculate Pi need factorials, but there isn’t a native SQL function for that. In 12c, though, that’s no problem – we can define PL/SQL functions on the fly directly inside our SQL statement!


SQL>
SQL> with
  2    function factorial(n int) return int is
  3      f int := 1;
  4    begin
  5      for i in 1 .. n loop
  6        f := f * i;
  7      end loop;
  8      return f;
  9    end;
 10  select  2*sum(
 11    power(2,level-1)*power(factorial(level-1),2)/factorial(2*level-1)
 12    ) pi
 13  from dual
 14  connect by level <= 17;
 15  /

        PI
----------
 3.1415864

1 row selected.

But what if you’re not on 12c yet? Well, you should be! You can check out why it’s time to upgrade from Maria and me as we chat about it over coffee.

But whilst you’re on 11g, there are still plenty of cool SQL options to play with Pi. Here we can use recursive SQL to generate Pi, based on Viète’s formula:

 

2/π = √2/2 × √(2+√2)/2 × √(2+√(2+√2))/2 × …



SQL> with term(numerator,product,seq) as
  2  ( select sqrt(2) numerator, sqrt(2)/2 product , 1 seq from dual
  3    union all
  4    select sqrt(2+numerator), sqrt(2+numerator)*product/2 , seq+1
  5    from   term, dual
  6    where  term.seq <= 16
  7  )
  8  select 2/product pi
  9  from term
 10  where seq = 16;

        PI
----------
3.14159265

1 row selected.

 

SQL…still the most awesome language out there!

12c Release 2 – Transparent Data Encryption online!

It doesn’t take a rocket scientist to know that even if you have incredibly stringent controls on user authentication, user authorisation and the like, that won’t save you if your data on disk is not encrypted.  All you need is an errant tape, a missing disk, a misplaced flash stick… and kersplat, someone has a copy of the datafiles from your Oracle database.

Data at rest should be encrypted, but that often meant taking applications offline to do so.

I’ve put my Speed Racer hat on, and here’s a video covering a new 12c Release 2 feature in 60 seconds!
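For reference, the online conversion itself boils down to a single DDL command in 12.2 – a minimal sketch, assuming a TDE keystore is already configured and open, and using the USERS tablespace and AES256 purely as examples:

```sql
-- Encrypt an existing tablespace while applications keep using it (12.2 feature).
-- Prerequisite (not shown): a configured and open TDE keystore.
SQL> alter tablespace users encryption online using 'AES256' encrypt;
```

The same ONLINE clause also supports REKEY and DECRYPT, so you can rotate keys or back the change out without an outage.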

Oracle Database 12c Release 2 is here!


Since OpenWorld 2016, when we first saw some of the cool features in Oracle Database 12c Release 2, many IT professionals out there have been exploring the release via our various cloud offerings. But if your organization has not yet embraced the cloud, then March 2017 is a great month for you!  You can now download the latest and greatest release of our database from the usual downloads page, and run it on your own servers in your own data centre.

Of course, there’s a difference between downloading and installing the software, and being up to speed with all of the great features that have come in 12c and 12c Release 2, so keep an eye on my blog and on my YouTube channel.

Over the coming weeks I’ll be talking about lots of the new features in 12.2 in short, easy-to-digest videos to help you appreciate all of the goodness that is 12c Release 2.

To whet your appetite, how cool is this simple enhancement to SQL*Plus – command history!



SQL> history
  1  select * from dba_users where username = 'SYSTEM';
  2  select count(*)
     from dba_tables
     where table_Name like 'A%';
  3  select * from tab;
  4  show sga
  5  set timing off
  6  select * from dba_tablespaces;

SQL> history 4 run

Total System Global Area 1048576000 bytes
Fixed Size                  8795840 bytes
Variable Size             251660608 bytes
Database Buffers          784334848 bytes
Redo Buffers                3784704 bytes

Better to be safe than sorry…

I’ve always been worried about taking a script that is fine to run in my non-production environments (in particular a DROP script) and accidentally running it in a Production environment – shortly followed by the typing up of a fresh resume to look for a new job once the mistake is discovered.

The standard “least privilege required” model for Production access is a good step along the road to protecting the database. But ultimately, at one time or another, an administrator with appropriate privileges, good intentions, but (say) bad cut-and-paste skills might inadvertently run a drop command against a database they did not intend to.  (A well publicised example of this happened quite recently: https://www.theregister.co.uk/2017/02/01/gitlab_data_loss/)

So it would be nice to have a method of adding a ‘double check’ facility, so that we can guard against a DROP being run on your production system inadvertently. That way, even if you did run that drop script against Production, you would be “saved from yourself”. Here’s one possible means you might want to consider. We’ll use a context variable that has to be set in order for DROP commands to be processed.


--
-- Our context
--
SQL> create context DDL_CHECK using ddl_checker;

Context created.

--
-- A routine to set it
--
SQL> create or replace
  2  procedure ddl_checker(p_val varchar2) is
  3  begin
  4    dbms_session.set_context('DDL_CHECK','ENABLED',p_val);
  5  end;
  6  /

Procedure created.

--
-- If we are on the Production database, DROP DDLs only proceed if the context variable is set
--
SQL> create or replace
  2  trigger ddl_checker_before
  3  before drop on scott.schema
  4  begin
  5    if sys.database_name like '%PROD%' and sys_context('DDL_CHECK','ENABLED') is null
  6    then
  7      raise_application_error(-20000,'DDL confirmation context is not set');
  8    end if;
  9  end;
 10  /

Trigger created.

--
-- And after every DROP, the context is reset for the next use so it cannot be left on inadvertently
--
SQL> create or replace
  2  trigger ddl_checker_after
  3  after drop on scott.schema
  4  begin
  5    ddl_checker(null);
  6  end;
  7  /

Trigger created.

SQL>
SQL>
SQL> grant execute on ddl_checker to scott;

Grant succeeded.

SQL>

So now we can see what happens when SCOTT attempts some DDL in Production by mistake.


SQL> conn scott/tiger
Connected.

SQL> drop table t purge;
drop table t purge
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: DDL confirmation context is not set
ORA-06512: at line 4

--
-- We must explicitly express our intention to perform a DDL
--
SQL> exec admin.ddl_checker('true')

PL/SQL procedure successfully completed.

--
-- and only then are we allowed to run a DROP command
--
SQL> drop table t purge;

Table dropped.

--
-- and in doing so, we don't have any lingering access
--
SQL> drop table t1 purge;
drop table t1 purge
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: DDL confirmation context is not set
ORA-06512: at line 4


SQL>

I stress – this is not about securing administrator access or replacing your existing processes for auditing or locking down the privileges on your production systems. This is just about that extra check, to help your over-worked and/or over-tired administrators avoid a small mistake that becomes a catastrophic error.