DIY parallel task execution

We had a question on AskTOM recently, where a poster wanted to rebuild all of the indexes in his schema that had a status of UNUSABLE.  Running the rebuilds in serial fashion (one after the other) seemed an inefficient use of the server horsepower, and rebuilding each index with a PARALLEL clause was not particularly beneficial either, because the issue was more about the volume of indexes than the size of each index.

An obvious solution was to use the DBMS_PARALLEL_EXECUTE facility.  Our poster was pleased with our response but then came back asking for help, because they were stuck on an old release in which DBMS_PARALLEL_EXECUTE was not yet available.

If you’re in this situation, it’s surprisingly easy to throw together your own rudimentary version using DBMS_JOB (or DBMS_SCHEDULER). Here’s a simple demo, using our poster’s original task of rebuilding indexes.

Step 1

First we’ll identify the list of operations needed.  In our case that’s trivial – just a query against the dictionary.  But we will store that list in a table, because as we rebuild indexes the list changes, and we want the initial static list of indexes so that we can spread the work over a number of “worker” processes.


SQL> create table ind_rebuild as
  2  select
  3    rownum seq,
  4    owner,
  5    index_name,
  6    'NOT STARTED' job_status
  7  from
  8    dba_indexes
  9  where status = 'UNUSABLE'
 10  and ...

Table created.

SQL> create index ind_rebuild_ix on ind_rebuild ( owner, index_name );

Index created.

SQL> select * from ind_rebuild;

       SEQ OWNER                          INDEX_NAME                               JOB_STATUS
---------- ------------------------------ ---------------------------------------- -----------
         1 SCOTT                          EMP_PK                                   NOT STARTED
         2 SCOTT                          DEPT_PK                                  NOT STARTED
         3 MCDONAC                        SYS_C0013656                             NOT STARTED
         4 MCDONAC                        PK_CONTAINER                             NOT STARTED
         5 MCDONAC                        UN_CONTAINER                             NOT STARTED
         6 MCDONAC                        PK_ELEMENTS                              NOT STARTED
         7 MCDONAC                        UN_ELEMENTS                              NOT STARTED
...
...
...

Step 2

Now we’ll create a simple procedure that will perform the rebuild.  This procedure will be run from several worker sessions concurrently, so we need a way of splitting the workload out.  For any sequenced list, MOD will do the job quite nicely.  In a real production scenario you might have more sophisticated requirements – for example, a weighting system that estimates how much effort each rebuild will need, and distributes the rebuilds accordingly, so that all processes finish at approximately the same time.
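As a quick sanity check before submitting any jobs, you can confirm that MOD spreads the candidate list evenly across the workers (a sketch, assuming 4 workers as in the demo):

```sql
-- How evenly will MOD distribute the candidate indexes
-- across 4 worker processes (worker ids 0..3)?
select mod(seq, 4) worker_id,
       count(*)    indexes_assigned
from   ind_rebuild
group  by mod(seq, 4)
order  by 1;
```

Because SEQ was assigned via ROWNUM, the counts should differ by at most one row per worker.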


SQL> create or replace
  2  procedure worker_bee(p_jobs pls_integer default 4, p_this_job pls_integer) is
  3    l_progress     pls_integer := 0;
  4    l_err          int;
  5  begin
  6    if p_this_job not between 0 and p_jobs-1 then
  7       raise_application_error(-20000,'p_this_job must be between 0 and '||(p_jobs-1));
  8    end if;
  9
 10    for i in ( select * from ind_rebuild )
 11    loop
 12        if mod(i.seq,p_jobs) = p_this_job and
 13           i.job_status != 'DONE'
 14        then
 15          l_progress := l_progress + 1;
 16          dbms_application_info.set_client_info('Task '||l_progress||','||i.owner||'.'||i.index_name);
 17          begin
 18            execute immediate 'alter index '||i.owner||'.'||i.index_name||' rebuild';
 19            update ind_rebuild
 20            set    job_status = 'DONE'
 21            where  owner = i.owner
 22            and    index_name = i.index_name;
 23          exception
 24            when others then
 25              l_err := sqlcode;
 26              update ind_rebuild
 27              set    job_status = to_char(l_err)
 28              where  owner = i.owner
 29              and    index_name = i.index_name;
 30          end;
 31          commit;
 32        end if;
 33    end loop;
 34  end;
 35  /

Procedure created.

Line 12 shows that we pick up only those rows that belong to this particular worker process (as nominated by the parameter “p_this_job”).  This filter could of course have been in the WHERE clause itself, but I’ve taken this example from one where we instrumented all rows, including those skipped as not being appropriate for a particular worker.  As we rebuild each index, we pop a message into V$SESSION.CLIENT_INFO so we can monitor activity, and if the index rebuild fails, we store the SQLCODE in the job status.  I’ve kept the logic pretty simple to assist with understanding the demo, but it would be relatively straightforward to extend it to handle index partitions as well.
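If you don’t need that per-row instrumentation for skipped rows, the filter could indeed move into the cursor itself, so each worker only fetches its own slice of the list.  A sketch of the alternative loop:

```sql
-- Alternative: push the MOD filter into the WHERE clause, so each
-- worker session fetches only its own share of the candidate rows
for i in ( select *
           from   ind_rebuild
           where  mod(seq, p_jobs) = p_this_job
           and    job_status != 'DONE' )
loop
  -- rebuild i.owner.i.index_name as per the procedure above
  null;
end loop;
```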

Step 3

Now it’s just a simple matter of submitting jobs up to the number of concurrent workers you want.


SQL> variable j number
SQL> declare
  2    c_jobs pls_integer := 4;
  3  begin
  4    for i in 0 .. c_jobs-1
  5    loop
  6      dbms_job.submit(:j,'worker_bee('||c_jobs||','||i||');');
  7    end loop;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

Once we commit, the jobs will commence, and we can easily monitor them either by polling our static table of candidate indexes, or by monitoring the job queue itself.


SQL> select * from ind_rebuild;

       SEQ OWNER                          INDEX_NAME                               JOB_STATUS
---------- ------------------------------ ---------------------------------------- -----------
         1 SCOTT                          EMP_PK                                   DONE
         2 SCOTT                          DEPT_PK                                  NOT STARTED
         3 MCDONAC                        SYS_C0013656                             DONE
         4 MCDONAC                        PK_CONTAINER                             NOT STARTED
         5 MCDONAC                        UN_CONTAINER                             -1418
         6 MCDONAC                        PK_ELEMENTS                              DONE
         7 MCDONAC                        UN_ELEMENTS                              NOT STARTED
...
...
...
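To monitor via the job queue and our CLIENT_INFO instrumentation instead, a couple of simple queries will do (a sketch; exact columns vary a little by version):

```sql
-- Jobs still in (or waiting on) the job queue
select job, what, this_date, failures
from   user_jobs;

-- What each worker session is currently doing, courtesy of
-- the dbms_application_info call in worker_bee
select sid, client_info
from   v$session
where  client_info like 'Task%';
```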

So if you are on a current release, DBMS_PARALLEL_EXECUTE can do this (and more) without much fuss, but even without that facility, using the job / scheduler queue to parallelise tasks is simple to do.
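For comparison, on a current release the same exercise could be sketched with DBMS_PARALLEL_EXECUTE, chunking on our SEQ column.  This is an outline only – the chunk size and the rebuild procedure (“rebuild_range”, which would rebuild the indexes for a range of SEQ values) are my own inventions:

```sql
begin
  dbms_parallel_execute.create_task('rebuild_indexes');
  -- carve the candidate list into chunks of 10 by the SEQ column
  dbms_parallel_execute.create_chunks_by_number_col(
    task_name    => 'rebuild_indexes',
    table_owner  => user,
    table_name   => 'IND_REBUILD',
    table_column => 'SEQ',
    chunk_size   => 10);
  -- run 4 workers, each invoking our (hypothetical) rebuild routine
  -- for the chunk boundaries it is handed
  dbms_parallel_execute.run_task(
    task_name      => 'rebuild_indexes',
    sql_stmt       => 'begin rebuild_range(:start_id, :end_id); end;',
    language_flag  => dbms_sql.native,
    parallel_level => 4);
end;
/
```

The package also tracks chunk status for you, which replaces our hand-rolled JOB_STATUS column.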

It’s just bad code or bad design … most of the time

Some years ago I wrote an article for the UKOUG magazine called “Want a faster database – Take a drive on the M25”.  For those not familiar with the United Kingdom, the M25 is one of its busiest roads (the “M” stands for motorway) and because it moves so much traffic, and runs so close to capacity, it has often been referred to as “the world’s largest car park”.  Many people have probably spent a good part of their lives on the M25 🙂  I used the M25 as a metaphor for how database professionals can focus on the wrong things when trying to solve a performance problem, such as:

“I’m stuck in traffic…perhaps a faster car will help”

ie, throwing CPU at a problem that is not CPU bound will not help things, or

“I’m stuck in traffic…it must be the width of the paint on the lane markings”

ie, looking at mutex hit rates is perhaps not the first place you should look to solve a performance issue.

I concluded the article with the statement:

“…many Oracle professionals have forgotten the most important weapon in planning, designing, and implementing a successful Oracle database – that is, common sense.”

 

I wrote the article (I think) around 15 years ago, and yet our zeal for digging around in metrics and internals before we examine the very basic components of a performance problem remains.

I saw a tweet recently that led to a thread on the database community forum which demonstrates this most effectively.  There were very early warning signs that the poster had jumped to the detail before looking holistically at the problem, simply from the title of the post – “AWR Report High DB CPU Time and low SQL execute elapsed time”.

Because that is not a problem. 

Namely, it is not a problem in the sense that I’ve never heard a customer call up a support department and say “I have some concerns over the ratio of DB CPU time to my SQL elapsed time” 🙂  Customers simply say “My application is too slow”, or if that “customer” is the operations department, they will say “Our server is reaching its capacity limits” or similar.  In fact, it took 9 more posts before our poster finally described what the problem was: “We have a billing time approx. 20 hours currently. If we’ll reduce billing time to 8 hours, the problem will be solved”.  And when people start their description of an issue at too low a level of detail, it is like click-bait to the well-intentioned people in the community who feel inspired to help.  What followed in the thread was a series of interactions along the lines of (paraphrased):

  • “What is the latch activity like?”
  • “What background waits are dominating?”
  • “What is the CPU consumption like?”
  • “What is the Oracle version to 4 decimal places?”
  • “How many cores?”
  • “Bare metal or virtualised?”

And such detailed questions are then just catalysts for low-level options being proposed as solutions.  In this case, there were discussions about whether the parameter filesystemio_options should be altered, or whether some asynchronous I/O tunables should be examined.  And then discussions about whether the database needed to be patched or upgraded.  As I mentioned, it took some time before someone finally asked what the actual problem was, and a full AWR report was made available.  Once the AWR report was visible, some people posted on the fairly obvious issue: an average of over 25,000 calls to the database per second.  That might be reasonable depending on the size and activity of the database, but when combined with another statistic, namely 44,000 calls per transaction, it starts to sing out as poor application design.  If the application was a busy but efficient OLTP system, we might see 25,000 calls per second, but only a handful of calls per transaction.  And if it was a busy but efficient batch system, we might see more calls per transaction, but we would be unlikely to see so many calls per second.  Of course, maybe this system is both, but later in the AWR report we could see that a single SQL statement was executed over 75,000,000 times in the sample period, so the evidence of poor design was mounting up fast.

Eventually some learned minds contributed to the thread, pointing out the strong likelihood of an application doing row-by-row processing (or “slow by slow” as it’s often called) to achieve the requirements of the batch run.  It’s probably the most inefficient way of getting work done in an Oracle database.  The database is not the problem.  Now people may accuse me of bias on that statement (because I work for Oracle) but it is true.  The Oracle database software is astoundingly good at getting every last drop of performance out of a server.  And I held that view for the 20 years I was involved with the Oracle database before I joined the company 🙂  Yes, there are times when the core issue is a bug or defect in the product, but in the overwhelming majority of cases, if you write good code on top of a good design, the database layer will meet or exceed your most demanding requirements.  Especially in these days where modern hardware is just so incredibly quick and scalable, in both the computing tier and the storage tier, there are very few applications that will ever need any bespoke facilities over and above what comes with the delivered database product.

Similarly, I want to stress that I’m not trying to be critical of the original poster of the thread, or of the subsequent replies.  I’m just as guilty as anyone, in that it’s tremendously interesting to dive down into the nitty gritty, to swim in the mire of latches, buffers, mutexes, CPU cycles and wait events looking for some nugget of gold.  And I’m not dismissing the importance of being knowledgeable in these detailed lower-level areas of the database technology, because ultimately we may need to perform investigations at this level to solve some of the more niche problems we encounter in our jobs.  But it is a question of timing – these are not the first port of call.  We should always start with the simple:

  • What are you trying to achieve?
  • How are you trying to achieve it?
  • Are you doing so in a sensible way?
  • Do you even know the way in which you are trying to do it?

We didn’t need to look at the DB CPU to SQL elapsed ratio in this example.  We didn’t need to look at latches.  We just needed to consider the task that was being attempted (a billing run, which by its very name sounds like a batch-style process) and then look at how the task was being tackled – and 25,000+ user calls per second doesn’t sound “batch-like”.

So please, step back from the detail, at least initially. 

Index compression–quick tip

If you’re appropriately licensed and want to use advanced index compression, you can set a tablespace default so that indexes created in it automatically pick up compression at a nominated level.

Here is an example of that in action.   We’ll set our tablespace default accordingly


SQL> create tablespace demo
  2  datafile 'C:\ORACLE\ORADATA\DB122\DEMO.DBF'
  3  size 100M
  4  default index compress advanced high;

Tablespace created.

Now we’ll create a table and an index on it in that tablespace


SQL> column index_name format a10
SQL> column tablespace_name format a15
SQL> create table t tablespace demo as
  2  select *
  3  from dba_objects;

Table created.

SQL> create index t_idx on t(owner) tablespace demo;

Index created.

SQL> select index_name,compression
  2  from user_indexes
  3  where table_name ='T';

INDEX_NAME COMPRESSION  
---------- ------------- 
T_IDX      ADVANCED HIGH   

1 row selected.

You can see that even though we did not specify any kind of compression for the index, it picked up the setting from the tablespace. Thus existing DDL scripts you have for indexes and the like will not need to be changed.

Similarly, rebuilding the index does not lose the setting


SQL> alter index t_idx rebuild tablespace demo;

Index altered.

SQL> select index_name,compression
  2  from user_indexes
  3  where table_name ='T';

INDEX_NAME COMPRESSION   
---------- ------------- 
T_IDX      ADVANCED HIGH 

1 row selected.

You can see what the defaults are for a tablespace by querying USER_TABLESPACES (or DBA_TABLESPACES) as per normal.


SQL> select tablespace_name,compress_for,index_compress_for
  2  from user_tablespaces where tablespace_name = 'DEMO';

TABLESPACE_NAME COMPRESS_FOR                   INDEX_COMPRES
--------------- ------------------------------ -------------
DEMO                                           ADVANCED HIGH

Instrumentation … not just for debugging

Yeah I know.  You’re probably thinking “Here’s another blog post from someone telling us how important it is to instrument our code, so we can get better debugging, better performance, blah blah blah”.

If that is your sentiment, then I’m thrilled because it means the instrumentation message has got through to the extent that it is now considered just a statement of the obvious.  As many will know, my predecessor was a huge advocate of extensive instrumentation in your application code.  And there is no shortage of examples out there of other Oracle professionals who similarly wax lyrical on the incredible importance of well instrumented code.  For the record, I’m in that camp as well, and we now even have contributions to the Oracle community for everyone to have a robust instrumentation framework in their code.  So there is plenty of information out there, and it is pretty safe to say that there are no more excuses left for un-instrumented or poorly instrumented code.

So why the blog post ?  Well…I’ve got a slightly different assertion to make when it comes to instrumentation, namely, instrumentation can help your user interface.  You will be able to provide a better experience to the users of your application by having your code instrumented.  Now you might be thinking that I’m just banging the same drum here, that is, by instrumenting I’ll be fixing bugs faster, or picking up performance issues more easily, or collecting metrics to help improve application quality.  These are of course true benefits of instrumentation, but we all know that right ?

No…my motivation for this blog post stems from an activity on one of my laptops last week.

I went to upgrade it to Windows 10.  I fired off the process and after a short while, the following window appeared

image

and the familiar “something is happening” feedback swirling dots…

image

I have several laptops at home, so there was no need to sit and watch this one; I left it and worked on another, tackling some AskTom questions.  The laptop I was upgrading just sits on the shelf, and occasionally plays Netflix for my children.  Hence, naturally enough… I totally forgot about it!

That was Saturday.  Now on Monday… 48 hours later, I’ve just been over to it and waved the mouse to restore the display.  And this is what is on the screen:

image

My oh my …. there must be a lot of things to get ready 🙂

The obvious question now is – has anything actually happened ?  Has the program hung ?  Or is it downloading a huge Windows 10 OS bundle ?  As a consumer of this “application” (the upgrade process) I have no idea.  There is nothing that tells me what state the application is in.  Now let me give the Microsoft developers the benefit of the doubt here, and make the assumption that the upgrade is extensively instrumented, and that if I looked in a logging file somewhere on the laptop hard drive, there would be a capture of the current state which would tell me whether any true progress is being made.  And that is typically how most of us implement instrumentation – as a means to capture information for diagnosis.

But to do so is to miss an opportunity to directly assist the user.  Even if the content of the instrumentation would appear to be gibberish to a typical Windows consumer, just the presence of that detail on the user interface becomes an improvement to the user experience.  If my upgrade dialog box had looked something like this:

image

then even if I have no idea what “getting platform details” means, the fact that I was on Step 17 at 9:00:12 helps me as a consumer, because I have immediate feedback as to whether progress is being made or not.  If I see that message at 9:05am, then I’m likely to let it sit a little longer to see if a “Step 18” ever eventuates.  But if I’m seeing that message at 5pm, then I can be reasonably sure that all is not well.
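In Oracle terms, surfacing the instrumentation costs almost nothing: publish the current step into V$SESSION as the work proceeds, and any monitoring screen (or the application itself) can read it live.  A minimal sketch of the idea, with an invented 20-step task:

```sql
-- Inside a long-running routine: publish progress where it can be
-- seen, not just into a log file.  CLIENT_INFO is visible in
-- V$SESSION while each step is running.
begin
  for step in 1 .. 20
  loop
    dbms_application_info.set_client_info(
      'Step '||step||' of 20, started '||
      to_char(sysdate,'HH24:MI:SS'));
    -- ... do the real work for this step ...
    null;
  end loop;
end;
/
```

The same string that helps you debug is exactly the string a “Step 17 at 9:00:12” style dialog could display.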

And no, I don’t think a swirling icon is sufficient.  We all used to criticize the “progress bars” on older versions of Windows and the like, when they would exceed 100%, or never get there, or jump back and forth.  So I can understand the motivation for the swirly icons we see in all modern devices and operating systems, because they avoid the task of giving any true prediction of completion.  And don’t get me wrong, I’m not saying that a return to progress bars is the solution, because that would be understating the difficulty of knowing how long a task will take.  But simply taking advantage of the instrumentation code that I know you already have 🙂, and presenting that in some way to the consumers of your application, can be a better user experience for them.

Keep that in mind.

The village idiot

If you are not familiar with the term Village Idiot, then Wikipedia provides a sufficient definition from which I can base this blog post.

https://en.wikipedia.org/wiki/Village_idiot

“The village idiot … is a person known for ignorance or stupidity”

Over the past couple of weeks I’ve been flying a bit.  First was OpenWorld and OracleCode in New Delhi, India, and from there I was heading straight to Cleveland, Ohio for the GLOC users conference for the first time.  Being a fairly seasoned traveller, this should have been a relatively straightforward affair.

Well… things didn’t turn out that way. 

To get to Cleveland, first I had to get to the United States, so I had two flights, as you can see from the picture  – one from New Delhi to a transfer in Shanghai, and then from Shanghai to San Francisco. 

village_idiot_map

 

 

The problems started due to the simplest of issues.  The night before I was due to fly, I picked up my itinerary, saw “11:40pm” and thought “Cool…I can sleep in tomorrow, do some AskTOM, get some work done, and stroll out to the airport after the evening traffic has subsided”.  And I did precisely that.  I got to the airport at about 9pm, paid my driver, collected my suitcase and headed over to the airport entrance.

For those of you that have not been to India, the airports work a little differently.  You cannot actually enter the airport unless you have a valid ticket.  I assume this is both for security reasons and due simply to the volume of people that pass through the doors.  At the entrance, security personnel check your passport and ticket and let you in.  I produced my details and after a short pause, the officer looked at me and said:

“You cannot come in”

I asked why ?  I had given him my passport and all my flight details. 

He said “Your ticket is no longer valid”

And that’s when it hit me….The boarding time for both of my flights was 11:40… But for the first flight, it was 11:40am, and for the second flight was 11:40pm.  The night before, I had checked the wrong itinerary – I was mentally cued in to the time being “11:40” so when I saw “11:40pm” I assumed I was looking at the right document.

So this was the start of my problems.  Here I was, on the sidewalk, not even able to enter the airport, having missed a flight that left some 8 hours before I even arrived.  The first leg was with Air India, and the second leg was with United, so I was already thinking: how am I going to re-arrange these flights and get everything coordinated across two carriers?

But first things first…I wandered down to the one part of the airport where you are allowed to enter – the ticketing section.  And, now nearly 10pm, no-one was manning the Air India desk.  So I tried calling them, and even whilst on hold, I realised the futility of this, because if you’ve ever stood outside in Indian traffic, you cannot hear a single thing.

I knew I would not be flying anywhere that night, so the next job was to get to an airport hotel… and of course, there are no onsite airport hotels at New Delhi airport 🙂  So then it was a case of walking from taxi to taxi outside the airport, trying to see if any of them took a credit card (which is rare for taxis in India), because, like most people, I had carefully spent all of my Indian currency – I thought I was about to leave the country!  Finally a car that looked less like a taxi and more like a hotel car drove past, so I flagged him down and asked which hotel he was from and how far away it was.  5km later I was at a hotel, now nearing midnight, and the job of sorting out flights commenced.

Sorting out two flights with different carriers is not pleasant.  To try to avoid the hassles, I phoned my travel agency, because they’d have access across carriers.  After 20 mins (at international roaming rates!) of “Please hold, your call is important to us” I gave up.  So first it was a call to Air India to see if I could get on the same flight tomorrow – but I could book nothing, because then it was a call to United to see if I could get the next-day flight from Shanghai.  Then back to Air India to actually book, and then back to United to book with them.  Rest assured, on a mobile phone with brittle coverage, nothing is more annoying than voice-controlled automation!

Bot: “In a few words, tell us how we can direct your call”…
Me:  “Flight Reservation”
Bot: “I think you said ‘Cargo’. Is that right?”

I don’t know why… but it does indeed make you feel better to swear at a bot 🙂
So after a couple of hours of sweat and tears (and expense), I had more or less the same flights booked for 24 hours later.  I got some much needed sleep to let the adrenalin seep out…

Next morning, I’m back where I started – at the airport entrance, but this time, I’m straight through the entrance with no difficulties…phew.  After the standard 30min queue to check in, when I get to the counter, the Air India agent says to me:

“I’m sorry…We cannot check you in”

I go pale…. “WHY ?!?!?!”

“You do not have a visa for China.”

I tell him I do not need one, because I’m not staying there – it is just transit.  But apparently with the re-booking of the flights, they are no longer “connected”.  So now I have to prove that I indeed have a connecting flight out of China, and my existing hard-copy printouts are useless, because they refer to a flight that left yesterday!  And thus, here I am at the check-in desk, trying once again to navigate the stupid voice bot as I contact United over shoddy international roaming, so they can tell my Air India check-in agent about my flight.  I’ve discovered this is not a good way to be popular in an Indian airport, with 500 people queued up behind you because you have become a bottleneck.

After 10 mins on the phone, passing it back and forth between the agent and myself, I am finally allowed to check in.  My Air India agent is very apologetic and offers to check my bags all the way through to San Francisco to make the transit more convenient.  I am thankful for small mercies at this point.

8 hours later and touchdown… I have made it to Shanghai.  Unsurprisingly, all of the signage is in Chinese, so navigating my way around is not easy.  For the life of me, after much wandering, I cannot find the international transfer.  Eventually I give up and figure that even if I go out through Customs, I can just come back through security in the normal way.  After another long queue, I get to the front of the line at Customs, and the two officials, resplendent with their semi-automatic machine guns, look at my passport and my ticket, pause, and just shake their heads.

That’s all.  Just a shake of the head, and I’m not allowed to pass.

Now I’m panicking.  I’m starting to think of that movie where the guy could not get out of the airport for months. 

I ask why, but their English is just as good as my Chinese, ie, non-existent.  I can feel the sweat on my forehead, so I’m sure they are starting to think I’m a terrorist or a threat to the nation in some way.  And they are mighty big machine guns.

Some feeble gesturing and “sign language” from me doesn’t seem to be helping, but at least I’m not being locked up yet.  Eventually one of the officials takes my onward boarding pass to San Francisco and points at the sign on the wall, which is mainly in Chinese, but I can make out that it refers to “24 hours” being the limit you can stay in the country without a visa.

Now I’m very stressed and very confused.  I’ve given them my onward boarding pass, which shows I’m (hopefully) out of the country in 5 hours… so what could possibly be the issue?  More desperate hand waving from me.  The official takes my boarding pass and circles two items:

Departure Date: May 12
Boarding Time: 23:40

and then points to the “24 hour” sign again.  And then … the penny drops.  My flight leaves at 00:30 on May 12, but the boarding pass says “Boarding at 23:40” (which is actually boarding on May 11 for a May 12 flight!), but the Customs person is interpreting this to be late at night on the 12th, which is more than 24 hours…hence violating their entry rules.

So now it’s me drawing pictures of clock hands, and calendars, and departure boards…. and after 10 terrifying minutes we finally reach agreement – I can indeed pass through, and pass back, to leave the country!  At this point I’m still not even sure if I was meant to come through Customs, but at least I seem to be making some progress.

It’s at this moment I realise that Friday night is peak hour at Shanghai airport.  Often as travellers we’re critical of security checks when the queues are long and only a handful of staff are working.  I cannot say this about Shanghai.  They had every single departure check open, and equally as many X-ray machines all going concurrently – perhaps as many as 15.  But it counts for nought when a bazillion people are trying to fly somewhere.  I spent 2 hours in the familiar snaking lines going through the standard departure checks and X-ray screening before finally getting to the United departure gate with about 40 mins to spare.

The stress finally seems to be over…I sit and relax.  And then…

“Paging Mr Connor McDonald…can you come to the gate desk urgently”

By this point, I’m convinced that I will never be seen again by family and friends.  I trudge up to the desk. 

“We’re sorry sir, but the Air India checkin agent should not have checked your bags through to San Francisco.  As you’ve seen, there is no international transfer in Shanghai – all passengers must retrieve their bags when transferring”.

Well…this explains why I could not find the international transfer….there isn’t one !  And then comes the kicker..

“…So we’ll need to go back through Customs and collect your suitcase and come through security again”

There was a long pause at this moment… a long pause in which I considered the well-being of the person telling me this, and what the implications would be if I were to remove their spleen with the plastic fork I had been eating my salad with 🙂

“NNNNNNOOOOOOOOOOOO!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!”  I bellowed at them, knowing that I could in no way make it back in time, and moreover, there was no way I was going through the explanation of 24-hour clocks again.

The result ?  I did get on my plane and I did land in San Francisco…albeit a day later than originally planned.

The suitcase ?  Well… it didn’t.  So the next day I was back in a taxi, heading out to San Francisco airport to pick up the suitcase that had made its own way, on its own schedule, to San Francisco.

So there you have it.  All of this grief… and why ?  Because I could not read a piece of paper correctly.  I’m not just the village idiot.  I think if you took the village idiot from all the villages, and then made a village of those idiots… then I’d be the village idiot in that village!

So I’m writing this post somewhat as part of my “penance”.  After all, if you can do something as silly as I did, then it only seems an appropriate punishment to share it with one’s peers 🙂

Quick tip–identity columns

Let’s say I’ve been reading about schema separation, and thus I am going to have a schema which owns all of my objects, which I’ll call APP_OWNER and which will have no connection privilege, and a separate schema called APP_ADMIN which will take care of all of the DDL tasks.

Here’s my setup:


SQL> create user app_owner identified by app_owner;
 
User created.
 
SQL> grant
  2    create session,
  3    create any table,
  4    create any sequence
  5  to app_admin identified by app_admin;
 
Grant succeeded.

I have granted APP_ADMIN the ability to create tables in any schema, and create sequences in any schema. I did the latter, because I know that if I use the “IDENTITY” clause for a column, then behind the scenes I’ll be creating a sequence to populate those ascending values.  So it looks like I am ready to go and create my objects.  Let’s create that first table


SQL> conn app_admin/app_admin
Connected.
 
SQL> create table app_owner.t(pk integer generated always as identity);
create table app_owner.t(pk integer generated always as identity)
*
ERROR at line 1:
ORA-01031: insufficient privileges

Hmmm….that is not what we were expecting. It turns out that to create an identity column in another schema you need more than just CREATE ANY SEQUENCE. You also need SELECT ANY SEQUENCE.


SQL> conn / as sysdba
Connected. 
SQL> grant
  2    create session,
  3    create any table,
  4    create any sequence,
  5    select any sequence
  6  to app_admin identified by app_admin;
 
Grant succeeded.
 
SQL> conn app_admin/app_admin
Connected.
 
SQL> create table app_owner.t(pk integer generated always as identity);
 
Table created.

And there we go Smile

Footnote: If you’ve never seen the syntax “grant <privs> to <user> identified by <pass>”, it is a quick shortcut to both create the user account and assign privileges in a single command.