SQL> create or replace
  2  function p2p_distance(
  3      p_latitude1 number,
  4      p_longitude1 number,
  5      p_latitude2 number,
  6      p_longitude2 number) return number deterministic is
  7    earth_radius number := 6371;
  8    pi_approx number := 3.1415927/180;
  9    lat_delta number := (p_latitude2-p_latitude1)*pi_approx;
 10    lon_delta number := (p_longitude2-p_longitude1)*pi_approx;
 11    arc number := sin(lat_delta/2) * sin(lat_delta/2) +
 12                  sin(lon_delta/2) * sin(lon_delta/2) * cos(p_latitude1*pi_approx) * cos(p_latitude2*pi_approx);
 13  begin
 14    return earth_radius * 2 * atan2(sqrt(arc), sqrt(1-arc));
 15  end;
 16  /

Function created.

SQL> select p2p_distance(36.12, -86.67, 33.94, -118.4) from dual;

P2P_DISTANCE(36.12,-86.67,33.94,-118.4)
---------------------------------------
                             2886.40705
If you are on any version of the database past 10.2.0.4, then savvy DBAs may have noticed the following message popping up occasionally in their trace files:
Warning: log write time 540ms, size 444KB
In itself, that is quite a nice little addition – an informational message letting you know that perhaps your log writer performance is worth closer investigation. MOS Note 601316.1 talks a little more about this message.
So let’s say you have seen this warning, and you are interested in picking up more information. Well… you could start scanning trace files from time to time and parsing out the content, or perhaps do some analysis using Active Session History (ASH). But given that these warnings are (by default) triggered only above 500ms, and ASH samples just once per second, there’s a chance these writes fall between samples and you miss them via ASH.
In 12c, this has become a lot simpler, because it looks like the same data is now exposed via a view: V$LGWRIO_OUTLIER.
SQL> select function_name, io_size, file_name, io_latency
  2  from V$LGWRIO_OUTLIER;

FUNCTION_NAME IO_SIZE FILE_NAME                       IO_LATENCY
------------- ------- ------------------------------- ----------
LGWR               16 /u03/oradata/MYDB/control01.ctl        745
LGWR               16 /u03/oradata/MYDB/control01.ctl        506
LGWR               12 /u02/oradata/MYDB/redo09.dbf           705
LGWR                1 /u02/oradata/MYDB/redo08.dbf           502
LGWR                5 /u02/oradata/MYDB/redo07.dbf           538
LGWR              211 /u02/oradata/MYDB/redo10.dbf          1783
LGWR             1024 /u02/oradata/MYDB/redo10.dbf           597
LGWR                0 /u02/oradata/MYDB/redo10.dbf           597
LGWR               10 /u02/oradata/MYDB/redo10.dbf           597
LGWR                2 /u02/oradata/MYDB/redo10.dbf           775
LGWR                0 /u02/oradata/MYDB/redo07.dbf           683
LGWR                0 /u02/oradata/MYDB/redo09.dbf           704
LGWR                1 /u02/oradata/MYDB/redo10.dbf           504
LGWR                0 /u02/oradata/MYDB/redo10.dbf           690
LGWR               16 /u03/oradata/MYDB/control01.ctl        504
LGWR               58 /u02/oradata/MYDB/redo08.dbf           705
LGWR                1 /u02/oradata/MYDB/redo08.dbf           704
Obviously the trace files are also tagged with the timestamp of occurrence, whereas the view does not expose this information.
But let’s just say two things here: (a) where there is a will, there is a way, and (b) like most V$ views, this one is based on an X$ object, which might contain more information.
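If you want to poke around for yourself, V$FIXED_VIEW_DEFINITION will tell you what a V$ view is built upon. A minimal sketch (the definition text you get back will of course vary by version):

select view_definition
from   v$fixed_view_definition
where  view_name = 'GV$LGWRIO_OUTLIER';

(Remember that most V$ views are simply the GV$ equivalent filtered to the local instance, hence the GV$ name in the predicate.)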
For other types of I/O in the database, you can also take a look at V$IO_OUTLIER.
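A minimal sketch of querying it, assuming its columns mirror those of its LGWR cousin above:

select function_name, io_size, file_name, io_latency
from   V$IO_OUTLIER;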
Flashback Data Archive (previously called Total Recall) has been around for a long time now. For those unfamiliar with it (and by the way, if you are on Enterprise Edition, you should be familiar with it, because it’s a free feature), here is a very quick primer.
Create a tablespace to hold your history, and create a flashback archive using that space allocation.
SQL> create tablespace SPACE_FOR_ARCHIVE
  2  datafile 'C:\ORACLE\DB11\ARCH_SPACE.DBF'
  3  size 100M;

Tablespace created.

SQL> CREATE FLASHBACK ARCHIVE longterm
  2  TABLESPACE space_for_archive
  3  RETENTION 1 YEAR;

Flashback archive created.
Then add your tables to the archive. Behind the scenes, some tables will be created automatically to record all of the changes to your table for as long as the archive retention specifies:
SQL> ALTER TABLE EMP FLASHBACK ARCHIVE LONGTERM;

Table altered.

SQL> select table_name
  2  from user_tables
  3  /

TABLE_NAME
---------------------------
SYS_FBA_HIST_71036
SYS_FBA_TCRV_71036
SYS_FBA_DDL_COLMAP_71036
EMP
And then voila! We can track changes to our table over time, well beyond the typical undo_retention setting for the database.
SQL> select empno, ename, job, sal, comm,
  2         nvl(VERSIONS_STARTTIME,LAST_MOD) TS
  3        ,nvl(VERSIONS_OPERATION,'I') op
  4  from EMP
  5  versions between timestamp
  6     timestamp '2014-02-11 20:12:00' and
  7     systimestamp
  8  order by empno;

     EMPNO ENAME      JOB              SAL       COMM TS           O
---------- ---------- --------- ---------- ---------- ------------ -
      7369 SMITH      CLERK            806            08.10.51 PM  I
      7369 SMITH      SALES           8060       1000 08.12.10 PM  U
      7499 ALLEN      SALESMAN        1606  300000000 08.10.51 PM  I
      7521 WARD       SALESMAN        1256  500000000 08.10.51 PM  I
      7566 JONES      MANAGER         2981            08.10.51 PM  I
...
      7900 JAMES      CLERK            956            08.10.51 PM  I
      7902 FORD       ANALYST         3006            08.10.51 PM  I
      7934 MILLER     CLERK           1306            08.10.51 PM  I
      7934 MILLER     CLERK           1306            08.12.10 PM  D
But sometimes we want to know more about the history. We want to know who made the change, what machine they were on, and so forth. Auditors tend to be a little snippy about those kinds of things:
- You: “I can confirm that the SALARY field was updated at 9:01am on January 25th”
- Auditor: “And who did that change?”
- You: “Er….um…..someone”
But it might not be practical to augment all of your tables with WHO, WHEN, WHERE etc columns to capture that information. That might involve a lot of application changes and testing.
If you’re on 12c, you might be in luck. Flashback Data Archive can capture additional information, assuming your application is taking advantage of the myriad of instrumentation facilities offered by the database.
We simply turn on context tracking, which can capture information on a per-transaction basis:
SQL> begin
  2    dbms_flashback_archive.set_context_level(
  3        level=> 'ALL');
  4  end;
  5  /

PL/SQL procedure successfully completed.
Now when we perform changes to the table, we can look at the transaction ID (XID) for that change.
SQL> update EMP
  2  set sal = sal*10
  3  where empno = 7499;

1 row updated.

SQL> commit;

Commit complete.

SQL> select XID from SYS_FBA_HIST_510592;

XID
----------------
09000B00C7080000
And using that XID, we can pick up the context information about that transaction:
SQL> set serverout on
SQL> begin
  2    dbms_output.put_line(
  3      dbms_flashback_archive.get_sys_context (
  4        '09000B00C7080000', 'USERENV', 'SESSION_USER'));
  5
  6    dbms_output.put_line(
  7      dbms_flashback_archive.get_sys_context (
  8        '09000B00C7080000', 'USERENV', 'HOST'));
  9
 10    dbms_output.put_line(
 11      dbms_flashback_archive.get_sys_context (
 12        '09000B00C7080000', 'USERENV', 'MODULE'));
 13  end;
 14  /

SCOTT
WORKGROUP\XPS
SQL*Plus
How much more? Well, quite a lot actually!
SQL> desc SYS.SYS_FBA_CONTEXT_AUD
 Name                      Null?    Type
 ------------------------- -------- ------------------
 XID                                RAW(8)
 ACTION                             VARCHAR2(256)
 AUTHENTICATED_IDENTITY             VARCHAR2(256)
 CLIENT_IDENTIFIER                  VARCHAR2(256)
 CLIENT_INFO                        VARCHAR2(256)
 CURRENT_EDITION_NAME               VARCHAR2(256)
 CURRENT_SCHEMA                     VARCHAR2(256)
 CURRENT_USER                       VARCHAR2(256)
 DATABASE_ROLE                      VARCHAR2(256)
 DB_NAME                            VARCHAR2(256)
 GLOBAL_UID                         VARCHAR2(256)
 HOST                               VARCHAR2(256)
 IDENTIFICATION_TYPE                VARCHAR2(256)
 INSTANCE_NAME                      VARCHAR2(256)
 IP_ADDRESS                         VARCHAR2(256)
 MODULE                             VARCHAR2(256)
 OS_USER                            VARCHAR2(256)
 SERVER_HOST                        VARCHAR2(256)
 SERVICE_NAME                       VARCHAR2(256)
 SESSION_EDITION_NAME               VARCHAR2(256)
 SESSION_USER                       VARCHAR2(256)
 SESSION_USERID                     VARCHAR2(256)
 SESSIONID                          VARCHAR2(256)
 TERMINAL                           VARCHAR2(256)
 SPARE                              VARCHAR2(256)
And since we can use our flashback query syntax to pick up all of the transaction IDs
SQL> select empno, ename, job, sal, comm,
  2         VERSIONS_XID
  3  from EMP
  4  versions between timestamp
  5     timestamp '2014-02-11 20:12:00' and
  6     systimestamp
  7  order by empno;

     EMPNO ENAME      JOB              SAL       COMM VERSIONS_XID
---------- ---------- --------- ---------- ---------- ----------------
      7369 SMITH      CLERK            806
      7369 SMITH      SALES           8060       1000 09001C00E04A0000
      7499 ALLEN      CLERK          16060  300000000 0A000A0024080000
      7499 ALLEN      SALESMAN       16060  300000000 09000B00C7080000
      7499 ALLEN      SALESMAN        1606  300000000
      7521 WARD       SALESMAN        1256  500000000
we can also take advantage of that new dictionary view SYS.SYS_FBA_CONTEXT_AUD to get all the context information for each transaction.
SQL> select ...
  2  from
  3   ( select
  4       empno, ename, job,
  5       sal, comm, VERSIONS_XID
  6     from EMP
  7     versions between timestamp
  8        timestamp '2014-02-11 20:12:00' and
  9        systimestamp
 10   ) e, SYS.SYS_FBA_CONTEXT_AUD a
 11  where e.VERSIONS_XID = a.XID;
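For example, a sketch that picks out a few of the context columns from the describe above (the choice of columns here is purely illustrative):

select e.empno, e.ename, e.sal, e.versions_xid,
       a.session_user, a.host, a.module
from
 ( select empno, ename, job, sal, comm, VERSIONS_XID
   from EMP
   versions between timestamp
      timestamp '2014-02-11 20:12:00' and
      systimestamp
 ) e, SYS.SYS_FBA_CONTEXT_AUD a
where e.VERSIONS_XID = a.XID;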
Flashback Data Archive is pretty nifty in 12c.
I was reading an article today about how 10,000+ Mongo installations that are/were openly accessible on the internet have now been captured by ransomware, with nearly 100,000 other instances potentially vulnerable to the same issue.
Now, since I’m an Oracle blogger, you may be inclined to think the post is going to jump on the “bash MongoDB” bandwagon, but it’s not. I am going to bash something… but it’s not MongoDB.
I’m going to steal a quote from… of all things… Jurassic Park:
“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”
In this age of DevOps, Agility, reactive-this, and responsive-that, there is an understandable desire to have a goal of “deliver fast and often”. But I’m not attacking the various methodologies and processes out there that espouse the new age of fast and furious delivery. Because, to my knowledge, none of them ever had a goal of “fast delivery no matter what“…they had a goal of fast and flexible delivery whilst adhering to the unchanging fundamental principle of delivering high quality, secure and robust software. It may be functionally incomplete, or have compromises in (for example) the user experience, but we never meant to compromise on the core principles.
So when I see stories about MongoDB (or any software technology) being exposed for reasons of poor security, my lament is not for the technology, but for the people: the developer, or administrator, or manager, or anyone in the chain of processes who skipped an appropriate assessment of security, in effect playing a game of Russian Roulette with their customers’ personal details. I don’t think there is even a defence of “ignorance” here, because the moment anything is deployed in an organisation, surely the first question that must be asked, either before the deployment or even in that awkward moment afterwards when you discover something was deployed without your knowledge, is: is it secured? And if no-one can answer that question, then surely that’s the immediate death knell on that application right there?
If, “hand on heart”, a team can claim that security measures are in place, and then these are circumvented, then that’s defensible – perfection in security is such a moving target nowadays. But it seems a stretch to think that 100,000 Mongo teams out there all took a diligent crack at securing their systems before getting all excited about having their application live.
So if you’re building applications with the latest Micro-agile-internet-of-things-reactive-responsive-Java-scripted-open-source-media-rich-blahblah approach…that’s super cool. But that’s not a Leave Pass from being responsible when it comes to security.
For over 16 years, AskTom has been one of the most valuable resources available to developers and database administrators working with the Oracle Database. With over 20,000 questions tackled and answered, along with over 120,000 follow-ups to additional queries, it remains an outstanding knowledge base of Oracle assistance.
And today, AskTom just got a whole lot better!
We’re excited to announce a new member of the AskTom team… database evangelist Maria Colgan. Many of you will know Maria for her work with the Optimizer and In-Memory, but in fact she brings decades of expertise across the entire database technology stack to assist you with your questions. Maria blogs regularly at sqlmaria.com, so make sure you check in there regularly as well. With Maria on the team, AskTom keeps getting better and better in 2017!
Well… when I say “liked”, what I mean is “the stuff you all clicked on a lot” last year. Whether you liked it or not will remain one of those great mysteries.
The top 6 posts from 2016 were:
Nice to see a 12c post in there – 2017 is the year I think most people will be moving to 12c if they have not done so already.
And of course, nice to see a SQL*Plus post in there….I love it when the old simple tools still add value for people.
Thanks for supporting the blog in 2016 … I’ll have stacks more content again for you this year.
Let’s say I’ve got a partitioned table, and because New Year’s Eve is coming around, I certainly don’t want to be called out at 12:01am because I forgot to add the required partition for the upcoming year.
Since 11g, I can sleep easy at night by using the INTERVAL partition scheme. Here’s my table:
SQL> create table t ( x date, y int )
  2  partition by range ( x )
  3  interval ( numtoyminterval(1,'MONTH'))
  4  (
  5    partition p201412 values less than ( date '2015-01-01' )
  6  );

Table created.

SQL> insert into t
  2  select date '2015-01-01'+rownum, rownum
  3  from dual
  4  connect by level <= 330;

330 rows created.

SQL> commit;

Commit complete.

SQL> select partition_name from user_tab_partitions where table_name = 'T';

PARTITION_NAME
------------------------------
P201412
SYS_P20234
SYS_P20235
SYS_P20236
SYS_P20237
SYS_P20238
SYS_P20239
SYS_P20240
SYS_P20241
SYS_P20242
SYS_P20243
SYS_P20244

12 rows selected.
The problem is… man, do I hate those automatic names. I suppose I can fix them up later, but in the meantime, I’ll create my local index on the table, and I’ll get the names right whilst I do:
SQL> create index IX on T ( y )
  2  local
  3  (
  4    partition P201412,
  5    partition P201501,
  6    partition P201502,
  7    partition P201503,
  8    partition P201504,
  9    partition P201505,
 10    partition P201506,
 11    partition P201507,
 12    partition P201508,
 13    partition P201509,
 14    partition P201510,
 15    partition P201511
 16  )
 17  ;

Index created.
So far so good… I created 12 partitions in my table, and I explicitly listed 12 partitions in my index. But you need to be careful with such an approach, because if you’ve got that script in (say) your source control system, then even though you’ve specified a LOCAL index, you have also (perhaps unknowingly) set a limit of 12 partitions on the index should you re-run that script. So if I were to drop that index and recreate it with the unchanged script (or, for example, you’ve done a Data Pump extraction of the DDL), then you might get yourself into a jam if the table data has changed.
SQL> drop index ix;

Index dropped.

SQL> insert into t
  2  values (sysdate,2000);

1 row created.

SQL> create index IX on T ( y )
  2  local
  3  (
  4    partition P201412,
  5    partition P201501,
  6    partition P201502,
  7    partition P201503,
  8    partition P201504,
  9    partition P201505,
 10    partition P201506,
 11    partition P201507,
 12    partition P201508,
 13    partition P201509,
 14    partition P201510,
 15    partition P201511
 16  )
 17  ;
create index IX on T ( y )
*
ERROR at line 1:
ORA-14024: number of partitions of LOCAL index must equal that of the
underlying table
Now my same DDL does not work, because there are 13 table partitions and I only listed 12 index partitions. The easy workaround is not to list the partitions at all.
SQL> create index IX on T ( y )
  2  local;

Index created.
and perhaps have a renaming routine that can be applied after the fact.
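By way of illustration, here is a minimal sketch of such a routine. It assumes the index is called IX, that the system-generated partitions follow the SYS_P% naming pattern, and that we want names of the form PYYYYMM derived from each partition’s upper boundary; treat it as a starting point, not a hardened utility.

declare
  l_high_value varchar2(4000);
  l_boundary   date;
begin
  for p in ( select partition_name
             from   user_ind_partitions
             where  index_name = 'IX'
             and    partition_name like 'SYS_P%' )
  loop
    -- HIGH_VALUE is a LONG, but PL/SQL will happily fetch it into a varchar2
    select high_value
    into   l_high_value
    from   user_ind_partitions
    where  index_name = 'IX'
    and    partition_name = p.partition_name;

    -- the text looks like: TO_DATE(' 2015-02-01 00:00:00', 'SYYYY-MM-DD ...')
    l_boundary := to_date(substr(l_high_value, 11, 19), 'YYYY-MM-DD HH24:MI:SS');

    -- the boundary is the first day of the *next* month, so step back one month
    execute immediate
      'alter index IX rename partition ' || p.partition_name ||
      ' to P' || to_char(add_months(l_boundary, -1), 'YYYYMM');
  end loop;
end;
/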