Little enhancements are always nice

Before LATERAL and CROSS APPLY were added (exposed to us) in 12c, a common way to perform correlated joins was the TABLE/MULTISET technique.

For example, we might have had a couple of tables:


SQL> create table t as
  2  select object_type, min(created) min_dte, max(created) max_dte
  3  from dba_objects
  4  where owner = 'SCOTT'
  5  group by object_type;

Table created.

SQL> select * from t;

OBJECT_TYPE         MIN_DTE   MAX_DTE
------------------- --------- ---------
INDEX               09-OCT-13 09-OCT-13
TABLE               09-OCT-13 09-OCT-13

SQL>
SQL> create table t1 as
  2  select object_type, object_name, object_id
  3  from dba_objects
  4  where owner = 'SCOTT';

Table created.

SQL> select * from t1;

OBJECT_TYPE         OBJECT_NAME                               OBJECT_ID
------------------- ---------------------------------------- ----------
INDEX               PK_DEPT                                       86892
TABLE               DEPT                                          86891
TABLE               EMP                                           86893
INDEX               PK_EMP                                        86894
TABLE               BONUS                                         86895
TABLE               SALGRADE                                      86896

6 rows selected.

and then tried to join them in what seems to be a logical fashion, but got the following error:


SQL> select *
  2  from   t,
  3         ( select object_id
  4           from   t1
  5           where  object_type = t.object_type );
         where  object_type = t.object_type )
                              *
ERROR at line 5:
ORA-00904: "T"."OBJECT_TYPE": invalid identifier

Now this query could be trivially recast as a normal join, but there are examples where that is not the case; we’re keeping this one as simple as possible.
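
For reference, the trivial recast as a standard join would be something like this (a sketch, equivalent for this simple inner-join case):

-- equivalent standard join for this simple case
select t.object_type, t.min_dte, t.max_dte, t1.object_id
from   t, t1
where  t1.object_type = t.object_type;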

We could work around this by using TABLE and MULTISET:


SQL> select *
  2  from t,
  3       table(cast(multiset(
  4                    select object_id
  5                    from   t1
  6                    where  object_type = t.object_type
  7                   ) as sys.odcinumberlist));

OBJECT_TYPE         MIN_DTE   MAX_DTE   COLUMN_VALUE
------------------- --------- --------- ------------
INDEX               09-OCT-13 09-OCT-13        86892
TABLE               09-OCT-13 09-OCT-13        86891
TABLE               09-OCT-13 09-OCT-13        86893
INDEX               09-OCT-13 09-OCT-13        86894
TABLE               09-OCT-13 09-OCT-13        86895
TABLE               09-OCT-13 09-OCT-13        86896

6 rows selected.

which is a little “trick”, in effect being

  • Take our child rows
  • Collapse them into a single complex type (cast-multiset) to “avoid” this being seen as a correlated join
  • Use the TABLE operator to expand that type back out into rows.

But what if our query within the MULTISET is several “layers” deep? Then we might run into problems – I get this on 11g:


SQL> select *
  2  from t,
  3       table(cast(multiset(
  4                    select object_id
  5                        from (
  6                           select distinct object_id
  7                           from   t1
  8                           where  object_type = t.object_type
  9                         )
 10                   ) as sys.odcinumberlist))
 11  order by 1,4;
                         where  object_type = t.object_type
                                              *
ERROR at line 8:
ORA-00904: "T"."OBJECT_TYPE": invalid identifier

Because the reference on line 8 to “T.OBJECT_TYPE” is more than a single layer away from the originating table T, this query will not run in 11g.
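
If you’re stuck on 11g, one workaround is to keep the correlated reference a single layer deep, for example by folding the DISTINCT up into the top level of the MULTISET subquery (a sketch, assuming the inner query can be flattened like this):

-- correlation kept a single layer deep, so this runs on 11g
select *
from t,
     table(cast(multiset(
                  select distinct object_id
                  from   t1
                  where  object_type = t.object_type
                 ) as sys.odcinumberlist));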

But take a look at what happens in 12c:


SQL> select *
  2  from t,
  3       table(cast(multiset(
  4                    select object_id
  5                        from (
  6                           select distinct object_id
  7                           from   t1
  8                           where  object_type = t.object_type
  9                         )
 10                   ) as sys.odcinumberlist))
 11  order by 1,4;

OBJECT_TYPE             MIN_DTE   MAX_DTE   COLUMN_VALUE
----------------------- --------- --------- ------------
INDEX                   20-NOV-15 16-DEC-15        92614
INDEX                   20-NOV-15 16-DEC-15        92615
INDEX                   20-NOV-15 16-DEC-15        99130
PROCEDURE               31-MAY-16 31-MAY-16       148999
TABLE                   20-NOV-15 31-MAY-16        92607
TABLE                   20-NOV-15 31-MAY-16        92608
TABLE                   20-NOV-15 31-MAY-16        92609
TABLE                   20-NOV-15 31-MAY-16        92610
TABLE                   20-NOV-15 31-MAY-16        92616
TABLE                   20-NOV-15 31-MAY-16        99128
TABLE                   20-NOV-15 31-MAY-16       149002
TRIGGER                 27-NOV-15 27-NOV-15        98228
TRIGGER                 27-NOV-15 27-NOV-15        98229
TRIGGER                 27-NOV-15 27-NOV-15        98230

14 rows selected.

No dramas at all. I always like these little touches that come along without too much fanfare, but make a developer’s life easier.
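
And of course, since this is 12c, the MULTISET trick is no longer needed at all; the original query could be expressed directly with the new LATERAL syntax, along these lines:

-- 12c LATERAL equivalent of the original correlated join
select *
from   t,
       lateral ( select object_id
                 from   t1
                 where  object_type = t.object_type );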

      OpenWorld 2016 is not far away

      OpenWorld is just around the corner, and the Ask Tom team will be involved in a number of panels where you can chat to us, ask questions, debate topics and basically have a relaxed 45 minutes during all the frenzied activity that is OpenWorld.  So if you’ve got any questions you would like answered “face to face” rather than via Ask Tom, either drop them as a comment here, or feel free to post them on the AskTom site and just annotate it in some way that lets us know you’d like to talk about it on the panels during OpenWorld.

      See you in San Francisco!


      Can a query on the standby update the primary ?

      You would think that (with the exception of the V$ tables, which are predominantly memory structures reflecting the state of various parts of the database instance) a query on a read-only standby database would have absolutely no interaction with the primary.  After all, the standby database needs to be able to run independently of the primary, should that primary database be down or destroyed.

      But there’s an exception to the rule.  Consider the following example.  I create a sequence on my primary database.

      
      SQL> create sequence seq;
      
      Sequence created.
      
      SQL> select seq.nextval from dual;
      
         NEXTVAL
      ----------
               1
      
      1 row selected.
      
      SQL> select last_number from user_sequences where sequence_name = 'SEQ';
      
      LAST_NUMBER
      -----------
               21
      
      1 row selected.
      

      You can see that it’s just a plain old sequence with the default cache value of 20.  Now let’s pop over to my standby (which was in recovery mode, so it now contains the same definition) and flick it out into read only mode.

      
      SQL> alter database open read only;
      
      Database altered.
      
      SQL> conn scott/tiger
      Connected.
      
      SQL> select seq.nextval from dual;
      
         NEXTVAL
      ----------
              21
      

      On the standby as well, I can query the same sequence.  Notice that because it is a different instance from the primary (obviously), it picked up its value from the next cache boundary, much like multiple RAC instances do.  But that suggests a potential issue – what happens when the primary keeps on selecting sequence values?  Will it collide with the sequence values obtained via queries on the standby ?  Let’s look back on the primary database.

      
      SQL> select last_number from user_sequences where sequence_name = 'SEQ';
      
      LAST_NUMBER
      -----------
               41
      
      1 row selected.
      
      
      SQL> select seq.nextval from dual;
      
         NEXTVAL
      ----------
               2
      
      1 row selected.
      

      The primary is still using its “cached” values from 2 to 20, but notice that LAST_NUMBER has been incremented to 41.  The query on the standby database has sent a message to the primary to get the LAST_NUMBER column on USER_SEQUENCES (or more accurately, the internal table SEQ$) updated to avoid sequence number collisions.  If we continue to consume sequence values on the primary, you will see it skip over those numbers that are now “allocated” to the standby database in read only mode.
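
      If you are curious where that value lives in the dictionary, it is the HIGHWATER column of SEQ$; a quick sketch of how you might peek at it (this assumes you can query the SYS-owned dictionary tables):

      -- sketch only: requires access to the SYS-owned dictionary tables
      select o.name, s.highwater
      from   sys.seq$ s,
             sys.obj$ o
      where  s.obj# = o.obj#
      and    o.name = 'SEQ';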

      
      SQL> select seq.nextval from dual
        2  connect by level <= 30;
      
         NEXTVAL
      ----------
               3
               4
               5
               6
               7
               8
               9
              10
              11
              12
              13
              14
              15
              16
              17
              18
              19
              20
              41
              42
              43
              44
      
      

      So there you go … something I didn’t know was possible.  A query on the standby may enact a physical update on the primary.

      LAG / LEAD quick tip

      As most of us know, with LAG and LEAD – or more generally, any analytic function that may extend “past” the boundary of the window it is operating on – you can get null as a result.

      Here’s a trivial example

      
      SQL> create table t as
        2  select rownum x
        3  from dual
        4  connect by level <= 10;
      
      Table created.
      
      SQL>
      SQL> select x, lag(x) over ( order by x ) as lag_Test
        2  from t;
      
               X   LAG_TEST
      ---------- ----------
               1           
               2          1
               3          2
               4          3
               5          4
               6          5
               7          6
               8          7
               9          8
              10          9
      
      10 rows selected.
      

      We get null for the first row because there is no value to lag to “below” x=1. That is of course trivially solved with an NVL, for example:

      
      
      SQL> select x, nvl(lag(x) over ( order by x ),999) as lag_Test
        2  from t;
      
               X   LAG_TEST
      ---------- ----------
               1        999
               2          1
               3          2
               4          3
               5          4
               6          5
               7          6
               8          7
               9          8
              10          9
      
      10 rows selected.
      

      But a quick look at the documentation shows that you don’t need the “clutter” of NVL, because LAG and LEAD already provide for a default value:

      [Syntax from the documentation: LAG (value_expr [, offset [, default]]) OVER ( [query_partition_clause] order_by_clause )]

      
      
      SQL> select x, lag(x,1,999) over ( order by x ) as lag_Test
        2  from t;
      
               X   LAG_TEST
      ---------- ----------
               1        999
               2          1
               3          2
               4          3
               5          4
               6          5
               7          6
               8          7
               9          8
              10          9
      
      10 rows selected.
      
      SQL>
      

      Direct mode operations on IOTs

      An AskTom contributor brought to my attention that direct mode insert on index-organized tables now appears possible in 12c.  We can see the difference by running a simple script in both 11g and 12c.

      
      SQL> select * from v$version;
      
      BANNER
      --------------------------------------------------------------------------------
      Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
      PL/SQL Release 11.2.0.4.0 - Production
      CORE    11.2.0.4.0      Production
      TNS for 64-bit Windows: Version 11.2.0.4.0 - Production
      NLSRTL Version 11.2.0.4.0 - Production
      
      5 rows selected.
      
      SQL> drop table t purge;
      
      Table dropped.
      
      SQL> create table t(x int primary key, y int) organization index;
      
      Table created.
      
      SQL> insert /*+ append */ into t select 1 x, 1 y from dual;
      
      1 row created.
      
      SQL> select * from t;  -- no error, so no direct mode was done
      
               X          Y
      ---------- ----------
               1          1
      
      1 row selected.
      
      
      SQL> select * from v$version;
      
      BANNER                                                                               CON_ID
      -------------------------------------------------------------------------------- ----------
      Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production              0
      PL/SQL Release 12.1.0.2.0 - Production                                                    0
      CORE    12.1.0.2.0      Production                                                        0
      TNS for 64-bit Windows: Version 12.1.0.2.0 - Production                                   0
      NLSRTL Version 12.1.0.2.0 - Production                                                    0
      
      5 rows selected.
      
      SQL> drop table t purge;
      
      Table dropped.
      
      SQL> create table t(x int primary key, y int) organization index;
      
      Table created.
      
      SQL> insert /*+ append */ into t select 1 x, 1 y from dual;
      
      1 row created.
      
      SQL> select * from t;  -- now we do get the error, so append hint was respected
      select * from t
                    *
      ERROR at line 1:
      ORA-12838: cannot read/modify an object after modifying it in parallel
      

      So that looks pretty cool. People may recall an older blog post of mine where it was not possible to do a large-scale IOT population without a large underlying sort operation. So I thought I’d revisit that, and see what we can discover.

      In this case, we are using table T_SOURCE as the source of our data. It was created simply as:

      
      SQL> create table T_SOURCE ( n , r , constraint T_SOURCE_PK primary key( n ))
        2  organization index
        3  as select rownum, rpad(rownum,128)
        4  from dual
        5  connect by level <= 1000000
        6  /
      
      Table created.
      
      SQL>
      SQL> exec dbms_stats.gather_table_stats('','t_source')
      
      PL/SQL procedure successfully completed.
      

      So we are going to copy those 1 million rows from T_SOURCE to a new table, T_TGT, and try to avoid having to sort the rows by asking the optimizer to “walk” the source data in index order.
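
      A note on the target table: T_TGT is not shown being created in the transcripts below, but assuming it simply mirrors T_SOURCE, its definition would be along the lines of:

      -- assumed definition; mirrors T_SOURCE
      create table T_TGT ( n int, r varchar2(128),
        constraint T_TGT_PK primary key ( n ) )
      organization index;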

      So let’s try direct mode first. You’ll see references to a script called “mystat”; it’s simply a query against v$mystat to pick up session-level statistics that might be relevant.

      We’ll run these before and after to see what deltas pop out.
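
      For reference, a minimal sketch of what such a “mystat” script might look like (the usual join of v$mystat to v$statname; the exact formatting in my script may differ):

      -- mystat.sql (sketch): the usual v$mystat / v$statname join
      select s.name, m.value
      from   v$mystat m,
             v$statname s
      where  m.statistic# = s.statistic#
      and    s.name like '%&statname%';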

      
      SQL> truncate table t_tgt;
      
      Table truncated.
      
      SQL> alter table T_TGT nologging;
      
      Table altered.
      
      SQL> @mystat
      Enter value for statname: redo size
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      redo size                                              135736
      redo size for lost write detection                          0
      redo size for direct writes                                 0
      
      3 rows selected.
      
      SQL> @mystat
      Enter value for statname: sorts
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      sorts (memory)                                            117
      sorts (disk)                                                0
      sorts (rows)                                              359
      
      3 rows selected.
      
      SQL> set timing on
      SQL> insert /*+ APPEND */ into T_TGT select /*+ index_asc(t_source) */ * from T_SOURCE order by 1  ;
      
      1000000 rows created.
      
      Elapsed: 00:00:46.30
      
      SQL> select count(*) from t_tgt where rownum < 10;
      select count(*) from t_tgt where rownum < 10
                           *
      ERROR at line 1:
      ORA-12838: cannot read/modify an object after modifying it in parallel
      
      
      SQL> commit;
      
      Commit complete.
      
      SQL> @mystat
      Enter value for statname: redo size
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      redo size                                           497096676
      redo size for lost write detection                          0
      redo size for direct writes                                 0
      
      3 rows selected.
      
      SQL> @mystat
      Enter value for statname: sorts
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      sorts (memory)                                            122
      sorts (disk)                                                1
      sorts (rows)                                          1000436
      
      3 rows selected.
      
      SQL> @mystat
      Enter value for statname: direct
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      db block gets direct                                        0
      consistent gets direct                                      0
      physical reads direct                                   19231
      physical writes direct                                  19231
      physical reads direct temporary tablespace              19231
      physical writes direct temporary tablespace             19231
      

       

      So we can make the following deductions:

      • Direct mode seems to have been activated, due to the ORA-12838 error.
      • I didn’t avoid the sorting, because I sorted 1,000,000 rows
      • The NOLOGGING attribute seems of no benefit here – we still consumed 500 megabytes of redo

      So whether it was really a direct mode operation is perhaps up for debate, because all of the direct reads/writes recorded were against the temporary tablespace.

      Let’s explore that last one a little.

      I repeated the same population script, but before I did so, I set my workarea_size_policy to manual, and cranked up my sort_area_size to 1G.
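
      Those settings would have been along these lines (sort_area_size is specified in bytes):

      -- manual workarea management; sort_area_size is in bytes (1G here)
      alter session set workarea_size_policy = manual;
      alter session set sort_area_size = 1073741824;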

      I’ll spare you the full output again, but here’s the stats output:

      
      SQL> @mystat
      Enter value for statname: direct
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      db block gets direct                                        0
      consistent gets direct                                      0
      physical reads direct                                       0
      physical writes direct                                      0
      physical reads direct temporary tablespace                  0
      physical writes direct temporary tablespace                 0
      

      So…no direct operations here now. This would appear to confirm that the direct mode operations were simply sorting operations.

      Now we’ll perform the same process without using the direct mode hint. We’ll set the table back to LOGGING because we’ve seen NOLOGGING is of no benefit here, and use a fresh session to pick up fresh statistics.

      
      SQL> truncate table t_tgt;
      
      Table truncated.
      
      SQL> alter table T_TGT logging;
      
      Table altered.
      
      SQL> @mystat
      Enter value for statname: redo size
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      redo size                                              137236
      redo size for lost write detection                          0
      redo size for direct writes                                 0
      
      3 rows selected.
      
      SQL> @mystat
      Enter value for statname: sorts
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      sorts (memory)                                             21
      sorts (disk)                                                0
      sorts (rows)                                               86
      
      3 rows selected.
      
      SQL> insert into T_TGT select /*+ index_asc(t_source) */ * from T_SOURCE order by 1  ;
      
      1000000 rows created.
      
      Elapsed: 00:00:13.96
      SQL> set timing off
      SQL> select count(*) from t_tgt where rownum < 10;
      
        COUNT(*)
      ----------
               9
      
      1 row selected.
      
      SQL> commit;
      
      Commit complete.
      
      SQL> @mystat
      Enter value for statname: redo size
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      redo size                                           500630220
      redo size for lost write detection                          0
      redo size for direct writes                                 0
      
      3 rows selected.
      
      SQL> @mystat
      Enter value for statname: sorts
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      sorts (memory)                                             26
      sorts (disk)                                                0
      sorts (rows)                                              146
      
      3 rows selected.
      
      SQL> @mystat
      Enter value for statname: direct
      
      NAME                                                    VALUE
      -------------------------------------------------- ----------
      db block gets direct                                        0
      consistent gets direct                                      0
      physical reads direct                                       0
      physical writes direct                                      0
      physical reads direct temporary tablespace                  0
      physical writes direct temporary tablespace                 0
      

      So for the conventional mode we observe:

      • I didn’t have to sort the rows
      • I consumed a similar amount of redo
      • It was three times faster than the case where we had to do all that sorting to disk
         

      This just goes to show that (like everything in software) advice of the form “Do ‘X’ and in every case it will be faster” is never a good thing to adhere to.

      Direct mode insert is a very cool facility, but it doesn’t mean that it’s going to be the best option in every situation.

      dbms_output and the scheduler

      One of the nifty things in 12c is the ability to pick up DBMS_OUTPUT output from your scheduler jobs. So if you haven’t built an extensive instrumentation or logging facility, you’ll still have some details you can pick up from the scheduler dictionary views. Let’s look at an example

      
      SQL> create or replace
        2  procedure do_stuff is
        3  begin
        4    dbms_output.put_line('Commenced');
        5    dbms_lock.sleep(30);
        6    dbms_output.put_line('Working');
        7    dbms_lock.sleep(30);
        8    dbms_output.put_line('Done');
        9  end;
       10  /
      
      Procedure created.
      
      SQL>
      SQL> begin
        2      dbms_scheduler.create_job (
        3         job_name           =>  'OUTPUT_DEMO',
        4         job_type           =>  'PLSQL_BLOCK',
        5         job_action         =>  'do_stuff;',
        6         start_date         =>  sysdate,
        7         enabled            =>  true,
        8         comments           =>  'Output demo');
        9  end;
       10  /
      
      PL/SQL procedure successfully completed.
      

      After my job completes, I can now query the dictionary

      
      SQL> select OUTPUT
        2  from DBA_SCHEDULER_JOB_RUN_DETAILS
        3  where job_name = 'OUTPUT_DEMO';
      
      OUTPUT
      -----------------------------------------------------
      Commenced
      Working
      Done
      

      The column is a VARCHAR2(4000), so what happens if you exceed that ? Let’s change our PL/SQL procedure

      
      SQL> create or replace
        2  procedure do_stuff is
        3  begin
        4    for i in 1 .. 1000 loop
        5      dbms_output.put_line(i||' Commenced');
        6    end loop;
        7  end;
        8  /
      
      Procedure created.
      
      SQL>
      SQL> begin
        2      dbms_scheduler.create_job (
        3         job_name           =>  'OUTPUT_DEMO',
        4         job_type           =>  'PLSQL_BLOCK',
        5         job_action         =>  'do_stuff;',
        6         start_date         =>  sysdate,
        7         enabled            =>  true,
        8         comments           =>  'Output demo');
        9  end;
       10  /
      
      PL/SQL procedure successfully completed.
      
      SQL>
      SQL> select output
        2  from DBA_SCHEDULER_JOB_RUN_DETAILS
        3  where job_name = 'OUTPUT_DEMO';
      
      OUTPUT
      ----------------------------------------------------------------------------------------------------------------------------------
      Commenced
      Working
      Done
      
      1 Commenced
      2 Commenced
      3 Commenced
      4 Commenced
      5 Commenced
      6 Commenced
      ...
      144 Commenced
      145 Commenced
      146 Commenced
      147 Commenced
      148 Commenced
      149 Commenced
      150 Commenced
      151 Comm
      
      
      2 rows selected.
      

      You can see that the output is truncated. However, all of the output IS retained in another column, BINARY_OUTPUT, which is a BLOB. So whilst a basic query seems cryptic

      
      SQL> select BINARY_OUTPUT
        2  from DBA_SCHEDULER_JOB_RUN_DETAILS
        3  where job_name = 'OUTPUT_DEMO';
      
      BINARY_OUTPUT
      ----------------------------------------------------------------------------------------------------------------------------------
      436F6D6D656E6365640A576F726B696E670A446F6E65
      3120436F6D6D656E6365640A3220436F6D6D656E6365640A3320436F6D6D656E6365640A3420436F6D6D656E6365640A3520436F6D6D656E6365640A3620436F6D
      6D656E6365640A3720436F6D6D656E
      

      you can cast it back to a string to get what you need.

      
      SQL> select utl_raw.cast_to_varchar2(BINARY_OUTPUT)
        2  from DBA_SCHEDULER_JOB_RUN_DETAILS
        3  where job_name = 'OUTPUT_DEMO';
      
      UTL_RAW.CAST_TO_VARCHAR2(BINARY_OUTPUT)
      ----------------------------------------------------------------------------------------------------------------------------------
      Commenced
      Working
      Done
      
      1 Commenced
      2 Commenced
      3 Commenced
      4 Commenced
      5 Commenced
      ...
      990 Commenced
      991 Commenced
      992 Commenced
      993 Commenced
      994 Commenced
      995 Commenced
      996 Commenced
      997 Commenced
      998 Commenced
      999 Commenced
      1000 Commenced
      
      
      2 rows selected.
      
      

      Granular detail from a summary

      We had an interesting question on AskTom a few days ago.  Given a set of 12 values (forecasts in this case), one for each month of the year, can we manufacture a set of weekly forecasts for the same period?  Now it is perhaps a little dubious to “invent” detailed data out of summarised data, but we can come up with a reasonable algorithm for doing so.

      We could simply divide each month by 4 because there are ~4 weeks per month, but we still face the challenge of mapping 48 values into 52/53 weeks of the year.

      We could also sum all the months, and carve it into 52 weekly chunks, but in doing so, we lose the peaks and troughs that a monthly forecast is meant to represent.

      So can we do better ?

      Let’s assume that the forecast for a month implies an even distribution of that value over each day in that month.  (For example, a January forecast of 2241 spread evenly over 31 days is roughly 72.29 per day.)  Once we take the forecast down to the daily level, we should then be able to aggregate it into weeks without too much trouble.  So here’s how we did it in SQL.

      Firstly, our monthly forecasts:

      
      SQL> create table t ( mth date, fcast int );
      
      Table created.
      
      SQL> insert into t
        2  select add_months(date '2016-01-01',rownum-1), dbms_random.value(1000,5000)
        3  from dual
        4  connect by level <= 12;
      
      12 rows created.
      
      SQL> select * from t order by 1;
      
      MTH            FCAST
      --------- ----------
      01-JAN-16       2241
      01-FEB-16       1903
      01-MAR-16       4153
      01-APR-16       4466
      01-MAY-16       3039
      01-JUN-16       2914
      01-JUL-16       4751
      01-AUG-16       2925
      01-SEP-16       1369
      01-OCT-16       3330
      01-NOV-16       3747
      01-DEC-16       4236
      
      12 rows selected.
      

      To generate every day of the year, we can use the normal “trick” with DUAL:

      
      
      SQL> with
        2  every_day as
        3    ( select date '2016-01-01'+rownum-1 d
        4      from dual
        5      connect by level <= to_number(to_char(date '2016-12-31','DDD'))
        6    )
        7  select * from every_day;
      
      D
      ---------
      01-JAN-16
      02-JAN-16
      03-JAN-16
      04-JAN-16
      05-JAN-16
      06-JAN-16
      ...
      24-DEC-16
      25-DEC-16
      26-DEC-16
      27-DEC-16
      28-DEC-16
      29-DEC-16
      30-DEC-16
      31-DEC-16
      
      366 rows selected.
      

      We can use standard date functions to determine how many days are in each month:

      
      SQL> with
        2  every_day as
        3    ( select date '2016-01-01'+rownum-1 d
        4      from dual
        5      connect by level <= to_number(to_char(date '2016-12-31','DDD'))
        6    ),
        7  date_ranges as
        8    ( select mth, last_day(mth) mth_end, to_number(to_char(last_day(mth),'DD')) no_of_days, fcast
        9      from t
       10    )
       11  select * from date_ranges
       12  order by 1;
      
      MTH       MTH_END   NO_OF_DAYS      FCAST
      --------- --------- ---------- ----------
      01-JAN-16 31-JAN-16         31       2241
      01-FEB-16 29-FEB-16         29       1903
      01-MAR-16 31-MAR-16         31       4153
      01-APR-16 30-APR-16         30       4466
      01-MAY-16 31-MAY-16         31       3039
      01-JUN-16 30-JUN-16         30       2914
      01-JUL-16 31-JUL-16         31       4751
      01-AUG-16 31-AUG-16         31       2925
      01-SEP-16 30-SEP-16         30       1369
      01-OCT-16 31-OCT-16         31       3330
      01-NOV-16 30-NOV-16         30       3747
      01-DEC-16 31-DEC-16         31       4236
      
      12 rows selected.
      

      And with that, we can now generate a daily forecast:

      
      SQL>
      SQL> with
        2  every_day as
        3    ( select date '2016-01-01'+rownum-1 d
        4      from dual
        5      connect by level <= to_number(to_char(date '2016-12-31','DDD'))
        6    ),
        7  date_ranges as
        8    ( select mth, last_day(mth) mth_end, to_number(to_char(last_day(mth),'DD')) no_of_days, fcast
        9      from t
       10    ),
       11  apportioned_forecast as
       12   (  select every_day.d, date_ranges.fcast, date_ranges.no_of_days, date_ranges.fcast / date_ranges.no_of_days daily_amount
       13      from  every_day,
       14            date_ranges
       15       where every_day.d between date_ranges.mth and date_ranges.mth_end
       16   )
       17  select * from  apportioned_forecast
       18  order by 1;
      
      D              FCAST NO_OF_DAYS DAILY_AMOUNT
      --------- ---------- ---------- ------------
      01-JAN-16       2241         31   72.2903226
      02-JAN-16       2241         31   72.2903226
      03-JAN-16       2241         31   72.2903226
      04-JAN-16       2241         31   72.2903226
      05-JAN-16       2241         31   72.2903226
      06-JAN-16       2241         31   72.2903226
      07-JAN-16       2241         31   72.2903226
      08-JAN-16       2241         31   72.2903226
      09-JAN-16       2241         31   72.2903226
      10-JAN-16       2241         31   72.2903226
      11-JAN-16       2241         31   72.2903226
      12-JAN-16       2241         31   72.2903226
      13-JAN-16       2241         31   72.2903226
      14-JAN-16       2241         31   72.2903226
      15-JAN-16       2241         31   72.2903226
      16-JAN-16       2241         31   72.2903226
      17-JAN-16       2241         31   72.2903226
      ...
      21-DEC-16       4236         31   136.645161
      22-DEC-16       4236         31   136.645161
      23-DEC-16       4236         31   136.645161
      24-DEC-16       4236         31   136.645161
      25-DEC-16       4236         31   136.645161
      26-DEC-16       4236         31   136.645161
      27-DEC-16       4236         31   136.645161
      28-DEC-16       4236         31   136.645161
      29-DEC-16       4236         31   136.645161
      30-DEC-16       4236         31   136.645161
      31-DEC-16       4236         31   136.645161
      
      366 rows selected.
      

      With our forecast data now at a day-based granularity, we simply sum the data by week:

      
      SQL> with
        2  every_day as
        3    ( select date '2016-01-01'+rownum-1 d
        4      from dual
        5      connect by level <= to_number(to_char(date '2016-12-31','DDD'))
        6    ),
        7  date_ranges as
        8    ( select mth, last_day(mth) mth_end, to_number(to_char(last_day(mth),'DD')) no_of_days, fcast
        9      from t
       10    ),
       11  apportioned_forecast as
       12   (  select every_day.d, date_ranges.fcast, date_ranges.no_of_days
       13      from  every_day,
       14            date_ranges
       15       where every_day.d between date_ranges.mth and date_ranges.mth_end
       16   )
       17  select trunc(d,'IW') wk, round(sum(fcast/no_of_days),2) wk_fcast
       18  from   apportioned_forecast
       19  group by trunc(d,'IW')
       20  order by 1;
      
      WK          WK_FCAST
      --------- ----------
      28-DEC-15     216.87
      04-JAN-16     506.03
      11-JAN-16     506.03
      18-JAN-16     506.03
      25-JAN-16     506.03
      01-FEB-16     459.34
      08-FEB-16     459.34
      15-FEB-16     459.34
      22-FEB-16     459.34
      29-FEB-16     869.43
      07-MAR-16     937.77
      14-MAR-16     937.77
      21-MAR-16     937.77
      28-MAR-16     982.47
      04-APR-16    1042.07
      11-APR-16    1042.07
      18-APR-16    1042.07
      25-APR-16     991.23
      02-MAY-16     686.23
      09-MAY-16     686.23
      16-MAY-16     686.23
      23-MAY-16     686.23
      30-MAY-16     681.73
      06-JUN-16     679.93
      13-JUN-16     679.93
      20-JUN-16     679.93
      27-JUN-16     848.31
      04-JUL-16    1072.81
      11-JUL-16    1072.81
      18-JUL-16    1072.81
      25-JUL-16    1072.81
      01-AUG-16     660.48
      08-AUG-16     660.48
      15-AUG-16     660.48
      22-AUG-16     660.48
      29-AUG-16      465.6
      05-SEP-16     319.43
      12-SEP-16     319.43
      19-SEP-16     319.43
      26-SEP-16     443.01
      03-OCT-16     751.94
      10-OCT-16     751.94
      17-OCT-16     751.94
      24-OCT-16     751.94
      31-OCT-16     856.82
      07-NOV-16      874.3
      14-NOV-16      874.3
      21-NOV-16      874.3
      28-NOV-16     921.28
      05-DEC-16     956.52
      12-DEC-16     956.52
      19-DEC-16     956.52
      26-DEC-16     819.87
      
      53 rows selected.
      
      SQL>