Hacking around temp space SS enqueues

Metalink Notes 465840 (Configuring Temporary Tablespaces for RAC Databases for Optimal Performance) and 459036 (Temporary tablespace SS Contention In RAC) refer.
I have the following bugs occurring with a vengeance on 10.1.0.3:
Bug 4882834 - EXCESSIVE SS AND TS CONTENTION ON NEW 10G CLUSTERS
Bug 6331283 - LONG WAITS ON 'DFS LOCK HANDLE'
I'm trying to find a workaround for many background processes that run 24x7 and process a lot of data. These run in a single schema (VPD applies, and this schema is the owner and is exempt from FGAC).
What would be nice is to have something similar to SET TRANSACTION for choosing a specific dedicated undo segment... but for temp space. If I can get each of my major job queues (home-rolled FIFO and LIFO processing queues using DBMS_JOB) to use a different temp space, SS contention should hopefully be reduced... or not?
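There is no per-transaction switch for temp, but one hedged option is a temporary tablespace group (a 10g feature), so that sorts from the one schema can be striped across several temporary tablespaces. The tablespace names, file names and sizes below are placeholders, a sketch only:
CREATE TEMPORARY TABLESPACE temp_q1 TEMPFILE '/u01/oradata/db/temp_q1_01.dbf' SIZE 10G TABLESPACE GROUP temp_grp;
CREATE TEMPORARY TABLESPACE temp_q2 TEMPFILE '/u01/oradata/db/temp_q2_01.dbf' SIZE 10G TABLESPACE GROUP temp_grp;
-- point the batch schema at the group rather than at a single temp tablespace
ALTER USER batch_owner TEMPORARY TABLESPACE temp_grp;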
Is anyone else sitting with this problem, or has anyone had it in the past? If it's in the past, exactly which Oracle patchset resolved the problem?
Edited:
Fixed the spelling error in subject title

> How big is the per-transaction sort size and PGA size?
Fairly large. A typical transaction can select 50+ million rows and aggregate them into a summarised table. There are a couple of these running in parallel. Given the nature of the volumes we process, there's very little flexibility in this regard (besides growing data volumes and increasing complexity of processing)...
Though, when a process does go through without butting its head against an Oracle bug (fortunately more often than not), performance is pretty impressive.

Similar Messages

  • Temp space problem

    HI all,
    I receive an error while executing a procedure.
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    Can anyone please explain what the problem is?
    thanks in advance
    baskar k

    Hi,
    First, ORA-01652 may occur because there is simply no space available in the temp tablespace being used. The second cause of ORA-01652 may be that the local temp segment cannot extend even though there is space in other instances.
    To troubleshoot ORA-01652 and find out which of the above scenarios is causing it, use this query offered by MetaLink:
    select sum(free_blocks)
    from gv$sort_segment
    where tablespace_name = '<TEMP TABLESPACE NAME>'
    You will know that the first scenario is causing ORA-01652 if free_blocks reads 0, because that signifies there is no free space.
    If there is a good amount of free space, you know there is another cause for ORA-01652, and it is probably the second scenario. It is important to note that one instance cannot extend into the temp segment space held by other instances, so in a RAC environment ORA-01652 has to be handled differently. If you are experiencing ORA-01652 in a non-RAC environment, be aware that every SQL statement making use of the tablespace can fail.
    In RAC, more sort segment space can be used from other instances, which can help resolve ORA-01652 more easily. Try using the query below:
    select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
    from gv$sort_segment;
    Basically, total_blocks shows how much temp segment space each instance can use, used_blocks shows the space used so far, and free_blocks shows the space still free in the segment allocated to that particular instance. When the second scenario applies, the affected instance will typically show used_blocks = total_blocks and free_blocks = 0, and ORA-01652 will appear multiple times in the alert log.
    This basically means that free space from other instances is being requested, and it typically signifies instance contention. Instance contention within the temporary space can make the instance take more time to process.
    In severe cases, a slowdown may occur, in which case you might want to try one of the following work-arounds:
    Increase size of the temp tablespace
    Increase sort_area_size and/or pga_aggregate_target
    However, remember to not use the RAC feature of DEFAULT temp space.
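    As a hedged illustration of the first two work-arounds (the file name, file size and target value are placeholders, and the ALTER SYSTEM form assumes an spfile):
    ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/db/temp02.dbf' SIZE 4G;
    ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;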
    If ORA-01652 is causing the slowdown and SMON cannot keep up with the sort segment requests, you should try to diagnose the contention by collecting:
    Output from the following query periodically during the problem:
    select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
    from gv$sort_segment;
    Global hanganalyze and systemstate dumps
    Hope this helps
    Cheers

  • Temp space during import

    I am doing a schema refresh and running out of temp space on development... it has 500 MB of space... there is no time to add space because the storage team has no space (they have requested more)... the import is running for now, please suggest options.
    Oracle--10.2.0.3
    Using IMP utility
    Loading around 8 Gb of data...

    abhishek gera wrote:
    By default, import commits at the end of each table, therefore it is very
    likely your rollback segments will run out of space.
    To workaround this problem, without adding space to the rollback segment
    tablespace, you can specify 'COMMIT=Y' on import. This overrides the
    default and commits at the end of each buffer (also an import parameter),
    rather than at the end of the table.

    No, it's not at all likely, I think. The OP is running out of temp space, not undo. I don't think this is relevant.
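    For what it's worth, a hedged way to confirm while the import runs that it really is temp space (and which session is using it) rather than undo, using views that exist in 10.2:
    SELECT s.sid, s.serial#, s.username, u.tablespace, u.segtype,
           u.blocks * p.value / 1024 / 1024 AS mb_used
    FROM v$session s, v$tempseg_usage u, v$parameter p
    WHERE s.saddr = u.session_addr
    AND p.name = 'db_block_size';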

  • Problem with temp space allocation in parallel query

    Hello
    I've got a query which matches two large result sets (25m+ rows) against each other and does some basic filtering and aggregation. When I run this query in serial it takes about 30 mins and completes successfully. When I specify a parallel degree of 4 for each result set, it also completes successfully in about 20 minutes. However, when I specify that it should be run in parallel but don't specify a degree for each result set, it spawns 10 parallel servers and after a couple of minutes, bombs out from one of the parallel servers with:
    ORA-12801: error signaled in parallel query server P000
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    This appears to be when it is about to perform a large hash join. The execution plan does not change whether the parallel degree is specified or not, and there is several GB of temp space available.
    I'm at a bit of a loss as to how to track down specifically what is causing this problem. I've looked at v$sesstat for all of the sessions involved and it hasn't really turned anything up. I've tried tracing the main session and that hasn't really turned up much either. From what I can tell, one of the sessions seems to try to allocate massive amounts of temp space that it just does not need, but I can't figure out why.
    Any ideas of how to approach finding the cause of the problem?
    David

    Hello
    I've finally resolved this and the resolution was relatively simple - and was also the main thing that Mark Rittman said he did in his article: reduce the size of the hash join.
    After querying v$sql_workarea_active I could see what was happening: the sum of the temp space for all of the parallel slaves was exceeding the total amount of temp space available on the system. When run in serial, it was virtually at the limit. I guess the extra was just the overhead of each slave maintaining its own hash table.
    I also made the mistake of misreading the execution plan, assuming that the data being pushed to the hash join was filtered to eliminate the data that was not of interest. Upon reflection, this was a rather stupid assumption on my part. Anyway, I used subquery factoring with the materialize hint to ensure that the hash join was only working on the data it should have been. This significantly reduced the size of the hash table and therefore the amount of temp space required.
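    A hedged sketch of that shape (the table and column names here are made up; the point is that each source is filtered and materialised before the hash join sees it):
    WITH src1 AS (
        SELECT /*+ materialize */ key_col, amount
        FROM big_table_1
        WHERE status = 'ACTIVE'
    ),
    src2 AS (
        SELECT /*+ materialize */ key_col, other_amount
        FROM big_table_2
        WHERE status = 'ACTIVE'
    )
    SELECT s1.key_col, SUM(s1.amount) total_amount, SUM(s2.other_amount) total_other
    FROM src1 s1, src2 s2
    WHERE s1.key_col = s2.key_col
    GROUP BY s1.key_col;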
    I did speak to oracle support and they suggested using pga_aggregate_target rather than the separate *area_size parameters.  I found that this had very little impact as the problem was related to the volume of data rather than whether it was being processed in memory or not.  That said, I did try upping the hash_area_size for the session with some initial success, but ultimately it didn't prove to be scalable.  We are however now using pga_aggregate_target in prod.
    So that's that. Problem sorted. And as the title of Mark Rittman's article suggests, I was trying to be too clever! :-)
    HTH
    David

  • Usage of temp space in oracle

    I am using Informatica to load into one table in Oracle.
    The source and target tables contain one CLOB column.
    The source table size is 1 GB.
    The target is Oracle 10g with 30 GB of temp space.
    Whenever I run this job, TEMP space usage reaches the full 30 GB and the job fails.
    Does anyone have any clue about this?

    Actually, the problem probably is that you are looking at the table but not at the CLOB storage. CLOBs are typically stored outside the table, so the table might be 1 GB, but you might have a MUCH larger storage area for the CLOB data.
    Replace the owner and segment name with your owner and table_name in this query and see what gets reported.
    select segment_name, sum(bytes)
    from dba_extents
    where owner = 'OUTLN'
    and segment_name in
      (select 'OL$HINTS' from dual
       union
       select segment_name from dba_lobs where table_name = 'OL$HINTS' and owner = 'OUTLN')
    group by segment_name;

  • Consuming too much temp space

    Dear All
    A technical colleague of mine is executing a procedure which selects data from two databases and inserts it into a third one. The procedure is not new and he has been executing it for a year now. However, in the past two weeks the procedure either consumes too much time (3-4 hours as against 10-12 minutes) or it fails because it uses more temp space than is available on the database where the insertions are made. I added about 10 GB to the temporary tablespace, but that is still not sufficient for the procedure to execute successfully. The SGA for the database into which the insertion is done is 2560M and the PGA is 2G.
    Please suggest what is to be done as it is an extremely crucial procedure.
    Thanks in advance.
    Regards
    rdxdba

    If you have a Diagnostic Pack licence, try to use AWR to compare instance activity across executions of this procedure. If not, try to install Statspack.
    I also recommend using SQL trace to capture trace data for a "good" execution and a "bad" execution, and comparing the TKPROF output for the related trace files.
    If you are using Oracle 10 or 11, try to use DBMS_MONITOR as described in http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php.
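    For example (the session_id and serial_num values are placeholders taken from V$SESSION):
    EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 4567, waits => TRUE, binds => FALSE);
    -- reproduce the slow run, then:
    EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 4567);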

  • Oracle.sql.BLOB.freeTemporary() is not freeing TEMP space in the database

    Hi Folks,
    We are using oracle.sql.BLOB to store some file information into the database.
    Allocation of the temp space is done as below
    BLOB blob=BLOB.createTemporary(conn, false, BLOB.DURATION_SESSION); // this results in the usage of TEMP space from database
    And subsequent release is done as below
    blob.freeTemporary(); // this should have released the space in the database
    This is on Oracle 10g, Java 1.6, ojdbc6.jar. There are no exceptions. Even a simple program reproduces the same behaviour.
    Anybody faced something similar? Any pointers would be really appreciated.
    Thanks,
    Deva
    Edited by: user10728663 on Oct 11, 2011 5:33 AM

    Thanks a lot for the information.
    Memory is fine. And I am able to reproduce this within the scope of a simple example as well.
    Would you have any reference to the thread which earlier reported this as a bug? I tried a reasonable amount of searching in the forum, but with no success.
    Thanks very much for any pointers.

  • Mapping - Execute Mapping Utilization of All Temp Space

    Background:
    Creating a master table in a data warehouse, containing 16 tables (3 outer joins; the remainder are inner; total target table size 8.2 million records). *** It runs successfully when I write my own SELECT statement (takes about 60-70 seconds).
    I may be trying to throw too much at OWB, but when I created this mapping with joins via two joiners (one for a derived table; the other joining all the other tables to the derived table) and deployed it, I received an error that temp space cannot be extended. The thing is, I've looked into it: our TEMP space is 32 GB, and further research via Explain Plan in Toad (based on the package that is created) shows there are some Cartesian joins and other operations being attempted, i.e. sort joins, etc.
    Question: Am I throwing too much at OWB, or are there better utilities within the tool that may make this easier? I was also wondering whether I could run something like an explain plan in OWB rather than having to run it via Toad.
    Thank you for your help

    I recently had a similar problem with OWB 10.2.0.1.
    The explain plan in our pre-prod environment had something like MERGE JOIN CARTESIAN, with nested loops etc.
    The same mapping in our prod environment had a few hash joins and a sort somewhere in there.
    Needless to say that in our prod env. the mapping took under 10 minutes, while at pre-prod it was taking over 15 hours with no end in sight.
    I finally had to insert an /*+ ORDERED */ hint right after the SELECT in my mapping and it solved everything. I had to do this manually in TOAD.
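    For illustration only (the table names are placeholders): the hint goes straight after the SELECT keyword and makes the optimizer join the tables in FROM-clause order, which is what avoided the MERGE JOIN CARTESIAN here.
    SELECT /*+ ORDERED */ d.dim_key, f.measure
    FROM derived_table d, fact_table f
    WHERE d.dim_key = f.dim_key;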
    Problem is that with OWB 10.2.0.1 inserting hints is buggy. This prompted us to switch over to OWB 10.2.0.4.
    Look at one of my other posts (there aren't many) to see how to properly switch over to a new version of OWB.
    Hope this helps you out a bit.
    regards....Mike

  • SQL query using lot of Temp space

    I have a SQL query which is using a lot of temp space; please suggest some ways to reduce this.
    SELECT A.POSITION_NBR,
           TO_CHAR(B.EFFDT,'YYYY-MM-DD'),
           rtrim(A.SEQNO),
           A.EMPLID,
           B.REG_REGION,
           A.MANAGER_ID,
           A.REPORTS_TO,
           case when A.POSITION_NBR = A.REPORTS_TO
                then 'POS reports to same position'
                else 'Positions with multiple Emp'
           End Case
    FROM PS_Z_RPTTO_TBL A, PS_POSITION_DATA B, PS_POSTN_SRCH_QRY B1
    WHERE B.POSITION_NBR = B1.POSITION_NBR
      AND B1.OPRID = 'MP9621Q'
      AND ( A.POSITION_NBR = B.POSITION_NBR
            AND ( A.REPORTS_TO = A.POSITION_NBR
                  AND B.EFFDT = (SELECT MAX(B_ED.EFFDT)
                                 FROM PS_POSITION_DATA B_ED
                                 WHERE B.POSITION_NBR = B_ED.POSITION_NBR)
                  AND A.POSITION_NBR <> '00203392')
            OR ( B.EFFDT = (SELECT MAX(B_ED.EFFDT)
                            FROM PS_POSITION_DATA B_ED
                            WHERE B.POSITION_NBR = B_ED.POSITION_NBR
                              AND B_ED.EFFDT <= SYSDATE)
                 AND B.MAX_HEAD_COUNT <> (SELECT COUNT(C.EMPLID)
                                          FROM PS_Z_RPTTO_TBL C)) )
    UNION
    SELECT F.POSITION_NBR,
           TO_CHAR(F.EFFDT,'YYYY-MM-DD'),
           '', '',
           F.REG_REGION,
           '',
           F.REPORTS_TO,
           ''
    FROM PS_POSITION_DATA F, PS_POSTN_SRCH_QRY F1
    WHERE F.POSITION_NBR = F1.POSITION_NBR
      AND F1.OPRID = 'MP9621Q'
      AND ( F.EFFDT = (SELECT MAX(F_ED.EFFDT)
                       FROM PS_POSITION_DATA F_ED
                       WHERE F.POSITION_NBR = F_ED.POSITION_NBR
                         AND F_ED.EFFDT <= SYSDATE)
            AND F.EFF_STATUS = 'A'
            AND F.DEPTID IN (SELECT G.DEPTID
                             FROM PS_DEPT_TBL G
                             WHERE G.EFFDT = (SELECT MAX(G_ED.EFFDT)
                                              FROM PS_DEPT_TBL G_ED
                                              WHERE G.SETID = G_ED.SETID
                                                AND G.DEPTID = G_ED.DEPTID
                                                AND G_ED.EFFDT <= SYSDATE)
                               AND F.REG_REGION = G.SETID
                               AND G.EFF_STATUS = 'I') )
    Thanks in Advance
    Rajan

    use {noformat}<your code here>{noformat} tags to format your code.
    > I have sql query which is using lot of temp space , please suggest some ways to reduce this
    If your sort_area_size is not set sufficiently, Oracle uses temp space for sort operations. As your code is not readable I can't say much more than this. Check with your DBA whether you have to increase the temp space.
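    As a hedged first check before asking for more temp space, it is worth seeing how the work areas are currently configured for the instance:
    SELECT name, value
    FROM v$parameter
    WHERE name IN ('workarea_size_policy', 'pga_aggregate_target', 'sort_area_size');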

  • How to determine what's using data store temp space?

    How can one determine what's using data store temp space? We are interested to know what structures are occupying space in temp space and if possible what pid/process connected to TimesTen created them.
    Also, is there a procedure that will work if temp space is full?
    Recently one of our data stores ran out of space. We were unable to run commands like "monitor", "select * from monitor", "select count(*) from my_application_table", etc. These commands failed because they required temp space to run, and temp space was full. We killed the application processes, which in turn freed up temp space, and then we were able to run these queries.
    Ideally, we'd like to have a procedure to figure out what's using temp space when temp space is full.
    The other thing we could do is periodically monitor temp space prior to it filling to determine what's using temp space.


  • V$temp_space_header showing temp space fully utilized but not true

    hi,
    any experience regarding the temp space header?
    currently we are experiencing this issue:
    when using this:
    SELECT tablespace_name, SUM(bytes_used), SUM(bytes_free) FROM v$temp_space_header GROUP BY tablespace_name;
    TABLESPACE_NAME SUM(BYTES_USED) SUM(BYTES_FREE)
    TEMP 227632218112 0
    but when using this:
    SELECT NVL(A.tablespace_name, D.name) tablespace,
    D.mb_total,
    SUM (NVL(A.used_blocks,0) * D.block_size) / 1024 / 1024 mb_used,
    D.mb_total - SUM (NVL(A.used_blocks,0) * D.block_size) / 1024 / 1024 mb_free
    FROM v$sort_segment A,
         (SELECT B.name, C.block_size, SUM(C.bytes) / 1024 / 1024 mb_total
          FROM v$tablespace B, v$tempfile C
          WHERE B.ts# = C.ts#
          GROUP BY B.name, C.block_size
         ) D
    WHERE A.tablespace_name (+) = D.name
    GROUP by NVL(A.tablespace_name, D.name), D.mb_total
    TABLESPACE MB_TOTAL MB_USED MB_FREE
    TEMP 217087 839 216248
    is this a bug??
    thanks.

    Hi,
    It may be the case that the operation you are doing needs more temp space than can actually be made available.
    Instead of looking at free and used temp space, you can look at the operation which needs that much temp space; maybe there is some way to minimise it.
    Please share what you are actually doing when this problem occurs.
    Regards,
    Dipali

  • Estimating how much temp space a query will take?

    I have a query that is "SELECT * FROM some_table ORDER BY field_name DESC". some_table has an avg_row_len of 458 bytes and stats are current. There are just about 6 million rows in some_table.
    TEMP is set to 500MB and the query fails for lack of TEMP space. I show about 176MB of TEMP is presently in use, so worst case I should have 324MB free.
    So which calculation is correct for how much TEMP space is needed:
    (a) 458 avg_row_len * 6,000,000 = about 3GB of space (and DBA_SEGMENTS agrees with this rough math). That's assuming it puts the whole row into the sort.
    (b) 6,000,000 rows * 4 bytes for a ROWID (I think they're 4 bytes) = 22MB. That's assuming it sorts just a bunch of pointers to rows (which is how I thought it would work).

    Don't forget to add the length of the column being sorted to the rowid length before you multiply. A [url http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/logical.htm#CNCPT89008]rowid has four pieces, not four bytes. Also check your plan, in case there is more than just the sort going on for you.
    With appropriate sort_area_size or pga target, you may reduce the need for temp. See [url http://download.oracle.com/docs/cd/E11882_01/server.112/e16638/memory.htm#i49320]pga memory management in the docs, to start.
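    As a hedged illustration of the arithmetic only (the ~10-byte extended rowid is standard; the 20-byte sort key length is an assumption), the two scenarios can be bracketed from the dictionary stats:
    SELECT num_rows * avg_row_len AS est_whole_row_sort_bytes,  -- scenario (a)
           num_rows * (10 + 20)   AS est_rowid_plus_key_bytes   -- scenario (b), adjusted per the advice above
    FROM dba_tables
    WHERE owner = USER
    AND table_name = 'SOME_TABLE';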

  • Report consuming a lot of temp Space (BIP)

    Hi Experts,
    I am facing an issue.
    Some BIP reports are consuming a lot of temp space (5-30 GB in the temp directory), which is causing the services to go down (BIP, RMS, ReIM and RPM). BIP, RMS, ReIM and RPM are installed on the same server.
    Please help to troubleshoot this issue.
    Thanks in Advance

    Please see:
    Troubleshooting Oracle BI Publisher Enterprise 11g [ID 1387643.1]
    Troubleshooting Oracle BI Publisher Enterprise 10g [ID 412232.1]

  • Never release the temp space

    It seems my Oracle database never releases temp space.
    The usage of temp space stays at 99.8%, but there is no alarm in the alert.log.
    It is not acceptable to shut down the machine (7x24).
    It is said that Oracle does not use temp space if the memory available for sorting can hold the processing statement.
    But there is no large workload on the machine, and whenever I run
    select * from v$sort_usage
    there are no records.
    What's wrong?
    How can I release temp space without shutting down the server?
    Sun Fire V880, 4 CPUs at 1050 MHz, 8 GB memory
    Solaris 8 0202
    Oracle 9.2

    You do not need to worry about it, unless you are getting "Unable to allocate ... in table TEMP" messages.
    I believe that the space will be released if you bounce the database.
    The 99% you are seeing is like a high water mark, which says that at some point 99% of your TEMP space was used. However no rows in v$sort_usage means that no space is currently being used.
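    A hedged way to see both numbers at once, the blocks currently in use versus the high-water mark, is:
    SELECT tablespace_name, total_blocks, used_blocks, max_used_blocks
    FROM v$sort_segment;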
    So, relax!

  • Temp Space Issue

    Hi there folks.
    I've been using this forum as a reference point for the last couple of months but this is the first time I've had to post anything because I happen to have a pickle of an issue. In fact, it's that beefy, even my company's development team is stumped... Yeah.
    So, a quick bit of background. The current client I'm working for is a Government agency related to Health & Public Services. The database we use works as a go-between for Siebel CAM and Business Objects. As the two don't naturally speak to each other, occasionally we get inevitable data inconsistencies. We had an Informatica CDC issue a little while ago which left a lot of stale data, primarily relating to tasks which should have been closed or naturally expired. To clear up the backlog and return consistency to the DB, we've been working on updating a workflow which will refresh the tasks across the entire DB and bring everything in line.
    Now, when the DB was designed, Task_Refresh was created and worked when tested as the DB was being loaded with data. The problem is, when it was tested initially, there were only a few hundred thousand records to filter through. Now that the DB is in full swing, there are 22 million records and we're encountering issues. Essentially, when we run the workflow in Informatica, it gets as far as the Source Qualifier and fails, having chewed through the available 16 GB of temp space in the space of an hour.
    I've been working for weeks trying to tune the SQL code for the source qualifier and, along with the development team, I'm now well and truly stuck. Here is the query:
    SELECT
        a.1,
        a.2,
        a.3,
        a.4,
        a.5,
        a.6,
        a.7,
        a.8,
        a.9,
        a.10,
        a.11,
        a.12,
        a.13,
        a.14,
        a.15,
        a.16,
        a.17,
        a.18,
        a.19,
        a.20,
        a.21,
        a.22,
        a.23,
        a.24,
        a.25,
        a.26,
        a.27,
        c.1,
        c.2,
        c.3,
        c.4,
        c.5,
        b.1,
        f.1,
        e.1,
        f.2,
        f.3,
        f.4,
        f.5,
        f.6,
        f.7,
        f.8,
        f.9,
        f.10,
        f.11,
        f.12
        FROM PARENT_TABLE a,  CHILD_TABLE2 c, CHILD_TABLE1 b, PARENT_TABLE2 e, PARENT_TABLE3 f
        WHERE   a.1 = b.PAR_ROW_ID(+)
        AND     a.1 = c.PAR_ROW_ID(+)
        AND     a.1 = e.PAR_ROW_ID(+)
        AND     a.TARGET_PER_ID = f.ROW_ID(+)
        AND (
        (a.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND a.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (c.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND c.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (b.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND b.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (e.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND e.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        );
    We are running this on 10g 10.1.0.5.0 - 64-bit.
    So, obviously I've replaced the table names and most of the column names, but a.1 is the primary key. What we have is the primary key compared to the PAR_ROW_ID column in each of the child tables and one of the other main tables, looking for matching IDs down the whole of the PARENT_TABLE. In this database, tasks can be entered in any of the five tables mentioned for various different reasons and functions. The dates in the last section are just arbitrary dates we have been working with, when the query works and is running into production, the dates will change, a week at a time based on a parameter file in Informatica.
    In troubleshooting, we found that if we remove the left outer joins in the WHERE clause, the query will return data, but not the right amount of data. Basically, PARENT_TABLE is just too massive for the temp space we have available. We can't increase the temp space and we can't get rid of the joins because that changes the functionality significantly. We have also tried restructuring the query by filtering on the dates first, but that doesn't help, though I suspect that the order of the WHERE clause is being ignored by Oracle due to its own performance-related decisions. I have tried restructuring the query into 5 smaller queries and using unions to tie it all together at the end, but still nothing.
    We have even tried setting the date and time for an hour's worth of entries and that was still too big.
    I'm stuck in a catch 22 here because we need to get this working, but the client is adamant that we should not be changing the functionailty. Is there any possible solution any of you can see for this?
    Thank you in advance for any help you can offer on this.
    (also, we are providing support for the applications the database uses [CAM, BO], so we don't actually run this DB per se, the client does. We've asked them to increase the temp space and they won't.
    The error when we run the workflow is: ORA-01652: unable to extend temp segment by 128 in tablespace BATCH_TEMP)

    938041 wrote:
    I have tried restructuring the query into 5 smaller queries and using Unions to tie it all together at the end, but still nothing.
    Technically you should be able to rewrite this query as a UNION ALL of 4 separate parts, provided you remember to include a predicate in the 2nd, 3rd and 4th query blocks that eliminates rows that have already been reported by the earlier query blocks.
    Each part will be driven by one of the four range based predicates you have - and for the three parts that drive off child tables the join to the parent should NOT be an outer join.
    Effectively you are trying to transform the query manually using the CONCATENATION operator. It's possible (but I suspect very unlikely) that the optimizer will do this for you if you add the hint /*+ use_concat */ to the query you currently have.
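    A hedged skeleton of that rewrite, reusing the pseudonymised names from the post (a.row_id stands in for the a.1 primary key, the select lists are abbreviated, and :d1/:d2 stand for the two dates):
    -- branch 1: driven by PARENT_TABLE's date range, outer joins kept
    SELECT a.row_id, f.row_id
    FROM parent_table a, child_table1 b, child_table2 c, parent_table2 e, parent_table3 f
    WHERE a.row_id = b.par_row_id(+)
    AND a.row_id = c.par_row_id(+)
    AND a.row_id = e.par_row_id(+)
    AND a.target_per_id = f.row_id(+)
    AND a.last_upd >= :d1 AND a.last_upd < :d2
    UNION ALL
    -- branch 2: driven by CHILD_TABLE1's date range; the join from b back to a is
    -- no longer an outer join, and rows already returned by branch 1 are excluded
    SELECT a.row_id, f.row_id
    FROM parent_table a, child_table1 b, child_table2 c, parent_table2 e, parent_table3 f
    WHERE a.row_id = b.par_row_id
    AND a.row_id = c.par_row_id(+)
    AND a.row_id = e.par_row_id(+)
    AND a.target_per_id = f.row_id(+)
    AND b.last_upd >= :d1 AND b.last_upd < :d2
    AND (a.last_upd IS NULL OR a.last_upd < :d1 OR a.last_upd >= :d2)
    -- branches 3 and 4 repeat the pattern for CHILD_TABLE2 (c) and PARENT_TABLE2 (e),
    -- each excluding rows already returned by the earlier branches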
    Regards
    Jonathan Lewis
