Filter by + count...

hi,
let's say we have one measure in our report that we use twice:
1. we use the FILTER BY function to bring data for 2 months, and
2. we use the FILTER BY function to bring data for 2 other months,
in order to have a comparison.
these two measures (sales, let's say) belong to some items.
how can we manage to show, in one report, these two measures derived from the same base measure, together with the products which belong to them?
sales(jan - feb)    sales(jun - jul)    products(jan - feb)    products(jun - jul)
1.000.0000          1.500.0000          1000                   1299
thanks in advance
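Since the BI tool and the exact FILTER BY syntax aren't named here, a minimal, tool-agnostic sketch in Python of the idea behind this layout is conditional aggregation: build each column by applying a different filter to the same base measure. All row data, month labels, and names below are invented for illustration:

```python
# Hypothetical sample rows standing in for the report's fact data.
rows = [
    {"month": "jan", "product": "A", "sales": 100},
    {"month": "feb", "product": "B", "sales": 200},
    {"month": "jun", "product": "A", "sales": 150},
    {"month": "jul", "product": "C", "sales": 250},
]

def measure(rows, months):
    """One base measure, filtered two ways: total sales and the number
    of distinct products for the given set of months."""
    subset = [r for r in rows if r["month"] in months]
    return sum(r["sales"] for r in subset), len({r["product"] for r in subset})

sales_jf, products_jf = measure(rows, {"jan", "feb"})
sales_jj, products_jj = measure(rows, {"jun", "jul"})
print(sales_jf, sales_jj, products_jf, products_jj)  # 300 400 2 2
```

In SQL terms this corresponds to something like SUM(CASE WHEN month IN ('jan','feb') THEN sales END) per column, with an analogous expression for the distinct product count.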

thanks, i used a function i found on the forum for bringing the previous period's data for a table, and i used it for the pivot..
closed

Similar Messages

  • Audio capture filter reference count causing leak

    I have a perplexing problem.
    I have written my own audio and video capture filters, and they work fine. I have tested with FMLE, GraphEdit and various other tools which I wrote myself. When I click "Stop" on the stream or close FMLE, the reference count on the audio capture filter is not released to 0, thus causing a memory leak. The leak itself is not a big problem, but my filters have to do some important processing in their destructors, which don't get called because the filter is never released.
    To make the problem more perplexing, my video capture filter does not have this issue, only my audio one.
    I have written several more capture filters that literally do nothing other than register themselves, to explore this problem, and only the audio filters seem to leak. Has anyone else seen such a bizarre problem?
    I am using FMLE 3.1.0.8703

    Thank you for your quick reply!
    However, I can't manage to find the panic logs in library/logs/ or any of its directories. I've looked for these logs before, and my guess is that they don't get created if the panic occurs during boot.

  • Filter to counter a yellowish tint

    I have a video I am editing that has a yellowish tint from the lighting. Can I adjust this like I can with a Photoshop image, countering with a blue/violet filter to balance and lighten the whole video overall?
    I can't seem to find any way to search just the forums (and not the rest of Apple), so excuse me if this has been answered. Where is the forum search?
    Thanks in advance for any help on both of the above.

    Use the Color Corrector filter and move the center nub toward blue-cyan.

  • Library filter: photo count missing on OS X Mavericks

    When using the library filter on metadata, no numbers are shown anymore, not even after clicking "restore default presets".
    How can I resolve this?
    Thanks in advance
    Bob

    It's a known Mavericks bug which is fixed in the 5.3 RC from http://labs.adobe.com
    http://feedback.photoshop.com/photoshop_family/topics/lr_5_2_os_x_mavericks_missing_photo_count_in_library_module

  • Counting Exchange Transaction Logs

    Trying to come up with a .ps1 that counts log files generated by Exchange (multiple DB's on multiple servers) on an as needed basis between backups.  Input file is a .csv of the format:
    Server,DBName
    server1,exdb11
    server1,exdb12
    server2,exdb21
    server2,exdb22
    Here is what I thought should work, but it gives me "token 'in'" errors on the ForEach, or it does not accept the input to determine the network path to the file locations.
    $PathNames = (Import-csv C:\ExServerDBs.csv)
      ForEach-Object ($P in $PathNames)
          $ExDBLogPath = "\\" + $_.Server + "\d$\ExchangeLogs\" + $_.DBName
          (Get-ChildItem -Path $ExDBLogPath -filter "*.log").Count
    Any help insight or suggestions would be greatly appreciated.
    Thanks!
    KYPaul

    Why aren't you just using the Exchange DB statistics report?
    You are using the wrong "ForEach".
    See: help about_foreach
    Import-Csv C:\ExServerDBs.csv |
        ForEach-Object {
            $ExDBLogPath = "\\$($_.Server)\d$\ExchangeLogs\$($_.DBName)\*"
            (Get-ChildItem -Path $ExDBLogPath -Filter *.log).Count
        }
    ¯\_(ツ)_/¯

  • Copy, count files, test path, process indicator and System.IO.FileInfo

    I found this script that I am trying to re-write.
    As it is, it creates subfolders in the target folder, which I found out how to stop by deleting the "\" backslash sign in line 9.
    But what I also want is that subfolders, if any exist, also get copied from the $source to the $target folder; as of now this doesn't happen. The reason I chose to try to re-write the script is basically that I can read what it does, and I like all the flashy
    things like the counting, and that it shows the percentage of the progress bar AND the progress bar itself :).
    I just don't know how to re-write it properly. By the way, nothing should be renamed in the target folder; everything from the source folder should arrive "as is".
    $SourceFolder = "C:\Color1\TRID"
    $targetFolder = "C:\Color2\TRID"
    $numFiles = (Get-ChildItem -Path $SourceFolder -Filter *.*).Count
    $i = 0
    Clear-Host
    Write-Host 'This script will copy' $numFiles 'files from' $SourceFolder 'to' $targetFolder
    Read-Host -Prompt 'Press enter to start copying the files'
    Get-ChildItem -Path $SourceFolder -Filter *.* | % {
        [System.IO.FileInfo]$destination = (Join-Path -Path $targetFolder -ChildPath $_.Name.Replace("_", "\"))
        if (!(Test-Path -Path $destination.Directory)) {
            New-Item -Path $destination.Directory.FullName -ItemType Directory
        }
        [int]$percent = $i / $numFiles * 100
        Copy-Item -Path $_.FullName -Destination $destination.FullName
        Write-Progress -Activity "Copying ... ($percent %)" -Status $_ -PercentComplete $percent -Verbose
        $i++
    }
    Write-Host 'Total number of files read from directory' $SourceFolder 'is' $numFiles
    Write-Host 'Total number of files that were copied to' $targetFolder 'is' $i
    Read-Host -Prompt 'Press enter to complete...'
    Clear-Host

    @Jaap
    Yes I want to overwrite existing files, since backup is taken care of by another script.
    Now I encounter this error when trying to use your write-progress example:
    Get-ChildItem : A parameter cannot be found that matches parameter name 'Files'.
    At line:3 char:51
    + $Files = Get-ChildItem -LiteralPath $SourceFolder -Files
    +                                                  
    ~~~~~~
        + CategoryInfo          : InvalidArgument: (:) [Get-ChildItem], ParameterBindingException
        + FullyQualifiedErrorId : NamedParameterNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand
    cmdlet ForEach-Object at command pipeline position 1
    Supply values for the following parameters:
    Process[0]:
    Here is the script as I thought it should look:
    $SourceFolder = "C:\Color1\TRID"
    $targetFolder = "C:\Color2"
    $Files = Get-ChildItem -LiteralPath $SourceFolder -Files
    $NumberofFiles = $Files.Count
    $Files | ForEach-Object -Begin {
    $FilesCopied = 0
    -progress{
    Write-Progress -Activity "Copying Files..." -PercentComplete [int](($FilesCopied/$NumberofFiles)*100) -CurrentOperation "$FilesCopied files copied out of total of $NumberofFiles files" -Status "Please wait."
    "$((Get-ChildItem -Recurse -File -LiteralPath $SourceFolder).Count) files will be copied to $targetfolder"
    Read-Host -Prompt 'Press Enter to Start Copying...'
    Copy-Item $SourceFolder -Recurse -Destination $targetFolder -Force -verbose
    $FilesCopied++
    # What is wrong now?
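The thread ends unanswered. As a tool-agnostic illustration of the pattern being attempted (count the files first, then copy recursively while reporting a percentage), here is a minimal Python sketch; the function name and paths are hypothetical, not taken from the original script:

```python
import shutil
from pathlib import Path

def copy_with_progress(source: str, target: str) -> int:
    """Recursively copy all files from source to target, keeping the
    subfolder layout and printing a percentage after each file.
    Returns the number of files copied."""
    src, dst = Path(source), Path(target)
    files = [p for p in src.rglob("*") if p.is_file()]
    total = len(files)
    for i, f in enumerate(files, start=1):
        dest = dst / f.relative_to(src)              # preserve subfolders
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, dest)                        # overwrites existing files
        print(f"Copying... {i * 100 // total}% ({i}/{total})")
    return total
```

The same shape maps back to PowerShell: enumerate once with Get-ChildItem -Recurse -File, keep the count, and drive Write-Progress from a counter inside a single ForEach-Object process block.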

  • CBO bug? Lack of SORT UNIQUE.

    Hi all,
    Let's consider following case:
    create table tmp as select rownum id, 0 sign from dual connect by level <= 100;
    create index tmp_i on tmp(id,sign);
    create table t as
    select mod(rownum,2) id, mod(rownum,3) val
    from dual
    connect by level <= 100000;
    begin
       dbms_stats.gather_table_stats (
          user,
          'T',
          estimate_percent   => null,
          method_opt         => 'FOR ALL COLUMNS SIZE SKEWONLY',
          cascade            => true
       );
    end;
    begin
       dbms_stats.gather_table_stats (
          user,
          'TMP',
          estimate_percent   => null,
          method_opt         => 'FOR ALL COLUMNS SIZE SKEWONLY',
          cascade            => true
       );
    end;
    /
    As you can see, it scans TMP_I 50,000 times for the statement with max (irrespective of distinct in the subquery).
    Is there any way to enforce CBO to make SORT UNIQUE for max as well as for count so that it scans TMP_I only 3 times?
    SQL> select --+ leading(t) use_nl(t tmp)
      2  max(id)
      3  from tmp tmp
      4  where tmp.sign = 0
      5  and tmp.id in (select val from t where id = 1);
       MAX(ID)
             2
    | Id  | Operation           | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   0 | SELECT STATEMENT    |       |      1 |        |      1 |00:00:00.14 |     159 |
    |   1 |  SORT AGGREGATE     |       |      1 |      1 |      1 |00:00:00.14 |     159 |
    |   2 |   NESTED LOOPS      |       |      1 |  49750 |  33333 |00:00:00.13 |     159 |
    |*  3 |    TABLE ACCESS FULL| T     |      1 |  50000 |  50000 |00:00:00.02 |     156 |
    |*  4 |    INDEX RANGE SCAN | TMP_I |  50000 |      1 |  33333 |00:00:00.07 |       3 |
    Predicate Information (identified by operation id):
       3 - filter("ID"=1)
       4 - access("TMP"."ID"="VAL" AND "TMP"."SIGN"=0)
    SQL> select --+ leading(t) use_nl(t tmp)
      2  max(id)
      3  from tmp tmp
      4  where tmp.sign = 0
      5  and tmp.id in (select distinct val from t where id = 1);
       MAX(ID)
             2
    | Id  | Operation           | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   0 | SELECT STATEMENT    |       |      1 |        |      1 |00:00:00.14 |     159 |
    |   1 |  SORT AGGREGATE     |       |      1 |      1 |      1 |00:00:00.14 |     159 |
    |   2 |   NESTED LOOPS      |       |      1 |  49750 |  33333 |00:00:00.13 |     159 |
    |*  3 |    TABLE ACCESS FULL| T     |      1 |  50000 |  50000 |00:00:00.01 |     156 |
    |*  4 |    INDEX RANGE SCAN | TMP_I |  50000 |      1 |  33333 |00:00:00.07 |       3 |
    Predicate Information (identified by operation id):
       3 - filter("ID"=1)
       4 - access("TMP"."ID"="VAL" AND "TMP"."SIGN"=0)
    SQL> select --+ leading(t) use_nl(t tmp)
      2  count(id)
      3  from tmp tmp
      4  where tmp.sign = 0
      5  and tmp.id in (select val from t where id = 1);
    COUNT(ID)
             2
    | Id  | Operation            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    |   0 | SELECT STATEMENT     |       |      1 |        |      1 |00:00:00.03 |     158 |       |       |          |
    |   1 |  SORT AGGREGATE      |       |      1 |      1 |      1 |00:00:00.03 |     158 |       |       |          |
    |   2 |   NESTED LOOPS       |       |      1 |      3 |      2 |00:00:00.03 |     158 |       |       |          |
    |   3 |    SORT UNIQUE       |       |      1 |  50000 |      3 |00:00:00.03 |     156 |  2048 |  2048 | 2048  (0)|
    |*  4 |     TABLE ACCESS FULL| T     |      1 |  50000 |  50000 |00:00:00.01 |     156 |       |       |          |
    |*  5 |    INDEX RANGE SCAN  | TMP_I |      3 |      1 |      2 |00:00:00.01 |       2 |       |       |          |
    Predicate Information (identified by operation id):
       4 - filter("ID"=1)
       5 - access("TMP"."ID"="VAL" AND "TMP"."SIGN"=0)
    I can't figure out why SORT UNIQUE is absent for the statement with max.
    PS. 11gR2

    Thanks for reply.
    user503699 wrote:
    I don't think it is a good idea to compare your last and first query timings as they are 2 different queries.
    Ok. I could compare a query with max(id), sign(count(*)+1) vs max(id), sign(1). They always produce the same results, so they can be considered the same.
    But I think that max(id), count(*) vs max(id) was enough to explain my point.
    user503699 wrote:
    If you are so sure of that you can write something like following :
    SQL> with data as (select /*+ materialize */ distinct val as val from t where id = 1)
    select max(id) from tmp tmp where tmp.sign = 0 and tmp.id in (select val from data);
    I thought about that. I'm reluctant to use undocumented hints such as materialize. So the following query has almost the best plan for my data:
    with data as (select distinct val as val from t where id = 1 and rownum > 0)
    select
    max(id)
    from tmp tmp
    where tmp.sign = 0
    and tmp.id in (select * from data);
    | Id  | Operation               | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                                                                    
    |   0 | SELECT STATEMENT        |       |     1 |     8 |    50   (8)| 00:00:01 |                                                                    
    |   1 |  SORT AGGREGATE         |       |     1 |     8 |            |          |                                                                    
    |   2 |   NESTED LOOPS          |       |     3 |    24 |    50   (8)| 00:00:01 |                                                                    
    |   3 |    VIEW                 |       |     3 |     9 |    50   (8)| 00:00:01 |                                                                    
    |   4 |     HASH UNIQUE         |       |     3 |    18 |    50   (8)| 00:00:01 |                                                                    
    |   5 |      COUNT              |       |       |       |            |          |                                                                    
    |*  6 |       FILTER            |       |       |       |            |          |                                                                    
    |*  7 |        TABLE ACCESS FULL| T     | 50000 |   292K|    47   (3)| 00:00:01 |                                                                    
    |*  8 |    INDEX RANGE SCAN     | TMP_I |     1 |     5 |     0   (0)| 00:00:01 |                                                                    
    Predicate Information (identified by operation id):                                                                                                  
       6 - filter(ROWNUM>0)                                                                                                                              
       7 - filter("ID"=1)                                                                                                                                
       8 - access("TMP"."ID"="DATA"."VAL" AND "TMP"."SIGN"=0)
    And changing two hidden parameters may lead to the same plan, as I expect:
    alter session set "_gby_hash_aggregation_enabled" = false;
    alter session set "_simple_view_merging" = false;
    with data as (select distinct val as val from t where id = 1)
    select
    max(id)
    from tmp tmp
    where tmp.sign = 0
    and tmp.id in (select * from data);
    | Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                                                                      
    |   0 | SELECT STATEMENT      |       |     1 |    18 |    50   (8)| 00:00:01 |                                                                      
    |   1 |  SORT AGGREGATE       |       |     1 |    18 |            |          |                                                                      
    |   2 |   NESTED LOOPS        |       |     3 |    54 |    50   (8)| 00:00:01 |                                                                      
    |   3 |    VIEW               |       |     3 |    39 |    50   (8)| 00:00:01 |                                                                      
    |   4 |     SORT UNIQUE       |       |     3 |    18 |    50   (8)| 00:00:01 |                                                                      
    |*  5 |      TABLE ACCESS FULL| T     | 50000 |   292K|    47   (3)| 00:00:01 |                                                                      
    |*  6 |    INDEX RANGE SCAN   | TMP_I |     1 |     5 |     0   (0)| 00:00:01 |                                                                      
    Predicate Information (identified by operation id):                                                                                                  
       5 - filter("ID"=1)                                                                                                                                
       6 - access("TMP"."ID"="DATA"."VAL" AND "TMP"."SIGN"=0)
    But here I've got two additional questions:
    1. no_use_hash_aggregation can be used instead of "alter session set "_gby_hash_aggregation_enabled" = false;"
    What hint can be used instead of "alter session set "_simple_view_merging" = false;"?
    2. Is there any way to enforce CBO to use for this one
    select
    max(id)
    from tmp tmp
    where tmp.sign = 0
    and tmp.id in (select distinct val as val from t where id = 1 and rownum > 0);
    | Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                                                                
    |   0 | SELECT STATEMENT            |       |     1 |     5 |     3   (0)| 00:00:01 |                                                                
    |   1 |  SORT AGGREGATE             |       |     1 |     5 |            |          |                                                                
    |   2 |   FIRST ROW                 |       |     1 |     5 |     1   (0)| 00:00:01 |                                                                
    |*  3 |    INDEX FULL SCAN (MIN/MAX)| TMP_I |     1 |     5 |     1   (0)| 00:00:01 |                                                                
    |*  4 |     FILTER                  |       |       |       |            |          |                                                                
    |   5 |      COUNT                  |       |       |       |            |          |                                                                
    |*  6 |       FILTER                |       |       |       |            |          |                                                                
    |*  7 |        TABLE ACCESS FULL    | T     |     2 |    12 |     2   (0)| 00:00:01 |                                                                
    -------------------------------------------------------------------------------------
    the same plan as for this
    with data as (select distinct val as val from t where id = 1 and rownum > 0)
    select
    max(id)
    from tmp tmp
    where tmp.sign = 0
    and tmp.id in (select * from data);
    | Id  | Operation               | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                                                                    
    |   0 | SELECT STATEMENT        |       |     1 |     8 |    50   (8)| 00:00:01 |                                                                    
    |   1 |  SORT AGGREGATE         |       |     1 |     8 |            |          |                                                                    
    |   2 |   NESTED LOOPS          |       |     3 |    24 |    50   (8)| 00:00:01 |                                                                    
    |   3 |    VIEW                 |       |     3 |     9 |    50   (8)| 00:00:01 |                                                                    
    |   4 |     HASH UNIQUE         |       |     3 |    18 |    50   (8)| 00:00:01 |                                                                    
    |   5 |      COUNT              |       |       |       |            |          |                                                                    
    |*  6 |       FILTER            |       |       |       |            |          |                                                                    
    |*  7 |        TABLE ACCESS FULL| T     | 50000 |   292K|    47   (3)| 00:00:01 |                                                                    
    |*  8 |    INDEX RANGE SCAN     | TMP_I |     1 |     5 |     0   (0)| 00:00:01 |                                                                    
    I don't have anything against the subquery factoring clause; this is just out of personal interest.
    (I have read the topic "Materialize a Subquery without using "with" clause".)
    user503699 wrote:
    If you are not going to change other things (like stats collection method, table/index structures etc.) which allow optimizer to choose better plan and you know your data better, you may need to be very specific with the hints and also may have to use additional hints in order to influence optimizer decisions.
    One way to do that would be to get the base details from Oracle as follows (and tweak them):
    I didn't find the keyword "ADVANCED" in the specification for the DISPLAY_CURSOR function in the documentation. Nice trick.
    But anyway outline data makes sense only in case when query already has desirable execution plan.

  • Help needed in inserting data using a query

    I need to have some data as a result of the following query:
    select sched_num,load_id,ord_id,split_id from ord_load_seq
    where (ord_id,split_id,sched_num) in (select ord_id,split_id,sched_num from ord_load_seq where seq_num = '2'
    group by ord_id,split_id,sched_num having count(1) >1)
    order by ord_id,split_id,sched_num;
    But currently it returns no rows. The problem is in the having count(1) > 1 clause.
    When I make it = 1, it returns rows, but no rows with > 1. I even tried inserting some rows to get a result > 1,
    but the query as a whole still returns no rows. Please help.

    ohhh... let's start our lesson, children:
    here is code to consider:
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.2.0
    SQL>
    SQL> --Q1
    SQL> with tbl as
      2  (select 1 fld1, 1 fld2, 1 fld3 from dual
      3  union all
      4  select 1 fld1, 1 fld2, 2 fld3 from dual
      5  union all
      6  select 1 fld1, 1 fld2, 3 fld3 from dual
      7  union all
      8  select 1 fld1, 2 fld2, 3 fld3 from dual
      9  union all
    10  select 1 fld1, 3 fld2, 3 fld3 from dual)
    11  select  fld1, fld2, fld3 from tbl
    12  order by fld1, fld2, fld3
    13  /
          FLD1       FLD2       FLD3
             1          1          1
             1          1          2
             1          1          3
             1          2          3
             1          3          3
    SQL> --Q2
    SQL> with tbl as
      2  (select 1 fld1, 1 fld2, 1 fld3 from dual
      3  union all
      4  select 1 fld1, 1 fld2, 2 fld3 from dual
      5  union all
      6  select 1 fld1, 1 fld2, 3 fld3 from dual
      7  union all
      8  select 1 fld1, 2 fld2, 3 fld3 from dual
      9  union all
    10  select 1 fld1, 3 fld2, 3 fld3 from dual)
    11  select  fld1, fld2, fld3, count(1) from tbl
    12  group by fld1,fld2,fld3
    13  order by fld1, fld2, fld3
    14  /
          FLD1       FLD2       FLD3   COUNT(1)
             1          1          1          1
             1          1          2          1
             1          1          3          1
             1          2          3          1
             1          3          3          1
    SQL> --Q3
    SQL> with tbl as
      2  (select 1 fld1, 1 fld2, 1 fld3 from dual
      3  union all
      4  select 1 fld1, 1 fld2, 2 fld3 from dual
      5  union all
      6  select 1 fld1, 1 fld2, 3 fld3 from dual
      7  union all
      8  select 1 fld1, 2 fld2, 3 fld3 from dual
      9  union all
    10  select 1 fld1, 3 fld2, 3 fld3 from dual)
    11  select  fld1, fld2, fld3 from tbl
    12  group by fld1,fld2,fld3
    13  having count(1) > 1
    14  order by fld1, fld2, fld3
    15  /
          FLD1       FLD2       FLD3
    SQL> --Q4
    SQL> with tbl as
      2  (select 1 fld1, 1 fld2, 1 fld3 from dual
      3  union all
      4  select 1 fld1, 1 fld2, 2 fld3 from dual
      5  union all
      6  select 1 fld1, 1 fld2, 2 fld3 from dual
      7  union all
      8  select 1 fld1, 1 fld2, 3 fld3 from dual
      9  union all
    10  select 1 fld1, 2 fld2, 3 fld3 from dual
    11  union all
    12  select 1 fld1, 3 fld2, 3 fld3 from dual)
    13  select  fld1, fld2, fld3 from tbl
    14  order by fld1, fld2, fld3
    15  /
          FLD1       FLD2       FLD3
             1          1          1
             1          1          2
             1          1          2
             1          1          3
             1          2          3
             1          3          3
    6 rows selected
    SQL> --Q5
    SQL> with tbl as
      2  (select 1 fld1, 1 fld2, 1 fld3 from dual
      3  union all
      4  select 1 fld1, 1 fld2, 2 fld3 from dual
      5  union all
      6  select 1 fld1, 1 fld2, 2 fld3 from dual -- inserted duplicate row
      7  union all
      8  select 1 fld1, 1 fld2, 3 fld3 from dual
      9  union all
    10  select 1 fld1, 2 fld2, 3 fld3 from dual
    11  union all
    12  select 1 fld1, 3 fld2, 3 fld3 from dual)
    13  select  fld1, fld2, fld3, count(1)   from tbl
    14  group by fld1,fld2,fld3
    15  order by fld1, fld2, fld3
    16  /
          FLD1       FLD2       FLD3   COUNT(1)
             1          1          1          1
             1          1          2          2
             1          1          3          1
             1          2          3          1
             1          3          3          1
    SQL> --Q6
    SQL> with tbl as
      2  (select 1 fld1, 1 fld2, 1 fld3 from dual
      3  union all
      4  select 1 fld1, 1 fld2, 2 fld3 from dual
      5  union all
      6  select 1 fld1, 1 fld2, 2 fld3 from dual -- inserted duplicate row
      7  union all
      8  select 1 fld1, 1 fld2, 3 fld3 from dual
      9  union all
    10  select 1 fld1, 2 fld2, 3 fld3 from dual
    11  union all
    12  select 1 fld1, 3 fld2, 3 fld3 from dual)
    13  select  fld1, fld2, fld3 from tbl
    14  group by fld1,fld2,fld3
    15  having count(1) > 1
    16  order by fld1, fld2, fld3
    17  /
          FLD1       FLD2       FLD3
             1          1          2
    SQL>
    Q1. As you can see, we have a bunch of data where each row is a unique combination of the column values.
    Q2. Let's try to group it by the fld1, fld2 and fld3 columns. We don't expect any miracle, and we get the same data as in Q1. Why? Because each row is a unique combination (group) in the scope of fld1, fld2, fld3 - count(1) shows us exactly that.
    Q3. Q2 explains why we got no rows with the filter (having count(1) > 1): the result of the clause is always false.
    Q4. Let's put in some duplication.
    Q5. Now we get a new result: count shows us the group with more than one row.
    Q6. Shows us the exact group which we found in Q5.
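The Q4-Q6 steps above, grouping the rows and keeping only the groups that occur more than once, can be sketched outside SQL with a plain counter (the tuples below mirror the Q4 data):

```python
from collections import Counter

# Rows mirroring the Q4/Q6 data: (fld1, fld2, fld3), with one duplicate.
rows = [
    (1, 1, 1),
    (1, 1, 2),
    (1, 1, 2),  # inserted duplicate row
    (1, 1, 3),
    (1, 2, 3),
    (1, 3, 3),
]

# Equivalent of GROUP BY fld1, fld2, fld3 HAVING COUNT(1) > 1
groups = Counter(rows)
duplicates = sorted(g for g, n in groups.items() if n > 1)
print(duplicates)  # [(1, 1, 2)]
```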

  • How to apply business rule over report level

    Hi All,
    I have a column named AAA over which I have to apply a business rule/filter like
    COUNT("X Fact"."ID") WHEN "X Fact"."Time to Issue Policy" <= 2 days, at report level. How can I apply that? Any idea?
    Regards,
    Sonal

    Hi,
    Thanks for the response.
    Yes, my AAA column is a measure column coming from the Order Fact table (i.e. AAA = "Order Fact"."Number of Order"), and below is the business rule which I have to apply there:
    COUNT("Order Fact"."Order ID") WHEN "Order Fact"."Time to Order" <= 3 days. How can I apply this logic at report level? Please help.
    Regards,
    Sonal

  • Please reply for the query tuning

    Hi, I am a beginner Oracle DBA, and I want to check whether I have learned a little bit about query tuning in Oracle.
    I want to know: if I have the following query and its plan, how can the query be tuned?
    QUERY:
    SELECT z.emplid, h.first_name || ' ' || h.last_name, z.grade, z.DEPTID, z.LOCATION
    FROM sysadm.ps_lnt_latestbu_vw z, sysadm.ps_personal_data h
    WHERE z.empl_status = 'A'                                                 --index access
    AND z.emplid = h.emplid                                                   --join
    AND z.emplid NOT IN (select g.emplid from sysadm.ps_lnt_asn_skl_tbl g)    --join
    AND z.Business_unit =
        (select l.lnt_subunit from sysadm.ps_position_data l
         where l.position_nbr in
             (select b.position_nbr from sysadm.ps_job b, sysadm.psoprdefn y
              where b.effdt = (select max(g.effdt) from sysadm.ps_job g
                               where g.emplid = b.emplid                      --join costs high
                               and g.effdt <= SYSDATE)                        --filter/index
              and b.effseq = (select max(h.effseq) from sysadm.ps_job h
                              where h.emplid = b.emplid                       --join costs high
                              and h.effdt = b.effdt)                          --join costs high
              and b.empl_rcd = 0                                              --filter/index access
              and y.EMPLID = b.EMPLID                                         --join
              and y.OPRID = '1112'))                                          --filter/index access
    ORDER BY z.emplid
    /
    And its plan is:
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=1 Bytes=64)
       1    0   SORT (ORDER BY) (Cost=6 Card=1 Bytes=64)
       2    1     NESTED LOOPS (ANTI) (Cost=4 Card=1 Bytes=64)
       3    2       NESTED LOOPS (Cost=3 Card=1 Bytes=56)
       4    3         VIEW OF 'PS_LNT_LATESTBU_VW' (Cost=2 Card=1 Bytes=31)
       5    4           UNION-ALL
       6    5             CONCATENATION
       7    6               TABLE ACCESS (BY INDEX ROWID) OF 'PS_POSITION_DATA' (Cost=5 Card=90 Bytes=1890)
       8    7                 NESTED LOOPS
       9    8                   NESTED LOOPS (Cost=275 Card=1 Bytes=90)
      10    9                     NESTED LOOPS (Cost=275 Card=1 Bytes=82)
      11   10                       TABLE ACCESS (BY INDEX ROWID) OF 'PS_JOB' (Cost=3 Card=1 Bytes=50)
      12   11                         INDEX (RANGE SCAN) OF 'PS2JOB' (NON-UNIQUE) (Cost=2 Card=1)
      13   12                           SORT (AGGREGATE)
      14   13                             FIRST ROW (Cost=3 Card=1 Bytes=19)
      15   14                               INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=207700)
      16   12                           SORT (AGGREGATE)
      17   16                             FIRST ROW (Cost=3 Card=1 Bytes=22)
      18   17                               INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=207700)
      19   10                       INDEX (UNIQUE SCAN) OF 'PS_EMPLOYMENT'(UNIQUE)
      20    9                     INDEX (UNIQUE SCAN) OF 'PS_PERSONAL_DATA' (UNIQUE)
      21    8                   INDEX (RANGE SCAN) OF 'PS_POSITION_DATA' (UNIQUE) (Cost=5 Card=90)
      22    6               FILTER
      23   22                 NESTED LOOPS (Cost=275 Card=1 Bytes=90)
      24   23                   NESTED LOOPS (Cost=275 Card=1 Bytes=82)
      25   24                     NESTED LOOPS (Cost=275 Card=1 Bytes=71)
      26   25                       INDEX (FAST FULL SCAN) OF 'PS8POSITION_DATA' (NON-UNIQUE) (Cost=5 Card=90 Bytes=1890)
      27   25                       TABLE ACCESS (BY INDEX ROWID) OF 'PS_JOB' (Cost=3 Card=1 Bytes=50)
      28   27                         INDEX (RANGE SCAN) OF 'PS2JOB' (NON-UNIQUE) (Cost=2 Card=1)
      29   28                           SORT (AGGREGATE)
      30   29                             FIRST ROW (Cost=3 Card=1 Bytes=22)
      31   30                               INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=207700)
      32   28                           SORT (AGGREGATE)
      33   32                             FIRST ROW (Cost=3 Card=1 Bytes=19)
      34   33                               INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=207700)
      35   24                     INDEX (UNIQUE SCAN) OF 'PS_EMPLOYMENT' (UNIQUE)
       36   23                   INDEX (UNIQUE SCAN) OF 'PS_PERSONAL_DATA' (UNIQUE)
      37   22                 SORT (AGGREGATE)
      38   37                   FIRST ROW (Cost=2 Card=1 Bytes=17)
      39   38                     INDEX (RANGE SCAN (MIN/MAX)) OF 'PS_POSITION_DATA' (UNIQUE) (Cost=2 Card=9000)
      40    5             FILTER
      41   40               NESTED LOOPS (Cost=751 Card=1 Bytes=191)
      42   41                 NESTED LOOPS (OUTER) (Cost=750 Card=1 Bytes=167)
      43   42                   NESTED LOOPS (OUTER) (Cost=749 Card=1 Bytes=143)
      44   43                     NESTED LOOPS (Cost=748 Card=1 Bytes=134)
      45   44                       NESTED LOOPS (Cost=748 Card=1 Bytes=123)
      46   45                         NESTED LOOPS (Cost=748 Card=1 Bytes=119)
      47   46                           NESTED LOOPS (Cost=747 Card=1 Bytes=98)
      48   47                             NESTED LOOPS (Cost=744 Card=1 Bytes=62)
       49   48                               NESTED LOOPS (Cost=744 Card=1 Bytes=54)
       50   49                                 VIEW OF 'PS_LNTPRJOBSYSJRVW' (Cost=741 Card=1 Bytes=9)
      51   50                                   FILTER
      52   51                                     NESTED LOOPS (OUTER) (Cost=735 Card=1 Bytes=68)
       53   52                                       NESTED LOOPS (Cost=734 Card=1 Bytes=51)
      54   53                                         NESTED LOOPS (Cost=734 Card=1 Bytes=43)
      55   54                                           TABLE ACCESS (BY INDEX ROWID) OF 'PS_JOB' (Cost=734 Card=1 Bytes=32)
      56   55                                             INDEX (RANGE SCAN) OF 'PSCJOB' (NON-UNIQUE) (Cost=206 Card=1013)
      57   54                                           INDEX (UNIQUE SCAN) OF 'PS_EMPLOYMENT' (UNIQUE)
      58   53                                         INDEX (UNIQUE SCAN) OF 'PS_PERSONAL_DATA' (UNIQUE)
       59   52                                       INDEX (RANGE SCAN) OF 'PS_POSITION_DATA' (UNIQUE) (Cost=1 Card=1 Bytes=17)
      60   51                                     SORT (AGGREGATE)
      61   60                                       FIRST ROW (Cost=3 Card=1 Bytes=19)
      62   61                                         INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=207700)
      63   51                                     SORT (AGGREGATE)
      64   63                                       FIRST ROW (Cost=3 Card=1 Bytes=22)
      65   64                                         INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=207700)
      66   51                                     SORT (AGGREGATE)
      67   66                                       FIRST ROW (Cost=2 Card=1 Bytes=17)
      68   67                                         INDEX (RANGE SCAN (MIN/MAX)) OF 'PS_POSITION_DATA' (UNIQUE) (Cost=2 Card=9000)
      69   49                                 TABLE ACCESS (BY INDEX ROWID) OF 'PS_JOB' (Cost=3 Card=1 Bytes=45)
      70   69                                   INDEX (RANGE SCAN) OF 'PSAJOB' (NON-UNIQUE) (Cost=2 Card=1)
      71   70                                     SORT (AGGREGATE)
       72   71                                       INDEX (RANGE SCAN) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=1 Bytes=19)
      73   72                                         SORT (AGGREGATE)
       74   73                                           FIRST ROW (Cost=3 Card=8 Bytes=88)
      75   74                                             INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=25963)
      76   70                                     SORT (AGGREGATE)
      77   76                                       FIRST ROW (Cost=3 Card=8 Bytes=88)
      78   77                                         INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=25963)
      79   48                               INDEX (UNIQUE SCAN) OF 'PS_PERSONAL_DATA' (UNIQUE)
       80   47                             TABLE ACCESS (BY INDEX ROWID) OF 'PS_JOB' (Cost=3 Card=1 Bytes=36)
       81   80                               INDEX (RANGE SCAN) OF 'PSAJOB' (NON-UNIQUE) (Cost=2 Card=1)
      82   81                                 SORT (AGGREGATE)
      83   82                                   FIRST ROW (Cost=3 Card=1 Bytes=19)
      84   83                                     INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=207700)
      85   81                                 SORT (AGGREGATE)
      86   85                                   FIRST ROW (Cost=3 Card=1 Bytes=22)
      87   86                                     INDEX (RANGE SCAN (MIN/MAX)) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=207700)
      88   46                           INDEX (RANGE SCAN) OF 'PS8POSITION_DATA' (NON-UNIQUE) (Cost=1 Card=1 Bytes=21)
      89   45                         INDEX (UNIQUE SCAN) OF 'PS_BUS_UNIT_TBL_HR' (UNIQUE)
       90   44                       INDEX (UNIQUE SCAN) OF 'PS_EMPLOYMENT' (UNIQUE)
       91   43                     INDEX (RANGE SCAN) OF 'PS_POSITION_DATA' (UNIQUE) (Cost=1 Card=1 Bytes=9)
      92   42                   INDEX (FULL SCAN) OF 'PS0LOCATION_TBL' (NON-UNIQUE) (Cost=1 Card=1 Bytes=24)
      93   41                 INDEX (RANGE SCAN) OF 'PS0LOCATION_TBL' (NON-UNIQUE) (Cost=1 Card=1 Bytes=24)
      94   40               SORT (AGGREGATE)
      95   94                 FIRST ROW (Cost=2 Card=1 Bytes=17)
      96   95                   INDEX (RANGE SCAN (MIN/MAX)) OF 'PS_POSITION_DATA' (UNIQUE) (Cost=2 Card=9000)
      97    4           TABLE ACCESS (BY INDEX ROWID) OF 'PS_POSITION_DATA' (Cost=2 Card=1 Bytes=13)
      98   97             NESTED LOOPS (Cost=9 Card=1 Bytes=19)
      99   98               VIEW OF 'VW_NSO_1' (Cost=5 Card=1 Bytes=6)
    100   99                 SORT (UNIQUE)
    101  100                   NESTED LOOPS (Cost=5 Card=1 Bytes=44)
    102  101                     TABLE ACCESS (BY INDEX ROWID) OF 'PSOPRDEFN' (Cost=2 Card=1 Bytes=14)
     103  102                       INDEX (UNIQUE SCAN) OF 'PS_PSOPRDEFN' (UNIQUE) (Cost=1 Card=1)
    104  101                     TABLE ACCESS (BY INDEX ROWID) OF 'PS_JOB' (Cost=3 Card=1 Bytes=30)
    105  104                       INDEX (RANGE SCAN) OF 'PSAJOB' (NON-UNIQUE) (Cost=2 Card=1)
    106  105                         SORT (AGGREGATE)
    107  106                           INDEX (RANGE SCAN) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=8 Bytes=128)
    108  105                         SORT (AGGREGATE)
    109  108                           INDEX (RANGE SCAN) OF 'PSAJOB' (NON-UNIQUE) (Cost=3 Card=1 Bytes=19)
    110   98               INDEX (RANGE SCAN) OF 'PS_POSITION_DATA' (UNIQUE) (Cost=1 Card=1)
     111    3         TABLE ACCESS (BY INDEX ROWID) OF 'PS_PERSONAL_DATA' (Cost=1 Card=1 Bytes=25)
    112  111           INDEX (UNIQUE SCAN) OF 'PS_PERSONAL_DATA' (UNIQUE)
    113    2       INDEX (RANGE SCAN) OF 'PS_LNT_ASN_SKL_TBL' (UNIQUE) (Cost=1 Card=10076 Bytes=80608)
    Statistics
             70  recursive calls
              0  db block gets
        1186931  consistent gets
           5660  physical reads
             60  redo size
            462  bytes sent via SQL*Net to client
            373  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
               0  rows processed
     My thoughts on this are:
     1. The nested-loop joins have a high cost -- rewrite the inner sub-query.
     2. A sort is done at each join to evaluate the MAX function every time, so try a sort-merge hint.
     3. The alias h has been referenced twice as a table name.
     Please tell me what you would do if you were the Oracle DBA.
    Thanks in advance.
     Edited by: user2060331 on Mar 25, 2010 9:17 AM
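One common rewrite for this PeopleSoft-style pattern (a sketch, not the poster's actual query, which isn't shown in full) is to replace the correlated MAX(EFFDT)/MAX(EFFSEQ) subqueries -- visible in the plan as the repeated SORT (AGGREGATE) / FIRST ROW (MIN/MAX) probes on PSAJOB -- with a single analytic pass. Assuming the usual PS_JOB effective-dated keys:

```sql
-- Hypothetical sketch: pick the current effective-dated row from PS_JOB
-- with one windowed scan instead of correlated MAX(EFFDT)/MAX(EFFSEQ)
-- subqueries per join.
SELECT *
FROM (
    SELECT j.*,
           ROW_NUMBER() OVER (
               PARTITION BY j.EMPLID, j.EMPL_RCD
               ORDER BY j.EFFDT DESC, j.EFFSEQ DESC) AS rn
    FROM PS_JOB j
    WHERE j.EFFDT <= SYSDATE
)
WHERE rn = 1;
```

Each PS_JOB access in the plan above repeats the MIN/MAX index probes; folding them into one analytic pass can eliminate most of the repeated PSAJOB range scans. Whether this helps here depends on the full query, which would need to be posted.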

    No it's not. You should see indentations for each level of the explain plan. You've lost all of it. It should look like this (not your query):
    PLAN_TABLE_OUTPUT
    | Id  | Operation                   | Name                | Rows  | Bytes | Cost  | Inst   |IN-OUT|
    |   0 | SELECT STATEMENT            |                     | 16116 |  2911K|   712 |        |      |
    |   1 |  FILTER                     |                     |       |       |       |        |      |
    |   2 |   CONNECT BY WITH FILTERING |                     |       |       |       |        |      |
    |   3 |    FILTER                   |                     |       |       |       |        |      |
    |   4 |     COUNT                   |                     |       |       |       |        |      |
    |   5 |      HASH JOIN RIGHT OUTER  |                     | 16116 |  2911K|   712 |        |      |
    |   6 |       REMOTE                | LSW_USR_GRP_XREF    |   518 | 13986 |     4 | MYPROJ~ | R->S |
    |   7 |       HASH JOIN RIGHT OUTER |                     | 16116 |  2486K|   707 |        |      |
    |   8 |        REMOTE               | LSW_USR_XREF        |   222 |  2886 |     4 | MYPROJ~ | R->S |
    |   9 |        HASH JOIN RIGHT OUTER|                     | 16116 |  2282K|   702 |        |      |
    |  10 |         TABLE ACCESS FULL   | MYPROJ_PROCESS_MAP   |   176 |  4752 |     4 |        |      |
    |  11 |         HASH JOIN OUTER     |                     | 16116 |  1857K|   698 |        |      |
    |  12 |          TABLE ACCESS FULL  | MYPROJ_MPPA | 16116 |  1243K|    71 |        |      |
    |  13 |          REMOTE             | LSW_TASK            | 80730 |  3074K|   625 | MYPROJ~ | R->S |
    |  14 |    HASH JOIN                |                     |       |       |       |        |      |
    |  15 |     CONNECT BY PUMP         |                     |       |       |       |        |      |
    |  16 |     COUNT                   |                     |       |       |       |        |      |
    |  17 |      HASH JOIN RIGHT OUTER  |                     | 16116 |  2911K|   712 |        |      |
    |  18 |       REMOTE                | LSW_USR_GRP_XREF    |   518 | 13986 |     4 | MYPROJ~ | R->S |
    |  19 |       HASH JOIN RIGHT OUTER |                     | 16116 |  2486K|   707 |        |      |
    |  20 |        REMOTE               | LSW_USR_XREF        |   222 |  2886 |     4 | MYPROJ~ | R->S |
    |  21 |        HASH JOIN RIGHT OUTER|                     | 16116 |  2282K|   702 |        |      |
    |  22 |         TABLE ACCESS FULL   | MYPROJ_PROCESS_MAP   |   176 |  4752 |     4 |        |      |
    |  23 |         HASH JOIN OUTER     |                     | 16116 |  1857K|   698 |        |      |
    |  24 |          TABLE ACCESS FULL  | MYPROJ_MPPA | 16116 |  1243K|    71 |        |      |
    |  25 |          REMOTE             | LSW_TASK            | 80730 |  3074K|   625 | MYPROJ~ | R->S |
    ---------------------------------------------------------------------------------------------------
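To get a plan with the indentation preserved, one option is to generate it with DBMS_XPLAN instead of pasting autotrace output (the SELECT below is a placeholder; substitute the actual statement):

```sql
-- Generate and display a properly indented execution plan.
EXPLAIN PLAN FOR
SELECT * FROM dual;   -- substitute the actual query here

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Then wrap the output in the forum's code tags so the whitespace survives posting.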

  • TooManyObjectsException: JBO-25013 but with autosubmit

    Hi all!
    I have this problem with my dialog ADF form.
     I have two combo boxes, one for districts and one for counties. With autosubmit enabled on create, it gives me this error when I select a district to filter my counties table.
     I don't want a row to be inserted on combo selection. How can I avoid this?
    thanks
    André

     Hi, thanks for the answer.
     I executed the query in SQL Navigator 5 and it returns 1 row.
     The problem occurs when I run the project and go to the same page more than once. The first time it works fine; the second time it shows a popup with TooManyObjectsException, but I can still see the iterator.
     So, after the first time it works, but it shows me the error.

  • Develop presets: "Unavailable Preset"

    When I enable the library filter Develop Presets I see three instances of "Unavailable Preset" with counts of 1, 23, and 323. In detail these are
     (1) no presets at all, just a Clarity adjustment. Clicking on the import step, then back on the Clarity step, applies correctly. Then, when I refer back to the Develop Presets filter, the count has changed from one to zero.
     (23) these images all reference a long-used and active preset that is just Clarity + Vibrance. Clicking on the image history shows that the final image is correct, and the before/after preset results are correct. But after cross-checking several images this way the count has now been reduced from 23 to xxxxxxxxxx
     (323) these images all reference a preset created today that I accidentally mis-applied to those images. (I'll refer to the mis-applied preset as GOOFY -- more on this below.) Sampling some of these 323 images I see that the GOOFY preset is correctly applied. When I click on the previous history step, then GOOFY, the "Unavailable Preset" count is decremented and a new "Custom" preset count is incremented. I had never seen a "Custom" preset before today.
    Now, the story of the GOOFY preset. When I could find nothing in the documentation, forum or web, I decided to try disabling the unwanted preset by these steps at the Mac OS Finder level:
    - duplicate GOOFY to GOOFY-2
    - delete GOOFY file (I hoped LR3 would remove the references to the deleted GOOFY)
    - in grid view mass-apply GOOFY-2 to the correct images
    - quit, restart LR3
     I've forgotten exactly what happened next, but for sure LR3 did not remove the non-existent preset links. For some reason I decided to rename the duplicated GOOFY-2 back to GOOFY. Now the images that correctly referenced GOOFY have two references to presets named GOOFY: the oldest one is a no-op (does nothing), the latest one works as expected.
    Any advice would be appreciated:
    ** how to mass-delete an unwanted preset reference from a selection of images?
    ** how to clean up the above "Unavailable Preset" errors?

    When a preset is created, it stores the current level of the chosen parameters at the time of creation; in the example below, the preset which will be named CurrentPreset1 will store the current levels applied to the selected photo at the time the preset was created, and in this case will only store values corresponding to Color, Lens Corrections, and Process version.
    To update the values recorded in a preset, the various levels must be changed on the currently selected photo and then the "Update with current settings" must be selected on the preset to update. The "Update with current settings" therefore updates the preset itself with new values corresponding to the values set in the current photo and has nothing to do with how it was applied to photos in the past. A preset is not related in any way to photos on which it is applied; a preset simply applies the set values recorded in the preset on the selected photo as if these same values would be manually entered for the selected photo.

  • Filter by the sum of a fact/metric that is on the report

    Hi
     I have a fact on the report that shows the count of appointments per day. How do I filter this report so that it shows only the days that have more than 7 appointments, please?
     Also, I couldn't find an answer to this in the forums. Are counts simply called facts in OBIEE? In a different tool I would call them metrics.
    Regards,
    Hilary

    hi hilary,
     Duplicate the fact column in the report and aggregate it as count(fact_column); then apply a filter saying count(fact_column) > 7. The report will then return data only for the rows that satisfy this condition.
    Cheers,
    KK
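For comparison, the equivalent logic in plain SQL is a GROUP BY with a HAVING clause (table and column names below are hypothetical):

```sql
-- Keep only the days that have more than 7 appointments.
SELECT appt_date,
       COUNT(*) AS appt_count
FROM appointments
GROUP BY appt_date
HAVING COUNT(*) > 7;
```

In OBIEE terms, the filter on count(fact_column) plays the role of this HAVING clause.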

  • Is Oracle XML DB...right solution?

    Background:
     I'm working on a system which gets content from various providers; our engine processes the content, extracts the key information, and stores it in relational tables. End users can view this information using our web application, which supports multiple levels of filtering (in other words, n levels of filters, described below). In addition, the application also lets users combine one or more filters with AND, OR, and NOT.
    Problem:
     With a relational schema we were not able to support an arbitrary number (n) of filters.
     Here's why it's difficult with a relational schema:
    - get base data set
    - apply filter 1 (value 1, 2, 3)
    - filter 1 results
     - apply filter 2 (value 90, 91, 99) on filter 1 results (the tricky part is applying filter 2's constraints on filter 1's results)
    - filter2 results
    - apply filter n on filter n-1 results and so on.
     This is not the exact query; it's a sample version for demonstration (for simplicity's sake I've used only one table; in reality each EXISTS query will use more than one table):
    select *
    from content f1
    where filter in (1, 2, 3)
    and exists (select null from content f2
              where f2.contentid = f1.contentid
                   and f2.filter not in (90, 91, 99))
    and so on...
     As the number of filters increases, it's difficult to generate efficient queries.
     As an interim solution, we have decided to store the entire data set in memory and operate on it. It works very well (really fast too). But we are limited by memory size; the above solution is not scalable for large volumes of data.
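To make the pattern concrete, here is how the chain grows with each added filter, using the same hypothetical content table as the sample above; every additional AND/OR/NOT filter becomes another correlated EXISTS or NOT EXISTS block:

```sql
-- Sketch: one correlated subquery per filter level on a hypothetical table.
SELECT f1.contentid
FROM content f1
WHERE f1.filter IN (1, 2, 3)                    -- filter 1
  AND EXISTS (SELECT NULL                       -- filter 2 (AND)
              FROM content f2
              WHERE f2.contentid = f1.contentid
                AND f2.filter IN (90, 91))
  AND NOT EXISTS (SELECT NULL                   -- filter 3 (NOT)
                  FROM content f3
                  WHERE f3.contentid = f1.contentid
                    AND f3.filter IN (99));
```

Each level adds another correlated subquery against the same table, which is why the generated SQL quickly becomes hard to write and hard for the optimizer to execute efficiently.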
    Solution (using Oracle XML db):
     Then I stumbled onto the XML DB option and did some preliminary investigation. It seems like a nice solution to the above problem.
    This is what I did so far using XML db:
    Step 1:
    Created and registered schema.
    declare
    xmlblurb varchar2(4000) := '<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb"
    version="1.0" xdb:storeVarrayAsTable="true">
    <xs:element name="docattribs" type="DocAttribsType" xdb:defaultTable="O_DOCATTRIBUTES1"/>
    <xs:complexType name="DocAttribsType">
    <xs:sequence>
    <xs:element name="brandgroups" type="BrandGroupsType"/>
    <xs:element name="messagegroups" type="MessageGroupsType"/>
    <xs:element name="sources" type="SourcesType"/>
    <xs:element name="contentgroups" type="ContentGroupsType"/>
    <xs:element name="authors" type="AuthorsType"/>
    <xs:element name="themes" type="ThemesType"/>
    <xs:element name="entities" type="EntitiesType"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="BrandGroupsType">
    <xs:sequence>
    <xs:element name="bg" type="IdMentionsType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="MessageGroupsType">
    <xs:sequence>
    <xs:element name="mg" type="MessageGroupType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="MessageGroupType">
    <xs:sequence>
    <xs:element name="bg" type="IdType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="id" type="xs:integer"/>
    </xs:complexType>
    <xs:complexType name="SourcesType">
    <xs:sequence>
    <xs:element name="source" type="IdType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="ContentGroupsType">
    <xs:sequence>
    <xs:element name="pcg" type="ParentContentGroupType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="ParentContentGroupType">
    <xs:sequence>
    <xs:element name="cg" type="IdType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="id" type="xs:integer"/>
    </xs:complexType>
    <xs:complexType name="AuthorsType">
    <xs:sequence>
    <xs:element name="author" type="AuthorType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="AuthorType">
    <xs:attribute name="name" type="xs:string"/>
    </xs:complexType>
    <xs:complexType name="ThemesType">
    <xs:sequence>
    <xs:element name="theme" type="IdType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="EntitiesType">
    <xs:sequence>
    <xs:element name="af" type="IdMentionsType" minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="a" type="IdMentionsType" minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="b" type="IdMentionsType" minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="c" type="IdMentionsType" minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="l" type="IdMentionsType" minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="m" type="IdMentionsType" minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="p" type="IdMentionsType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="IdMentionsType">
    <xs:attribute name="id" type="xs:integer"/>
    <xs:attribute name="mentions" type="xs:integer"/>
    </xs:complexType>
    <xs:complexType name="IdType">
    <xs:attribute name="id" type="xs:integer"/>
    </xs:complexType>     
    </xs:schema>';
    begin
         dbms_xmlschema.registerSchema('nt_t_docAttributes.xsd',
                                                 xmlblurb,
                                                 TRUE,
                                                 TRUE,
                                                 FALSE);
    end;
    Step 2:
     Created a table with an XMLType column (with the help of the article posted at this link: http://forums.oracle.com/forums/thread.jspa?threadID=244846&start=15&tstart=0)
    CREATE TABLE ot_docattributes (
    docid NUMBER,
    docdate DATE,
    statusid NUMBER)
     TABLESPACE data_10m_a;
    ALTER TABLE ot_docattributes
    add (docdata xmltype)
    xmltype column docdata store as object relational
    xmlschema "nt_t_docAttributes.xsd" element "docattribs"
    VARRAY DOCDATA."XMLDATA"."authors"."author" STORE AS table ot_authors
    (constraint pk_ot_authors primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."brandgroups"."bg" STORE AS table ot_brandgroups
    (constraint pk_ot_brandgroups primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."contentgroups"."pcg" STORE AS table ot_contentgroups
    (constraint pk_ot_contentgroups primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    varray "cg" store as table ot_contentgroup
    (constraint pk_ot_contentgroup primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."entities"."a" STORE AS table ot_analysts
    (constraint pk_ot_analysts primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."entities"."af" STORE AS table ot_analystfirms
    (constraint pk_ot_analystfirms primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."entities"."b" STORE AS table ot_brands
    (constraint pk_ot_brands primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."entities"."c" STORE AS table ot_companies
    (constraint pk_ot_companies primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."entities"."l" STORE AS table ot_locations
    (constraint pk_ot_locations primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."entities"."m" STORE AS table ot_messages
    (constraint pk_ot_messages primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."entities"."p" STORE AS table ot_people
    (constraint pk_ot_people primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."messagegroups"."mg" STORE AS table ot_messagegroups
    (constraint pk_ot_messagegroups primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    varray "bg" store as table ot_brandgroup
    (constraint pk_ot_brandgroup primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."sources"."source" STORE AS table ot_sources
    (constraint pk_ot_sources primary key (NESTED_TABLE_ID, ARRAY_INDEX))
    VARRAY DOCDATA."XMLDATA"."themes"."theme" STORE AS table ot_themes
     (constraint pk_ot_themes primary key (NESTED_TABLE_ID, ARRAY_INDEX));
    Step 3:
     Using a PL/SQL script I constructed and loaded the XML data into the above table. Here's the sample data:
    <docattribs>
    <brandgroups>
    <bg id="28" mentions="1"/>
    <bg id="34" mentions="1"/>
    </brandgroups>
    <messagegroups/>
    <sources>
    <source id="8243"/>
    </sources>
    <contentgroups>
    <pcg id="263">
    <cg id="270"/>
    </pcg>
    <pcg id="264">
    <cg id="275"/>
    </pcg>
    </contentgroups>
    <authors/>
    <themes/>
    <entities>
     <b id="28" mentions="1"/>
     <b id="34" mentions="1"/>
    <c id="4320" mentions="2"/>
    <c id="9662" mentions="1"/>
    <c id="36259" mentions="1"/>
    <c id="44573" mentions="1"/>
    <c id="69889" mentions="2"/>
    <c id="78583" mentions="1"/>
    <c id="93566" mentions="1"/>
    <c id="142667" mentions="1"/>
    <c id="142669" mentions="1"/>
    <c id="155740" mentions="1"/>
    <c id="221847" mentions="5"/>
    <l id="187667" mentions="1"/>
    <l id="222780" mentions="1"/>
     <p id="5973" mentions="1"/>
     <p id="47503" mentions="1"/>
     <p id="113753" mentions="3"/>
     <p id="114425" mentions="7"/>
     <p id="209501" mentions="2"/>
    </entities>
    </docattribs>
    Step 4:
     Here are typical queries we will use via our application. Using the following queries, the application will fetch the relevant docids and
     use them to fetch detailed information. See the samples below:
    -- 19 secs.
    -- 41185 rows
    -- only watchlist
    SELECT COUNT (docid)
    FROM ot_docattributes
    WHERE ( EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=1]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=2]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=5]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=7]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=8]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=9]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=12]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=13]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=14]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=15]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=19]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=30]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=34]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=43]') = 1
     OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=48]') = 1
     );
    -- 8 secs.
    -- 7605 rows
    -- watchlist + brand filter (Viagra OR Zoloft)
    SELECT COUNT (docid)
    FROM ot_docattributes
    WHERE ( EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=1]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=2]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=5]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=7]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=8]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=9]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=12]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=13]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=14]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=15]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=19]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=30]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=34]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=43]') = 1
     OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=48]') = 1
     )
     AND ( EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=50]') = 1
     OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=27]') = 1
     );
    -- 8 secs
    -- 2324 rows
    -- watchlist + brand filter (Viagra OR Zoloft) + not brand filter (Financial and Levitra)
    SELECT COUNT (docid)
    FROM ot_docattributes
    WHERE ( EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=1]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=2]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=5]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=7]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=8]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=9]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=12]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=13]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=14]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=15]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=19]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=30]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=34]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=43]') = 1
     OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=48]') = 1
     )
     AND ( EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=50]') = 1
     OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=27]') = 1
     )
    AND EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=16]') = 0
    AND EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=40]') = 0;
    -- 8 secs
    -- 141 rows
    -- watchlist + brand filter (Viagra OR Zoloft) + not brand filter (Financial and Levitra)
    -- message group filter ()
    SELECT COUNT (docid)
    FROM ot_docattributes
    WHERE ( EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=1]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=2]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=5]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=7]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=8]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=9]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=12]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=13]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=14]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=15]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=19]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=30]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=34]') = 1
    OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=43]') = 1
     OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=48]') = 1
     )
     AND ( EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=50]') = 1
     OR EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=27]') = 1
     )
    AND EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=16]') = 0
    AND EXISTSNODE (docdata, '/docattribs/brandgroups/bg[@id=40]') = 0
    AND EXISTSNODE (docdata, '/docattribs/messagegroups/mg[@id=27]') = 1;      
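    For what it's worth, the long OR chain over @id values can also be collapsed into a single XPath predicate, which at least shortens the statement. A hedged sketch, untested against this schema:

    ```sql
    -- Hypothetical rewrite: one EXISTSNODE call with an OR predicate inside the
    -- XPath, instead of one EXISTSNODE call per brand-group id.
    SELECT COUNT (docid)
    FROM   ot_docattributes
    WHERE  EXISTSNODE (docdata,
             '/docattribs/brandgroups/bg[@id=1 or @id=2 or @id=5 or @id=7]') = 1;
    ```

    Whether this changes the plan or the cost is something that would have to be checked with an explain plan on the real data.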
    Here's the query execution plan:
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    SELECT STATEMENT Optimizer Mode=ALL_ROWS          1           1191.16324450158                     
    SORT AGGREGATE          1      164                          
    FILTER                                        
    TABLE ACCESS FULL     ELILILLY_ORK.OT_DOCATTRIBUTES     16      2 K     1161.13277939939                     
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_BRANDGROUPS     1      28      3.00304620318792           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_BRANDGROUPS     2           2.00200471148267                
    TABLE ACCESS BY INDEX ROWID     ELILILLY_ORK.OT_MESSAGEGROUPS     1      42      3.00304658697702           
    INDEX RANGE SCAN     ELILILLY_ORK.PK_OT_MESSAGEGROUPS     2           2.00200471148267           
    DB & OS Info:
    Oracle Version : 10.1.0.2.0
    OS : Linux
    Finally, Here are my questions:
    1. Currently, I'm running my queries against 122k rows (the data growth rate is ~2 million rows per month). I feel it's taking too much time for such a small data set. How can I improve the performance of the above queries? Or is there a better way to write them? Since this is still at an early stage, I'm open to restructuring the schema if needed for better performance.
    2. Is this the correct approach? If not, is there a better way to solve the above problem?
    3. Any suggestions/tips are welcome.
    I apologize for such a lengthy post (my intention was to provide as much information as possible).
    Thanks in advance.

    "ORGANIZATION INDEX OVERFLOW" makes it an index-organized table (IOT), as distinct from a heap table (HT). In the original design we expected to see some performance enhancements from the use of IOTs (in particular when re-serializing entire XML documents that had been persisted with nested-table storage); in practice the benefits of IOTs over HTs in the XML DB use case have been marginal. We still create IOTs by default when the tables are generated via schema registration, but it is likely that this will become controllable via a schema annotation in the not too distant future.
    There are cases where the use of an IOT causes problems, the most noticeable of which is when you want to create a Text index on a specific element or attribute that is mapped to a column in the nested table. If the nested table is an IOT this is not possible, whereas if the nested table is an HT there is no problem.
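    To make the distinction concrete, here is a hedged sketch of the DDL difference (table and column names are made up for illustration, not taken from the generated XML DB schema):

    ```sql
    -- Heap table (HT): rows stored in an unordered heap segment.
    CREATE TABLE nt_heap (
       id  NUMBER PRIMARY KEY,
       val VARCHAR2 (100)
    );

    -- Index-organized table (IOT): rows stored in the primary-key index itself,
    -- with columns past the threshold pushed to an overflow segment.
    CREATE TABLE nt_iot (
       id  NUMBER PRIMARY KEY,
       val VARCHAR2 (100)
    )
    ORGANIZATION INDEX OVERFLOW;
    ```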

  • COLLECTION ITERATOR PICKLER FETCH along with XMLSEQUENCEFROMXMLTYPE

    Hi All,
    We have Oracle Database 10.2.0.4 on Solaris 10.
    I found some XML queries which are consuming CPU and memory heavily; below is the execution plan for one of these XML SQL statements.
    PLAN_TABLE_OUTPUT
    SQL_ID  gzsfqp1mkfk8t, child number 0
    SELECT B.PACKET_ID FROM CM_PACKET_ALT_KEY B, CM_ALT_KEY_TYPE C, TABLE (XMLSEQUENCE (EXTRACT (:B1 ,
    '/AlternateKeys/AlternateKey'))) T WHERE B.ALT_KEY_TYPE_ID = C.ALT_KEY_TYPE_ID AND C.ALT_KEY_TYPE_NAME = EXTRACTVALUE
    (VALUE (T), '/AlternateKey/@keyType') AND B.ALT_KEY_VALUE = EXTRACTVALUE (VALUE (T), '/AlternateKey') AND NVL
    (B.CHILD_BROKER_CODE, '6209870F57C254D6E04400306E4A78B0') = NVL (EXTRACTVALUE (VALUE (T), '/AlternateKey/@broker'),
    '6209870F57C254D6E04400306E4A78B0')
    Plan hash value: 855909818
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT                    |                        |       |       | 16864 (100)|          |       |       |
    |*  1 |  HASH JOIN                          |                        |    45 |  3240 | 16864   (2)| 00:03:23 |       |       |
    |   2 |   TABLE ACCESS FULL                 | CM_ALT_KEY_TYPE        |     5 |   130 |     6   (0)| 00:00:01 |       |       |
    |*  3 |   HASH JOIN                         |                        |   227 | 10442 | 16858   (2)| 00:03:23 |       |       |
    |   4 |    COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE |       |       |            |          |       |       |
    |   5 |    PARTITION HASH ALL               |                        |    10M|   447M| 16758   (2)| 00:03:22 |     1 |    16 |
    |   6 |     TABLE ACCESS FULL               | CM_PACKET_ALT_KEY      |    10M|   447M| 16758   (2)| 00:03:22 |     1 |    16 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       1 - access("B"."ALT_KEY_TYPE_ID"="C"."ALT_KEY_TYPE_ID" AND
                  "C"."ALT_KEY_TYPE_NAME"=SYS_OP_C2C(EXTRACTVALUE(VALUE(KOKBF$),'/AlternateKey/@keyType')))
       3 - access("B"."ALT_KEY_VALUE"=EXTRACTVALUE(VALUE(KOKBF$),'/AlternateKey') AND
                  NVL("B"."CHILD_BROKER_CODE",'6209870F57C254D6E04400306E4A78B0')=NVL(EXTRACTVALUE(VALUE(KOKBF$),'/AlternateKey/@broker'
    ),'6209870F57C254D6E04400306E4A78B0'))
    This seems to be due to:
    1. COLLECTION ITERATOR PICKLER FETCH along with XMLSEQUENCEFROMXMLTYPE, which I think is due to the usage of TABLE( XMLSEQUENCE() ).
    2. The conversion taking place via the SYS_OP_C2C function, as shown in the Predicate Information.
    3. The table is not using the XMLType datatype to store XML.
    4. Wildcards have been used (/AlternateKey/@keyType).
    Could anyone please help me in tuning this query, as I know very little about XML DB?
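    On 10g the usual replacement for the TABLE(XMLSEQUENCE(EXTRACT(...))) idiom is XMLTable, which exposes the shredded values as relational columns and can avoid the pickler fetch. A hedged sketch of how the first statement might look rewritten (the VARCHAR2 lengths and column names in the COLUMNS clause are assumptions, and :B1 is the same XMLType bind as in the original; untested):

    ```sql
    -- Hypothetical rewrite using XMLTable instead of TABLE(XMLSEQUENCE(EXTRACT())).
    SELECT b.packet_id
    FROM   cm_packet_alt_key b,
           cm_alt_key_type c,
           XMLTABLE ('/AlternateKeys/AlternateKey'
                     PASSING :B1
                     COLUMNS key_type    VARCHAR2 (100) PATH '@keyType',
                             key_value   VARCHAR2 (100) PATH '.',
                             broker_code VARCHAR2 (100) PATH '@broker') t
    WHERE  b.alt_key_type_id = c.alt_key_type_id
    AND    c.alt_key_type_name = t.key_type
    AND    b.alt_key_value = t.key_value
    AND    NVL (b.child_broker_code, '6209870F57C254D6E04400306E4A78B0') =
           NVL (t.broker_code, '6209870F57C254D6E04400306E4A78B0');
    ```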
    Including one more SQL which also used to consume huge CPU and memory; these tables also don't have any column with the XMLType datatype.
    SELECT /*+  INDEX(e) */ XMLAGG(XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES (G.TAG_CATEGORY_CODE AS
    "categoryType"), XMLELEMENT ("TaggingValue", XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS
    "origin"), XMLAGG (XMLCONCAT (XMLELEMENT ("Value", XMLATTRIBUTES (F.TAG_LIST_CODE AS "listType"),
    E.TAG_VALUE), CASE WHEN LEVEL = 1 THEN :B4 ELSE NULL END))) )) FROM TABLE (CAST (:B1 AS
    T_TAG_MAP_HIERARCHY_TAB)) A, TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C, REM_TAG_VALUE E, REM_TAG_LIST F,
    REM_TAG_CATEGORY G, CM_ORIGIN H WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID AND F.TAG_LIST_ID = E.TAG_LIST_ID
    AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID AND H.ORIGIN_ID = C.ORIGIN_ID AND C.ENUM_TAG_ID =
    A.MAPPED_ENUM_TAG_ID GROUP BY G.TAG_CATEGORY_CODE, C.IS_PRIMARY, H.ORIGIN_CODE START WITH
    A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 ) CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
    Plan hash value: 2393257319
    | Id  | Operation                                    | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                             |                  |       |       | 16455 (100)|          |
    |   1 |  SORT AGGREGATE                              |                  |     1 |   185 | 16455   (1)| 00:03:18 |
    |   2 |   SORT GROUP BY                              |                  |     1 |   185 | 16455   (1)| 00:03:18 |
    |*  3 |    CONNECT BY WITH FILTERING                 |                  |       |       |            |          |
    |*  4 |     FILTER                                   |                  |       |       |            |          |
    |   5 |      COUNT                                   |                  |       |       |            |          |
    |*  6 |       HASH JOIN                              |                  |   667K|   117M| 16413   (1)| 00:03:17 |
    |   7 |        COLLECTION ITERATOR PICKLER FETCH     |                  |       |       |            |          |
    |*  8 |        HASH JOIN                             |                  |  8168 |  1459K| 16384   (1)| 00:03:17 |
    |   9 |         TABLE ACCESS FULL                    | REM_TAG_CATEGORY |    25 |   950 |     5   (0)| 00:00:01 |
    |* 10 |         HASH JOIN                            |                  |  8168 |  1156K| 16378   (1)| 00:03:17 |
    |  11 |          TABLE ACCESS FULL                   | REM_TAG_LIST     |   117 |  7137 |     5   (0)| 00:00:01 |
    |  12 |          NESTED LOOPS                        |                  |  8168 |   670K| 16373   (1)| 00:03:17 |
    |  13 |           MERGE JOIN                         |                  |  8168 |   215K|    27   (4)| 00:00:01 |
    |  14 |            TABLE ACCESS BY INDEX ROWID       | CM_ORIGIN        |     2 |    50 |     2   (0)| 00:00:01 |
    |  15 |             INDEX FULL SCAN                  | PK_CM_ORIGIN     |     2 |       |     1   (0)| 00:00:01 |
    |* 16 |            SORT JOIN                         |                  |  8168 | 16336 |    25   (4)| 00:00:01 |
    |  17 |             COLLECTION ITERATOR PICKLER FETCH|                  |       |       |            |          |
    |  18 |           TABLE ACCESS BY INDEX ROWID        | REM_TAG_VALUE    |     1 |    57 |     2   (0)| 00:00:01 |
    |* 19 |            INDEX UNIQUE SCAN                 | PK_REM_TAG_VALUE |     1 |       |     1   (0)| 00:00:01 |
    |* 20 |     HASH JOIN                                |                  |       |       |            |          |
    |  21 |      CONNECT BY PUMP                         |                  |       |       |            |          |
    |  22 |      COUNT                                   |                  |       |       |            |          |
    |* 23 |       HASH JOIN                              |                  |   667K|   117M| 16413   (1)| 00:03:17 |
    |  24 |        COLLECTION ITERATOR PICKLER FETCH     |                  |       |       |            |          |
    |* 25 |        HASH JOIN                             |                  |  8168 |  1459K| 16384   (1)| 00:03:17 |
    |  26 |         TABLE ACCESS FULL                    | REM_TAG_CATEGORY |    25 |   950 |     5   (0)| 00:00:01 |
    |* 27 |         HASH JOIN                            |                  |  8168 |  1156K| 16378   (1)| 00:03:17 |
    |  28 |          TABLE ACCESS FULL                   | REM_TAG_LIST     |   117 |  7137 |     5   (0)| 00:00:01 |
    |  29 |          NESTED LOOPS                        |                  |  8168 |   670K| 16373   (1)| 00:03:17 |
    |  30 |           MERGE JOIN                         |                  |  8168 |   215K|    27   (4)| 00:00:01 |
    |  31 |            TABLE ACCESS BY INDEX ROWID       | CM_ORIGIN        |     2 |    50 |     2   (0)| 00:00:01 |
    |  32 |             INDEX FULL SCAN                  | PK_CM_ORIGIN     |     2 |       |     1   (0)| 00:00:01 |
    |* 33 |            SORT JOIN                         |                  |  8168 | 16336 |    25   (4)| 00:00:01 |
    |  34 |             COLLECTION ITERATOR PICKLER FETCH|                  |       |       |            |          |
    |  35 |           TABLE ACCESS BY INDEX ROWID        | REM_TAG_VALUE    |     1 |    57 |     2   (0)| 00:00:01 |
    |* 36 |            INDEX UNIQUE SCAN                 | PK_REM_TAG_VALUE |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=PRIOR NULL)
       4 - filter(SYS_OP_ATG(VALUE(KOKBF$),2,3,2)=HEXTORAW(:B3))
       6 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=SYS_OP_ATG(VALUE(KOKBF$),2,3,2))
       8 - access("G"."TAGGING_CATEGORY_ID"="F"."TAGGING_CATEGORY_ID")
      10 - access("F"."TAG_LIST_ID"="E"."TAG_LIST_ID")
      16 - access("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
           filter("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
      19 - access("E"."TAG_VALUE_ID"=SYS_OP_ATG(VALUE(KOKBF$),7,8,2))
      20 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=PRIOR NULL)
      23 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=SYS_OP_ATG(VALUE(KOKBF$),2,3,2))
      25 - access("G"."TAGGING_CATEGORY_ID"="F"."TAGGING_CATEGORY_ID")
      27 - access("F"."TAG_LIST_ID"="E"."TAG_LIST_ID")
      33 - access("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
           filter("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
       36 - access("E"."TAG_VALUE_ID"=SYS_OP_ATG(VALUE(KOKBF$),7,8,2))
    -Yasser
    Edited by: YasserRACDBA on Feb 24, 2010 8:30 PM
    Added one more sql..

    Looking at the second query, it too has a lot of bind variables. Can you find out the types and values of each bind? Also, I'm suspicious about the use of XMLCONCAT. Can you find out why the developer is using it?
    SELECT /*+  INDEX(e) */
           XMLAGG (
              XMLELEMENT (
                 "TaggingCategory",
                 XMLATTRIBUTES (G.TAG_CATEGORY_CODE AS "categoryType"),
                 XMLELEMENT (
                    "TaggingValue",
                    XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS "origin"),
                    XMLAGG (
                       XMLCONCAT (
                          XMLELEMENT (
                             "Value",
                             XMLATTRIBUTES (F.TAG_LIST_CODE AS "listType"),
                             E.TAG_VALUE
                          ),
                          CASE WHEN LEVEL = 1 THEN :B4 ELSE NULL END
                       )
                    )
                 )
              )
           )
    FROM TABLE (CAST (:B1 AS T_TAG_MAP_HIERARCHY_TAB)) A,
          TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C,
          REM_TAG_VALUE E,
          REM_TAG_LIST F,
          REM_TAG_CATEGORY G,
          CM_ORIGIN H
    WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID
      AND F.TAG_LIST_ID = E.TAG_LIST_ID
      AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID
      AND H.ORIGIN_ID = C.ORIGIN_ID
      AND C.ENUM_TAG_ID = A.MAPPED_ENUM_TAG_ID
    GROUP BY G.TAG_CATEGORY_CODE, C.IS_PRIMARY, H.ORIGIN_CODE
          START WITH A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 )
          CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
    Edited by: mdrake on Feb 24, 2010 8:11 AM
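    For context, XMLCONCAT simply concatenates its XMLType arguments into one fragment, skipping NULL arguments. A minimal standalone illustration, assumed rather than taken from the thread:

    ```sql
    -- Concatenate two elements into a single XML fragment.
    SELECT XMLCONCAT (XMLELEMENT ("a", 1), XMLELEMENT ("b", 2)) AS frag
    FROM   dual;
    ```

    In the query above it is used to append the :B4 fragment once per hierarchy (only when LEVEL = 1), which is why the CASE expression returns NULL on the other rows.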
