BI Statistics issue...

Hi,
We upgraded to BI 7.0 a few months ago.
I know that new statistics tables were introduced in BI 7.0 and that they are accessed through the views RSDDSTAT_DM and RSDDSTAT_OLAP.
Yesterday I came across an issue: these tables retain only entries that are at most two weeks old. Entries older than two weeks are being deleted, and it appears this happens every day.
The only programs I know of that can be used to delete BI statistics are RSDDK_STA_DEL_DATA and RSDDSTAT_DATA_DELETE, and I have verified that neither of them is scheduled. I have also checked transaction RSDDSTAT, and it uses the same program to delete.
At first I thought somebody might have accidentally triggered the deletion, but it has become evident that entries are being deleted regularly every day, and I do not know how.
Does anybody have any idea where I can find whether this deletion is scheduled?
I have not been able to find it.
Your help will be appreciated.
Surinder.

Statistics data should generally be deleted when data is loaded to the InfoCubes of the technical content. If the technical content is not activated, or if the data is to be deleted from the statistics table for some other reason, you can also do this manually in the maintenance for the statistics properties.
When you choose Delete Statistical Data (transaction RSDDSTAT), a dialog box appears for restricting the areas in which the statistics data is to be deleted. You can select multiple areas.
1)     <b>Query Statistics Tables:</b> The system deletes the data for the BI query runtime statistics.
2)     <b>Aggregates/BIA Index Processes:</b> See Statistics for the Maintenance Processes of a BI Accelerator Index.
3)     <b>InfoCube Statistics (Delete, Compress):</b> The system deletes the data of the InfoCube statistics that results when data is deleted from an InfoCube or when data requests of an InfoCube are compressed.
Using the Up to Day (Incl.) field, you can enter a date up to which the system is to delete the statistics data. If you do not enter a date, all data is deleted; since this can then be executed with a single command (TRUNCATE TABLE) rather than a selective deletion in the database, this variant is considerably faster.
If you restrict to a day, records are always deleted from the tables in packages of 1000, each followed by a database commit. This makes it possible to restart after a termination (resulting from a timeout, for example) without redoing work that was already done.
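The package-wise deletion with intermediate commits can be sketched as follows (a minimal, illustrative simulation; the function name and the in-memory "table" are not SAP code):

```python
def delete_in_packages(table, cutoff, package_size=1000):
    """Delete entries older than `cutoff` in packages of fixed size,
    'committing' after each package so that an aborted run can simply
    be restarted without redoing the packages that already completed."""
    deleted = 0
    while True:
        batch = [row for row in table if row < cutoff][:package_size]
        if not batch:
            break
        for row in batch:
            table.remove(row)   # stands in for the selective DELETE
        deleted += len(batch)
        # a database COMMIT would go here: finished packages persist
    return deleted

# 2500 entries older than the cutoff, 500 newer ones
table = list(range(3000))
removed = delete_in_packages(table, cutoff=2500)
print(removed, len(table))  # → 2500 500
```

The point of the commit after each package is exactly what the note describes: after a TIMEOUT, only the current (uncommitted) package is rolled back, and a restarted run picks up where the previous one stopped.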
Hope it Helps
Chetan
@CP..

Similar Messages

  • Gather Schema Statistics issue?

    Hi
    Actually, we have a custom schema in our EBS R12.0.6 instance database, but I have observed that the 'Gather Schema Statistics' program is not picking up this schema. Why? Maybe something is wrong with the database schema registration, but the interface associated with this schema has been running fine for a year and a half. I do not know how to resolve this issue.
    I can manually run 'Gather Table Statistics' program against all tables.
    Regards

    Hi;
    For how to run gather stats for a custom schema, please check:
    gather schema stats for EBS 11.5.10
    For running the 'Gather Table Statistics' program manually against all tables, please see:
    How To Gather Statistics On Oracle Applications 11.5.10(and above) - Concurrent Process,Temp Tables, Manually [ID 419728.1]
    Also see:
    How to work Gather stat
    Gather Schema Statistics
    http://oracle-apps-dba.blogspot.com/2007/07/gather-statistics-for-oracle.html
    Regards
    Helios

  • Essbase statistics issue

    Hello All,
    I have migrated an Essbase application from 9.x to 11.1.2.1: exported the data from the 9.x environment and imported it into 11.1.2.1.
    Data matches at the top level, but the statistics do not match (except block size).
    Can anybody explain the reason?
    *Note - no errors while importing & exporting. There is a difference of 1.2e-8 in some data cells.*
    Regards

    Hi,
    Your source database probably contains blocks that hold only missing values. These blocks were not exported, and therefore not imported, which explains the difference. So unless you have more level 0 blocks in your target database, it's perfectly fine.
    You could try running a restructure of your source database; it will get rid of those blocks.

  • Extended statistics issue

    Hello!
    I have a problem with extended statistics on 11.2.0.3
    Here is the script I run
    drop table col_stats;
    create table col_stats as
    select 1 a, 2 b
    from dual
    connect by level <= 100000;
    insert into col_stats (
    select 2, 1
    from dual
    connect by level <= 100000);
    -- check the a,b distribution
             A          B   COUNT(1)
    ---------- ---------- ----------
             2          1     100000
             1          2     100000
    -- extended stats DEFINITION
    select dbms_stats.create_extended_stats('A','COL_STATS','(A,B)') name
    from dual;
    -- set estimate_percent to 100%
    EXEC dbms_stats.SET_TABLE_prefs ('A','COL_STATS','ESTIMATE_PERCENT',100);
    -- check the changes
    select dbms_stats.get_prefs ('ESTIMATE_PERCENT','A','COL_STATS')
    from dual;
    -- NOW GATHER COLUMN STATS
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS (
        OWNNAME    => 'A',
        TABNAME    => 'COL_STATS',
        METHOD_OPT => 'FOR ALL COLUMNS' );
    END;
    /
    set autotrace traceonly explain
    select * from col_stats where a=1 and b=1;
    SQL> select * from col_stats where a=1 and b=1;
    Execution Plan
    Plan hash value: 1829175627
    | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |           | 50000 |   683K|   177 (2)| 00:00:03 |
    |*  1 |  TABLE ACCESS FULL| COL_STATS | 50000 |   683K|   177 (2)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter("A"=1 AND "B"=1)
    How come the optimizer expects 50000 rows?
    Thanks in advance.
    Rob

    RobK wrote:
    SQL> select * from col_stats where a=1 and b=1;
    Execution Plan
    Plan hash value: 1829175627
    | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |           | 50000 |   683K|   177 (2)| 00:00:03 |
    |*  1 |  TABLE ACCESS FULL| COL_STATS | 50000 |   683K|   177 (2)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter("A"=1 AND "B"=1)
    How come the optimizer expects 50000 rows?
    Thanks in advance.
    Rob
    This is expected behavior.
    When you create extended statistics, a histogram is created for the column group (which is implemented as a virtual column).
    The query predicate "where a=1 and b=1" is actually an out-of-range predicate for that virtual column. In such cases the optimizer should estimate the selectivity as 0 (and therefore a cardinality of 1), but instead it uses the density (here actually NewDensity) of the column, in this case your virtual column.
    Let's look at the following information (this is exactly your case):
    NUM_ROWS
      200000

    COLUMN_NAME                      NUM_DISTINCT    DENSITY    HISTOGRAM
    A                                           2  .00000250    FREQUENCY
    B                                           2  .00000250    FREQUENCY
    SYS_STUNA$6DVXJXTP05EH56DTIR0X              2  .00000250    FREQUENCY

    COLUMN_NAME                     ENDPOINT_NUMBER  ENDPOINT_VALUE
    A                                        100000               1
    A                                        200000               2
    B                                        100000               1
    B                                        200000               2
    SYS_STUNA$6DVXJXTP05EH56DTIR0X           100000      1977102303
    SYS_STUNA$6DVXJXTP05EH56DTIR0X           200000      7894566276
    Your predicate "where a=1 and b=1" is equivalent to "where SYS_STUNA$6DVXJXTP05EH56DTIR0X = sys_op_combined_hash (1, 1)".
    As you know, with a frequency histogram the selectivity for an equality (=) predicate is (E_endpoint - B_endpoint)/num_rows, where the predicate value sits at an endpoint between the B_endpoint and E_endpoint bucket numbers. But sys_op_combined_hash (1, 1) = 7026129190895635777, so how can that value be compared with the histogram endpoint values? The answer is that when creating the histogram, Oracle does not store the exact sys_op_combined_hash(x, y) value but also applies a MOD function, so you have to compare MOD (sys_op_combined_hash (1, 1), 9999999999), which equals 1598248696, with the endpoint values. 1598248696 does not match any endpoint number, so the optimizer falls back to NewDensity as the density (it cannot find endpoint information in this case).
    In the trace file below you can clearly see this:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table: COL_STATS  Alias: COL_STATS
        #Rows: 200000  #Blks:  382  AvgRowLen:  18.00
    Access path analysis for COL_STATS
    SINGLE TABLE ACCESS PATH
      Single Table Cardinality Estimation for COL_STATS[COL_STATS]
      Column (#1):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      Column (#2):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      Column (#3):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      ColGroup (#1, VC) SYS_STUNA$6DVXJXTP05EH56DTIR0X
        Col#: 1 2    CorStregth: 2.00
      ColGroup Usage:: PredCnt: 2  Matches Full: #1  Partial:  Sel: 0.2500
      Table: COL_STATS  Alias: COL_STATS
        Card: Original: 200000.000000  Rounded: 50000  Computed: 50000.00  Non Adjusted: 50000.00
      Access Path: TableScan
        Cost:  107.56  Resp: 107.56  Degree: 0
          Cost_io: 105.00  Cost_cpu: 51720390
          Resp_io: 105.00  Resp_cpu: 51720390
      Best:: AccessPath: TableScan
             Cost: 107.56  Degree: 1  Resp: 107.56  Card: 50000.00  Bytes: 0
    Note that NewDensity is calculated as 1/(2*num_distinct) = 1/4 = 0.25 for a frequency histogram.
    The CBO used the column-group statistic, and the estimated cardinality was 200000*0.25 = 50000.
    Remember that these are permanent statistics: the RDBMS gathered them by analyzing the actual table data (including the correlation between the columns).
    Dynamic sampling could help in your situation, because it calculates the selectivity at run time by sampling with the real predicate.
    In other situations you can see that extended statistics are a great help for estimation, for example "where a=2 and b=1", because that combination exists in the actual data and the corresponding information (stats/histograms) is stored in the dictionary.
    SQL>  select * from col_stats where a=2 and b=1;
    Execution Plan
    Plan hash value: 1829175627
    | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |           |   100K|   585K|   108   (3)| 00:00:02 |
    |*  1 |  TABLE ACCESS FULL| COL_STATS |   100K|   585K|   108   (3)| 00:00:02 |
    Predicate Information (identified by operation id):
       1 - filter("A"=2 AND "B"=1)
    and from trace file
    Table Stats::
      Table: COL_STATS  Alias: COL_STATS
        #Rows: 200000  #Blks:  382  AvgRowLen:  18.00
    Access path analysis for COL_STATS
    SINGLE TABLE ACCESS PATH
      Single Table Cardinality Estimation for COL_STATS[COL_STATS]
      Column (#1):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      Column (#2):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      Column (#3):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      ColGroup (#1, VC) SYS_STUNA$6DVXJXTP05EH56DTIR0X
        Col#: 1 2    CorStregth: 2.00
    ColGroup Usage:: PredCnt: 2  Matches Full: #1  Partial:  Sel: 0.5000
      Table: COL_STATS  Alias: COL_STATS
        Card: Original: 200000.000000  Rounded: 100000  Computed: 100000.00  Non Adjusted: 100000.00
      Access Path: TableScan
        Cost:  107.56  Resp: 107.56  Degree: 0
          Cost_io: 105.00  Cost_cpu: 51720365
          Resp_io: 105.00  Resp_cpu: 51720365
      Best:: AccessPath: TableScan
             Cost: 107.56  Degree: 1  Resp: 107.56  Card: 100000.00  Bytes: 0
    Let's calculate:
    MOD (sys_op_combined_hash (2, 1), 9999999999) = 1977102303, and for it (e_endpoint - b_endpoint)/num_rows = (200000 - 100000)/200000 = 0.5,
    so the resulting cardinality = sel*num_rows (or simply e_endpoint - b_endpoint) = 100000.
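    The two estimates worked out above can be reproduced with a small script (illustrative only; the endpoint values, NDV, and row count are the ones shown in this thread):

```python
# Frequency-histogram cardinality estimate for an equality predicate,
# as described above: a value matching an endpoint gets its bucket
# size; an out-of-range value falls back to NewDensity = 1/(2*NDV).
NUM_ROWS = 200000
NDV = 2  # distinct values of the (A,B) column group

# cumulative ENDPOINT_NUMBER keyed by ENDPOINT_VALUE for the virtual column
endpoints = {1977102303: 100000, 7894566276: 200000}

def estimated_cardinality(value):
    if value in endpoints:
        prev = 0
        for v, ep in sorted(endpoints.items(), key=lambda kv: kv[1]):
            if v == value:
                return ep - prev        # bucket size for a popular value
            prev = ep
    new_density = 1 / (2 * NDV)         # NewDensity for frequency histogram
    return round(NUM_ROWS * new_density)

print(estimated_cardinality(1598248696))  # MOD-hash of (1,1) → 50000
print(estimated_cardinality(1977102303))  # MOD-hash of (2,1) → 100000
```

    This matches the Rounded cardinalities (50000 and 100000) in the two trace files above.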

  • Same query, different results depending on compute statistics!!!

    This one is really weird, you would think I am making this up but I am not...
    We have one really long query that uses several large inline views and usually returns a few rows. All of a sudden the query stopped working -- i.e. it returned no rows. We tried rebuilding indexes; this didn't help. We tried computing full statistics and this fixed the problem. Has anyone heard of compute statistics affecting the output of a query????
    About a week later, the problem happened again. Computing estimate statistics didn't help. Only computing full statistics fixed the problem.
    The only thing I can note, is that this database was recently upgraded from 9.2.0.6 to 9.2.0.7, but I checked the install log files and there are no errors.
    Luckily this is just a development database but we are a little worried that it might re-occur in production. We have a few other development databases that have also been upgraded to 9.2.0.7 and none of these have the problem.
    We have compared the init.ora files, no real differences. Any other ideas? Maybe a full export import?

    Thanks, will do, but I am a little doubtful it is fixed by 9.2.0.8 because it works on one of our 9.2.0.7 environments...
    Although if it is a statistics issue, it's likely a corner case, so you have to have a number of things in alignment. It's quite possible that, for example, two systems have identical structures and identical data but slightly different query plans because the data in one table is physically ordered differently on one system than on another, which slightly changes the clustering_factor of an index, which causes the CBO to use that index on one system and not on another. You may also end up with slightly different statistics because you have histograms on a column in one system and not in another.
    Looks like we are going to 9.2.0.8 anyway because of the end-of-life service support forcing us to go to 9.2.0.8 anyway.
    If it reproduces on 9.2.0.8 (and I'd tend to suspect it will), it's certainly worth raising an issue. Unless you have an extended support contract, though, I wouldn't hold out a lot of hope for a patch if this isn't already fixed, since 9.2 leaves Premier Support at the end of the month...
    Justin

  • Problem - same report different font, depending on source

    Post Author: smash
    CA Forum: General
    Hi, thanks for any help you can give.
    I have a report that when I print from the crystal designer, it prints exactly like I want it. The report is using a Courier 10cpi font. When I export to pdf or export directly to the printer, the report is printed with an Arial font. I have the courier on the server and the client machines and I am printing to a IBM 6500 feed printer.
    Anyone have a clue on why these might be printing differently?
    Thanks


  • How to create an index in TREXADMIN

    Hi,
    Nowadays, we are trying to configure TREX from a SAP system (creating indexes, queues, so on).
    Into TREX Admin Tool in the SAP System (T-code TREXADMIN), where can we create an index? In Index Landscape and Index Admin tabs, we do not see this option.
    <REMOVED BY MODERATOR>
    Thanks in advance,
    Samantha.
    Edited by: Alvaro Tejada Galindo on Feb 15, 2008 3:06 PM

    boyishcovenant wrote:
    Currently, there is already an index on table A on column KEY, and there is already an index on table B on column KEY. So I won't be adding a new index per se, but modifying the way these two indexes are set up. I am using Oracle 9.2. Stats are up to date; it's not a statistics issue. The execution plan I posted in the original post shows the DELETE is already doing index range scans in a nested loops anti join.
    Yes, and I would expect it to be doing a hash anti join and not a nested loops anti join, given the volume of data you say you are dealing with.
    boyishcovenant wrote:
    I think that it is spending too much time doing the anti join with "NOT IN". I believe 95% of the data is "IN", if you know what I mean. Hence, if I had an index that already does the computation of what is "NOT IN" table B, there would be no need to do that anti join.
    That's really a design issue, and you're adamant you can't alter the design ... so ....
    Cheers,

  • Execution script too much time than before

    Hi experts,
    We run a tuning script on a daily basis which analyzes multiple tables of Oracle EBS 11i (on Oracle 10g), and we also gather schema statistics.
    The issue is that it takes too much time to complete, sometimes more than 12 hours. It looks like it hangs while gathering statistics on the INV and ONT schemas.
    Kindly give me a solution.
    e.g.
    analyze table inv.mtl_transaction_types compute statistics;
    analyze table wip.wip_schedule_groups compute statistics;
    analyze table WSH.WSH_DELIVERY_DETAILS compute statistics;
    exec DBMS_STATS.GATHER_SCHEMA_STATS('APPS'); ---APPS, APPLSYS, INV,ONT,WSH, SYS, SYSTEM, GL,HR,AR,AP,CE,WIP,
    ---PO,MSC,PJI
    exec DBMS_STATS.GATHER_SCHEMA_STATS('APPLSYS');
    exec DBMS_STATS.GATHER_SCHEMA_STATS('INV');
    exec DBMS_STATS.GATHER_SCHEMA_STATS('ONT');
    exec DBMS_STATS.GATHER_SCHEMA_STATS('WSH');
    Now it has been running for more than 15 minutes and has not yet completed.
    Thanks

    but gathering statistics for some schemas takes too much time. At first the whole script took 3 hours; now it takes more than 15 hours, some schemas complete while others still hang, and then we have to cancel the job.
    I also experimented with running the schemas one by one, and then each completes normally.
    Have you done any changes recently (i.e. upgrade, installed new patches, data load, etc.)?
    Have you tried to use different parameters? -- Definition of Parameters Used in Gather Schema Statistics Program [ID 556466.1]
    You could enable trace and generate the TKPROF file to find out why it takes that long to gather schema statistics.
    FAQ: Common Tracing Techniques within the Oracle Applications 11i/R12 [ID 296559.1]
    Enabling Concurrent Program Traces [ID 123222.1]
    How To Get Level 12 Trace And FND Debug File For Concurrent Programs [ID 726039.1]
    I would also suggest you log a SR as "Gather Schema Statistics" is a seeded concurrent program and Oracle Support should be able to help.
    Thanks,
    Hussein

  • Consumer for Durable subscribers keep on increasing

          We have a durable subscriber with client id "ABC". We forcibly kill it to test recovery.
          The next time I start this subscriber with the same client id, the number of consumers on this topic goes up. Everything else works fine, but the number of pending messages keeps going up too.
          Not sure why the consumers are increased?
          Thanks
              

              Thank you Tom and shean
              I will contact support
              Abahy
              Tom Barnes <[email protected]> wrote:
              >Either A) contact customer support to see if they will patch 6.1 no SP
              >for you
              >(or to see if they already have a patch)
              >Or B) upgrade yourself to SP2 as you already plan (or perhaps better
              >yet
              >SP3, which just came out)
              > and see if the problem goes away
              >
              >Personally I think you should go with SP2 or SP3, or perhaps even make
              >the leap
              >to 7.0. Especially
              >since you are not in production.
              >
              >Note BEA newsgroups are not maintained by "customer support", although
              >customer
              >support will occassionally
              >take a peek. The vast majority of posters are either customers or BEA
              >developers that
              >post on their own time.
              >
              >Tom, BEA
              >
              >Abhay wrote:
              >
              >> Shean,
              >>
              >> So what we do next?
              >>
              >> Abhay
              >>
              >> "Shean-Guang Chang" <[email protected]> wrote:
              >> >The support team has all the available release and patch that is why
              >> >I will
              >> >pass this to them to confirm the problem.
              >> >
              >> >"Abhay" <[email protected]> wrote in message
              >> >news:[email protected]...
              >> >>
              >> >> Shean,
              >> >>
              >> >> Did this problem existed in previous releases? With current release
              >> >can
              >> >you kill
              >> >> the subscriber, without unsubscribing and when restarted it shows
              >only
              >> >one
              >> >consumers
              >> >> or two?
              >> >>
              >> >> Do you know of any way to check whether these pending messages are
              >> >in
              >> >uncommited
              >> >> ot unacknowledged mode?
              >> >>
              >> >> Abhay
              >> >>
              >> >> "Shean-Guang Chang" <[email protected]> wrote:
              >> >> >The problem is not in the current environment. I will pass this
              >issue
              >> >> >to our
              >> >> >support team so they can verify this problem further on the 6.1
              >line.
              >> >> >In the
              >> >> >meantime you can call BEA support to get a customer case number
              >so
              >> >that
              >> >> >if
              >> >> >the problem requires any patch the customer support will have all
              >> >the
              >> >> >info
              >> >> >they need.
              >> >> >
              >> >> >Thanks!
              >> >> >
              >> >> >"Abhay" <[email protected]> wrote in message
              >> >> >news:[email protected]...
              >> >> >>
              >> >> >> Shean,
              >> >> >>
              >> >> >> Looks like we are online. I have restarted the subscriber process
              >> >and
              >> >> >do
              >> >> >not get
              >> >> >> messages again. My preliminary analysis it is statistical problem
              >> >only.
              >> >> >But We
              >> >> >> are building a huge EAI bus and want to get all things addressed
              >> >before
              >> >> >it
              >> >> >comes
              >> >> >> in manufacturing.
              >> >> >>
              >> >> >> We are using WLS6.1(no SP's). I am planning to upgrade to SP2
              >in
              >> >weeks
              >> >> >time. I
              >> >> >> looked at bugs fixed since 6.1 and there is no mention of such
              >problem.
              >> >> >So, not
              >> >> >> confidant that upgrading to SP2 will fix problem.
              >> >> >>
              >> >> >> Abhay
              >> >> >>
              >> >> >> "Shean-Guang Chang" <[email protected]> wrote:
              >> >> >> >What version of WLS you are using?
              >> >> >> >So far you indicate there are some statistics issue ONLY.
              >> >> >> >After those messages being consumed (committed) do you get them
              >> >again?
              >> >> >> >
              >> >> >> >I would start the subscriber process one more time and see if
              >those
              >> >> >> >committed messages will be available again or not. Thanks!
              >> >> >> >
              >> >> >> >"Abhay" <[email protected]> wrote in message
              >> >> >> >news:[email protected]...
              >> >> >> >>
              >> >> >> >> Thank you Shean,
              >> >> >> >>
              >> >> >> >> I am creating Durable subscriber with same client id and
              >> >subscription
              >> >> >> >name. When
              >> >> >> >> listerner falls over or killed on perpose. Durable subscription
              >> >> >entry
              >> >> >> >count for
              >> >> >> >> that clientid and subscriptionid (same) remains 1. Now we
              >start
              >> >> >the
              >> >> >> >subscriber
              >> >> >> >> process again and that increaments "consumer" count to 2.
              >> >> >> >>
              >> >> >> >> Is this way it should be? I am not sure. I was thinking that
              >> >it
              >> >> >will
              >> >> >> >be
              >> >> >> >intelligent
              >> >> >> >> enough to say I am from the same machine with same client
              >id,
              >> >> >subscription
              >> >> >> >id
              >> >> >> >> so I am the same guy.
              >> >> >> >>
              >> >> >> >> Anyway, reason I started looking at this because I am having
              >> >another
              >> >> >> >problem.
              >> >> >> >> once this restarted subscriber process consumes the message
              >and
              >> >> >commits
              >> >> >> >session
              >> >> >> >> transaction. (yes, I use createTopicSession(true, AUTO)).
              >the
              >> >message
              >> >> >> >goes
              >> >> >> >in
              >> >> >> >> pending stage. I am not sure why? So I started looking everywhere.
              >> >> >> >Do you
              >> >> >> >know
              >> >> >> >> how to find out what state these pending messages are in (uncommited
              >> >> >> >or
              >> >> >> >unacnkowledged)?
              >> >> >> >>
              >> >> >> >> Abhay
              >> >> >> >>
              >> >> >> >> "Shean-Guang Chang" <[email protected]> wrote:
              >> >> >> >> >What release of WAS?
              >> >> >> >> >
              >> >> >> >> >The durable subscriber will stay around even after the termination
              >> >> >> >of
              >> >> >> >> >the
              >> >> >> >> >Topic.
              >> >> >> >> >The durable subscriber is treated as a persistent subscriber
              >> >to
              >> >> >the
              >> >> >> >Topic
              >> >> >> >> >so
              >> >> >> >> >the durable subscriber will still receive all the qualified
              >> >topic
              >> >> >> >message
              >> >> >> >> >according to the JMS spec.
              >> >> >> >> >The only way through JMS API to remove the durable subscriber
              >> >is
              >> >> >using
              >> >> >> >> >unsubscribe() and there are conditions as to when this is
              >allowed
              >> >> >> >(a.go.
              >> >> >> >> >if
              >> >> >> >> >there is a Topic currently using this durable subscriber
              >then
              >> >> >unsubscribe
              >> >> >> >> >will fail, etc...)
              >> >> >> >> >
              >> >> >> >> >You said you did createDurableSubscriber under the same client
              >> >> >id.
              >> >> >> >Did
              >> >> >> >> >you
              >> >> >> >> >use the same subscription name? If the subscription name
              >is
              >> >different
              >> >> >> >> >then
              >> >> >> >> >you are creating a different durable subscriber.
              >> >> >> >> >
              >> >> >> >> >
              >> >> >> >> >
              >> >> >> >> >"Abhay" <[email protected]> wrote in message
              >> >> >> >> >news:[email protected]...
              >> >> >> >> >>
              >> >> >> >> >> We have a durable subscriber with client id "ABC". We forcibly
              >> >> >kill
              >> >> >> >> >it to
              >> >> >> >> >test
              >> >> >> >> >> recovery.
              >> >> >> >> >>
              >> >> >> >> >> Next time I start this subscriber again with same client
              >id.
              >> >> >Number
              >> >> >> >> >of
              >> >> >> >> >consumers
              >> >> >> >> >> on this topic goes up. Everything else works fine. Only
              >Messages
              >> >> >> >pending
              >> >> >> >> >keeps
              >> >> >> >> >> going up too?
              >> >> >> >> >>
              >> >> >> >> >> Not sure why consumers are increased?
              >> >> >> >> >>
              >> >> >> >> >> Thanks
              >> >> >> >> >>
              

  • Need help to understand awrsqrpt

    Hi,
    Following is the content of awrsqrpt:
    -> % Total DB Time is the Elapsed Time of the SQL statement divided
       into the Total Database Time multiplied by 100
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                         4,013,371    4,013,371.0    39.7
    CPU Time (ms)                             4,013,407    4,013,406.5    44.5
    Executions                                        1            N/A     N/A
    Buffer Gets                                1.11E+09   1.111433E+09    54.2
    Disk Reads                                        0            0.0     0.0
    Parse Calls                                       1            1.0     0.0
    *Rows                                          5,749        5,749.0     N/A*
    User I/O Wait Time (ms)                           0            N/A     N/A
    Cluster Wait Time (ms)                            0            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        0            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     2            N/A     N/A
    Sharable Mem(KB)                                 80            N/A     N/A
    Execution Plan
    | Id  | Operation                       | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT                |                      |       |       |     7 (100)|          |
    |   1 |  FILTER                         |                      |       |       |            |          |
    *|   2 |   HASH GROUP BY                 |                     |     1 |    66 |     7  (15)| 00:00:01 |*
    |   3 |    NESTED LOOPS                 |                      |     1 |    66 |     6   (0)| 00:00:01 |
    |   4 |     NESTED LOOPS                |                      |     1 |    61 |     5   (0)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL          | TT_TMP1              |     1 |    20 |     2   (0)| 00:00:01 |
    |   6 |      TABLE ACCESS BY INDEX ROWID| T1                   |     1 |    41 |     3   (0)| 00:00:01 |
    |   7 |       INDEX RANGE SCAN          | IDX$$_01150001       |     1 |       |     2   (0)| 00:00:01 |
    |   8 |     INDEX RANGE SCAN            | INDX_T2              |     1 |     5 |     1   (0)| 00:00:01 |
    INSERT INTO TT_TMP2 (
    SELECT
        TRAN_DAT,
        CARD_NUM,
        SUM(MPU_AMT_1 + MPU_AMT_2) TOT_AMT
    FROM
        T1,
        T2,
        TT_TMP1
    WHERE
        MPU_MER_REF = MMR_MER_REF
        AND MPU_TRAN_DAT = TRAN_DAT
        AND MPU_CRD_NUM = CARD_NUM
        AND MPU_SETTL_FLAG = 'Y'
        AND MMR_RISK_TYPE IN ('B')
        AND MPU_CHANNEL_ID = 0
    GROUP BY
        TRAN_DAT,
        CARD_NUM,
        MMR_RISK_TYPE
    HAVING
        SUM(MPU_AMT_1 + MPU_AMT_2) > CASE MMR_RISK_TYPE WHEN 'B' THEN 0 ELSE NULL END)
    Here 'TT_TMP1' and 'TT_TMP2' are temporary tables. Now, the Rows entry in the Plan Statistics shows 5,749, whereas the execution plan shows 1 row processed. The DML actually selects about 9 lakh (900,000) rows and inserts them into the temporary table TT_TMP2.
    Why is awrsqrpt showing the wrong number of rows?
    Platform: Windows
    Database: 10.2.0.5
    Regards,

    The plan shows the optimizer estimates - the estimates which led to that particular plan being chosen - rather than the actuals.
    The difference between actual and estimate is what gives you excellent leads to investigate poor plan choices.
    Given that all the estimates are for 1 row - perhaps ( ! ) you have a statistics issue.
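    If the one-row estimates trace back to TT_TMP1 (a global temporary table), one common cause is that the optimizer has no statistics for it and falls back to defaults. A minimal sketch of two possible workarounds, assuming TT_TMP1 typically holds a large row count (the numrows/numblks figures below are illustrative, not measured):

    ```sql
    -- Workaround 1: let the optimizer sample the temp table at parse time
    SELECT /*+ dynamic_sampling(tt_tmp1 4) */ tran_dat, card_num
    FROM   tt_tmp1;

    -- Workaround 2: manually set representative statistics on the GTT
    -- (numrows/numblks values are illustrative placeholders)
    BEGIN
      DBMS_STATS.SET_TABLE_STATS(
        ownname => USER,
        tabname => 'TT_TMP1',
        numrows => 900000,
        numblks => 10000);
    END;
    /
    ```

    With either approach, re-check the plan to see whether the estimates move closer to the 900,000 rows actually processed.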

  • Strange Query

    Hi all,
    I've got a problem with the following query: if I include a filter on a table, the explain plan shows a MERGE JOIN CARTESIAN; if I omit the filter, it does not.
    The database version is 10.2.0.4.
    In another environment (same DB version, same OS, etc.) there are no problems!
    Here are the query and the explain plans:
    query without filter:
    SELECT DISTINCT
           E.ID_CUSTOMER_UNIFIED
         , E.START_VALIDITY_DATE
         , E.START_ASINC_DATE
         , E.CUSTOMER_STATUS
         , Z.FLAG_PASSIVE_SWITCH
    FROM TW_E_SUPPLY_A Z
       , TW_E_SUPPLY_H B
       , TW_E_CUST_UNI_CUST Y
       , TW_T_CUSTOMER_UNIFIED_A E
    WHERE Z.ID_SUPPLY = B.ID_SUPPLY
      AND Z.END_VALIDITY_DATE IS NULL
      AND B.END_VALIDITY_DATE IS NULL
      AND B.ID_CUSTOMER = Y.ID_CUSTOMER
      AND Y.END_LINK_DATE IS NULL
      AND Y.ID_CUSTOMER_UNIFIED = E.ID_CUSTOMER_UNIFIED
      AND Y.START_VALIDITY_DATE = E.START_VALIDITY_DATE
    --  AND E.CUSTOMER_STATUS = 'NON ATTIVO'
      AND Z.FLAG_PASSIVE_SWITCH = 'Y';
    explain plan without filter:
    | Id  | Operation                  | Name                    | Rows  | Bytes |TempSpc| Cost (%CPU)| Pstart| Pstop |                                                                                                                                                                                         
    |   0 | SELECT STATEMENT           |                         |  5152 |   477K|       |  2570   (2)|       |       |                                                                                                                                                                                         
    |   1 |  HASH UNIQUE               |                         |  5152 |   477K|  1112K|  2570   (2)|       |       |                                                                                                                                                                                         
    |   2 |   HASH JOIN                |                         |  5152 |   477K|       |  2453   (2)|       |       |                                                                                                                                                                                         
    |   3 |    HASH JOIN               |                         |  5455 |   351K|       |  1923   (2)|       |       |                                                                                                                                                                                         
    |   4 |     HASH JOIN              |                         |  5455 |   175K|       |  1365   (2)|       |       |                                                                                                                                                                                         
    |   5 |      PARTITION RANGE SINGLE|                         |  5455 | 70915 |       |   309   (2)|    22 |    22 |                                                                                                                                                                                         
    |   6 |       TABLE ACCESS FULL    | TW_E_SUPPLY_A           |  5455 | 70915 |       |   309   (2)|    22 |    22 |                                                                                                                                                                                         
    |   7 |      PARTITION RANGE SINGLE|                         |   162K|  3179K|       |  1054   (1)|    22 |    22 |                                                                                                                                                                                         
    |   8 |       TABLE ACCESS FULL    | TW_E_SUPPLY_H           |   162K|  3179K|       |  1054   (1)|    22 |    22 |                                                                                                                                                                                         
    |   9 |     PARTITION RANGE SINGLE |                         |   327K|    10M|       |   556   (1)|    22 |    22 |                                                                                                                                                                                         
    |  10 |      TABLE ACCESS FULL     | TW_E_CUST_UNI_CUST      |   327K|    10M|       |   556   (1)|    22 |    22 |                                                                                                                                                                                         
    |  11 |    TABLE ACCESS FULL       | TW_T_CUSTOMER_UNIFIED_A |   311K|  8811K|       |   529   (1)|       |       |                                                                                                                                                                                         
    -------------------------------------------------------------------------------------------------------------------
    query with filter:
    SELECT DISTINCT
           E.ID_CUSTOMER_UNIFIED
         , E.START_VALIDITY_DATE
         , E.START_ASINC_DATE
         , E.CUSTOMER_STATUS
         , Z.FLAG_PASSIVE_SWITCH
    FROM TW_E_SUPPLY_A Z
       , TW_E_SUPPLY_H B
       , TW_E_CUST_UNI_CUST Y
       , TW_T_CUSTOMER_UNIFIED_A E
    WHERE Z.ID_SUPPLY = B.ID_SUPPLY
      AND Z.END_VALIDITY_DATE IS NULL
      AND B.END_VALIDITY_DATE IS NULL
      AND B.ID_CUSTOMER = Y.ID_CUSTOMER
      AND Y.END_LINK_DATE IS NULL
      AND Y.ID_CUSTOMER_UNIFIED = E.ID_CUSTOMER_UNIFIED
      AND Y.START_VALIDITY_DATE = E.START_VALIDITY_DATE
      AND E.CUSTOMER_STATUS = 'NON ATTIVO'
      AND Z.FLAG_PASSIVE_SWITCH = 'Y';
    explain plan with filter:
    | Id  | Operation                           | Name                    | Rows  | Bytes | Cost (%CPU)| Pstart| Pstop |                                                                                                                                                                                        
    |   0 | SELECT STATEMENT                    |                         |     1 |    95 |  1401   (2)|       |       |                                                                                                                                                                                        
    |   1 |  HASH UNIQUE                        |                         |     1 |    95 |  1401   (2)|       |       |                                                                                                                                                                                        
    |   2 |   TABLE ACCESS BY GLOBAL INDEX ROWID| TW_E_SUPPLY_H           |     1 |    20 |     3   (0)|    22 |    22 |                                                                                                                                                                                        
    |   3 |    NESTED LOOPS                     |                         |     1 |    95 |  1400   (2)|       |       |                                                                                                                                                                                        
    |   4 |     HASH JOIN                       |                         |     1 |    75 |  1397   (2)|       |       |                                                                                                                                                                                        
    |   5 |      MERGE JOIN CARTESIAN           |                         |     1 |    42 |   838   (2)|       |       |                                                                                                                                                                                        
    |   6 |       TABLE ACCESS FULL             | TW_T_CUSTOMER_UNIFIED_A |     1 |    29 |   529   (1)|       |       |                                                                                                                                                                                        
    |   7 |       BUFFER SORT                   |                         |  5455 | 70915 |   309   (2)|       |       |                                                                                                                                                                                        
    |   8 |        PARTITION RANGE SINGLE       |                         |  5455 | 70915 |   309   (2)|    22 |    22 |                                                                                                                                                                                        
    |   9 |         TABLE ACCESS FULL           | TW_E_SUPPLY_A           |  5455 | 70915 |   309   (2)|    22 |    22 |                                                                                                                                                                                        
    |  10 |      PARTITION RANGE SINGLE         |                         |   327K|    10M|   556   (1)|    22 |    22 |                                                                                                                                                                                        
    |  11 |       TABLE ACCESS FULL             | TW_E_CUST_UNI_CUST      |   327K|    10M|   556   (1)|    22 |    22 |                                                                                                                                                                                        
    |  12 |     INDEX RANGE SCAN                | TW_E_SUPPLY_H_CUST_IDX  |     2 |       |     2   (0)|       |       |                                                                                                                                                                                        
    --------------------------------------------------------------------------------------------------------------------
    Does anyone know what can cause this?
    Thanks in advance
    Steve

    In general, a MERGE JOIN CARTESIAN is not necessarily something to be worried about.
    However, if that estimate of one row from TW_T_CUSTOMER_UNIFIED_A is inaccurate, then you've most likely got a statistics issue.
    So, is that one-row estimate accurate?
    If not, firstly, can you post the predicates section of this plan?
    Then can you also run the query (run it, not explain it) with the /*+ gather_plan_statistics */ hint and immediately afterwards post the output from:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
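    A minimal sketch of that round trip, using a simplified query against the table from the thread (in the resulting plan, compare the E-Rows column - the estimate - against A-Rows - the actual):

    ```sql
    -- Step 1: run (not explain) the query with runtime statistics collection
    SELECT /*+ gather_plan_statistics */ COUNT(*)
    FROM   tw_t_customer_unified_a
    WHERE  customer_status = 'NON ATTIVO';

    -- Step 2: in the same session, immediately fetch the last cursor's plan,
    -- which shows estimated vs actual row counts side by side
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    ```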

  • Total time saved: is 0

    If I HTTPS to the management GUI on one of our 274 devices, go to Monitoring, and select the CIFS tab, the total time saved reads zero. This does not seem right, because some of the other devices have a large number there.

    Hi Dan,
    Sorry for the delay.
    1. Looking at your issue, is this only related to CIFS? I mean, do all the other statistics show up?
    2. Further, can you look at the CM and find out if it shows up on the CM?
    3. If the CM is also showing "0", any idea when the device stopped pushing this to the CM / WAE itself?
    4. Is this really optimizing CIFS in both directions?
    5. Do you see any alarms on this WAE? - show alarms OR sh alarms hist det
    I am sure this information will help us pinpoint the problem. It seems like we have some reporting / statistics issue with this WAE. Have you done anything to correct this issue (like reloading, etc.)?
    Thanks.

  • Scripts for system performance

    Hi,
    Please tell me what scripts should be executed/used to analyze Oracle memory & buffer performance, tablespace growth problems, and I/O statistics issues. Kindly let me know the way & procedure to execute them on an AIX 5.3/Windows system.
       I would appreciate your help.
    Thanks in advance
    Daniel

    Hello Daniel,
    > oracle memory &buffer performance,tablespace growth problem and I/O statistics issues
    I think you mean the AWR. The Automatic Workload Repository is available with Oracle 10g. But keep in mind that the "AWR feature" must be licensed with the Diagnostic Pack / Tuning Pack (check SAP Note 740897).
    The AWR report can be created with the following scripts:
    @$ORACLE_HOME/rdbms/admin/awrrpt.sql
    @$ORACLE_HOME/rdbms/admin/awrrpti.sql
    If you don't have the needed licenses or you still have Oracle 9i - then you can use STATSPACK.
    You can find the documentation about STATSPACK here:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96533/statspac.htm
    The only thing you will not find with AWR/STATSPACK is the "tablespace growth problem". For this you will still need some custom made scripts or the DB02/DB02n.
    Regards
    Stefan
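    For the tablespace-growth part, a custom script can be as simple as the following sketch (only standard data dictionary views are used; formatting and thresholds are up to you):

    ```sql
    -- Current size and free space per tablespace, in MB
    SELECT d.tablespace_name,
           d.total_mb,
           NVL(f.free_mb, 0) AS free_mb
    FROM  (SELECT tablespace_name, ROUND(SUM(bytes) / 1048576) AS total_mb
           FROM   dba_data_files
           GROUP  BY tablespace_name) d
    LEFT JOIN
          (SELECT tablespace_name, ROUND(SUM(bytes) / 1048576) AS free_mb
           FROM   dba_free_space
           GROUP  BY tablespace_name) f
      ON  f.tablespace_name = d.tablespace_name
    ORDER BY d.tablespace_name;
    ```

    Run it periodically (e.g. from cron on AIX or Task Scheduler on Windows) and store the results to track growth over time.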

  • Statistics technical content activation issue with MD process chain in BI7

    Hi,
    Let me give you history of the issue first.
    I had to activate the Statistics in BI7. I followed SAP Note 965386 and activated the Technical Content.
    I faced activation issues in the MD process chain, Content Master Data - 0TCT_MD_C_FULL_P01. It didn't get activated. I followed SAP Note 1065919, which asked me to remove the processes below from RSPC and then activate it:
    Load data - Process Variant Attribute and
    Load data - Process Variant Text.
    I did the same, and it got activated after removing them.
    The issue is that I later learned that manually activating the process chain from Content would have activated those InfoPackages as well.
    Now, how should I get those processes back into the chain and activate it? Based on your suggestions, I can let you know what I have already tried to fix it.
    Relying on you for a solution.
    Thanks
    Pavan

    Thank you, Neethika.
    I have this issue solved. I replicated the DataSources, activated those InfoPackages manually, and then added those variants into the process chain manually. So my MD chain now has all the necessary processes.
    Now I need to schedule them and see if the chain runs without any errors. I will keep this thread open until I run it, so I can take your help if I get any errors while running them.
    Thanks
    Pavan

  • Oracle 11G Searching Issue : Statistics Corruption

    Hi,
    I guess there is some issue with Oracle statistics for Text indexing. Sometimes I get an issue with Text index searching: the query throws wrong results. And when I fire this command to delete the statistics:
    execute immediate 'analyze table ' || tablename || ' delete statistics';
    the query returns the correct results.
    I guess this is a problem of "stale statistics". Is there any way to sort out this problem?
    Any help please ?

    Hi,
    I have Oracle Version 11g Release 1 (11.1.0.6).
    Yes, you are absolutely correct. Even I don't understand why Oracle is behaving like this, but I have seen this behaviour many times (approximately once every 2-3 months).
    I know that we can provide hints in the queries, but I can't give hints.
    Any help please?
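    Rather than deleting the statistics, it is usually better to refresh them. A sketch of that approach ('YOUR_TABLE' is a placeholder for the table carrying the Text index):

    ```sql
    -- Re-gather fresh statistics on the table and, via CASCADE,
    -- on its indexes (including the Oracle Text domain index)
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'YOUR_TABLE',   -- placeholder: the table with the Text index
        cascade => TRUE);
    END;
    /
    ```

    If re-gathering reliably restores correct results, that supports the stale-statistics theory, and scheduling this job is less drastic than deleting statistics each time.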
