Statistics in iPad - Using R in iPad

Hi,
Does anyone know if there's a distribution of R for iPad?
Thanks,
Sérgio.

Thanks Dave.
I'd already searched for it in the iTunes Store (the Portuguese one), but all I could find was some apps, none of them with the power of R.
I read in a post (I don't remember where) that it would be difficult to implement R on iOS because R is a programming language and Apple imposes restrictions on its iOS apps. I don't understand why, because Codea is an iOS app. But I'm not well versed in these matters...
I believe that the iPad is already able to handle most of the R packages.
Thanks again

Similar Messages

  • Object statistics - indexed not used (GATHER_STATS_JOB)

    hi all,
    I have a few top SQL statements in my DB... which are very basic to me:
    select ... from tab where user_id = 'xxx';
    user_id is indexed.
    I ran these top SQL statements myself and realized they were doing a full table scan (FTS), even for a result set of only 1 row.
    Therefore I did a simple DBMS_STATS gather,
    and it started using the index on user_id.
    The optimizer cost dropped from about 1000 to just 13 or 14.
    My situation is:
    1st) I have the value 10 for job_queue_processes.
    2nd) I have GATHER_STATS_JOB scheduled and running successfully.
    Q1) Why did I have to run DBMS_STATS manually before it started to use the index again?
    This FTS had been ongoing for about 2 weeks.
    Regards,
    Noob

    GATHER_STATS_JOB, if using the default configuration, runs every night from 10 P.M. to 6 A.M. and all day on weekends. It gathers statistics on objects
    that have no previously gathered statistics, or whose existing statistics are stale; with the default configuration, statistics are considered stale when more than 10% of the table's rows have changed.
    The table you are mentioning seems to have had less than 10% of its rows changed, meaning the conditions for stats gathering with the default configuration
    were not met, and therefore stats were not gathered.
    You can run select * from dba_scheduler_jobs where job_name='GATHER_STATS_JOB' to find additional details about the job.
    The official documentation can be a good starting point for further explanations.
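    For illustration, a minimal sketch of checking staleness and gathering statistics manually (the schema and table names are placeholders):
    -- Flush the in-memory DML monitoring counters so the view below is current
    EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
    -- How many rows have changed since the last stats gather?
    SELECT table_name, inserts, updates, deletes
    FROM   dba_tab_modifications
    WHERE  table_owner = 'APP_OWNER'   -- placeholder schema
    AND    table_name  = 'TAB';        -- placeholder table
    -- Gather table statistics; cascade covers indexes such as the one on user_id
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'APP_OWNER',
        tabname => 'TAB',
        cascade => TRUE);
    END;
    /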

  • How to check/verify running sql in lib cache is using updated statistics of table

    How can I check/verify that a running SQL statement in the library cache is using the updated statistics of the tables in its FROM clause?
    One of my application tables is highly busy, i.e. frequent update/insert/delete.
    We gather table stats every 30 minutes.

    Hello, "try dynamic sampling" = think "outside the box", maybe hit two birds with same stone.
    As a matter of fact, I was just backing up your statement: "30 minutes seems pretty extreme"
    cheers
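    One way to verify this (a sketch; the schema and table names are placeholders): gather with no_invalidate => FALSE so dependent cursors are invalidated and re-parsed with the new statistics, then watch the cursor's load time and invalidation count in V$SQL.
    -- Gather and invalidate dependent cursors immediately
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname       => 'APP_OWNER',
        tabname       => 'BUSY_TAB',
        no_invalidate => FALSE);
    END;
    /
    -- A cursor re-parsed after the gather shows a fresh LAST_LOAD_TIME
    SELECT sql_id, child_number, parse_calls, invalidations, last_load_time
    FROM   v$sql
    WHERE  sql_text LIKE '%BUSY_TAB%';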

  • Making Statistics to Discover the use % of Select/Update/Delete

    Hi,
    I have a small database, and I'd like to gather statistics about the use of Select/Update/Delete. I have heard about two ways:
    1-) Using Audit
    2-) Reading the archived logs through the DBMS_LOGMNR package
    Which way do you suggest?
    If anyone has links or more information on how I can gather these statistics, could you post it here?
    Thank you.
    Fernando.

    2-) Reading the archived logs through the DBMS_LOGMNR package
    Well, if the database is not in archivelog mode, you can only read the online redo logs, therefore missing the historical information.
    The best and simplest solution is to implement auditing.
    Jaffar
    Message was edited by:
    Syed Jaffar
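    A minimal sketch of the auditing option (assumes the AUDIT_TRAIL initialization parameter is set, e.g. to DB, which requires a restart; the schema and table names are placeholders):
    AUDIT SELECT, UPDATE, DELETE ON fernando.orders BY ACCESS;
    -- Count the captured statement types per object
    SELECT action_name, COUNT(*)
    FROM   dba_audit_object
    WHERE  owner = 'FERNANDO' AND obj_name = 'ORDERS'
    GROUP  BY action_name;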

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL?
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>
    Original query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
          (
          SELECT
               *
          FROM
               TABLE (
                    SELECT
                         CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                         OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                    FROM
                         dual
               )
          )     tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Indexes are present on column FILE_ID in both tables (TG_FILE, TG_FILE_DATA):
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297         61437       852297      852297
    Ideally the view should have used a NESTED LOOP join, to use the indexes, since the number of rows coming from the object collection is only 2.
    But the optimizer takes the default estimate of 8168 rows, leading to a HASH join between the tables and hence FULL TABLE access.
    So my question is: is there any way by which I can set the statistics while using collections in SQL?
    I can use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        (
        SELECT
             *
        FROM
             TABLE (
                  SELECT
                       CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                       OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                  FROM
                       dual
             )
        )   tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I had found that we can use the CARDINALITY hint to set statistics for a TABLE function.
    But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it in my first post.
    http://www.oracle-developer.net/display.php?id=427
    Going through that article, it mentions three hints plus one further approach to set the statistics:
    1) CARDINALITY (Undocumented)
    2) OPT_ESTIMATE ( Undocumented )
    3) DYNAMIC_SAMPLING ( Documented )
    4) Extensible Optimiser
    I tried it out with the different hints and it is working as expected,
    i.e. cardinality and opt_estimate take the value I set,
    but the dynamic_sampling hint provides the most correct estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */ *
    FROM
         TABLE (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
    FROM
         TABLE (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */ *
    FROM
         TABLE (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, "Extensible Optimizer", and will put my findings here.
    I hope Oracle, in future releases, improves statistics gathering for collections that can be used in DML, rather than just using the default derived from the block size.
    By the way, are you aware why it uses the default based on the block size? Is it because that is the smallest granular unit which Oracle provides?
    Regards,
    B
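    For reference, a minimal sketch of the "Extensible Optimizer" option mentioned above, applied to a hypothetical no-argument pipelined function get_attachments() returning TABLE_ESC_ATTACH (the function and type names are placeholders; the ODCI interface itself is documented). As far as I know, the 8168 default is the optimizer's built-in guess for collection cardinality with an 8K block size, which is why it tracks the block size.
    -- A statistics type that tells the CBO how many rows the function returns
    CREATE OR REPLACE TYPE attach_stats_ot AS OBJECT (
      dummy NUMBER,
      STATIC FUNCTION ODCIGetInterfaces(p_interfaces OUT SYS.ODCIObjectList)
        RETURN NUMBER,
      STATIC FUNCTION ODCIStatsTableFunction(p_function IN  SYS.ODCIFuncInfo,
                                             p_stats    OUT SYS.ODCITabFuncStats,
                                             p_args     IN  SYS.ODCIArgDescList)
        RETURN NUMBER
    );
    /
    CREATE OR REPLACE TYPE BODY attach_stats_ot AS
      STATIC FUNCTION ODCIGetInterfaces(p_interfaces OUT SYS.ODCIObjectList)
        RETURN NUMBER IS
      BEGIN
        p_interfaces := SYS.ODCIObjectList(SYS.ODCIObject('SYS', 'ODCISTATS2'));
        RETURN ODCIConst.success;
      END;
      STATIC FUNCTION ODCIStatsTableFunction(p_function IN  SYS.ODCIFuncInfo,
                                             p_stats    OUT SYS.ODCITabFuncStats,
                                             p_args     IN  SYS.ODCIArgDescList)
        RETURN NUMBER IS
      BEGIN
        p_stats := SYS.ODCITabFuncStats(2);  -- cardinality hard-coded for the sketch
        RETURN ODCIConst.success;
      END;
    END;
    /
    -- Associate the statistics type with the function
    ASSOCIATE STATISTICS WITH FUNCTIONS get_attachments USING attach_stats_ot;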

  • Problem using CTXXPATH index

    Hi all,
    I'm using Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 on Windows.
    I created this table:
    create table PERSISTENT_COMPOSITION
    (
      COMPOSITION_ID NUMBER(19) not null,
      XML_CONTENT    SYS.XMLTYPE not null
    )
    and filled it with more or less 1,000,000 records (that is, 1,000,000 XML documents loaded into XML_CONTENT).
    Then, first of all, I tested it with a simple query like the following:
    SELECT *
      FROM PERSISTENT_COMPOSITION t
    WHERE existsNode(t.xml_content, '/composition/archetype_details/archetype_id[value="openEHR-EHR-COMPOSITION.composition_test.v1"]') = 1;
    I obtained the expected result: 50,000 records found.
    Now, in order to improve query performance, I created a CTXXPATH index as follows:
    CREATE INDEX IDX#COMP_CTXXPATH ON PERSISTENT_COMPOSITION(XML_CONTENT) INDEXTYPE IS CTXSYS.CTXXPATH;
    Then I tested the new performance using exactly the same query shown above... and here comes the problem: the query returns NO RESULT! No record was found! I looked at the query execution plan and it uses the created index IDX#COMP_CTXXPATH... but no record could be found...
    I thought it could be a matter of namespaces: in fact, the loaded XML documents have an xmlns set, so I changed the query as follows:
    SELECT *
    FROM persistent_composition t
    WHERE existsNode(t.xml_content,
                      '/composition/archetype_details/archetype_id[value="openEHR-EHR-COMPOSITION.composition_test.v1"]',
                      'xmlns="http://this.is.an.xmlns.url.org/v1"') = 1
    and surprise: I obtained my 50,000 results just like before BUT, looking at the query execution plan, the IDX#COMP_CTXXPATH index HASN'T BEEN USED!!!
    I really don't understand why I get no results when using IDX#COMP_CTXXPATH... can someone help me?
    Thank you very much
    P.S.: I tried using ANALYZE (both on the index and on the table), CTX_DDL.sync_index and CTX_DDL.optimize_index, but got no result.
    Edited by: user11295548 on 29-Jun-2009 5.47

    Besides following Mark's advice, and I could be mistaken regarding this in combination with domain indexes, you should NOT use ANALYZE anymore in an Oracle 10 environment. Instead use DBMS_STATS. It's more flexible.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_4005.htm#SQLRF01105
    Note:
    Do not use the COMPUTE and ESTIMATE clauses of ANALYZE to collect optimizer statistics.
    These clauses are supported for backward compatibility.
    Instead, use the DBMS_STATS package, which lets you collect statistics in parallel,
    collect global statistics for partitioned objects, and fine tune your statistics collection
    in other ways. The optimizer, which depends upon statistics, will eventually use only
    statistics that have been collected by DBMS_STATS.
    See PL/SQL Packages and Types Reference for more information on the
    DBMS_STATS package. You must use the ANALYZE statement (rather than
    DBMS_STATS) for statistics collection not related to the cost-based optimizer, such as:
    - To use the VALIDATE or LIST CHAINED ROWS clauses
    - To collect information on freelist blocks
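    As a quick way to verify whether the namespace-qualified query picks up the index, a sketch using EXPLAIN PLAN (the query is the one from the original post):
    EXPLAIN PLAN FOR
    SELECT *
    FROM   persistent_composition t
    WHERE  existsNode(t.xml_content,
                      '/composition/archetype_details/archetype_id[value="openEHR-EHR-COMPOSITION.composition_test.v1"]',
                      'xmlns="http://this.is.an.xmlns.url.org/v1"') = 1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);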

  • How to use the "identify" feature in new 6.3.1 airport utility?

    How do you use the "identify" feature in the new 6.3.1 AirPort Utility,
    so you can find the base stations in larger networks?

    There is a workaround, which is to use Airport Utility 5.6.
    I can confirm that 5.6 will run on 10.8.4 Mountain Lion, it will recognize the new 2013 Airport Extreme Base Station (A1521) running firmware 7.7.1, and it will give you access to view the device's Log & Statistics, DHCP Clients, and Profiles.
    Four caveats:
    1) The easiest way to install it is to download the app itself, not an installer or through the App Store. There is a page here where you can download the app: http://coreyjmahler.com/2013/03/08/airport-utility-5-6-on-os-x-v10-8-mountain-lion/ This way, you still have both versions, AirPort Utility 6.x.x and 5.6.
    2) When you launch 5.6, you'll get a message saying a newer version is available and asking if you want to update. Click Cancel to proceed into the utility.
    3) When you click the Manual Setup button in 5.6, you'll get a warning dialog that "This version of AirPort Utility doesn't support this AirPort wireless device and might improperly configure the device if you continue to use it. Check www.apple.com/support/airport for the latest version of AirPort Utility." You can click Continue to get into the utility without issue.
    4) You should probably only use 5.6 to view the additional status details. I have not tried to modify and save any AEBS settings using 5.6. There are other discussions here in the forums indicating that attempting to save settings via 5.6 that are no longer available in 6.3.1 will not actually save the settings to the AEBS even if both utilities indicate that the settings are changed. See https://discussions.apple.com/message/22677993#22677993
    So, even though you can't use it to modify settings no longer available in Airport Utility 6.3.1, using Airport Utility 5.6 to view DHCP clients, Logs and Statistics is very useful for troubleshooting network issues.
    PS - There are two ways to get to the DHCP Clients list, neither of which is obvious. The first is to go to the Airport pane -> Summary tab and click on the "Wireless Clients:" label in the Summary display. All of the labels from "Wireless Mode:" down on the Summary display operate as links to view/edit the corresponding info/settings, which is also not obvious at first glance. Also not obvious: clicking on "Wireless Clients:" actually brings up a new pane with three tabs: Logs, Wireless Clients, and DHCP Clients. You can also get to the same pane by going to the Advanced pane and clicking on the Logs and Statistics button.
    I hope this is helpful information. Took me a while to find out how to do this.

  • New table without statistics returns invalid number of rows

    Hi,
    I've been searching for a while now for an explanation of the following "problem".
    We have an Oracle 11.1.0.7 database on AIX 5.3.
    In this database we have two tables, called KRT_PRODUCTS_INFO and KRT_STRUCTURES_INFO (the table names don't really matter).
    The scenario is as follows:
    If we recreate these tables like:
    CREATE TABLE KRT_PRODUCT_INFO_BUP AS SELECT * FROM KRT_PRODUCT_INFO;
    DROP TABLE KRT_PRODUCT_INFO CASCADE CONSTRAINTS;
    CREATE TABLE KRT_PRODUCT_INFO (...) TABLESPACE PIM_DATA NOLOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING;
    CREATE INDEX KRT_PRODUCT_INFO_X1 ON KRT_PRODUCT_INFO (PRODUCT_NUMBER) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_PRODUCT_INFO_X2 ON KRT_PRODUCT_INFO (PIM_ARTICLEREVISIONID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    INSERT INTO KRT_PRODUCT_INFO (SELECT * FROM KRT_PRODUCT_INFO_BUP);
    COMMIT;
    CREATE TABLE KRT_STRUCTURE_INFO_BUP AS SELECT * FROM KRT_STRUCTURE_INFO;
    DROP TABLE KRT_STRUCTURE_INFO CASCADE CONSTRAINTS;
    CREATE TABLE KRT_STRUCTURE_INFO (...) TABLESPACE PIM_DATA NOLOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING;
    CREATE INDEX KRT_STRUCTURES_X1 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_REV_ID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_STRUCTURES_X2 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_IDENTIFIER) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_STRUCTURES_X3 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_ID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    INSERT INTO KRT_STRUCTURE_INFO (SELECT * FROM KRT_STRUCTURE_INFO_BUP);
    COMMIT;
    and then run a complex query with these two tables, this query returns only a couple of rows (exactly 24!!!).
    If we however generate statistics on these tables after creation, the correct number of rows is returned, being 1,167,991 rows.
    The statistics are gathered using:
    BEGIN
    SYS.DBMS_STATS.GATHER_TABLE_STATS (
    OwnName => 'PIM_KRG'
    ,TabName => 'KRT_PRODUCT_INFO'
    ,Estimate_Percent => NULL
    ,Method_Opt => 'FOR ALL COLUMNS SIZE REPEAT '
    ,Degree => NULL
    ,Cascade => TRUE
    ,No_Invalidate => FALSE);
    END;
    BEGIN
    SYS.DBMS_STATS.GATHER_TABLE_STATS (
    OwnName => 'PIM_KRG'
    ,TabName => 'KRT_STRUCTURE_INFO'
    ,Estimate_Percent => NULL
    ,Method_Opt => 'FOR ALL COLUMNS SIZE REPEAT '
    ,Degree => NULL
    ,Cascade => TRUE
    ,No_Invalidate => FALSE);
    END;
    /
    I can imagine that the plan for the query is wrong because of missing statistics.
    But I can't imagine that it would actually return an incorrect number of rows.
    I tested this behaviour in Toad and SQL*Plus (I first thought it was Toad), and both behave the same.
    Another fact is that the "problem" is NOT reproducible on our TEST environment, which runs Oracle 11.1.0.7 on Windows 2008.
    Just to be sure, this is the "complex" query used. It was not developed by me, and I think it looks somewhat strange, but that shouldn't matter:
    SELECT sr."Identifier" STRUCTURE_IDENTIFIER
    , ar_i."Identifier" ITEM_NUMBER
    , SUM (REPLACE (NVL (s.HIDE_LE10, 0) + NVL (p.HIDE_LE10, 0), 2, 1))
    hide_le10
    , SUM (REPLACE (NVL (s.HIDE_LE30, 0) + NVL (p.HIDE_LE30, 0), 2, 1))
    hide_le30
    , SUM (REPLACE (NVL (s.HIDE_LE40, 0) + NVL (p.HIDE_LE40, 0), 2, 1))
    hide_le40
    , SUM (REPLACE (NVL (s.HIDE_LE50, 0) + NVL (p.HIDE_LE50, 0), 2, 1))
    hide_le50
    , SUM (REPLACE (NVL (s.HIDE_LE55, 0) + NVL (p.HIDE_LE55, 0), 2, 1))
    hide_le55
    , SUM (REPLACE (NVL (s.HIDE_LE60, 0) + NVL (p.HIDE_LE60, 0), 2, 1))
    hide_le60
    , SUM (REPLACE (NVL (s.HIDE_LE70, 0) + NVL (p.HIDE_LE70, 0), 2, 1))
    hide_le70
    , SUM (REPLACE (NVL (s.HIDE_LE75, 0) + NVL (p.HIDE_LE75, 0), 2, 1))
    hide_le75
    , SUM (REPLACE (NVL (s.HIDE_LE58, 0) + NVL (p.HIDE_LE58, 0), 2, 1))
    hide_le58
    , SUM (REPLACE (NVL (s.HIDE_LE80, 0) + NVL (p.HIDE_LE80, 0), 2, 1))
    hide_le80
    , SUM (REPLACE (NVL (s.HIDE_LE90, 0) + NVL (p.HIDE_LE90, 0), 2, 1))
    hide_le90
    , SUM (REPLACE (NVL (s.HIDE_LE92, 0) + NVL (p.HIDE_LE92, 0), 2, 1))
    hide_le92
    , SUM (REPLACE (NVL (s.HIDE_LE94, 0) + NVL (p.HIDE_LE94, 0), 2, 1))
    hide_le94
    , SUM (REPLACE (NVL (s.HIDE_LE96, 0) + NVL (p.HIDE_LE96, 0), 2, 1))
    hide_le96
    , COUNT (*) cnt
    FROM KRAMP_HPM_MAIN."StructureRevision" sr
    , KRAMP_HPM_MAIN."StructureGroupRevision" sgr
    , KRAMP_HPM_MASTER."ArticleStructureMap" asm
    , KRAMP_HPM_MASTER."ArticleRevision" ar_p
    , KRAMP_HPM_MASTER."ArticleDetail" ad_p
    , KRAMP_HPM_MASTER."ArticleRevision" ar_i
    , KRAMP_HPM_MASTER."ArticleDetail" ad_i
    , KRAMP_HPM_MASTER."ArticleReference" ar
    , KRT_STRUCTURE_INFO s
    , KRT_PRODUCT_INFO p
    WHERE sr."StructureID" = sgr."StructureID"
    AND sgr."StructureGroupID" = asm."StructureGroupID"
    AND ar_p."ID" = asm."ArticleRevisionID"
    AND ar_p."ID" = ad_p."ArticleRevisionID"
    AND ad_p."Res_Text100_02" = 'PRODUCT'
    AND ar_i."ID" = ad_i."ArticleRevisionID"
    AND ad_i."Res_Text100_02" = 'ARTICLE'
    AND ar."ArticleRevisionID" = ar_p."ID"
    AND ar."ReferencedSupplierAID" = ar_i."Identifier"
    AND s.STRUCTURE_GRP_REV_ID = sgr."ID"
    AND p.PIM_ARTICLEREVISIONID = ar_p."ID"
    GROUP BY sr."Identifier", ar_i."Identifier";
    Any ideas are welcome.
    Thanks
    FJFranken

    Hemant K Chitale wrote:
    These two tables are in the PIM_KRG schema while the other tables in the query are distributed across two other schemas "KRAMP_HPM_MAIN" and "KRAMP_HPM_MASTER" ?
    Do you happen to have the same table names occurring in multiple schemas - the query is then referencing the data in the wrong schema ?
    Hemant K Chitale
    Hi,
    This is not the case. The KRAMP_HPM schemas are application-dedicated schemas.
    And this also does not explain why the results are correct after generating statistics.
    Anyway thanks for the tip.
    FJFranken

  • Gather system statistics

    Hi Friends,
    I want to gather system statistics in my Oracle 9.2.0.7 (Windows) environment...
    (1) Created one statistics table:
    execute DBMS_STATS.CREATE_STAT_TABLE ('SYS','MY_STATS');
    (2) Gathering SYSTEM statistics script during office hours (8 hours, 8 AM to 4 PM):
    begin
    dbms_stats.gather_system_stats(
    gathering_mode => 'interval',
    interval => 480,
    stattab => 'MY_STATS',
    statid => 'OLTP');
    END;
    /
    REM
    REM
    REM END of Script
    REM===================================================
    (3) Import the Collected System Statistics
    Import the statistics daily around 8 AM, because that is when users start entering transactions...
    variable jobno number;
    begin
    dbms_job.submit(:jobno,'dbms_stats.import_system_stats(''MY_STATS'',''OLTP'');',
    SYSDATE,'SYSDATE+1');
    COMMIT;
    END;
    ========================================================
    I will also collect OLAP statistics during the night time...
    Gathering of system statistics ends after 4 PM, so it captures system statistics over the 8-hour interval, and I am going to import them the next day at 8 AM.
    Is this the correct method? Please shed some light on this...

    Hi,
    Oracle recommends gathering system statistics from Oracle 9i onwards... the following lines are from the Oracle 9i R2 documentation:
    Oracle9i Database Performance Tuning Guide and Reference
    Release 2 (9.2)
    Gathering System Statistics
    System statistics enable the optimizer to consider a system's I/O and CPU performance and utilization. For each plan candidate, the optimizer computes estimates for I/O and CPU costs. It is important to know the system characteristics to pick the most efficient plan with optimal proportion between I/O and CPU cost.
    System I/O characteristics depend on many factors and do not stay constant all the time. Using system statistics management routines, database administrators can capture statistics in the interval of time when the system has the most common workload. For example, database applications can process OLTP transactions during the day and run OLAP reports at night. Administrators can gather statistics for both states and activate appropriate OLTP or OLAP statistics when needed. This enables the optimizer to generate relevant costs with respect to available system resource plans.
    When Oracle generates system statistics, it analyzes system activity in a specified period of time. Unlike table, index, or column statistics, Oracle does not invalidate already parsed SQL statements when system statistics get updated. All new SQL statements are parsed using the new statistics. Oracle Corporation highly recommends that you gather system statistics.
    The DBMS_STATS.GATHER_SYSTEM_STATS routine collects system statistics in a user-defined timeframe. You can also set system statistics values explicitly using DBMS_STATS.SET_SYSTEM_STATS. Use DBMS_STATS.GET_SYSTEM_STATS to verify system statistics.
    So for better performance, as per the document above, we need system statistics.
    If they are not needed, why not? Can you please describe this?
    Message was edited by:
    bsubbu
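    A sketch of the alternative start/stop capture mode, plus a quick way to inspect the currently active system statistics (when to start and stop is up to you):
    -- Start capturing at the beginning of the representative workload window
    execute dbms_stats.gather_system_stats('start');
    -- ... let the OLTP workload run ...
    -- Stop capturing; the gathered values become the active system statistics
    execute dbms_stats.gather_system_stats('stop');
    -- Inspect the active system statistics
    select pname, pval1 from sys.aux_stats$ where sname = 'SYSSTATS_MAIN';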

  • Doubt Regarding Statistics Collection in 10g

    Hello,
    I am a junior DBA and I have a doubt regarding statistics calculation in 10g. As we know, if we set
    the initialization parameter STATISTICS_LEVEL=TYPICAL, then automatic optimizer statistics collection gathers statistics for tables in which more than 10% of the rows have changed since the last calculation. My doubt is: since statistics are already gathered automatically,
    is there any necessity for us to gather statistics manually using
    the DBMS_STATS.GATHER_* procedures for table or system statistics?
    Can you please assist me with the above?
    Regards,
    Vamsi

    Hi,
    Please see here,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41448
    If the tables are changing very frequently, then it is better to gather the stats manually; otherwise the volatile tables would keep coming up in the stats job again and again.
    As for the system stats and data dictionary stats, they are not collected by default, so there is no choice but to gather them manually.
    Aman....
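    A sketch of the manual gathers mentioned above (all documented DBMS_STATS calls; run as a suitably privileged user):
    -- Data dictionary statistics (SYS, SYSTEM, and component schemas)
    execute dbms_stats.gather_dictionary_stats;
    -- Fixed objects (the X$ tables behind V$ views); not collected automatically
    execute dbms_stats.gather_fixed_objects_stats;
    -- System (CPU/I-O) statistics, here in no-workload mode
    execute dbms_stats.gather_system_stats('noworkload');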

  • Oracle 9 table statistics doesn't work

    Hi everybody.
    I have a query on 3 big tables (10 million rows each).
    I always gather the statistics of these 3 tables with
    ANALYZE table TABLE1 estimate statistics sample 1 percent;
    but if I run the select statement without hints, it takes 20 minutes.
    If I put in the FULL hints
    select /*+ FULL(TABLE1) FULL(TABLE2) FULL(TABLE3) */
    it takes 2 minutes.
    Why can't Oracle understand that it is better not to use the indexes for this query, without me having to tell it to go full?
    Thanks in advance!

    user13012184 wrote:
    Why can't Oracle understand that it is better not to use indexes to perform this query, but I have to tell it to go full?
    Probably because you are using the wrong command to gather statistics.
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_46a.htm#SQLRF01105
    Oracle Corporation strongly recommends that you use the DBMS_STATS package rather than ANALYZE to collect optimizer statistics. That package lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine tune your statistics collection in other ways. Further, the cost-based optimizer, which depends upon statistics, will eventually use only statistics that have been collected by DBMS_STATS. See Oracle9i Supplied PL/SQL Packages and Types Reference for more information on this package.
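    For example, a sketch of the DBMS_STATS equivalent of the ANALYZE above (the schema name is a placeholder; AUTO_SAMPLE_SIZE generally gives better estimates than a fixed 1 percent sample):
    begin
      dbms_stats.gather_table_stats(
        ownname          => 'APP_OWNER',                 -- placeholder schema
        tabname          => 'TABLE1',
        estimate_percent => dbms_stats.auto_sample_size,
        method_opt       => 'for all columns size auto',
        cascade          => true);                       -- gather index stats too
    end;
    /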

  • KM Statistics Report

    Hi -
    Under KM /Admin Guide / Content Management / Reports, I see a Statistics Report described:
    Use
    You can use this report to evaluate and search the content of KM repositories using different criteria. The report searches all repositories, regardless of whether indexes exist.
    You can use this report to answer the following questions:
    ·        How many files larger than 5 MB are located in the documents repository?
    ·        How many ZIP files are located in the documents repository?
    ·        How much memory do the files that have been created beneath /documents/Public Documents in the last 14 days need?
    My question:  Where can I find this report?  I see it under Content Administration / KM Content, but where would I actually run it so that I can specify the settings I want to use when I run the report, e.g., "Maximum number of results"?
    Thank you!
    Message was edited by: Jo De Hart

    Jo,
    I'm glad I could help you! Would you be so kind as to also say thanks the "SDN" way? For this, just mark the thread as a question and reward some points! See this link for some more information regarding the "Points Reward System": Spread the Love!
    Thanks,
    Robert

  • PeopleSoft Database Update Statistics

    I am trying to find out the best practices for implementing update statistics on a PeopleSoft database. Any help or documentation regarding this would be a great help.
    I am looking for:
    the best practice for update statistics on Oracle - intervals, tables, and whether we need to include any custom script to generate and include all big tables.

    I was the one who designed the pscbo_stats package to leverage the statistics collection techniques used in EBS and Siebel. (Mr. Sierra was the wizard who coded it.) I just noticed this thread and wanted to comment.
    If you are using the pscbo_stats, the gather_schema_stats() procedure will by default gather only stale statistics, so it can be run regularly. I suggest weekly. You can follow the contents of the pscbo_log table for historical data for all (non-dynamic) stats gathering activities.
    If you find there is a table that is constantly stale, forcing you to think that the schema statistics should be gathered more often, you may want to have two schedules: one weekly for the schema, and one more frequent (daily?) to capture the more volatile objects, excepting, of course, those that are already dynamically sampled.
    There may be objects that are volatile enough that they should be added to the Stage Table Exception table (see the stage_table_ins() procedure). We added that feature to configure dynamic sampling to work on volatile tables - tables that are not working storage for COBOL and App Engine but are so often stale that they should be dynamically sampled.
    I find that there are a few transactional tables that work well when dynamically sampled. For example, PSTREESELECT* tables can be problematic if they are dynamically rebuilt by an nVision report book - the end points can get stale quickly. They may be good candidates for dynamic sampling.
    Lastly, I am very interested to know user experience with the pscbo_stats package. Please post your comments to the communities.oracle.com forum for "Install/Upgrade PSFT".
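    A sketch of scheduling the weekly gather (pscbo_stats.gather_schema_stats() is named above; its parameters are not shown there, so the call assumes the no-argument default):
    begin
      dbms_scheduler.create_job(
        job_name        => 'PSCBO_WEEKLY_STATS',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin pscbo_stats.gather_schema_stats; end;',
        repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3',
        enabled         => true);
    end;
    /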

  • BI Statistics: Cube 0TCT_C01: 0TCTQUCOUNT and 0TCTWTCOUNT

    I am no BW specialist, so I am sorry if this question seems somewhat strange.
    The InfoCube 0TCT_C01 contains among others these key figures:
    0TCTQUCOUNT: This key figure contains the number of BI application objects (query, query view, and so on).
    0TCTWTCOUNT: This key figure contains the counter for the BI applications (Web template, workbook).
    Whatever that means; the SAP documentation is extremely cryptic and terse on this part.
    Anyway, when I try to correlate 0TCT_C01 contents with the data in RSDDSTAT_OLAP, I get these best matches:
    0TCTQUCOUNT is equivalent to event 000002500  Generation of a Cache Entry
    0TCTWTCOUNT is equivalent to event 000019912  Load 3.x Web Reporting Template
    Are my findings correct? And are these correlations documented somewhere by SAP? There is some mapping available between SAP BW 3.x and SAP NW 7.0 statistics, but no useful information about 0TCTQUCOUNT and 0TCTWTCOUNT:
    [Migration Details|http://help.sap.com/saphelp_nw70ehp1core/helpdata/en/e8/89c6c8ce594faba99a917fe2e3db90/content.htm]
    Edited by: Mark Foerster on Jun 22, 2011 2:21 PM

    Hi Cornelious,
    Did you check the SM21 system logs as well? Maybe there is a clue in there.
    regards
    Kulmohan

  • BW Statistics setting.

    Dear All,
    I have some doubts on the BW Statistics setting.
    In the selection for BW Statistics settings (AWB -> Tools -> BW Statistics for InfoProvider), all objects including master data objects/ODS/cubes are selected.
    My doubt is: does BW Statistics work for master data objects/ODS, or does it only work for cubes?
    Do we need to deselect all check boxes in the setting for master data objects/ODS and keep them checked only for cubes?
    Thanks All in advance.

    Hi,
         With the new architecture for BI reporting, collection of statistics for query runtime analysis was enhanced or changed. The parallelization in the data manager area (during data read) that is being used more frequently has led to splitting the previous "OLAP" statistics data into "data manager" data (such as database access times, RFC times) and front-end and OLAP times. The statistics data is collected in separate tables, but it can be combined using the InfoProvider for the technical content.
        The information as to whether statistic data is collected for an object no longer depends on the InfoProvider. Instead it depends on those objects for which the data is collected, which means on a query, a workbook or a Web template. The associated settings are maintained in the RSDDSTAT transaction.
    Effects on Existing Data, Due to the changes in the OLAP and front-end architecture, the statistic data collected up to now can only partially be compared with the new data.
    Since the structure of the new tables differs greatly from that of the table RSDDSTAT, InfoProviders that are based on previous data (table RSDDSTAT) can no longer be supplied with data.
    Effects on Customizing
    The Collect Statistics setting is obsolete. Instead you have to determine whether and at which granularity you wish to display the statistics data for the individual objects (query, workbook, Web template). In the RSDDSTAT transaction, you can turn the statistics on and off for all queries for an InfoProvider. The maintenance of the settings can be reached (as before) from the Data Warehousing Workbench using Tools -> BW Statistics.
    You can use this BRCONNECT function to update the statistics on the Oracle database for the cost-based optimizer.
    By running update statistics regularly, you make sure that the database statistics are up-to-date, so improving database performance. The Oracle cost-based optimizer (CBO) uses the statistics to optimize access paths when retrieving data for queries. If the statistics are out-of-date, the CBO might generate inappropriate access paths (such as using the wrong index), resulting in poor performance.
    From Release 4.0, the CBO is a standard part of the SAP System. If statistics are available for a table, the database system uses the cost-based optimizer. Otherwise, it uses the rule-based optimizer.
            Partitioned tables, except where partitioned tables are explicitly excluded by setting the active flag in the DBSTATC table to I. For more information, see SAP
            InfoCube tables for the SAP Business Information Warehouse (SAP BW)
    You can update statistics using one of the following methods:
            DBA Planning Calendar in the Computing Center Management System (CCMS)
    For more information, see Update Statistics for the Cost-Based Optimizer in CCMS (Oracle). The DBA Planning Calendar uses the BRCONNECT commands.
    We recommend you to use this approach because you can easily schedule update statistics to run automatically at specified intervals (for example, weekly).
    To use the CBO, make sure that the parameter OPTIMIZER_MODE in the Oracle initialization profile init.ora is set to CHOOSE.
    BRCONNECT performs update statistics using a two-phase approach.
           1.      Checks each table to see if the statistics are out-of-date
           2.      If required, updates the statistics on the table immediately after the check
    For more information about how update statistics works, see Internal Rules for Update Statistics.
    You can influence how update statistics works by using the -force options. For more information, see -f stats.
    Unless you have special requirements, we recommend you to perform the standard update statistics, using one of the following tools to schedule it on a regular basis (for example, daily or weekly):
             DBA Planning Calendar, as described above in "Integration."
             A tool such as cron (UNIX) or at (Windows NT) to execute the following standard call:
    brconnect -u / -c -f stats -t all
    This is also adequate after an upgrade of the database or SAP System. It runs using the OPS$ user without operator intervention.
            Update statistics only for tables and indexes with missing statistics
    brconnect -u / -c -f stats -t missing
            Check and update statistics for all tables defined in the DBSTATC table
    brconnect -u / -c -f stats -t dbstatc_tab
    For examples of how you can override the internal rules for update statistics, see -force with Update Statistics.
    The InfoCube tables used in SAP Business Information Warehouse (SAP BW) and Advanced Planner and Optimizer (APO) need to be processed in a special way when the statistics are being updated. Usually, statistics should be created using histograms. Statistics for the InfoCube tables can be updated together with other tables in a run; in this case, the statistics for the InfoCube tables are always created with histograms. You can specify which tables are to be handled as InfoCube tables using the init.sap parameters:
            stats_table       
            stats_exclude     
            stats_dbms_stats
    The function of this keyword is to ensure that only InfoCube tables are processed in accordance with the selected parameter settings.
    Statistics are only checked for InfoCube tables and updated, if required
            brconnect -u / -c -f stats -t all -e info_cubes
    Statistics are checked for all tables besides InfoCube tables and updated, if necessary.
            stats_dbms_stats = INFO_CUBES:R:4
    brconnect -u / -c -f stats -t all
    Statistics are checked for all tables and updated, if necessary. New statistics for InfoCube tables are created with the DBMS_STATS package using row sampling and an internal parallel degree of 4.
    This is the default. Statistics are checked for all tables and updated, if necessary. If InfoCube tables are present and selected following the update check, statistics are generated for them using histograms.
    You can update statistics on the Oracle database using the Computing Center Management System (CCMS).
    By running update statistics regularly, you make sure that the database statistics are up-to-date, so improving database performance. The Oracle cost-based optimizer (CBO) uses the statistics to optimize access paths when retrieving data for queries. If the statistics are out-of-date, the CBO might generate inappropriate access paths (such as using the wrong index), resulting in poor performance.
    The CBO is a standard part of the SAP system. If statistics are available for a table, the database system uses the cost-based optimizer. Otherwise, it uses the rule-based optimizer.
    You can also run update statistics for your Oracle database using BRCONNECT. Refer to Update Statistics with BRCONNECT. This is the recommended way to update statistics.
    Update statistics after installations and upgrades
    You need to update statistics for all tables in the SAP system after an installation or an upgrade. This is described in the relevant installation or upgrade documentation.
           1.      You use the DBA Planning Calendar in CCMS to schedule regular execution of check statistics and, if necessary, update statistics. For more information.
           2.      If required, you run one-off checks on tables to see if the table’s statistics are out-of-date, and then run an update statistics for the table if required. This is useful, for example, if the data in a table has been significantly updated, but the next scheduled run of update statistics is not for a long time.
    You can check, create, update, or delete statistics for:
    ¡        Single tables
    ¡        Groups of tables
           3.      If required, you configure update statistics by amending the parameters in the control table DBSTATC . This control table contains a list of the database tables for which the default values for update statistics are not suitable. If you change this table, all runs of update statistics – in BRCONNECT, CCMS, or the DBA Planning Calendar – are affected. Configuring update statistics makes sense with large tables, for which the default parameters might not be appropriate.
    Do not add, delete, or change table entries unless you are aware of the consequences.
            Tables from the DBSTATC table with either of the following values:
             ACTIVE field U
             ACTIVE field R or N and USE field A(relevant for the application monitor)
          6.      BRCONNECT writes the results of update statistics to the DBSTATTORA table and also, for tables with the DBSTATC history flag or usage type A, to the DBSTATHORA table.
          7.      For tables with update statistics using methods EI, EX, CI, or CX, BRCONNECT validates the structure of all associated indexes and writes the results to the DBSTATIORA table and also, for tables with the DBSTATC history flag or usage type A, to the DBSTAIHORA table.
          8.      BRCONNECT immediately deletes the statistics that it created in this procedure for tables with the ACTIVE flag set to N or R in the DBSTATC table.
