Poor performance in a simple join query

We have 4 tables:
-- Customer
CREATE TABLE "Customer" (
  "CustomerId" RAW(16) NOT NULL,
  "MnemonicId" NUMBER(20) );
ALTER TABLE "Customer"
  ADD CONSTRAINT "PK_Customer"
  PRIMARY KEY ("CustomerId");
-- Language
CREATE TABLE "Language" (
  "Description" NVARCHAR2(250),
  "LanguageId" NVARCHAR2(12));
CREATE UNIQUE INDEX "PK_Language" ON "Language"
("LanguageId");
ALTER TABLE "Language" ADD (
  CONSTRAINT "PK_Language"
  PRIMARY KEY ("LanguageId"));
-- CustomerDescription
CREATE TABLE "CustomerDescription" (
  "Description" NVARCHAR2(250),
  "CustomerId" RAW(16) NOT NULL,
  "LanguageId" NVARCHAR2(12) NOT NULL);
ALTER TABLE "CustomerDescription"
  ADD CONSTRAINT "PK_CustomerDescription"
  PRIMARY KEY ("CustomerId", "LanguageId");
ALTER TABLE "CustomerDescription" ADD (
  CONSTRAINT "FK_CustomerDesc1" FOREIGN KEY ("CustomerId") REFERENCES "Customer" ("CustomerId"),
  CONSTRAINT "FK_CustomerDesc2" FOREIGN KEY ("LanguageId") REFERENCES "Language" ("LanguageId"));
-- JOINTABLE
CREATE TABLE JOINTABLE (ID  RAW(16) NOT NULL );
We have also built a view vwCustomers on the tables Customer, CustomerDescription and Language:
CREATE OR REPLACE FORCE VIEW "vwCustomers" (
  "Description",  "CustomerId",  "LanguageId",  "MnemonicId")
AS
   SELECT "CustomerDescription"."Description",
          "Customer"."CustomerId",
          "Language"."LanguageId",
          "Customer"."MnemonicId"
     FROM "Customer" CROSS JOIN "Language"
          LEFT OUTER JOIN "CustomerDescription"
          ON "Customer"."CustomerId" = "CustomerDescription"."CustomerId"
        AND "CustomerDescription"."LanguageId" = "Language"."LanguageId"The JOINTABLE table has only 1 row ("9C740128F750CF4ABC520DAF131D7E96")
The Language table has 2 rows ("it-IT" and "en-EN").
The Customer and CustomerDescription tables have 350,000 rows each.
We have this simple query:
SELECT ROWNUM pos, VW."CustomerId", VW."LanguageId", VW."MnemonicId"
        FROM "vwCustomers" VW
        WHERE (VW."LanguageId" = 'it-IT')
        AND VW."CustomerId" IN ( SELECT Id FROM JOINTABLE)this query of course select only 1 row.
The time execution of this query is about 1 second, that for us is a really bad performance result.
We have also rewritten the query without the VIEW :
SELECT ROWNUM pos, "Customer"."CustomerId", "Language"."LanguageId", "Customer"."MnemonicId"
    FROM "Customer"
    CROSS JOIN "Language"
    LEFT OUTER JOIN "CustomerDescription"
    ON "Customer"."CustomerId" = "CustomerDescription"."CustomerId"
    AND "CustomerDescription"."LanguageId" = "Language"."LanguageId"
    WHERE ("Language"."LanguageId" = 'it-IT' )
    AND "Customer"."CustomerId" IN ( SELECT Id FROM JOINTABLE)Also for this query the execution time is about 1 second.
BUT if we write the query in a similar way:
SELECT  ROWNUM pos, VW."CustomerId", VW."LanguageId", VW."MnemonicId"
        FROM "vwCustomers" VW
        WHERE (VW."LanguageId" = 'it-IT')
        AND VW."CustomerId" IN (HEXTORAW('9C740128F750CF4ABC520DAF131D7E96'))NOW we obtain as execution time of 0.0005 second!!!
We discovered that the first two queries take about 1 second because their execution plan does a TABLE ACCESS FULL on the Customer table (which has 350,000 rows). The third query is fast because Oracle uses an INDEX UNIQUE SCAN on the Customer table.
Could you please help me obtain good performance with the first query?
thanks in advance
Filip
Edited by: [email protected] on 2-set-2009 12.59

The first execution plan shows at step 3 a TABLE ACCESS FULL on the Customer table, which has a cardinality of 359,793:
SELECT STATEMENT  ALL_ROWS Cost: 2,476  Bytes: 130  Cardinality: 1                                
     9 COUNT                           
          8 NESTED LOOPS OUTER  Cost: 2,476  Bytes: 130  Cardinality: 1                      
               6 HASH JOIN RIGHT SEMI  Cost: 2,475  Bytes: 102  Cardinality: 1                 
                    1 TABLE ACCESS FULL TABLE JOINTABLE Cost: 3  Bytes: 17  Cardinality: 1            
                    5 VIEW SYS. Cost: 2,470  Bytes: 30,582,405  Cardinality: 359,793            
                         4 NESTED LOOPS  Cost: 2,470  Bytes: 17,629,857  Cardinality: 359,793       
                              2 INDEX UNIQUE SCAN INDEX (UNIQUE)PK_Language Cost: 0  Bytes: 11  Cardinality: 1 
                              3 TABLE ACCESS FULL TABLE Customer Cost: 2,470  Bytes: 13,672,134  Cardinality: 359,793 
               7 INDEX UNIQUE SCAN INDEX (UNIQUE) PK_CustomerDescription Cost: 1  Bytes: 28  Cardinality: 1
In the second plan, at step 3, instead of a TABLE ACCESS FULL Oracle executes an efficient TABLE ACCESS BY INDEX ROWID:
SELECT STATEMENT  ALL_ROWS Cost: 3  Bytes: 99  Cardinality: 1                                
     8 COUNT                           
          7 MERGE JOIN OUTER  Cost: 3  Bytes: 99  Cardinality: 1                      
               5 VIEW SYS. Cost: 2  Bytes: 71  Cardinality: 1                 
                    4 NESTED LOOPS  Cost: 2  Bytes: 49  Cardinality: 1            
                         1 INDEX UNIQUE SCAN INDEX (UNIQUE) PK_Language Cost: 0  Bytes: 11  Cardinality: 1       
                         3 TABLE ACCESS BY INDEX ROWID TABLE Customer Cost: 2  Bytes: 38  Cardinality: 1       
                              2 INDEX UNIQUE SCAN INDEX (UNIQUE) PK_Customer Cost: 1  Cardinality: 1 
               6 INDEX UNIQUE SCAN INDEX (UNIQUE) PK_CustomerDescription Cost: 1  Bytes: 28  Cardinality: 1
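One possible rewrite (an untested sketch): assuming JOINTABLE.ID contains no duplicates (it has a single row in the example), the IN subquery can be replaced by a plain join so the optimizer can drive a nested loop from JOINTABLE and reach Customer through PK_Customer. The LEADING/USE_NL hints are optional nudges, only needed if the default plan still prefers the hash join:
SELECT /*+ LEADING(JT) USE_NL(C) */
       ROWNUM pos, C."CustomerId", L."LanguageId", C."MnemonicId"
  FROM JOINTABLE JT
       JOIN "Customer" C ON C."CustomerId" = JT.ID
       CROSS JOIN "Language" L
       LEFT OUTER JOIN "CustomerDescription" CD
         ON CD."CustomerId" = C."CustomerId"
        AND CD."LanguageId" = L."LanguageId"
 WHERE L."LanguageId" = 'it-IT';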

Similar Messages

  • MV Incremental Refresh on join query of remote database tables

    Hi,
    I am trying to create an MV with the incremental (fast) refresh option on a join query over 2 tables in a remote database.
    Created MV logs on 2 tables in the remote database.
    DROP MATERIALIZED VIEW LOG ON emp;
    CREATE MATERIALIZED VIEW LOG ON emp WITH ROWID;
    DROP MATERIALIZED VIEW LOG ON dept;
    CREATE MATERIALIZED VIEW LOG ON dept WITH ROWID;
    Now, trying to create the MV,
    CREATE MATERIALIZED VIEW mv_emp_dept
    BUILD IMMEDIATE
    REFRESH FAST
    START WITH SYSDATE
    NEXT SYSDATE + 1/(24*15)
    WITH PRIMARY KEY
    AS
    SELECT e.ename, e.job, d.dname FROM emp@remote_db e,dept@remote_db d
    WHERE e.deptno=d.deptno
    AND e.sal>800;
    Getting ORA-12052 error.
    Can you please help me.
    Thanks,
    Anjan

    Primary Key is on EMPNO for EMP table and DEPTNO for DEPT table.
    Actually, I have been asked to do a feasibility test to see whether incremental refresh can be performed on an MV built over a join query of 2 remote database tables.
    I've tried all combinations of ROWID and PRIMARY KEY, but I get different errors. From various links I found that it should be possible, but I cannot create a successful test case.
    It will be very much helpful if you can correct my example or tell me the restrictions in this case.
    Thanks,
    Anjan
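    For what it's worth, a fast-refreshable join-only MV generally needs the rowid of every base table in its SELECT list and a ROWID-based refresh rather than WITH PRIMARY KEY. A hedged sketch along those lines (same tables and database link as above; remote tables remain subject to additional fast-refresh restrictions):
    CREATE MATERIALIZED VIEW mv_emp_dept
    BUILD IMMEDIATE
    REFRESH FAST WITH ROWID
    START WITH SYSDATE
    NEXT SYSDATE + 1/(24*15)
    AS
    SELECT e.rowid AS emp_rowid,    -- rowids of all joined tables are required
           d.rowid AS dept_rowid,   -- for fast refresh of a join-only MV
           e.ename, e.job, d.dname
      FROM emp@remote_db e, dept@remote_db d
     WHERE e.deptno = d.deptno
       AND e.sal > 800;
    To see exactly which refresh capability is missing, DBMS_MVIEW.EXPLAIN_MVIEW can be run against the MV (MV_CAPABILITIES_TABLE is created once with ?/rdbms/admin/utlxmv.sql):
    EXEC DBMS_MVIEW.EXPLAIN_MVIEW('MV_EMP_DEPT');
    SELECT capability_name, possible, msgtxt FROM mv_capabilities_table;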

  • Join Query hanging

    hi experts,
    please help with this: we have migrated a database from 10g to 11.2.0.3. Everything went successfully, but one simple join query hangs occasionally in the newly upgraded database. Is there any known issue, or anything that needs to be done after migration for SQL? Before the migration it executed without any issues. The query joins and fetches from two or more tables. If there is any generic reason, please share it; that would help the analysis. Thanks in advance.

    Why wasn't this discovered during testing prior to the upgrade?
    Does the new DB have current statistics?
    How do I ask a question on the forums?
    https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360002
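    On the statistics point, a hedged sketch of re-gathering optimizer statistics after the upgrade and then looking at the actual plan (the schema name and SQL_ID are placeholders):
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'YOUR_SCHEMA',                 -- placeholder: owner of the joined tables
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        cascade          => TRUE);                          -- also gather index statistics
    END;
    /
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));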

  • Join query getting poor performance

    Hi,
    This is my join query for retrieving data. I'm having a problem with it: it is very slow to retrieve data, and even when used in Report Builder it could not build the report.
    The select statement uses three tables.
    Please help and suggest how to tune my query to run faster, or give me some new ideas. Note that I'm using Oracle 8i.
    select a.customer_code customer, c.name name, c.place place, a.product_code product,
           b.quantity ord_qty,
           nvl(b.delivery_district_code, c.district_code) district,
           nvl(b.delivery_town_code, c.town_code) town
    from order_book a, order_book_detail b, customer c
    where a.region_code = b.region_code
    and a.order_book_form_no = b.order_book_form_no
    and a.customer_code = c.customer_code
    and c.division_code = 34
    and a.region_code = 10
    and c.state_code = 1
    and a.order_book_form_date = '18-OCT-2007'
    and nvl(c.classification_code,'N') = 'S'
    order by 1;
    regards
    venki

    "Why is nobody answering me?" Because you gave us nothing to investigate. Please read this thread and post a tkprof output here: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0. For an explain plan in 8i, you could "set autotrace on explain" in SQL*Plus.
    Regards,
    Rob.
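    For reference, a minimal SQL*Plus sketch of what that amounts to on 8i (the trace-file name and location are placeholders):
    SET TIMING ON
    SET AUTOTRACE ON EXPLAIN
    -- run the problem query here; an explicit TO_DATE('18-OCT-2007','DD-MON-YYYY')
    -- is also safer than the implicit date conversion in the original predicate
    ALTER SESSION SET sql_trace = TRUE;
    -- run the query again, then:
    ALTER SESSION SET sql_trace = FALSE;
    -- on the server: tkprof <udump_dir>/<trace_file>.trc out.txt sys=no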

  • Poor performance with Oracle Spatial when spatial query invoked remotely

    Is anyone aware of any problems with Oracle Spatial (10.2.0.4 with patches 6989483 and 7003151 on Red Hat Linux 4) which might explain why a spatial query (SDO_WITHIN_DISTANCE) would perform 20 times worse when it was invoked remotely from another computer (using SQLplus) vs. invoking the very same query from the database server itself (also using SQLplus)?
    Does Oracle Spatial have any known problems with servers which use SAN disk storage? That is the primary difference between a server in which I see this poor performance and another server where the performance is fine.
    Thank you in advance for any thoughts you might share.
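    One quick check is whether the remote and local sessions really get the same plan and work profile; a sketch (the SQL_ID is a placeholder looked up from the first query):
    SELECT sql_id, plan_hash_value, executions,
           elapsed_time/1e6 AS elapsed_secs, buffer_gets, disk_reads
      FROM v$sql
     WHERE sql_text LIKE '%SDO_WITHIN_DISTANCE%'
       AND sql_text NOT LIKE '%v$sql%';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));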

    OK, that's clearer.
    Are you sure it is the SQL inside the procedure that is causing the problem? To check, try extracting the SQL from inside the procedure and run it in SQLPLUS with
    set autotrace on
    set timing on
    SELECT ...
    If the plans and performance are the same then it may be something inside the procedure itself.
    Have you profiled the procedure? Here is an example of how to do it:
    Prompt Firstly, create PL/SQL profiler table
    @$ORACLE_HOME/rdbms/admin/proftab.sql
    Prompt Secondly, use the profiler to gather stats on execution characteristics
    DECLARE
      l_run_num PLS_INTEGER := 1;
      l_max_num PLS_INTEGER := 1;
      v_geom    mdsys.sdo_geometry := mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(0,0,45,45,90,0,135,45,180,0,180,-45,45,-45,0,0));
    BEGIN
      dbms_output.put_line('Start Profiler Result = ' || DBMS_PROFILER.START_PROFILER(run_comment => 'PARALLEL PROFILE'));  -- The comment name can be anything: here it is related to the Parallel procedure I am testing.
      v_geom := Parallel(v_geom,10,0.05,1);  -- Put your procedure call here
      dbms_output.put_line('Stop Profiler Result = ' || DBMS_PROFILER.STOP_PROFILER );
    END;
    SHOW ERRORS
    Prompt Finally, report activity
    COLUMN runid FORMAT 99999
    COLUMN run_comment FORMAT A40
    SELECT runid || ',' || run_date || ',' || run_comment || ',' || run_total_time
      FROM plsql_profiler_runs
      ORDER BY runid;
    COLUMN runid       FORMAT 99999
    COLUMN unit_number FORMAT 99999
    COLUMN unit_type   FORMAT A20
    COLUMN unit_owner  FORMAT A20
    COLUMN text        FORMAT A100
    compute sum label 'Total_Time' of total_time on runid
    break on runid skip 1
    set linesize 200
    SELECT u.runid || ',' ||
           u.unit_name,
           d.line#,
           d.total_occur,
           d.total_time,
           text
    FROM   plsql_profiler_units u
           JOIN plsql_profiler_data d ON u.runid = d.runid
                                         AND
                                         u.unit_number = d.unit_number
           JOIN all_source als ON ( als.owner = 'CODESYS'
                                   AND als.type = u.unit_type
                                   AND als.name = u.unit_name
                                AND als.line = d.line# )
    WHERE  u.runid = (SELECT max(runid) FROM plsql_profiler_runs)
    ORDER BY d.total_time desc;
    Run the profiler in both environments and see if you can see where the slowdown exists.
    regards
    Simon

  • How can I perform this kind of range join query using DPL?

    How can I perform this kind of range join query using DPL?
    SELECT * from t where 1<=t.a<=2 and 3<=t.b<=5
    In this pdf : http://www.oracle.com/technology/products/berkeley-db/pdf/performing%20queries%20in%20oracle%20berkeley%20db%20java%20edition.pdf,
    It shows how to perform "Two equality-conditions query on a single primary database" just like SELECT * FROM tab WHERE col1 = A AND col2 = B using entity join class, but it does not give a solution about the range join query.

    I'm sorry, I think I've misled you. I suggested that you perform two queries and then take the intersection of the results. You could do this, but the solution to your query is much simpler. I'll correct my previous message.
    Your query is very simple to implement. You should perform the first part of query to get a cursor on the index for 'a' for the "1<=t.a<=2" part. Then simply iterate over that cursor, and process the entities where the "3<=t.b<=5" expression is true. You don't need a second index (on 'b') or another cursor.
    This is called "filtering" because you're iterating through entities that you obtain from one index, and selecting some entities for processing and discarding others. The white paper you mentioned has an example of filtering in combination with the use of an index.
    An alternative is to reverse the procedure above: use the index for 'b' to get a cursor for the "3<=t.b<=5" part of the query, then iterate and filter the results based on the "1<=t.a<=2" expression.
    If you're concerned about efficiency, you can choose the index (i.e., choose which of these two alternatives to implement) based on which part of the query you believe will return the smallest number of results. The fewer entities read, the faster the query.
    Contrary to what I said earlier, taking the intersection of two queries that are ANDed doesn't make sense -- filtering is the better solution. However, taking the union of two queries does make sense, when the queries are ORed. Sorry for the confusion.
    --mark

  • Poor performance and high number of gets on seemingly simple insert/select

    Versions & config:
    Database : 10.2.0.4.0
    Application : Oracle E-Business Suite 11.5.10.2
    2 node RAC, IBM AIX 5.3
    Here's the insert/select for which I'm struggling to explain why it takes 6 seconds and why it needs to get > 24,000 blocks:
    INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
      NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
      WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
      WIA.ITEM_TYPE = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          4           0
    Execute      2      3.44       6.36          2      24297        198          36
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.44       6.36          2      24297        202          36
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Also from the tkprof output, the explain plan and waits - virtually zero waits:
    Rows     Execution Plan
          0  INSERT STATEMENT   MODE: ALL_ROWS
          0   TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
          0    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                             12        0.00          0.00
      gc current block 2-way                         14        0.00          0.00
      db file sequential read                         2        0.01          0.01
      row cache lock                                 24        0.00          0.01
      library cache pin                               2        0.00          0.00
      rdbms ipc reply                                 1        0.00          0.00
      gc cr block 2-way                               4        0.00          0.00
      gc current grant busy                           1        0.00          0.00
    ********************************************************************************
    The statement was executed 2 times. I know from slicing up the trc file that:
    exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
    exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
    If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
    If I make the insert into an empty, non-partitioned table, I get :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.01       0.08          0        137         53          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.01       0.08          0        137         53          25
    and same explain plan - using index range scan on WF_Item_Attributes_PK.
    This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.10         10         27        136          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.10         10         27        136        25
    So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
    I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
    further info on the objects concerned:
    query source table :
    WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
    WF_Item_Attributes tbl : non-partitioned, 160 blocks
    insert destination table:
    WF_Item_Attribute_Values:
    range partitioned on Item_Type, and hash sub-partitioned on Item_Key
    both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
    WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
    Bind values:
    exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
    exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
    The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
    thanks and regards
    Ivan
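    For reference, the "select portion only" test with the exe #2 bind values can be reproduced in SQL*Plus roughly like this (a sketch):
    VARIABLE b1 VARCHAR2(30)
    VARIABLE b2 VARCHAR2(30)
    EXEC :b1 := 'OEOL';
    EXEC :b2 := '4253168';
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT :b1, :b2, wia.name, wia.text_default,
           wia.number_default, wia.date_default
      FROM wf_item_attributes wia
     WHERE wia.item_type = :b1;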

    hi Sven,
    Thanks for your input.
    1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
    2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
    3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
    ============= From DBA_Part_Tables : Partition Type / Count =============
    PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
    RANGE   HASH                 77 APPS_TS_TX_DATA
    1 row selected.
    ============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
    Partition Name       TS Name         High Value           High Val Len
    WF_ITEM1             APPS_TS_TX_DATA 'A1'                            4
    WF_ITEM2             APPS_TS_TX_DATA 'AM'                            4
    WF_ITEM3             APPS_TS_TX_DATA 'AP'                            4
    WF_ITEM47            APPS_TS_TX_DATA 'OB'                            4
    WF_ITEM48            APPS_TS_TX_DATA 'OE'                            4
    WF_ITEM49            APPS_TS_TX_DATA 'OF'                            4
    WF_ITEM50            APPS_TS_TX_DATA 'OK'                            4
    WF_ITEM75            APPS_TS_TX_DATA 'WI'                            4
    WF_ITEM76            APPS_TS_TX_DATA 'WS'                            4
    WF_ITEM77            APPS_TS_TX_DATA MAXVALUE                        8
    77 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_TYPE                                    1
    1 row selected.
    PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
    ============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
    Partition Name       SUBPARTITION_NAME              TS Name         High Value           High Val Len
    WF_ITEM49            SYS_SUBP3326                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3328                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3332                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3331                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3330                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3329                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3327                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3325                   APPS_TS_TX_DATA                                 0
    8 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_KEY                                     1
    1 row selected.
    from DBA_Segments - just for partition WF_ITEM49  :
    Segment Name                        TSname       Partition Name       Segment Type     BLOCKS     Mbytes    EXTENTS Next Ext(Mb)
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3332         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3331         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3330         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3329         TblSubPart        16112    125.875       1007         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3328         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3327         TblSubPart        16224     126.75       1014         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3326         TblSubPart        16208    126.625       1013         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3325         TblSubPart        16128        126       1008         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3332         IdxSubPart        59424     464.25       3714         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3331         IdxSubPart        59296     463.25       3706         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3330         IdxSubPart        59520        465       3720         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3329         IdxSubPart        59104     461.75       3694         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3328         IdxSubPart        59456      464.5       3716         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3327         IdxSubPart        60016    468.875       3751         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3326         IdxSubPart        59616     465.75       3726         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3325         IdxSubPart        59376    463.875       3711         .125
    sum                                                                                               4726.5
    [the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
    The Tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
    regards
    Ivan
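    For reference, a hedged sketch of how the row distribution across the WF_ITEM49 subpartitions could be checked (num_rows assumes reasonably current statistics):
    SELECT subpartition_name, num_rows, blocks
      FROM dba_tab_subpartitions
     WHERE table_name = 'WF_ITEM_ATTRIBUTE_VALUES'
       AND partition_name = 'WF_ITEM49'
     ORDER BY subpartition_name;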

  • Inner Join. How to improve the performance of inner join query

    Inner Join. How to improve the performance of inner join query.
    Query is :
    select f1~ablbelnr
             f1~gernr
             f1~equnr
             f1~zwnummer
             f1~adat
             f1~atim
             f1~v_zwstand
             f1~n_zwstand
             f1~aktiv
             f1~adatsoll
             f1~pruefzahl
             f1~ablstat
             f1~pruefpkt
             f1~popcode
             f1~erdat
             f1~istablart
             f2~anlage
             f2~ablesgr
             f2~abrdats
             f2~ableinh
                from eabl as f1
                inner join eablg as f2
                on f1~ablbelnr = f2~ablbelnr
                into corresponding fields of table it_list
                where f1~ablstat in s_mrstat
                %_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
    I want to modify the query, since it's taking a lot of time to load the data.
    Please suggest:
    Please treat this as very urgent.

    Hi Shyamal,
    In your program, you are using "into corresponding fields of".
    Try not to use this addition in your select query.
    Instead, just use "into table it_list".
    As an example:
    Write a normal query using "into corresponding fields of" in a program, then go to SE30 (Runtime Analysis), enter the program name and execute it.
    Now if you click on the Analyze button, you can see the analysis for the query. The one shown with a "red" line tells you that you need to look for alternative methods.
    On the other hand, if you are using "into table itab", it will give you an entirely different analysis.
    So try not to use "into corresponding fields" in your query.
    Regards,
    SP.

  • Poor performance with WebI and BW hierarchy drill-down...

    Hi
    We are currently implementing a large HR solution with BW as backend
    and WebI and Xcelcius as frontend. As part of this we are experiencing
    very poor performance when doing drill-down in WebI on a BW hierarchy.
    In general we are experiencing ok performance during selection of data
    and traditional WebI filtering - however when using the BW hierarchy
    for navigation within WebI, response times are significantly increasing.
    The general solution setup are as follows:
    1) Business Content version of the personnel administration
    infoprovider - 0PA_C01. The Infoprovider contains 30.000 records
    2) Multiprovider to act as semantic Data Mart layer in BW.
    3) Bex Query to act as Data Mart Query and metadata exchange for BOE.
    All key figure restrictions and calculations are done in this Data Mart
    Query.
    4) Traditional BO OLAP universe 1:1 mapped to Bex Data Mart query. No
    calculations etc. are done in the universe.
    5) WebI report with limited objects included in the WebI query.
    As we are aware that performance is a very subjective issue, we have
    created several case scenarios with different dataset sizes, various
    filter criteria and modeling techniques in BW.
    Furthermore we have tried to apply various traditional BW performance
    tuning techniques including aggregates, physical partitioning and pre-
    calculation - all without any luck (pre-calculation doesn't seem to
    work at all as WebI apparently isn't using the BW OLAP cache).
    In general the best result we can get is with a completely stripped WebI report without any variables etc.
    and a total dataset of 1000 records transferred to WebI. Even in this scenario we can't get
    each navigational step (when using drill-down on the Organizational Unit
    hierarchy - 0ORGUNIT) to perform faster than a minimum of 15-20 seconds per
    navigational step.
    That is, each navigational step takes 15-20 seconds
    with only 1000 records in the WebI cache when using drill-down on the org.
    unit hierarchy!
    Running the same Bex query from Bex Analyzer with a full dataset of
    30.000 records at the lowest level of detail gives a response time of 1-2
    seconds per navigational step, which rules out this being a BW
    modeling issue.
    As our productive scenario obviously involves a far larger dataset, as
    well as separate data from CATS and PT infoproviders, we are very
    worried about whether we will ever be able to use hierarchy drill-down from
    WebI.
    The question, then, is whether there are any known performance issues
    related to the use of BW hierarchy drill-down from WebI and, if so, whether
    there are any ways to get around them.
    As an alternative we are currently considering changing our reporting
    strategy by creating several more highly aggregated reports to avoid
    hierarchy navigation altogether. However we still need to support specific
    divisions and their need to navigate the WebI dataset without
    limitations, which makes this issue critical.
    Hope that you are able to help.
    Thanks in advance
    /Frank
    Edited by: Mads Frank on Feb 1, 2010 9:41 PM

    Hi Henry, thank you for your suggestions, although I don't agree that 20 seconds is pretty good for that navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
    Actions
    suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
    tick use structure elements in RSRT: Done it.
    enable query stripping in WebI: Done it.
    upgrade your BW to SP09: Does SP09 include improvements related to this point?
    use more runtime query filters. : Not possible. Very simple query.
    Others:
    RSRT combination H-1-3-3-1 (Expand nodes/Permanent Cache BLOB)
    Uncheck preliminary Hierarchy presentation in the Query; only selected.
    Check "Use query drill" in webi properties.
    Sorry for this mixed message, but while I was answering I tried what you suggested about suppressing unassigned nodes and it works perfectly. This was causing the bottleneck!! Incredible...
    Thanks a lot
    J.Casas

  • Explain Plan for a simple join seems wrong

    I have been investigating some performance issues and I have discovered that a seemingly simple join generates a counter-intuitive execution plan.
    I have two tables, one of which represents users and the other of which represents learning activity transcripts (courses). Each user can have many transcripts. The transcripts each contain the user id of the user. The query provides the primary key of the transcript.
    There are 124,853 users and 148,894 transcripts in the database.
    The plan I expect is to look up the transcript, find the user id, then look up the user. What I get is an index scan of the user table, apparently probing the index of the transcript table for each entry in the user table.
    select U.userLocale
    FROM DRUser U, DRLearningActivityTranscript LATF
    WHERE LATF.userID = U.userID AND LATF.ID=165066;
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=14)
    1 0 NESTED LOOPS (Cost=3 Card=1 Bytes=14)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'DRLEARNINGACTIVITYTRANSCRIPT' (Cost=2 Card=1 Bytes=8)
    3 2 INDEX (UNIQUE SCAN) OF 'DRPK13' (UNIQUE) (Cost=1 Card= 2)
    4 1 TABLE ACCESS (BY INDEX ROWID) OF 'DRUSER' (Cost=1 Card=125649 Bytes=753894)
    5 4 INDEX (UNIQUE SCAN) OF 'DRPK1' (UNIQUE)
    Note that DRPK1 is the index on the userid in the user table and DRPK13 is the index on the ID field of the transcript.
    I can get the plan I want if I express the join as a sub-select, but I don't want to do that throughout the application.
    select U.userLocale
    FROM DRUser U
    WHERE U.userID = (SELECT userid FROM DRLearningActivityTranscript WHERE ID = 165066);
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=6)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'DRUSER' (Cost=2 Card=1 Bytes=6)
    2 1 INDEX (UNIQUE SCAN) OF 'DRPK1' (UNIQUE) (Cost=1 Card=1)
    3 2 TABLE ACCESS (BY INDEX ROWID) OF 'DRLEARNINGACTIVITYTRANSCRIPT' (Cost=2 Card=1 Bytes=8)
    4 3 INDEX (UNIQUE SCAN) OF 'DRPK13' (UNIQUE) (Cost=1 Card=2)
    Am I just interpreting the plan incorrectly or does it really do what I'm afraid it seems to be doing? I've tried using /*+ RULE */ and other optimizer hints, but I can't get the plan I think I want without rewriting the query as a sub-select.
    Thanks,
    Mark H. Zellers

    Yes, you're misinterpreting the plan. It works the way you want it to. At the same level, the steps are performed top to bottom, so that plan is showing:
    1. look up the id 165066 on the index DRPK13
    2. retrieve the row from DRLEARNINGACTIVITYTRANSCRIPT
    3. look up that row's userid on the index DRPK1
    4. retrieve the row from DRUSER
    If Oracle was actually starting with the DRUSER table then it would have to HASH or MERGE join those results with the other table (or worse use the userid index). In fact, you can demonstrate this for yourself by just adding an ORDERED hint to that query (SELECT /*+ ORDERED */ ...) and looking at the plan (as well as the horrible performance you should observe).
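    For completeness, the ORDERED experiment looks like this (a sketch; ORDERED joins the tables in FROM-clause order, so the first form should reproduce the order the poster was afraid of and the second the fast one):
    -- forces the join to start from DRUser (expected to be slow)
    SELECT /*+ ORDERED */ U.userLocale
      FROM DRUser U, DRLearningActivityTranscript LATF
     WHERE LATF.userID = U.userID AND LATF.ID = 165066;
    -- forces it to start from the transcript lookup (expected to be fast)
    SELECT /*+ ORDERED */ U.userLocale
      FROM DRLearningActivityTranscript LATF, DRUser U
     WHERE LATF.userID = U.userID AND LATF.ID = 165066;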

  • Join query problem

    Hi all,
    I'm new to Berkeley DB and apologise if the answer to my question has been covered elsewhere.
    I've been using the 'Getting Started Guide' (BerkeleyDB-Core-Cxx-GSG.pdf) to familiarise myself with the C++ API. The document has been vastly useful, but it has left me stranded on the subject of join queries. I have used the example in the guide (p 57) carefully to construct my join query between two secondary databases, but I get a segmentation fault that doesn't seem to originate from within my code. The GDB backtrace shows:
    (gdb) backtrace
    #0 0x007fbffb in __db_c_count () from /nfs/acari/dta/bdb/lib/libdb_cxx-4.4.so
    #1 0x00807aef in __db_join_cmp () from /nfs/acari/dta/bdb/lib/libdb_cxx-4.4.so
    #2 0x0013c1af in msort_with_tmp () from /lib/libc.so.6
    #3 0x0013c0a7 in msort_with_tmp () from /lib/libc.so.6
    #4 0x0013c360 in qsort () from /lib/libc.so.6
    #5 0x00806de6 in __db_join () from /nfs/acari/dta/bdb/lib/libdb_cxx-4.4.so
    #6 0x00804384 in __db_join_pp () from /nfs/acari/dta/bdb/lib/libdb_cxx-4.4.so
    #7 0x0079070b in Db::join () from /nfs/acari/dta/bdb/lib/libdb_cxx-4.4.so
    #8 0x0804a9fe in show_join ()
    #9 0x0804a165 in main ()
    The code that I have written to perform the join query looks like:
    int show_join(MyDb &itemnameSDB, MyDb &catnameSDB,
         std::string &itemName, std::string &categoryName)
    std::cout << "Have item : " << itemName << " and category : "
         << categoryName << std::endl;
    // Position cursor at item
    int ret;
    Dbc *item_curs;
    Dbt key, data;
    try {
    itemnameSDB.getDb().cursor(NULL, &item_curs, 0);
    char * c_item = (char *)itemName.c_str();
    key.set_data(c_item);
    key.set_size(strlen(c_item) + 1);
    if ((ret = item_curs->get(&key, &data, DB_SET)) != 0)
         std::cout << "Did not find any records matching item ["
              << c_item << "]" << std::endl;
    catch(DbException &e) {        
    itemnameSDB.getDb().err(e.get_errno(), "Error!");
    } catch(std::exception &e) {
    itemnameSDB.getDb().errx("Error! %s", e.what());
    // Position cursor at category
    Dbc *category_curs;
    try {
    catnameSDB.getDb().cursor(NULL, &category_curs, 0);
    char *c_category = (char *)categoryName.c_str();
    key.set_data(c_category);
    key.set_size(strlen(c_category) + 1);
    if ((ret = category_curs->get(&key, &data, DB_SET)) != 0)
         std::cout << "Did not find any records matching category ["
              << c_category << "]" << std::endl;
    catch(DbException &e) {        
    catnameSDB.getDb().err(e.get_errno(), "Error!");
    } catch(std::exception &e) {
    catnameSDB.getDb().errx("Error! %s", e.what());
    // Set up an array of cursors ready for the join
    Dbc *carray[3];
    carray[0] = item_curs;
    carray[1] = category_curs;
    carray[3] = NULL;
    // Perform the join
    Dbc *join_curs;
    try {
    if ((ret = itemnameSDB.getDb().join(carray, &join_curs, 0)) != 0)
         std::cout << "Successful query results should go here." << std::endl;
    catch(DbException &e) {        
    itemnameSDB.getDb().err(e.get_errno(), "Error!");
    } catch(std::exception &e) {
    itemnameSDB.getDb().errx("Error! %s", e.what());
    // Iterate through results using the join cursor
    while ((ret = join_curs->get(&key, &data, 0)) == 0)
    std::cout << "Iterating through cursors" << std::endl;
    // If we exited the loop because we ran out of records,
    // then it has completed successfully.
    if (ret == DB_NOTFOUND)
    item_curs->close();
    category_curs->close();
    join_curs->close();
    return(0);
    The seg fault occurs at the line in the final try/catch block where the Db.join() call is made.
    It seems highly likely that I am making a simple mistake due to inexperience (both with Berkeley DB and C++) and am hoping that the problem glares out at someone with deeper knowledge.
    I'm running this under linux if this makes any difference.
    Many thanks for reading this far,
    Dan

    Hi Keith,
    The following test program isn't pretty, but should produce the seg fault that I'm seeing. Much of the code is copy-and-pasted from the C++ API guide. It will need some input data to run - and presumably create the correct error. Save the following as ./small_inventory.txt (this is also just hacked from the guide):
    Oranges#OranfruiRu6Ghr#0.71#451#fruits#TriCounty Produce
    Spinach#SpinvegeVcqXL6#0.11#708#vegetables#TriCounty Produce
    Banana Split#Banadessfif758#11.07#14#desserts#The Baking Pan
    Thanks for your help,
    Dan
    Code follows:
    #include <db_cxx.h>
    #include <iostream>
    #include <fstream>
    #include <cstdlib>
    class InventoryData
    public:
    inline void setPrice(double price) {price_ = price;}
    inline void setQuantity(long quantity) {quantity_ = quantity;}
    inline void setCategory(std::string &category) {category_ = category;}
    inline void setName(std::string &name) {name_ = name;}
    inline void setVendor(std::string &vendor) {vendor_ = vendor;}
    inline void setSKU(std::string &sku) {sku_ = sku;}
    inline double& getPrice() {return(price_);}
    inline long& getQuantity() {return(quantity_);}
    inline std::string& getCategory() {return(category_);}
    inline std::string& getName() {return(name_);}
    inline std::string& getVendor() {return(vendor_);}
    inline std::string& getSKU() {return(sku_);}
    /* Initialize our data members */
    void clear()
    price_ = 0.0;
    quantity_ = 0;
    category_ = "";
    name_ = "";
    vendor_ = "";
    sku_ = "";
    // Default constructor
    InventoryData() { clear(); }
    // Constructor from a void *
    // For use with the data returned from a bdb get
    InventoryData(void *buffer)
    char *buf = (char *)buffer;
    price_ = *((double *)buf);
    bufLen_ = sizeof(double);
    quantity_ = *((long *)(buf + bufLen_));
    bufLen_ += sizeof(long);
    name_ = buf + bufLen_;
    bufLen_ += name_.size() + 1;
    sku_ = buf + bufLen_;
    bufLen_ += sku_.size() + 1;
    category_ = buf + bufLen_;
    bufLen_ += category_.size() + 1;
    vendor_ = buf + bufLen_;
    bufLen_ += vendor_.size() + 1;
    * Marshalls this classes data members into a single
    * contiguous memory location for the purpose of storing
    * the data in a database.
    char *
    getBuffer()
    // Zero out the buffer
    memset(databuf_, 0, 500);
    * Now pack the data into a single contiguous memory location for
    * storage.
    bufLen_ = 0;
    int dataLen = 0;
         dataLen = sizeof(double);
         memcpy(databuf_, &price_, dataLen);
         bufLen_ += dataLen;
         dataLen = sizeof(long);
         memcpy(databuf_ + bufLen_, &quantity_, dataLen);
         bufLen_ += dataLen;
    packString(databuf_, name_);
    packString(databuf_, sku_);
    packString(databuf_, category_);
    packString(databuf_, vendor_);
    return (databuf_);
    * Returns the size of the buffer. Used for storing
    * the buffer in a database.
    inline size_t getBufferSize() { return (bufLen_); }
    /* Utility function used to show the contents of this class */
    void
    show() {
    std::cout << "\nName: " << name_ << std::endl;
    std::cout << " SKU: " << sku_ << std::endl;
    std::cout << " Price: " << price_ << std::endl;
    std::cout << " Quantity: " << quantity_ << std::endl;
    std::cout << " Category: " << category_ << std::endl;
    std::cout << " Vendor: " << vendor_ << std::endl;
    private:
    * Utility function that appends a char * to the end of
    * the buffer.
    void
    packString(char *buffer, std::string &theString)
    size_t string_size = theString.size() + 1;
    memcpy(buffer+bufLen_, theString.c_str(), string_size);
    bufLen_ += string_size;
    /* Data members */
    std::string category_, name_, vendor_, sku_;
    double price_;
    long quantity_;
    size_t bufLen_;
    char databuf_[500];
    //Forward declarations
    void loadDB(Db &, std::string &);
    int get_item_name(Db *dbp, const Dbt *pkey, const Dbt *pdata, Dbt *skey);
    int get_category_name(Db *dbp, const Dbt *pkey, const Dbt *pdata, Dbt *skey);
    int show_join(Db &item_index, Db &category_index,
         std::string &itemName, std::string &categoryName);
    int main (){
    Db primary_database(NULL, 0); // Primary
    Db item_index(NULL, 0); // Secondary
    Db category_index(NULL, 0); // Secondary
    // Open the primary database
    primary_database.open(NULL,
                   "inventorydb.db",
                   NULL,
                   DB_BTREE,
                   DB_CREATE,
                   0);
    /* // Setup the secondary to use sorted duplicates.
    // This is often desireable for secondary databases.
    item_index.set_flags(DB_DUPSORT);
    category_index.set_flags(DB_DUPSORT);
    // Open secondary databases
    item_index.open(NULL,
              "itemname.sdb",
              NULL,
              DB_BTREE,
              DB_CREATE,
              0);
    category_index.open(NULL,
              "categoryname.sdb",
              NULL,
              DB_BTREE,
              DB_CREATE,
              0);
    // Associate the primary and the secondary dbs
    primary_database.associate(NULL,
                   &item_index,
                   get_item_name,
                   0);
    primary_database.associate(NULL,
                   &category_index,
                   get_category_name,
                   0);
    // Load database
    std::string input_file = "./small_inventory.txt";
    try {
    loadDB(primary_database, input_file);
    } catch(DbException &e) {
    std::cerr << "Error loading databases. " << std::endl;
    std::cerr << e.what() << std::endl;
    return (e.get_errno());
    } catch(std::exception &e) {
    std::cerr << "Error loading databases. " << std::endl;
    std::cerr << e.what() << std::endl;
    return (-1);
    // Perform join query
    std::string itemName = "Spinach";
    std::string categoryName = "vegetables";
    show_join(item_index, category_index, itemName, categoryName);
    // Close dbs
    item_index.close(0);
    category_index.close(0);
    primary_database.close(0);
    return(0);
    } // End main
    // Used to locate the first pound sign (a field delimiter)
    // in the input string.
    size_t
    getNextPound(std::string &theString, std::string &substring)
    size_t pos = theString.find("#");
    substring.assign(theString, 0, pos);
    theString.assign(theString, pos + 1, theString.size());
    return (pos);
    // Loads the contents of the inventory.txt file into a database
    void
    loadDB(Db &inventoryDB, std::string &inventoryFile)
    InventoryData inventoryData;
    std::string substring;
    size_t nextPound;
    std::ifstream inFile(inventoryFile.c_str(), std::ios::in);
    if ( !inFile )
    std::cerr << "Could not open file '" << inventoryFile
    << "'. Giving up." << std::endl;
    throw std::exception();
    while (!inFile.eof())
    inventoryData.clear();
    std::string stringBuf;
    std::getline(inFile, stringBuf);
    // Now parse the line
    if (!stringBuf.empty())
    nextPound = getNextPound(stringBuf, substring);
    inventoryData.setName(substring);
    nextPound = getNextPound(stringBuf, substring);
    inventoryData.setSKU(substring);
    nextPound = getNextPound(stringBuf, substring);
    inventoryData.setPrice(strtod(substring.c_str(), 0));
    nextPound = getNextPound(stringBuf, substring);
    inventoryData.setQuantity(strtol(substring.c_str(), 0, 10));
    nextPound = getNextPound(stringBuf, substring);
    inventoryData.setCategory(substring);
    nextPound = getNextPound(stringBuf, substring);
    inventoryData.setVendor(substring);
    void *buff = (void *)inventoryData.getSKU().c_str();
    size_t size = inventoryData.getSKU().size()+1;
    Dbt key(buff, (u_int32_t)size);
    buff = inventoryData.getBuffer();
    size = inventoryData.getBufferSize();
    Dbt data(buff, (u_int32_t)size);
    inventoryDB.put(NULL, &key, &data, 0);
    inFile.close();
    int
    get_item_name(Db *dbp, const Dbt *pkey, const Dbt *pdata, Dbt *skey)
    * First, obtain the buffer location where we placed the item's name. In
    * this example, the item's name is located in the primary data. It is the
    * first string in the buffer after the price (a double) and the quantity
    * (a long).
    u_int32_t offset = sizeof(double) + sizeof(long);
    char *itemname = (char *)pdata->get_data() + offset;
    // unused
    (void)pkey;
    * If the offset is beyond the end of the data, then there was a problem
    * with the buffer contained in pdata, or there's a programming error in
    * how the buffer is marshalled/unmarshalled. This should never happen!
    if (offset > pdata->get_size()) {
    dbp->errx("get_item_name: buffer sizes do not match!");
    // When we return non-zero, the index record is not added/updated.
    return (-1);
    /* Now set the secondary key's data to be the item name */
    skey->set_data(itemname);
    skey->set_size((u_int32_t)strlen(itemname) + 1);
    return (0);
    int
    get_category_name(Db *dbp, const Dbt *pkey, const Dbt *pdata, Dbt *skey)
    * First, obtain the buffer location where we placed the item's name. In
    * this example, the item's name is located in the primary data. It is the
    * first string in the buffer after the price (a double) and the quantity
    * (a long).
    u_int32_t offset = sizeof(double) + sizeof(long);
    char *itemname = (char *)pdata->get_data() + offset;
    offset += strlen(itemname) + 1;
    char *sku = (char *)pdata->get_data() + offset;
    offset += strlen(sku) + 1;
    char *category = (char *)pdata->get_data() + offset;
    // unused
    (void)pkey;
    * If the offset is beyond the end of the data, then there was a problem
    * with the buffer contained in pdata, or there's a programming error in
    * how the buffer is marshalled/unmarshalled. This should never happen!
    if (offset > pdata->get_size()) {
    dbp->errx("get_item_name: buffer sizes do not match!");
    // When we return non-zero, the index record is not added/updated.
    return (-1);
    /* Now set the secondary key's data to be the item name */
    skey->set_data(category);
    skey->set_size((u_int32_t)strlen(category) + 1);
    return (0);
    int
    show_join(Db &itemnameSDB, Db &catnameSDB,
         std::string &itemName, std::string &categoryName)
    std::cout << "Have item : " << itemName << " and category : "
         << categoryName << std::endl;
    // Position cursor at item
    int ret;
    Dbc *item_curs;
    Dbt key, data;
    try {
    itemnameSDB.cursor(NULL, &item_curs, 0);
    char * c_item = (char *)itemName.c_str();
    key.set_data(c_item);
    key.set_size(strlen(c_item) + 1);
    if ((ret = item_curs->get(&key, &data, DB_SET)) != 0)
         std::cout << "Did not find any records matching item ["
              << c_item << "]" << std::endl;
    // while (ret != DB_NOTFOUND)
    //      printf("Database record --\n");
    //     std::cout << "Key : " << (char *)key.get_data() << std::endl;
    //      ret = item_curs->get(&key, &data, DB_NEXT_DUP);
    catch(DbException &e) {        
    itemnameSDB.err(e.get_errno(), "Error!");
    } catch(std::exception &e) {
    itemnameSDB.errx("Error! %s", e.what());
    // Position cursor at category
    Dbc *category_curs;
    try {
    catnameSDB.cursor(NULL, &category_curs, 0);
    char *c_category = (char *)categoryName.c_str();
    key.set_data(c_category);
    key.set_size(strlen(c_category) + 1);
    if ((ret = category_curs->get(&key, &data, DB_SET)) != 0)
         std::cout << "Did not find any records matching category ["
              << c_category << "]" << std::endl;
    //!! Debug, print everything
    // Dbt temp_key, temp_data;
    // while ((ret = category_curs->get(&temp_key, &temp_data, DB_NEXT)) == 0) {        
    // std::cout << "Key : " << (char *)temp_key.get_data() << std::endl;
    catch(DbException &e) {        
    catnameSDB.err(e.get_errno(), "Error!");
    } catch(std::exception &e) {
    catnameSDB.errx("Error! %s", e.what());
    // Set up an array of cursors ready for the join
    Dbc *carray[3];
    carray[0] = item_curs;
    carray[1] = category_curs;
    carray[3] = NULL;
    // Perform the join
    Dbc *join_curs;
    try {
    if ((ret = itemnameSDB.join(carray, &join_curs, 0)) != 0)
         std::cout << "Successful query results should go here." << std::endl;
    catch(DbException &e) {        
    itemnameSDB.err(e.get_errno(), "Error[3]!");
    } catch(std::exception &e) {
    itemnameSDB.errx("Error! %s", e.what());
    // Iterate through results using the join cursor
    while ((ret = join_curs->get(&key, &data, 0)) == 0)
    std::cout << "Iterating through cursors" << std::endl;
    // If we exited the loop because we ran out of records,
    // then it has completed successfully.
    if (ret == DB_NOTFOUND)
    item_curs->close();
    category_curs->close();
    join_curs->close();
    return(0);
    }

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and to produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID linking it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are listed here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) indicates a secondary database, (p) a primary database, (sf) a filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
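    To illustrate the structure described above, here is a minimal sketch (not the actual SSW source; the file name, record layout and helper names are made up) of how several logical databases can share one physical file and how a secondary such as urls.hits is attached to its primary with Db::associate:
    #include <cstring>
    #include <db_cxx.h>
    // Hypothetical secondary key extractor: exposes the hit counter stored in the
    // primary urls record so urls.hits can order URLs by number of hits. The
    // location of the counter inside the record is an assumption for this sketch.
    int hits_callback(Db *secondary, const Dbt *pkey, const Dbt *pdata, Dbt *skey)
    {
        skey->set_data((char *)pdata->get_data() /* + offset of the hits field */);
        skey->set_size(sizeof(u_int32_t));
        return 0;
    }
    int open_url_databases(DbEnv &env, Db *&urls, Db *&urls_hits)
    {
        // All logical databases live in one physical file ("webalizer.db" is a
        // placeholder name); the database name selects the logical database.
        urls = new Db(&env, 0);
        urls->open(NULL, "webalizer.db", "urls", DB_BTREE, DB_CREATE, 0);
        urls_hits = new Db(&env, 0);
        urls_hits->set_flags(DB_DUPSORT);   // many URLs can share the same hit count
        urls_hits->open(NULL, "webalizer.db", "urls.hits", DB_BTREE, DB_CREATE, 0);
        // Keep urls.hits in sync with the primary as records are added or updated.
        return urls->associate(NULL, urls_hits, hits_callback, 0);
    }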
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on Win2K3 Server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
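    For reference, a private, single-process environment of this kind would typically be opened along these lines (the directory and the 256MB cache size are illustrative values, not the poster's actual settings):
    #include <db_cxx.h>
    // Open a private environment with only the memory pool subsystem, since a
    // single process owns the database; no locking or transactions are needed.
    DbEnv *open_private_env(const char *home)
    {
        DbEnv *env = new DbEnv(0);
        env->set_cachesize(0, 256 * 1024 * 1024, 1);   // 0GB + 256MB, one cache region
        env->open(home, DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0);
        return env;
    }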
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, just while trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB database is dumped to disk at the end of the run. At first it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though the disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well; eventually SSW finished processing this log at 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
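    In the 4.6 C++ API these I/O options are set per environment with DbEnv::set_flags; a sketch of the two combinations discussed above (which one helps, if any, depends on the OS and the disk):
    #include <db_cxx.h>
    // Configure the I/O behaviour discussed above on an environment handle.
    void configure_io(DbEnv &env, bool use_dsync)
    {
        if (use_dsync) {
            // O_DSYNC-style synchronous writes (the combination that let the run finish)
            env.set_flags(DB_DSYNC_DB, 1);
            env.set_flags(DB_DSYNC_LOG, 1);
        } else {
            // Bypass the OS file system cache for database and log files
            env.set_flags(DB_DIRECT_DB, 1);
            env.set_flags(DB_DIRECT_LOG, 1);
        }
    }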
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated (see the sketch after this list).
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.
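    For completeness, the page size used in the experiments above is set per database before the database file is first created; a sketch (the 64KB value is just the largest size tried):
    #include <db_cxx.h>
    // Page size must be chosen before the database is created; it cannot be
    // changed later without dumping and reloading the data.
    void open_with_large_pages(DbEnv &env, Db *&db, const char *file, const char *name)
    {
        db = new Db(&env, 0);
        db->set_pagesize(64 * 1024);   // 64KB pages, up from the 8KB used here by default
        db->open(NULL, file, name, DB_BTREE, DB_CREATE, 0);
    }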

    I have been able to improve processing speed 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but it also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What was the % of clean pages that you specified?
    2. At what interval were you calling memp_trickle from this thread?
    This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports are generated, which use these secondary databases. This improved speed from 4K rec/sec to 14K rec/sec.
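    A rough sketch of the two techniques combined, using the BDB C++ API; the 10% clean-page target, the one-second interval, std::thread and the helper names are illustrative choices for this sketch, not the values used in SSW:
    #include <atomic>
    #include <chrono>
    #include <thread>
    #include <db_cxx.h>
    std::atomic<bool> stop_trickle(false);
    // Technique 1: background thread that keeps a percentage of the cache clean,
    // so Db::put rarely has to flush dirty pages synchronously itself.
    void trickle_thread(DbEnv *env, int clean_percent)
    {
        while (!stop_trickle) {
            int nwrote = 0;
            env->memp_trickle(clean_percent, &nwrote);   // e.g. clean_percent = 10
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }
    // Technique 2: create and populate a secondary only at the end of the run,
    // right before report generation, instead of maintaining it on every put.
    int build_secondary_late(Db *primary, Db *secondary,
                             int (*keyfn)(Db *, const Dbt *, const Dbt *, Dbt *))
    {
        // With DB_CREATE, associate() walks the primary and builds the index now.
        return primary->associate(NULL, secondary, keyfn, DB_CREATE);
    }
    The trickle thread would be started at the beginning of the run (for example std::thread t(trickle_thread, &env, 10)) and stopped and joined after the final database sync.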

  • Skype crashing and poor performance

    Hello!
    I have a Lumia 625 with WP8.1. My problem is that Skype performs really poorly on my phone. It crashes 6 times out of 10 on startup, and even if I manage to start it, the whole app is slow and laggy. Sometimes it's so laggy I can't even write a message. Video calls are absolutely out of the question; they crash my whole phone. I have no similar problems with other instant messaging apps or with high-end games. Something in the Skype app is obviously using far more resources than it should. It's a simple chat program - why would it need so many resources?
    The problem seems to originate from the lower (512 MB) RAM size of my phone model, because I have seen the same effect with poorly written apps that don't keep in mind that there are 512 MB RAM devices, not only 1GB+ ones, and use too many resources.
    Please don't try to suggest to restart/reset the phone, and reinstall the app. Those are already behind me, and they did NOT help the problem. I'm not searching for temporary workarounds.
    Please find a solution for this problem, because it is super annoying, and I can't use Skype, which will eventually result in me leaving Skype.

    When it crashes on startup it goes like:
    I tap the skype tile
    The black screen with the "Loading....." appears (default WP loading screen). Usually this takes longer than it would normally take on any other app.
    For a blink of an eye the Skype gui appears, but it instantly crashes.
    If I can successfully start the app, it just keeps lagging. I start to write a message to a contact, and sometimes the letters don't appear as I touch them; they appear much later, all at once. If I tap the send button, the whole GUI freezes (it seems to stay frozen until the contact receives my message). Sometimes the lag gets stronger and sometimes it almost vanishes, but if I keep making inputs while the lag is strong, it sometimes crashes the whole app.
    When I first installed the app, everything was fine, but after a while this behavior appeared. I reinstalled the app and it solved the problem temporarily, but after some time the problem re-appeared. I don't know if it's relevant, but there was a time when I couldn't make myself appear online all the time (when the app was not started). During that time I didn't experience the lags and crashes. Anyway, what I'm sure about is that the lags get worse with time. I don't know if it's because of use of the app (caching?) or the updates the phone makes to itself (a conflict?).
    I will try to reinstall Skype. Probably it will fix it for now. I hope the problem won't appear again.

  • Poor Performance in ETL SCD Load

    Hi gurus,
    We are facing some serious performance problems during an UPDATE step, which is part of a SCD type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
    Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
    Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
    The OOTB mapping for this step is a simple SELECT command - which retrieves historical records from the Dimension to be updated - and the Target table (W_ASSET_D), with no UPDATE Strategy. The session is configured to always perform UPDATEs. We also have set $$UDATE_ALL_HISTORY to "N" in DAC: this way we are only selecting the most recent records from the Dimension history, and the only columns that are effectively updated are the system columns of SCD (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
    The problem is that the UPDATE command is executed individually by Informatica PowerCenter, once for each record in W_ASSET_D. For 2,486,000 UPDATEs we had ~2h of processing - very poor performance for a single ETL step. Our W_ASSET_D has ~150M records today.
    Some questions for the above:
    - is this an expected average execution time for this number of records?
    - record-by-record updates are not optimal; this could easily be overcome with a BULK COLLECT/FORALL approach. Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it from DAC?
    Thanks in advance,
    Guilherme

    Hi,
    Thank you for posting in Windows Server Forum.
    Initially please check the configuration & requirement part for RemoteFX. You can follow below article for further research.
    RemoteFX vGPU Setup and Configuration Guide for Windows Server 2012
    http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support

  • Poor performance

    I was wondering if anyone has encountered this type of problem with Oracle OLAP before, whereby a query against relational tables runs much quicker than an equivalent query against an OLAP cube (10.2).
    We have set up a very simple use case for Oracle OLAP and we are testing performance against an equivalent relational query.
    The cube that we have built comprises 3 dimensions and 1 measure; each dimension has a standard simple hierarchy.
    The cube is populated from relational tables with these data volumes.
    Calendar - 50
    Sales Person – 10,000
    Product – 400,000
    Measure – 30,000
    Queries
    Most queries against the cube view run in a satisfactory time however when we drop to the lowest level and execute a logical query like
    Select Product, Sales_Value
    From CubeView
    Where Day=sysdate
    And sales_person='John Smith'
    the query runs approximately 100 times slower than the equivalent relational query. It also returns all products that have no measures.
    We have explored dense looping without any success.
    I suppose my question is: can this be expected, since we are running a transactional-style query against a MOLAP product, or is there something we can do to improve this performance?

    1) the SQL query we are executing
    select dim1, sales
    from SALES_CUBEVIEW
    where dim2 = '9'
    and dim3 = '1'
    and dim1_level = 'DETAIL'
    and dim2_level = 'DETAIL'
    and dim3_level = 'DETAIL'
    and olap_condition(olap_calc, ' LIMIT sales!dim1 KEEP sales_sales NE NA ',1) = 1
    and olap_condition(olap_calc, ' LIMIT sales!dim2 KEEP sales_sales NE NA ',1) = 1
    and olap_condition(olap_calc, ' LIMIT sales!dim3 KEEP sales_sales NE NA ',1) = 1;
    2) the DDL for the OLAP view
    declare v int;
    begin
    --delete from olap_views.olap_mappings where cube_name like 'SALES%';
    v := olap_views.olap_viewGenerator.generateCubeMap ('GLOBAL', 'SALES', 'SALES');
    olap_views.olap_viewGenerator.createCubeView ('GLOBAL', 'SALES','SALES');
    commit;
    end;
    --desc SALES_CUBEVIEW
    CREATE OR REPLACE FORCE VIEW "GLOBAL"."SALES_CUBEVIEW" ("DIM3", "DIM1", "DIM2", "DIM3_LEVEL", "DIM3_DIM3_DSO_1", "DIM3_SDSC", "DIM3_LDSC", "DIM3_TIME_SPAN", "DIM3_END_DATE", "DIM3_DETAIL_LVLDSC", "DIM3_TOTAL_LVLDSC", "DIM3_PRIMARY_PRNT", "DIM1_LEVEL", "DIM1_SDSC", "DIM1_LDSC", "DIM1_DETAIL_LVLDSC", "DIM1_TOTAL_LVLDSC", "DIM1_PRIMARY_PRNT", "DIM2_LEVEL", "DIM2_SDSC", "DIM2_LDSC", "DIM2_DETAIL_LVLDSC", "DIM2_TOTAL_LVLDSC", "DIM2_PRIMARY_PRNT", "SALES", "OLAP_CALC")
    AS
    SELECT "DIM3",
    "DIM1",
    "DIM2",
    "DIM3_LEVEL",
    "DIM3_DIM3_DSO_1",
    "DIM3_SDSC",
    "DIM3_LDSC",
    "DIM3_TIME_SPAN",
    "DIM3_END_DATE",
    "DIM3_DETAIL_LVLDSC",
    "DIM3_TOTAL_LVLDSC",
    "DIM3_PRIMARY_PRNT",
    "DIM1_LEVEL",
    "DIM1_SDSC",
    "DIM1_LDSC",
    "DIM1_DETAIL_LVLDSC",
    "DIM1_TOTAL_LVLDSC",
    "DIM1_PRIMARY_PRNT",
    "DIM2_LEVEL",
    "DIM2_SDSC",
    "DIM2_LDSC",
    "DIM2_DETAIL_LVLDSC",
    "DIM2_TOTAL_LVLDSC",
    "DIM2_PRIMARY_PRNT",
    "SALES",
    "OLAP_CALC"
    FROM TABLE(OLAP_TABLE ('GLOBAL.SALES duration session', '', '', '&(SALES_CUBE_LIMITMAP)')) MODEL DIMENSION BY ( DIM3, DIM1, DIM2) MEASURES ( DIM3_LEVEL, DIM3_DIM3_DSO_1, DIM3_SDSC, DIM3_LDSC, DIM3_TIME_SPAN, DIM3_END_DATE, DIM3_DETAIL_LVLDSC, DIM3_TOTAL_LVLDSC, DIM3_PRIMARY_PRNT, DIM1_LEVEL, DIM1_SDSC, DIM1_LDSC, DIM1_DETAIL_LVLDSC, DIM1_TOTAL_LVLDSC, DIM1_PRIMARY_PRNT, DIM2_LEVEL, DIM2_SDSC, DIM2_LDSC, DIM2_DETAIL_LVLDSC, DIM2_TOTAL_LVLDSC, DIM2_PRIMARY_PRNT, SALES, OLAP_CALC ) RULES
    UPDATE SEQUENTIAL ORDER();
    select olap_views.olap_viewGenerator.getCubeLimitmap ('GLOBAL', 'SALES', 'SALES') from dual; SALES_CUBE_LIMITMAP
    "DIMENSION DIM3 FROM DIM3 WITH -
    HIERARCHY DIM3_PRIMARY_PRNT FROM DIM3_PARENTREL(DIM3_HIERLIST \'PRIMARY\') -
    INHIERARCHY DIM3_INHIER -
    FAMILYREL DIM3_TOTAL_LVLDSC, -
    DIM3_DETAIL_LVLDSC -
    FROM DIM3_FAMILYREL(DIM3_LEVELLIST \'TOTAL\'), -
    DIM3_FAMILYREL(DIM3_LEVELLIST \'DETAIL\') -
    LABEL DIM3_LONG_DESCRIPTION -
    ATTRIBUTE DIM3_END_DATE FROM DIM3_END_DATE -
    ATTRIBUTE DIM3_TIME_SPAN FROM DIM3_TIME_SPAN -
    ATTRIBUTE DIM3_LDSC FROM DIM3_LONG_DESCRIPTION -
    ATTRIBUTE DIM3_SDSC FROM DIM3_SHORT_DESCRIPTION -
    ATTRIBUTE DIM3_DIM3_DSO_1 FROM DIM3_DIM3_DSO_1 -
    ATTRIBUTE DIM3_LEVEL FROM DIM3_LEVELREL-
    DIMENSION DIM1 FROM DIM1 WITH -
    HIERARCHY DIM1_PRIMARY_PRNT FROM DIM1_PARENTREL(DIM1_HIERLIST \'PRIMARY\') -
    INHIERARCHY DIM1_INHIER -
    FAMILYREL DIM1_TOTAL_LVLDSC, -
    DIM1_DETAIL_LVLDSC -
    FROM DIM1_FAMILYREL(DIM1_LEVELLIST \'TOTAL\'), -
    DIM1_FAMILYREL(DIM1_LEVELLIST \'DETAIL\') -
    LABEL DIM1_LONG_DESCRIPTION -
    ATTRIBUTE DIM1_LDSC FROM DIM1_LONG_DESCRIPTION -
    ATTRIBUTE DIM1_SDSC FROM DIM1_SHORT_DESCRIPTION -
    ATTRIBUTE DIM1_LEVEL FROM DIM1_LEVELREL-
    DIMENSION DIM2 FROM DIM2 WITH -
    HIERARCHY DIM2_PRIMARY_PRNT FROM DIM2_PARENTREL(DIM2_HIERLIST \'PRIMARY\') -
    INHIERARCHY DIM2_INHIER -
    FAMILYREL DIM2_TOTAL_LVLDSC, -
    DIM2_DETAIL_LVLDSC -
    FROM DIM2_FAMILYREL(DIM2_LEVELLIST \'TOTAL\'), -
    DIM2_FAMILYREL(DIM2_LEVELLIST \'DETAIL\') -
    LABEL DIM2_LONG_DESCRIPTION -
    ATTRIBUTE DIM2_LDSC FROM DIM2_LONG_DESCRIPTION -
    ATTRIBUTE DIM2_SDSC FROM DIM2_SHORT_DESCRIPTION -
    ATTRIBUTE DIM2_LEVEL FROM DIM2_LEVELREL-
    MEASURE SALES FROM SALES_SALES-
    ROW2CELL olap_calc"
    3) the output from a trace file collected by writing some diagnostic information to the filesystem - this was collected by running the following commands before and after the SQL query itself:
    10/08/10 11:51:27.534 ->define _AWHTD289C540F64EED0 variable NUMBER <GLOBAL.SALES!DIM3, GLOBAL.SALES!DIM1, GLOBAL.SALES!DIM2> SESSION
    10/08/10 11:51:27.534 ->define _AWHTD289C540F64EF28 variable TEXT <GLOBAL.SALES!DIM3, GLOBAL.SALES!DIM1, GLOBAL.SALES!DIM2> SESSION
    10/08/10 11:51:27.550 ->push GLOBAL.SALES!DIM3, GLOBAL.SALES!DIM1, GLOBAL.SALES!DIM2
    10/08/10 11:51:27.550 ->limit GLOBAL.SALES!DIM3 to
    10/08/10 11:51:27.566 Continue>GLOBAL.SALES!DIM3_INHIER(GLOBAL.SALES!DIM3_HIERLIST 'PRIMARY')
    10/08/10 11:51:27.581 ->limit GLOBAL.SALES!DIM1 to
    10/08/10 11:51:27.597 Continue>GLOBAL.SALES!DIM1_INHIER(GLOBAL.SALES!DIM1_HIERLIST 'PRIMARY')
    10/08/10 11:51:27.628 ->limit GLOBAL.SALES!DIM2 to
    10/08/10 11:51:27.644 Continue>GLOBAL.SALES!DIM2_INHIER(GLOBAL.SALES!DIM2_HIERLIST 'PRIMARY')
    10/08/10 11:51:27.659 ->limit GLOBAL.SALES!DIM3 to
    10/08/10 11:51:27.675 Continue>'1'
    10/08/10 11:51:27.691 ->limit GLOBAL.SALES!DIM3 keep
    10/08/10 11:51:27.691 Continue> (inlist('1' GLOBAL.SALES!DIM3) and inlist('DETAIL' GLOBAL.SALES!DIM3_LEVELREL))
    10/08/10 11:51:27.706 ->limit GLOBAL.SALES!DIM1 to
    10/08/10 11:51:27.722 Continue> (inlist('DETAIL' GLOBAL.SALES!DIM1_LEVELREL))
    10/08/10 11:51:29.425 ->limit GLOBAL.SALES!DIM2 to
    10/08/10 11:51:29.456 Continue>'9'
    10/08/10 11:51:29.487 ->limit GLOBAL.SALES!DIM2 keep
    10/08/10 11:51:29.503 Continue> (inlist('9' GLOBAL.SALES!DIM2) and inlist('DETAIL' GLOBAL.SALES!DIM2_LEVELREL))
    10/08/10 11:51:29.503 -> limit GLOBAL.SALES!DIM1 keep SALES_SALES NE NA
    10/08/10 11:51:55.066 -> limit GLOBAL.SALES!DIM2 keep SALES_SALES NE NA
    10/08/10 11:51:55.081 -> limit GLOBAL.SALES!DIM3 keep SALES_SALES NE NA
    10/08/10 11:51:55.097 ->pop GLOBAL.SALES!DIM3, GLOBAL.SALES!DIM1, GLOBAL.SALES!DIM2
    10/08/10 11:52:12.941 ->dbgoutfile eof
    4) DDL to create the use case
    /* create 3 source tables for dimensions (2 user dimensions, 1 time dimension) */
    create table dim1 as
    select 9999999 total_id, 'Total' total_desc, rownum detail_id, 'Detail '||rownum detail_desc
    from dual connect by level <= 1000000;
    create table dim2 as
    select 9999999 total_id, 'Total' total_desc, rownum detail_id, 'Detail '||rownum detail_desc
    from dual connect by level <= 10000;
    create table dim3 as
    select 9999999 total_id, 'Total' total_desc, 10 total_span, (trunc(sysdate) - 1) total_end_date,
    (11 - rownum) detail_id, to_char((trunc(sysdate) - rownum),'yyyy-mm-dd') detail_desc, 1 detail_span, (trunc(sysdate) - rownum) detail_end_date
    from dual connect by level <= 10;
    /* create fact table with 29788 records */
    create table facts as
    select (dim1.detail_id + dim2.detail_id + dim3.detail_id) sales, dim3.detail_id dim3_id, dim1.detail_id dim1_id, dim2.detail_id dim2_id
    from dim1 dim1, dim2 dim2, dim3 dim3
    where 10*trunc((dim2.detail_id+dim3.detail_id)/10) = (dim2.detail_id+dim3.detail_id)
    and ((dim2.detail_id = dim1.detail_id + dim3.detail_id) or (dim2.detail_id = dim1.detail_id + dim3.detail_id - 200) or (dim2.detail_id = dim1.detail_id + dim3.detail_id + 200));
    create the analytic workspace from the attached template (it is based on the global schema)
    or use the following details:
    analytic workspace: SALES
    dimensions: DIM1 (User Dimension), DIM2 (User Dimension), DIM3 (Time Dimension)
    (Use Natural Keys from Data Source)
    levels: TOTAL, DETAIL
    hierarchy: PRIMARY
    attributes: defaults
    cube: SALES
    dimensions order: dim3, dim1, dim2
    summarize to: check all levels on all dimensions
    measure: SALES
    create cube view using ViewGenerator plugin for AWM or following script:
    declare x int;
    begin
    --delete from olap_views.olap_mappings where cube_name like 'SALES%';
    x := olap_views.olap_viewGenerator.generateCubeMap ('GLOBAL', 'SALES', 'SALES');
    olap_views.olap_viewGenerator.createCubeView ('GLOBAL', 'SALES','SALES');
    commit;
    end;
    Thanks
