SMARTBAR does not cancel query

Hi again,
I have the smartbar enabled and my form will call other modal canvases which inherit the main menu.
I then tweak the functionality of my bar using the key triggers at block level. My form level triggers are all default.
There is no global form-level key functionality other than KEY-EXIT, which does EXIT_FORM, etc.
All standard practice?
When I go into enter-query mode on my block with the inherited menu, Cancel Query does not work.
I actually get the error message FRM-41009:
Function key not allowed.
Ctrl+Q will not work either.
It also means that if my query returns no records I get stuck in enter-query mode and have to kill the applet to get out.
Please help, I am baffled!
Thanks in advance
Oli

Hi;
The issue is not clear. Please review:
Cancel Item Button Does Not Cancel Item On Purchase Order (PO) [ID 736772.1]
You can also enable trace and see what happens.
Regards
Helios
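
One thing worth checking here (an illustrative sketch, not from the original post): if a block-level key trigger is swallowing the Exit key while the form is in enter-query mode, a form-level KEY-EXIT trigger along these lines usually restores the expected behaviour, because EXIT_FORM cancels the query when the form is in enter-query mode. The trigger body below is a sketch only.
-- Form-level KEY-EXIT (illustrative): cancel enter-query mode instead of getting FRM-41009
IF :SYSTEM.MODE = 'ENTER-QUERY' THEN
  EXIT_FORM;              -- in enter-query mode this cancels the pending query
ELSE
  EXIT_FORM(ASK_COMMIT);  -- normal exit behaviour elsewhere in the form
END IF;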

Similar Messages

  • Server does not return query metadata check the query

    Hi,
    I want to know how to use BEx Query Designer.
    When I insert a data provider, the message "server does not return query metadata check the query" appears.
    Who can help me?
    Thanks a lot!
    In addition, my current environment is: GUI 7.10 with SP4 and BI 7.10 with SP2.

    Hi,
    All the yellow and red lights will have an effect on query performance or execution.  Read up on them as there are too many to explain via this forum.
    There is a document on SDN on query performance.  Some useful links:
    [https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9f4a452b-0301-0010-8ca6-ef25a095834a]
    [http://help.sap.com/saphelp_nw70/helpdata/en/41/b987eb1443534ba78a793f4beed9d5/frameset.htm]
    [http://help.sap.com/saphelp_nw70/helpdata/en/d9/31363dc992752de10000000a114084/frameset.htm]
    [http://help.sap.com/saphelp_nw04/helpdata/en/2e/caceae8dd08e48a63c2da60c8ffa5e/frameset.htm]
    [https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0501cb9-d51c-2a10-239c-c141a22c45a6]
    Cheers...

  • Why does not my query use an index?

    I have a table with some processed rows (state: 9) and some unprocessed rows (states: 0,1,2,3,4).
    This table has over 120000 rows, but this number will grow.
    Most of the rows are processed and most of them also contain a group id. Number of groups is relatively small (let's assume 20).
    I would like to obtain the oldest some_date for every group. These values have to be outer joined to an online report (which contains one row for each group).
    Here is my set-up:
    Tested on: 10.2.0.4 (Solaris), 10.2.0.1 (WinXp)
    drop table t purge;
    create table t(
      id number not null primary key,
      grp_id number,
      state number,
      some_date date,
      pad char(200)
    );
    insert into t(id, grp_id, state, some_date, pad)
    select level,
         trunc(dbms_random.value(0,20)),
            9,
            sysdate+dbms_random.value(1,100),
            'x'
    from dual
    connect by level <= 120000;
    insert into t(id, grp_id, state, some_date, pad)
    select level + 120000,
         trunc(dbms_random.value(0,20)),
            trunc(dbms_random.value(0,5)),
            sysdate+dbms_random.value(1,100),
            'x'
    from dual
    connect by level <= 2000;
    commit;
    exec dbms_stats.gather_table_stats(user, 'T', estimate_percent=>100, method_opt=>'FOR ALL COLUMNS SIZE 1');
    Tom Kyte's printtab
    ==============================
    TABLE_NAME                    : T
    NUM_ROWS                      : 122000
    BLOCKS                        : 3834
    I know this could be easily solved by a fast refresh on commit materialized view like this:
    select
      grp_id,
      min(some_date)
    from
      t
    where
      state in (0,1,2,3,4)
    group by
      grp_id;
    Plus I would have to create a materialized view log on (grp_id, some_date, state).
    Number of rows with active state will be always relatively small. Let's assume 1000-2000.
    So my another idea was to create a selective index. An index which would contain only data for rows with an active state.
    Something like this:
    create index fidx_active on t ( 
      case state 
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end,
      case state
        when 0 then some_date
        when 1 then some_date
        when 2 then some_date
        when 3 then some_date
        when 4 then some_date
      end) compress 1;
    So a tuple (grp_id, some_date) is projected to the tuple (null, null) when the state is not an active state, and therefore it is not indexed.
    We can save even more space by compressing the 1st expression.
    analyze index fidx_active validate structure;
    select * from index_stats
    @pr
    Tom Kyte's printtab
    ==============================
    HEIGHT                        : 2
    BLOCKS                        : 16
    NAME                          : FIDX_ACTIV
    LF_ROWS                       : 2000 <-- we're indexing only active rows
    LF_BLKS                       : 6 <-- small index: 1 root block with 6 leaf blocks
    BR_ROWS                       : 5
    BR_BLKS                       : 1
    DISTINCT_KEYS                 : 2000
    PCT_USED                      : 69
    PRE_ROWS                      : 25
    PRE_ROWS_LEN                  : 224
    OPT_CMPR_COUNT                : 1
    OPT_CMPR_PCTSAVE              : 0
    Note: @pr is Tom Kyte's print table script as adapted by Tanel Poder (I'm using Tanel's library).
    Then I created a query to be outer joined to the report (report contains a row for every group).
    I want to achieve a full scan of the index.
    select
      case state -- 1st expression
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end grp_id,
      min(case state --second expression
            when 0 then some_date
            when 1 then some_date
            when 2 then some_date
            when 3 then some_date
            when 4 then some_date
          end) as mintime
    from t 
    where
      case state --1st expression: at least one index column has to be not null
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end is not null
    group by
      case state --1st expression
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end;
    -------------
    Doc's snippet:
    13.5.3.6 Full Scans
    A full scan is available if a predicate references one of the columns in the index. The predicate does not need to be an index driver. A full scan is also available when there is no predicate, if both the following conditions are met:
    All of the columns in the table referenced in the query are included in the index.
    At least one of the index columns is not null.
    A full scan can be used to eliminate a sort operation, because the data is ordered by the index key. It reads the blocks singly.
    13.5.3.7 Fast Full Index Scans
    Fast full index scans are an alternative to a full table scan when the index contains all the columns that are needed for the query, and at least one column in the index key has the NOT NULL constraint. A fast full scan accesses the data in the index itself, without accessing the table. It cannot be used to eliminate a sort operation, because the data is not ordered by the index key. It reads the entire index using multiblock reads, unlike a full index scan, and can be parallelized.
    You can specify fast full index scans with the initialization parameter OPTIMIZER_FEATURES_ENABLE or the INDEX_FFS hint. Fast full index scans cannot be performed against bitmap indexes.
    A fast full scan is faster than a normal full index scan in that it can use multiblock I/O and can be parallelized just like a table scan.
    So the question is: why does Oracle do a full table scan?
    Everything needed is in the index and one expression is not null, but an index (fast) full scan is not even considered by the CBO (I did a 10053 trace).
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     85 |     20 |00:00:00.11 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |   6100 |   2000 |00:00:00.10 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(CASE "STATE" WHEN 0 THEN "GRP_ID" WHEN 1 THEN "GRP_ID" WHEN 2
                  THEN "GRP_ID" WHEN 3 THEN "GRP_ID" WHEN 4 THEN "GRP_ID" END  IS NOT NULL)Let's try some minimalistic examples. Firstly with no FBI.
    create index idx_grp_id on t(grp_id);
    select grp_id,
           min(grp_id) min
    from t
    where grp_id is not null
    group by grp_id;
    | Id  | Operation             | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
    |   1 |  HASH GROUP BY        |            |      1 |     20 |     20 |00:00:01.00 |     244 |    237 |
    |*  2 |   INDEX FAST FULL SCAN| IDX_GRP_ID |      1 |    122K|    122K|00:00:00.54 |     244 |    237 |
    Predicate Information (identified by operation id):
       2 - filter("GRP_ID" IS NOT NULL)This kind of output I was expected to see with FBI. Index FFS was used although grp_id has no NOT NULL constraint.
    Let's try a simple FBI.
    create index fidx_grp_id on t(trunc(grp_id));
    select trunc(grp_id),
           min(trunc(grp_id)) min
    from t
    where trunc(grp_id) is not null
    group by trunc(grp_id);
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     20 |     20 |00:00:00.94 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |   6100 |    122K|00:00:00.49 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(TRUNC("GRP_ID") IS NOT NULL)
    Again, an index (fast) full scan was not even considered by the CBO.
    I tried:
    alter table t modify grp_id not null;
    alter table t add constraint trunc_not_null check (trunc(grp_id) is not null);
    I even tried to set the table's hidden column (SYS_NC00008$) to NOT NULL.
    It had no effect; the FTS is still used.
    Let's try another query:
    select distinct trunc(grp_id)
    from t
    where trunc(grp_id) is not null
    | Id  | Operation             | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH UNIQUE          |             |      1 |     20 |     20 |00:00:00.85 |     244 |
    |*  2 |   INDEX FAST FULL SCAN| FIDX_GRP_ID |      1 |    122K|    122K|00:00:00.49 |     244 |
    Predicate Information (identified by operation id):
       2 - filter("T"."SYS_NC00008$" IS NOT NULL)Here the index FFS is used..
    Let's try one more query, very similar to the above query:
    select trunc(grp_id)
    from t
    where trunc(grp_id) is not null
    group by trunc(grp_id)
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     20 |     20 |00:00:00.86 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |    122K|    122K|00:00:00.49 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(TRUNC("GRP_ID") IS NOT NULL)
    And again, no index full scan.
    So my next question is:
    What are the restrictions that prevent an index (fast) full scan from being used in these scenarios?
    Thank you very much for your answers.

    I'll start off with the caveat that I'm no Jonathan Lewis, so hopefully someone will be able to come along and give you a more coherent explanation than I'm going to attempt here.
    It looks like the application of the MIN function against the CASE expression is confusing the optimizer and disallowing the usage of your FBI. I tested this against my 11.2.0.1 instance and your query chooses the fast full scan without being nudged in the right direction.
    That being said, I was able to get this to use a fast full scan on my 10g instance, but I had to jiggle the wires a bit. I modified your original query slightly, just to make it easier to do my fiddling.
    original (in the sense that it still takes the full table scan) query
    with data as (
      select
        case state -- 1st expression
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end as grp_id,
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end as mintime
      from t
      where
        case state --1st expression: at least one index column has to be not null
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end is not null
      and
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end is not null
    )
    select --+ GATHER_PLAN_STATISTICS
      grp_id,
      min(mintime)
    from data
    group by grp_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'allstats  +peeked_binds'));
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      2 |     33 |     40 |00:00:00.07 |    7646 |
    |*  2 |   TABLE ACCESS FULL| T    |      2 |     33 |   4000 |00:00:00.08 |    7646 |
    Predicate Information (identified by operation id):
       2 - filter((CASE "STATE" WHEN 0 THEN "GRP_ID" WHEN 1 THEN "GRP_ID" WHEN 2
               THEN "GRP_ID" WHEN 3 THEN "GRP_ID" WHEN 4 THEN "GRP_ID" END  IS NOT NULL AND
               CASE "STATE" WHEN 0 THEN "SOME_DATE" WHEN 1 THEN "SOME_DATE" WHEN 2 THEN
               "SOME_DATE" WHEN 3 THEN "SOME_DATE" WHEN 4 THEN "SOME_DATE" END  IS NOT
               NULL))
    modified version where we prevent the MIN function from being applied too early, by using ROWNUM
    with data as (
      select
        case state -- 1st expression
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end as grp_id,
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end as mintime
      from t
      where
        case state --1st expression: at least one index column has to be not null
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end is not null
      and
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end is not null 
      and rownum > 0
    )
    select --+ GATHER_PLAN_STATISTICS
      grp_id,
      min(mintime)
    from data
    group by grp_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'allstats  +peeked_binds'));
    | Id  | Operation                | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY           |             |      2 |     20 |     40 |00:00:00.01 |      18 |
    |   2 |   VIEW                   |             |      2 |     33 |   4000 |00:00:00.07 |      18 |
    |   3 |    COUNT                 |             |      2 |        |   4000 |00:00:00.05 |      18 |
    |*  4 |     FILTER               |             |      2 |        |   4000 |00:00:00.03 |      18 |
    |*  5 |      INDEX FAST FULL SCAN| FIDX_ACTIVE |      2 |     33 |   4000 |00:00:00.01 |      18 |
    Predicate Information (identified by operation id):
       4 - filter(ROWNUM>0)
       5 - filter(("T"."SYS_NC00006$" IS NOT NULL AND "T"."SYS_NC00007$" IS NOT NULL))
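    As a side note (an illustrative sketch, not taken from the thread): the 10053 optimizer trace the original poster mentions is typically captured along these lines; the tracefile identifier is just an example name.
    -- capture a CBO (event 10053) trace for one hard parse of the query of interest
    alter session set tracefile_identifier = 'fidx_active_cbo';
    alter session set events '10053 trace name context forever, level 1';
    -- run the query here (change a comment or literal to force a hard parse)
    alter session set events '10053 trace name context off';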

  • Oracle ADF refresh as deferred does not execute query on page load

    In the oracle ADF page I have two panel boxes. (Oracle ADF 11.1.1.4)
    a) Personal information panel box with PanelFormLayout (PersonalInfoViewObj) - ReadOnly View Object
    b) Address information panel box with Table (AddressInfoViewObj) - Read Only View Object
    For the iterators in a) and b) I have kept refresh condition as deferred and cacheResult=false. Also in b) for af:table, I have kept contentDelivery="immediate"
    When the page loads it fires the SQL query for a) and populates the data in the Personal information panel box. However, for b) it does not execute the SQL query and the data is not populated in the Address information panel with the table (please note the data is there in the database).
    Because with refresh set to deferred it was not executing the SQL query for panel b) (the panel with the table and table iterator), I tried refresh set to always and to ifNeeded/renderModel/prepareModel; however, in those cases it executes the SQL query twice.
    Please let me know the best way to fix this issue.

    Hi,
    I think you need a method in an application module to init your data.
    In your AM : create a method like :
    public void init() {
        getViewObject().executeQuery();
    }
    In your adfc-config.xml create a methodCall from this method and a "control flow case" from the methodCall to your page,
    and keep refresh as "deferred" in your pageDef.
    Clément

  • 2.1 EA2 does not display query results, query works fine in sqlplus

    2.1 EA2/Windows XP 32-bit
    The following query does not show any results for Total(GB) and Free(GB) columns. The diskgroup name shows correctly.
    SELECT
    DG.name ,
    ROUND(SUM(DSK.TOTAL_MB)/1024,2) "Total (GB)",
    round(SUM(DSK.FREE_MB)/1024,2) "Free (GB)"
    FROM
    V$ASM_DISK DSK,
    V$ASM_DISKGROUP DG
    WHERE
    DSK.GROUP_NUMBER=DG.GROUP_NUMBER
    group by DG.name;
    The query works fine when run from SQL*Plus:
    SQL> SELECT
    2 DG.name ,
    3 ROUND(SUM(DSK.TOTAL_MB)/1024,2) "Total (GB)",
    4 round(SUM(DSK.FREE_MB)/1024,2) "Free (GB)"
    5 FROM
    6 V$ASM_DISK DSK,
    7 V$ASM_DISKGROUP DG
    8 WHERE
    9 DSK.GROUP_NUMBER=DG.GROUP_NUMBER
    10 GROUP BY DG.name;
    NAME Total (GB) Free (GB)
    DG1 707.98 162.32
    DG2 134.84 122.68
    SQL>
    This must be something unique to SQLDeveloper. I tested in 2.1 EA1 and 1.5.5.59.69 and the query does not show results for the Total (GB) and Free (GB) columns.

    I would like to update the problem. It seems that when SQL Developer is connected to an ASM instance, it has trouble using the divide operator (/). For example, the following query works fine:
    SELECT
    name,
    TOTAL_MB
    FROM
    V$ASM_DISKGROUP;
    But if I try to divide the TOTAL_MB by any number, the column shows blank (the column is blank, not NULL). So, if I try to run the following query, the output will just display diskgroup names. The TOTAL_MB/1024 column shows blank.
    SELECT
    name,
    TOTAL_MB/1024
    FROM
    V$ASM_DISKGROUP;
    If I run the same query in SQL*Plus from the same desktop using the same TNS alias, it works just fine which tells me that it is a SQLDeveloper problem.
    When I am connected to a regular database, then the problem does not show up and SQLDeveloper is able to display the results even when I am using the divide operator (/).
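    A purely diagnostic sketch (my assumption, not something confirmed in the thread): returning the computed value as text can help narrow down whether the arithmetic or only the client-side rendering of the NUMBER column is at fault.
    -- untested diagnostic (assumption): if this shows values while TOTAL_MB/1024 shows blank,
    -- the arithmetic is fine and only SQL Developer's display of the computed NUMBER is off
    SELECT name,
           TO_CHAR(ROUND(total_mb / 1024, 2)) AS total_gb_text
    FROM   v$asm_diskgroup;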

  • Urgent: executeQueryForCollection does not execute query for collection

    Some ViewLink detail views do not refresh when the master RSI navigates.
    Instead, it seems that a cached query collection is used.
    How can we control this behaviour? Under what circumstances does the rowset re-use an old QueryCollection instead of re-executing the query?
    It seems to work for some detail VOs but I cannot find a difference between the working and non-working VLs and VOs.
    This is a duplicate of
    Question about row-level ViewLink refresh
    More information is posted there.
    Thanks,
    Sascha

    Hi Sung,
    thanks for your answer. But I still don't get it.
    It's so erratic.
    I have a worklist with --- say 10 rows.
    Then:
    I navigate to every row and then back to the first row.
    Now I start an integrated application (with its own transaction and database connection) that produces a new detail row.
    I refresh the worklist, navigate back to the row.
    Then I go to the next row and repeat the same.
    Now, sometimes I can do all the stuff after "Then:" 100 times and it works every time. The detail shows the newly added row.
    Sometimes it works just 4 times, then it simply stops working. The newly added rows are not fetched.
    So what's the plan? Why does it work sometimes, and sometimes it doesn't?
    Why does it work for some ViewLinks and not for others?
    Second example: I have a tree binding. The top node has 4 children. Of course, JUTreeBinding uses RL-VLs to build the tree.
    Two of those children detail VOs show the problem all the time, the other two NEVER EVER.
    What's the plan? Why does it work for some VOs and not for the others?
    If this is really by design (I doubt it, judging by the flakiness), then that means for us: get rid of ALL ViewLinks and replace them with this listener scheme. Is this really intended? This would be a nightmare. I cannot use VLs and listeners in parallel. How do I know whether the VL already caused the execution -- this really happens! -- or not? Is the solution to always execute twice? Where's the reduction in resource consumption then?
    What's the point in caching the query collection of 5 hours ago? Data can change in the DB in the meantime, can't it?
    I tried to produce a test application based on Scott/Tiger but failed, because there, the VLs re-executed the query EVERY SINGLE TIME. No matter what changes I made with TOAD in the tables. Is this a bug then???
    And there is no way (setting, property) to control this?
    I am sorry, but I cannot believe that this is really intended. The idea is so ... weird.
    Sascha

  • Why does not  a query go by index but FULL TABLE SCAN

    I have two tables:
    Table 1 has 1400 rows and more than 30 columns; one of them is named 'site_code' and an index was created on this column.
    Table 2 has more than 150 rows and 20 columns; its primary key is also 'site_code'.
    Both tables were analyzed with dbms_stats.gather_table_stats...
    When I run the explain for the 2 SQLs below:
    select * from table1 where site_code='XXXXXXXXXX';
    select * from table2 where site_code='XXXXXXXXXX';
    the Oracle explain report certainly shows an index scan.
    But the problem arises when I try to explain this SQL:
    select *
    from table1, table2
    where table1.site_code = table2.site_code;
    The explain report shows:
    select .....
    FULL Table1 Scan
    FULL Table2 Scan
    Why?

    Nikolay Ivankin wrote:
        BluShadow wrote:
            Nikolay Ivankin wrote:
                Try to use a hint, but I really doubt it will be faster.
            No, using hints should only be advised when investigating an issue, not recommended for production code, as it assumes that, as a developer, you know better than the Oracle Optimizer how the data is distributed in the data files, how the data is going to grow and change over time, and how best to access that data for performance etc.
        Yes, you are absolutely right. But aren't we performing such an investigation? ;-)
    The way you wrote it made it sound that a hint would be the solution, not just something for investigation.
    Nikolay Ivankin wrote:
        select * from .. always performs a full scan, so limit your query.
    No, select * will not always perform a full scan; that's just selecting all the columns. A select without a where clause, or one whose where clause has low selectivity, will result in full table scans.
    Nikolay Ivankin wrote:
        But this is what I meant.
    But not what you said.
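    To put BluShadow's point in a sketch (table and column names are from the original post, the literal is made up): the join alone needs every row of both tables, so full scans are a reasonable plan; once a selective predicate is added, the site_code index becomes attractive.
    -- full scans are expected here: every row of both tables is needed for the join
    select *
    from   table1 t1, table2 t2
    where  t1.site_code = t2.site_code;
    -- with a selective predicate the optimizer can use the index on table1.site_code
    -- (and the primary key of table2) instead of scanning everything
    select *
    from   table1 t1, table2 t2
    where  t1.site_code = t2.site_code
    and    t1.site_code = 'XXXXXXXXXX';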

  • Web Intelligence Rich Client Not Cancel Query

    Hi all
    When I run a query in Web Intelligence Rich Client and cancel it, a dialog with 3 options appears, but whichever option I choose nothing happens; the laptop locks up and I have to use Ctrl+Alt+Del and kill the process.
    What is the problem, and what is the solution, please?
    Thanks

    Does it happen with all users, on all machines, and with all reports?
    Bashir Awan

  • BI beans does not use QUERY RERWITE for cube

    Hello!
    First of all I would like to say big thanks to Keith for help on dimension rollup. It works now.
    I am creating a pilot environment with
    Oracle      10.2.0.3
    OWB 10.2.0.3
    SS Add-in 10.1.2.2
    and one cube ST_R and three dims
    DIM_R_NOMA for products
    DIM_R_CIDI for stores
    DIM_R_TIME for time
    Now I have deployed dimensions and cube to CWM2. Dimensions are working quite well. I would say even better than in MOLAP. About 1 second for rollup on 2000 members in level.
    Now I am facing a second problem. I cannot force BI Beans (which is used in the SS Add-in, Disco and the OWB browser) to use the summaries prepared by the DBMS_ODM package.
    1/ I have prepared MV LOGS for dimension and fact tables
    2/ I have prepared MV for dimensions using DBMS_ODM
    3/ I've prepared materialized view for cube aggs. with
    DBMS_ODM.CREATESTDFACTMV('WWHH','ST_R','ST_R.sql','C:\TEMP',true,'FULL');
    CWM2_OLAP_CUBE.set_mv_summary_code('WWHH','ST_R','GROUPINGSET');
    as described in OLAP REFERENCE on DBMS_ODM page.
    I explained all my MVs - they seem to be OK. They support
    REWRITE_GENERAL,
    REWRITE_FULL_TEXT,
    REWRITE_PART_TEXT
    There are no support for PCT rewrite etc.
    My user 'WWHH' has the privileges:
    ANALYZE ANY
    QUERY REWRITE
    GLOBAL QUERY REWRITE
    My database has setting
    QUERY_REWRITE_ENABLED=true
    Stale_tolerated=enforced
    all MVs and tables are analyzed.
    I do not use parallel settings on tables,MVs.
    To do some further analysis I've enabled olapcontinuous_trace_file. It generates some useful SQL statements from BI Beans in UDUMP. These statements DO NOT resolve to the materialized views in the explain plan.
    Questions:
    Are there any settings for BI BEANS to turn on/off?
    Are there any other packages to create MVs?
    How to explain WHY MVs are not used?
    THanks everybody for cooperation.
    Regards,
    Kirill Boyko

    Keith,
    Thank you for response.
    I refreshed the metadata, but no result. Again, the dimensions are OK, but not the cube. I am attaching the explanation of why it did not rewrite. This explanation comes from
    dbms_mview.Explain_Rewrite
    QSM-01150: query did not rewrite
    QSM-01263: query rewrite not possible when query references a dictionary table or view
    QSM-01284: materialized view MV1125 has an anchor table DIM_R_TIME_V not found in query
    QSM-01102: materialized view, MV1125, requires join back to table, DIM_R_CIDI_V, on column, REGION_ID
    QSM-01219: no suitable materialized view found to rewrite this query
    QSM-01284 is wrong: I do have DIM_R_TIME_V in the MView.
    Here is my MVIEW script from database:
    SELECT
    FROM
    WWHH.CAWHTB2 a,
    WWHH.DIM_R_CIDI_V b,
    WWHH.DIM_R_NOMA_V c,
    WWHH.DIM_R_TIME_V d
    WHERE
    b.STORE_ID = a.CIIDCA AND
    c.ARTICLE_ID = a.NMIDCA AND
    d.DAY_ID = a.CLIDCA
    GROUP BY GROUPING SETS ( ...)
    Could you tell me where to check in the metadata that the MVIEW is correctly set up?
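    For reference, a minimal sketch of how DBMS_MVIEW.EXPLAIN_REWRITE (which produced the QSM messages above) is usually driven; the query text and statement_id here are placeholders, and the REWRITE_TABLE must exist first (created by $ORACLE_HOME/rdbms/admin/utlxrw.sql).
    begin
      dbms_mview.explain_rewrite(
        query        => 'select ... from wwhh.cawhtb2 a, wwhh.dim_r_time_v d, ... group by ...',
        mv           => 'MV1125',
        statement_id => 'st_r_rewrite_check');
    end;
    /
    select message
    from   rewrite_table
    where  statement_id = 'st_r_rewrite_check'
    order  by sequence;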

  • Citadel Archive progress status window does not cancel properly

    Any idea how to fix this since all of the vi's are locked?

    I just cancelled out of one yesterday afternoon running under LV 8.2 with no issue. If you are removing the old data, did you shut down the app using the DB?
    If that doesn't help, the only answer I know of sounds like I am being a wise-arse:
    Post a zip that demonstrates this issue, then "take two upgrades and post back next year."
    I don't mind locked VIs if they work. When they don't, I have no choice but to be frustrated. The lock-down tie-up with shared variables, combined with what appears to be poor testing (the historical trends recall function swapped start and stop times because the developer did not use type defs, intermittent lock-ups of the DB on multiple machines with absolutely no tools to troubleshoot, ...), has resulted in me NOT recommending DSC for an app for a couple of years now. The only reason I still touch it is that I have apps running DSC for ten years and they have to keep running.
    Sorry to dump on you. If we ever meet we can discuss who is more disappointed.
    Ben
    (Former Top Contributor to the DSC forum)
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Cancel Query not working

    Problem Summary
    Cancel Query not working on 11.5.10
    Problem Description
    While finding orders, a window pops up with the cancel query option, but when the user clicks the Cancel button the query does not cancel.

    jemar98 wrote:
    Problem Summary
    Cancel Query not working on 11.5.10
    Problem Description
    While finding orders, a window pops up with the cancel query option, but when the user clicks the Cancel button the query does not cancel.
    Please post the details of the application release, database version, and OS.
    Was this working before? If yes, any changes been done recently?
    Please review (Canceling Long Running Queries in Oracle Applications 11i [ID 138159.1]) and make sure you complete all the steps.
    Thanks,
    Hussein

  • Autotrace does not show SQL Query Results

    In the previous versions of SQL Developer, the query results will be shown in the Results tab when running the SQL using Autotrace. However, in SQL Developer v2, I cannot find the query results when I run the SQL using Autotrace. Are there any options or settings to turn on the query results in autotrace?

    Please do not duplicate threads (original: "Autotrace does not show Query Results"); it only makes you and all of us lose time.
    If nobody answered, that would probably indicate a NO.
    You can request this at the SQL Developer Exchange again, so other users can vote and add weight for possible future implementation.
    Sorry,
    K.
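    As a side-by-side check (illustrative, not from the thread), SQL*Plus autotrace does return the rows together with the plan and statistics when run like this, which can help confirm whether the behaviour is specific to SQL Developer:
    -- SQL*Plus: AUTOTRACE ON shows the query results plus the execution plan and statistics
    set autotrace on
    select * from dual;
    set autotrace off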

  • Workflow does not complete task in other languages

    Hi,
    I created an approval workflow in SharePoint Designer 2013, set as a SharePoint 2013 workflow.
    It worked fine until some of our approvers found out that when another language is set as the primary option in Internet Explorer, the workflow stops. It does not get cancelled, but the task is not marked as completed automatically as it is on computers with English as the primary language; therefore, the workflow stops until the task is completed manually.
    On the Microsoft site I checked that Workflow Manager has the same files in any language.
    I even found a Cumulative Update for Workflow Manager and it comes with a fix for languages. The question is: why are there no languages in the first installation?
    These are the download center links:
     Workflow Manager 1.0
     Cumulative Update for Workflow Manager 1.0

    I have figured out the cause of the problem.
    There have been no changes in the agent assignment, so SWU_OBUF is not needed.
    The workflow hangs and states that there is no agent for the step in Ready status because the workflow was deactivated after the Release step. I think it was deactivated by the system because errors occurred (perhaps in the container bindings).
    The workflow template I used, together with the tasks, was copied from the standard template WS20000075.
    Is there any procedure to easily check the bindings?
    Thanks.

  • When using iTunes 11.0.0.163, my iPhone does not get past step 3 of 6 when synchronizing. How do I resolve this?

    When I synchronize my iPhone 4S with iTunes 11.0.0.163, I get stuck at step 3 of 6.
    When pressing the "X" button to cancel, sometimes it does not cancel but stays in this position. I then have to close the application using the Ctrl-Shift-Esc menu. Any suggestions?

    I'm having the same problem after installing iOS 7.1.
    Checked the USB status and it sees an iPhone (4S) attached,
    repaired disc permissions,
    updated and reset the iPhone
    and iTunes,
    reinstalled Mavericks from scratch.
    The phone shows the alert "TRUST THIS COMPUTER?"
    I hit Trust,
    and the same alert keeps appearing
    again and again.
    iTunes keeps wittering on about "iTunes could not connect to this iPhone. You do not have permission."
    what the actual F*** is going on and can anybody help ??????

  • Cancel Query dialog does not show for long queries

    Hi,
    I've set the interaction mode of my form to non-blocking. I then ran a long query which takes about 5 minutes to display the results. The Cancel Query popup that everyone is talking about is still not showing.
    What should my settings be to make the Cancel Query dialog pop up? My client wants the option to cancel a query while it is still processing.
    I'm using Oracle Forms 9.0.4.0.10 and Oracle Database 10g.
    Any help with regards to this will be highly appreciated.

    super
