Background scheduling for a query

Hi Friends,
I need to schedule a query to run in the background...
Can anyone help me with this?
Thanks & Regards,
Naga

Hi Baskaran,
Thanks for the reply.....
Yes, you are right, it's for background scheduling.
Actually, there is an ad hoc query; now my user wants to create a report based on the query fields for background scheduling.
My concern is: if we schedule the ad hoc query, then there is no need to create a report, right?
Waiting for your reply.
Naga

Similar Messages

  • Query Execution Time for a Query causing ORA-1555

    Dear Gurus,
    I am getting an ORA-01555 error. Earlier I used the query duration mentioned in the alert log and increased the undo retention, as I did not find the UNDOBLKS column of V$UNDOSTAT high around the time the ORA-01555 occurred.
    But a new ORA-01555 is now coming whose query duration exceeds the undo retention time.
    My question -
    1. Is it possible to accurately find the query duration other than from the alert log file?

    abhishek, as you are using an undo tablespace and have already increased the time that undo data is retained via undo_retention, you might want to consider the following ideas, which were useful with the 1555 error under manual rbs segment management.
    1- Tune the query. The faster a query runs, the less likely a 1555 will occur.
    2- Look at the processing. If a process was reading and updating the same table while committing frequently, then under manual rbs management the process would basically create its own 1555 error rather than just being the victim of another process changing data and the rbs data being overlaid while the long-running query was still running. With undo management, the process could be generating more data than can be held for the undo_retention period, but because it is committed, Oracle has been told it doesn't really have to keep the data for use rolling back a current transaction, so it gets discarded to make room for new changes.
    If you find item 2 is true, then separating the select from the update will likely eliminate the 1555. You do this by building a driving table that has the keys of the rows to be updated or deleted. Then you use the driver to control accessing the target table (see the sketch below).
    3- If the cause of the 1555 is, or may be, delayed block cleanout, then select * from the target prior to running the long-running query.
    Realistically you might need to increase the size of the undo tablespace to hold all the change data, and the value of the undo_retention parameter to be longer than the job run time. Which brings us back to option 1: tune every query in the process so that the job run time is reduced to optimal.
    HTH -- Mark D Powell --
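    A minimal sketch of the driving-table idea from item 2 (table and column names are illustrative, not taken from the thread): capture the keys first, then let the commit-heavy pass read only the driver rather than the table it is changing.
    -- 1. Build a driving table holding the keys (here: rowids) of the rows to be changed
    CREATE TABLE target_driver AS
      SELECT t.rowid AS target_rid
      FROM   target_table t
      WHERE  t.status = 'PENDING';
    -- 2. The update loop now reads the driver, not the table it is updating,
    --    so its own frequent commits no longer undercut a long-running read of target_table
    BEGIN
      FOR r IN (SELECT target_rid FROM target_driver) LOOP
        UPDATE target_table
        SET    status = 'PROCESSED'
        WHERE  rowid = r.target_rid;
        COMMIT;
      END LOOP;
    END;
    /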
    Dear Mark,
    Thanks for the excellent advice. I found that the error is coming because of frequent commits, which is item 2, as you rightly mentioned.
    I think I need to keep a watch on the queries running; I was just trying to find the execution time for the queries, in case there is any way to find the query duration without running a trace.
    regards
    abhishek

  • Query rewrite doesn't work for an aggregate query but works for a join query

    Dear experts,
    Please let me know what is going wrong. I am on:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    I have two MATERIALIZED VIEW:
    A) -- Only join
    CREATE MATERIALIZED VIEW "SCOTT"."TST_MV"
    ENABLE QUERY REWRITE AS
    SELECT "T57410"."MEMBER_KEY" "MEMBER_KEY",
    "T57410"."ANCESTOR_KEY" "ANCESTOR_KEY",
    "T57410"."DISTANCE" "DISTANCE",
    "T57410"."IS_LEAF" "IS_LEAF",
    "T57460"."DEPARTMENTID" "DEPARTMENTID",
    "T57460"."NAME" "NAME","T57460"."PARENT"
    "PARENT","T57460"."SHORTNAME" "SHORTNAME",
    "T57460"."SKIMOID" "SKIMOID"
    FROM "BI_OIV_HIER" "T57410",
    "BI_DEPARTMENTS" "T57460"
    WHERE "T57410"."ANCESTOR_KEY"="T57460"."DEPARTMENTID";
    B) -- Join with aggregation
    CREATE MATERIALIZED VIEW "SCOTT"."TST_MV2"
    ("C41", "C42", "C43",
    "C44", "C45", "C46",
    "C47", "C48", "C49",
    "C50", "C51", "C52",
    "C53", "C54", "C55",
    "C56", "C57", "C58",
    "C59", "C60", "C61",
    "INCIDENTTYPE")
    ENABLE QUERY REWRITE
    AS SELECT COUNT(T56454.TOTAL) AS c41,
    T56840.CATEGORYID AS c42,
    T56840.PARENT AS c43,
    T56908.DOCSTATEID AS c44,
    T56908.PARENT AS c45,
    T56947.EXPIREDID AS c46,
    T56947.PARENT AS c47,
    T56986.ISSUESTATEID AS c48,
    T56986.PARENT AS c49,
    T57025.LOCATIONID AS c50,
    T57025.PARENT AS c51,
    T57064.NEWID AS c52,
    T57064.PARENT AS c53,
    T57103.PARENT AS c54,
    T57103.RESOLUTIONID AS c55,
    T57142.PARENT AS c56,
    T57142.RESPONSIBLEID AS c57,
    T57181.PARENT AS c58,
    T57181.SOURCEID AS c59,
    T57460.DEPARTMENTID AS c60,
    T57460.PARENT AS c61,
    T56454.INCIDENTTYPE
    FROM BI_OIV_HIER T57410,
    BI_DEPARTMENTS T57460,
    BI_SOURCE_HIER T57176,
    SOURCE T57181,
    BI_RESPONSIBLE_HIER T57137,
    RESPONSIBLE T57142,
    BI_RESOLUTIONS_HIER T57098,
    RESOLUTIONS T57103,
    BI_NEW_HIER T57059,
    NEW T57064,
    BI_LOCATIONS_HIER T57020,
    LOCATIONS T57025,
    BI_ISSUESTATES_HIER T56981,
    ISSUESTATES T56986,
    BI_EXPIRED_HIER T56942,
    EXPIRED T56947,
    BI_DOCSTATES_HIER T56903,
    DOCSTATES T56908,
    BI_CATEGORY_HIER T56835,
    CATEGORY T56840,
    INCIDENTS T56454
    WHERE ( T56454.RESOLUTION = T57098.MEMBER_KEY
    AND T56454.CATEGORY = T56835.MEMBER_KEY
    AND T56454.DOCSTATE = T56903.MEMBER_KEY
    AND T56454.EXPIRED = T56942.MEMBER_KEY
    AND T56454.ISSUESTATE = T56981.MEMBER_KEY
    AND T56454.LOCATION = T57020.MEMBER_KEY
    AND T56454.NEW = T57059.MEMBER_KEY
    AND T56454.RESPONSIBLE = T57137.MEMBER_KEY
    AND T56454.SOURCE = T57176.MEMBER_KEY
    AND T56454.DEPARTMENTID = T57410.MEMBER_KEY
    AND T56835.ANCESTOR_KEY = T56840.CATEGORYID
    AND T56903.ANCESTOR_KEY = T56908.DOCSTATEID
    AND T56942.ANCESTOR_KEY = T56947.EXPIREDID
    AND T56981.ANCESTOR_KEY = T56986.ISSUESTATEID
    AND T57020.ANCESTOR_KEY = T57025.LOCATIONID
    AND T57059.ANCESTOR_KEY = T57064.NEWID
    AND T57098.ANCESTOR_KEY = T57103.RESOLUTIONID
    AND T57137.ANCESTOR_KEY = T57142.RESPONSIBLEID
    AND T57176.ANCESTOR_KEY = T57181.SOURCEID
    AND T57410.ANCESTOR_KEY = T57460.DEPARTMENTID )
    GROUP BY T56840.CATEGORYID,
    T56840.PARENT,
    T56908.DOCSTATEID,
    T56908.PARENT,
    T56947.EXPIREDID,
    T56947.PARENT,
    T56986.ISSUESTATEID,
    T56986.PARENT,
    T57025.LOCATIONID,
    T57025.PARENT,
    T57064.NEWID,
    T57064.PARENT,
    T57103.PARENT,
    T57103.RESOLUTIONID,
    T57142.PARENT,
    T57142.RESPONSIBLEID,
    T57181.PARENT,
    T57181.SOURCEID,
    T57460.DEPARTMENTID,
    T57460.PARENT,
    T56454.INCIDENTTYPE;
    So, the optimizer uses query rewrite for
    select * from TST_MV
    but does not use query rewrite for
    select * from TST_MV2
    within the same session.
    Here, select * from TST_MV should be read as the underlying SELECT of TST_MV:
    SELECT "T57410"."MEMBER_KEY" "MEMBER_KEY",
    "T57410"."ANCESTOR_KEY" "ANCESTOR_KEY",
    "T57410"."DISTANCE" "DISTANCE",
    "T57410"."IS_LEAF" "IS_LEAF",
    "T57460"."DEPARTMENTID" "DEPARTMENTID",
    "T57460"."NAME" "NAME","T57460"."PARENT"
    "PARENT","T57460"."SHORTNAME" "SHORTNAME",
    "T57460"."SKIMOID" "SKIMOID"
    FROM "BI_OIV_HIER" "T57410",
    "BI_DEPARTMENTS" "T57460"
    WHERE "T57410"."ANCESTOR_KEY"="T57460"."DEPARTMENTID";
    Similarly, select * from TST_MV2 should be read as the underlying SELECT of TST_MV2.
    DBMS_STATS.GATHER_TABLE_STATS has been run for each table and MV.
    Please help to investigate the issue.
    Why is TST_MV2 not used for query rewrite?
    Kind regards.

    Hi Carlos
    It looks like you have more than one question in your posting. Would I be right in saying that you have an issue with how long Discoverer takes when compared with SQL, and a second issue with regard to MVs not being used? I will add some comments on both. If one of these is not an issue, please let me know.
    Issue 1:
    Have you compared the explain plan from Discoverer with SQL? You may need to use a tool like TOAD to see it.
    Also, is Discoverer doing anything complicated with the data after it comes back? By complicated I mean do you have a large number of Page Items and / or Group Sorted items? SQL wouldn't have this overhead you see.
    Because SQL would create a table, have you tried creating a table in Discoverer and seeing how long it takes?
    Finally, what version of the database are you using?
    Issue 2:
    Your initial statement was that query rewrite works with several MVs but not with others, yet in the body of the report you only show explain plans that do use the MV. Could you therefore go into some more detail regarding this situation?
    Best wishes
    Michael
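    As a follow-up sketch (not from the thread): DBMS_MVIEW.EXPLAIN_REWRITE can report the optimizer's reasons for not rewriting a query against TST_MV2. This assumes the REWRITE_TABLE output table has been created with the standard utlxrw.sql script; the query text below is a placeholder to be replaced with the full underlying SELECT of TST_MV2.
    -- create REWRITE_TABLE once (standard script shipped with the database)
    @?/rdbms/admin/utlxrw.sql
    BEGIN
      DBMS_MVIEW.EXPLAIN_REWRITE(
        query        => 'SELECT COUNT(T56454.TOTAL) ... /* paste the full underlying SELECT of TST_MV2 here */',
        mv           => 'TST_MV2',
        statement_id => 'chk_tst_mv2');
    END;
    /
    -- each MESSAGE row explains why rewrite was or was not possible
    SELECT message
    FROM   rewrite_table
    WHERE  statement_id = 'chk_tst_mv2';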

  • RWI 00200 Error while scheduling the Bex Query based Webi report

    Dear All,
    I am facing the below error while scheduling the BEx query based Webi report:
    Error: RWI 00200
    We are using BO XI 4.0 SP05.
    Please advise

    Hi,
    I am on BO 4.0 SP05 Patch 6. Webi reports are just showing processing but not giving results. When I try to create new report it is throwing java security error.
    Tried the applet patch upgrade (from link: https://websmp207.sap-ag.de/~sapidb/011000358700000902752013E) for the Webi certificate, but it didn't help.
    Please suggest what could be done.
    Thanks and Regards,
    Ankit Sharma

  • Pre-fill the OLAP cache for a query on Data change event  of infoprovider

    Hi Gurus,
    I have to pre-fill the OLAP cache for a query which has bad performance.
    I read a document, 'Periodic Jobs and Tasks in SAP BW',
    which suggested some steps to do this.
    I have created the setting in BEx Broadcasting for scheduling job execution on data change in the InfoProvider.
    Thereafter the document says: "an event has to be raised in the process chain which loads the data to this InfoProvider. When the process chain executes the process 'Trigger Event Data Change (for Broadcaster)', an event is raised to inform the Broadcaster that the query can be filled in the OLAP cache."
    How can this be done? Please provide the proper steps.
    Answers are always appreciated.
    Thanks.

    Hi,
    You need to create a process chain, or use the existing process chain which you use to load your current solution; just add the event data change process type to the process chain and, inside it, add the InfoProviders that are going to be affected.
    Once you are done with this, go to the Broadcaster and create a new setting for that query; you will see the option for event data change in InfoProvider, so just choose that and create the settings.
    Hope it helps.

  • Schedule a ABAP query from SQ01 to transfer the output to a file path

    I am trying to schedule an ABAP query to run every night and transmit the output as a .TXT file to a particular file path on a server. In the "output format" of the selection screen I selected the second last radio button "File store" and entered the full path and file name.
    When I run this in the foreground I get a pop-up window confirming the file path before it is downloaded; I just have to hit Enter. But when I run it in the background, the file gets sent to the spool and not to the file path entered in "File store".
    Is there any way I can make the file go to the specified path instead of the spool?
    Any help will be appreciated

    See documentation for enhancement SQUE0001. Go to SMOD, enter SQUE0001, and choose documentation.
    I had the same requirement, and I used enhancement SQUE0001 to create a screen exit that adds the selection, "private file". I then created a variant for my query with the private file button selected and the path included, and set up a batch job to run my query with my variant every night.
    Hope that helps.

  • ABAP query to schedule through batch job

    Hi
    I have a requirement to schedule a batch job for an ABAP query report and download the report data to a local drive through the batch job.
    We have created the ABAP query report; this report should run through a batch job and download the report data to a local drive.
    Please help with how we can solve this.
    Regards
    Vanraj

    Hi Vanraj,
    I have two topics to talk about:
    1st: in order to schedule a background job, try to do the following:
      - Go to transaction SQ01 and select your query
      - Check that you've already created a variant, containing the required selection data.
      - Instead of running it online, go to "Query > Execute > Exec. in background"
      - This will allow you to schedule the background job.
    2nd: it is NOT possible for a background job to download a file to a local PC.
    I hope it helps.
    Kind regards,
    Alvaro

  • SQ01 - Query Reporting - Scheduling

    How can SQ01 be scheduled for batch execution?  Is there a batch program which can be executed with the query name and variant?

    Hi,
    In SQ01, enter the query name and press Execute. On the program selection screen, go to the menu System -> Status. Copy the program name from the window that comes up and put it in the scheduler (transaction SM36).
    or
    see the below link.
    http://www.auditware.co.uk/downloads/SAPQuery_step_thru.pdf
    Anil

  • Query to schedule my report

    Can anybody provide me the query to schedule my report for the last 2 months?
    I have the date column in this format: 20 Mar 10 (dd mmm yy).
    I want to execute my report from the first day of two months back to the last day of last month.
    Example: if I schedule this report on 3rd Oct 2010, it should run for Aug 1st to Sep 30th.
    Can anyone please provide me the query for my date format?
    Thanks in advance...
    Edited by: user12255470 on Oct 22, 2010 9:58 AM
    Edited by: user12255470 on Oct 22, 2010 4:54 PM

    Though you are talking about the schedule date, I can treat it as the current date,
    because the report will be scheduled for a specific date, and when that date occurs it becomes the current date.
    You should use a combination of the TIMESTAMPADD function with the BETWEEN operator to define the range.
    I don't want to give you the entire code, so that you'll try it yourself,
    but I can point you to this blog: http://obiee101.blogspot.com/2008/12/obiee-first-last-of-month.html
    This will explain how to get the first day of the month and the last day of the month in order to define the date range for that report.
    "Thanks in advance... anyway I will give you the points..." - not sure who is working here for points. You should understand it is a knowledge centre where people share ideas and provide possible solutions for your issues. All that we want is the closing of threads, which will be helpful to other folks who are facing the same issue and can quickly refer to this. Assigning points is up to you: if you are satisfied and want to give them, then assign points, and you'll be treated as a gentleman who follows forum etiquette.
    All the best.
    Edited by: Kishore Guggilla on Oct 22, 2010 10:37 PM
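    A hedged illustration of the TIMESTAMPADD approach Kishore points to ("Times"."Date" is a placeholder for your own date column, and this assumes the column is a real date rather than text): a filter covering the first day of two months back through the last day of last month could look like
    "Times"."Date" BETWEEN
      -- first day of the current month, shifted back two months
      TIMESTAMPADD(SQL_TSI_MONTH, -2,
        TIMESTAMPADD(SQL_TSI_DAY, (DAYOFMONTH(CURRENT_DATE) - 1) * -1, CURRENT_DATE))
    AND
      -- current date minus its day-of-month = last day of the previous month
      TIMESTAMPADD(SQL_TSI_DAY, DAYOFMONTH(CURRENT_DATE) * -1, CURRENT_DATE)
    So a run on 3rd Oct 2010 would filter from 1st Aug 2010 to 30th Sep 2010.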

  • Help needed for writing query

    i have the following tables(with data) as mentioned below
    FK* - foreign key (SUBJECTS)
    FK** - foreign key (COMBINATION)
    1) SUBJECTS (table name)
    SUB_ID(NUMBER)   SUB_CODE(VARCHAR2)   SUB_NAME(VARCHAR2)
    2                02                   Computer Science
    3                03                   Physics
    4                04                   Chemistry
    5                05                   Mathematics
    7                07                   Commerce
    8                08                   Computer Applications
    9                09                   Biology
    2) COMBINATION
    COMB_ID(NUMBER)   COMB_NAME(VARCHAR2)   SUB_ID1(NUMBER, FK*)   SUB_ID2(NUMBER, FK*)   SUB_ID3(NUMBER, FK*)   SUB_ID4(NUMBER, FK*)
    383               S1                    9                      4                      2                      3
    384               S2                    4                      2                      5                      3
    --------- I actually also designed the above table like this:
    3) a) COMBINATION
    COMB_ID(NUMBER)   COMB_NAME(VARCHAR2)
    383               S1
    384               S2
    b) COMBINATION_DET
    COMBDET_ID(NUMBER)   COMB_ID(FK**)   SUB_ID(FK*)
    1                    383             9
    2                    383             4
    3                    383             2
    4                    383             3
    5                    384             4
    6                    384             2
    7                    384             5
    8                    384             3
    Business rule: a combination consists of a maximum of 4 subjects (which it must contain). The user cares less about the COMB_NAME (name of the combination); what the user needs is the subjects contained in each combination.
    I need the following output:
    COMB_ID   COMB_NAME   SUBJECT1    SUBJECT2           SUBJECT3           SUBJECT4
    383       S1          Biology     Chemistry          Computer Science   Physics
    384       S2          Chemistry   Computer Science   Mathematics        Physics
    or even this is enough (what I actually need):
    COMB_ID   SUBJECTS
    383       Biology,Chemistry,Computer Science,Physics
    384       Chemistry,Computer Science,Mathematics,Physics
    You can use either COMBINATION design (either (2) or (3)).
    And I want to know:
    1) Which design is good in this case?
    (I think SUB_ID1, SUB_ID2, SUB_ID3, SUB_ID4 is not a
    good way to link to the same table, but if exactly 4 subjects always (and must) come,
    a detail table is not necessary.)
    Currently I am achieving the result by coding in C# after getting the rows from Oracle.
    I am using Oracle 9i (also ODP.NET).
    I want to know how I can get the result in the stored procedure itself.
    2) How could it be designed in any other way?
    Any help/suggestion is welcome.
    Thanks for your time -- Pradeesh

    Well, I forgot the table alias; here it is again with it:
    SELECT C.COMB_ID
    , C.COMB_NAME
    , (SELECT SUB_NAME
    FROM SUBJECTS
    WHERE SUB_ID = C.SUB_ID1) AS SUBJECT_NAME1
    , (SELECT SUB_NAME
    FROM SUBJECTS
    WHERE SUB_ID = C.SUB_ID2) AS SUBJECT_NAME2
    , (SELECT SUB_NAME
    FROM SUBJECTS
    WHERE SUB_ID = C.SUB_ID3) AS SUBJECT_NAME3
    , (SELECT SUB_NAME
    FROM SUBJECTS
    WHERE SUB_ID = C.SUB_ID4) AS SUBJECT_NAME4
    FROM COMBINATION C;
    As you need exactly 4 subjects, the column solution is just fine, I would say.
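    And for the single comma-separated column you said would be enough, a sketch on the same four-column design (works on 9i; note it keeps the SUB_ID1..SUB_ID4 order rather than the alphabetical order shown in your sample output):
    SELECT C.COMB_ID,
           (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID1) || ',' ||
           (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID2) || ',' ||
           (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID3) || ',' ||
           (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID4) AS SUBJECTS_LIST
    FROM   COMBINATION C;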

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
        (SELECT RID, rownum rnum
         FROM
            (SELECT rowid as RID
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate)
         WHERE rownum <= 100)
    WHERE rnum >= 1
      and RID = members.rowid

    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*    -- Select all data from members table
    FROM members,       -- members table added to FROM clause
        (SELECT RID, rownum rnum
         FROM
            (SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate)
         WHERE rownum <= 100)
    WHERE rnum >= 1
      and RID = members.rowid    -- Merge the members table on the rowid we pulled from the inner queries

    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here and the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off

    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but does not affect the other query.

  • Multiple Queries in Workbook - Refresh Screen Shows Up for Every Query

    We have multiple queries in a workbook. All of these queries have the exact same selections for the variable selection screen. When all the queries were refreshed at once, the selection screen used to show up once and all the queries were refreshed with the same selections.
    We were on BI 7.0 and SP10. We recently moved to SP12. Since the SP12 installation, refreshing multiple queries pops up the selection screen for every query. It is nothing like a "multiple query refresh" at once, since the user has to click the "Execute" button for every single query. It is interesting to note that the selection screen only contains hierarchy variables and hierarchy node variables. The other variables of the selection screen do not show up. I couldn't find any OSS note on this topic. Please let me know if anyone has any comments on this issue. I will assign points to useful posts.

    Hi Sameer,
    Try updating the front-end patch to the latest version.
    See: Using the BI 7.x Add-On for SAP GUI 7.10 - Requirements
    Hope this helps.

  • Error in SQL Query The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator. for the query

    Hi Experts,
    While running a SQL query I am getting the error
    "The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator." for the query:
    select  T1. Dscription,T1.docEntry,T1.Quantity,T1.Price ,
    T2.LineText
    from OQUT T0  INNER JOIN QUT1 T1 ON T0.DocEntry = T1.DocEntry INNER JOIN
    QUT10 T2 ON T1.DocEntry = T2.DocEntry where T1.DocEntry='590'
    group by  T1. Dscription,T1.docEntry,T1.Quantity,T1.Price
    ,T2.LineText
    How do I resolve this issue?

    Dear Meghanath,
    Please use the following query; I hope it will serve your purpose.
    select  T1. Dscription,T1.docEntry,T1.Quantity,T1.Price ,
    CAST(T2.LineText as nvarchar (MAX))[LineText]
    from OQUT T0  INNER JOIN QUT1 T1 ON T0.DocEntry = T1.DocEntry LEFT OUTER JOIN
    QUT10 T2 ON T1.DocEntry = T2.DocEntry --where T1.DocEntry='590'
    group by  T1. Dscription,T1.docEntry,T1.Quantity,T1.Price
    ,CAST(T2.LineText as nvarchar (MAX))
    Regards,
    Amit

  • Short dump in report generation for bex query

    Hi,
    I have a newly installed SAP NetWeaver 7.3 and I'm not able to run BEx queries. When I start transaction RSRT and try to generate the report for the selected BEx query, I get the following short dump:
    Category               ABAP Programming Error
    Runtime Errors         RAISE_EXCEPTION
    ABAP Program           SAPLRRSI
    Application Component  BW-BEX-OT
    If I start the test in RSRV for that query I get:
    Generation limits for the generated report
    Unable to load report GPEM3XZBL2Y9VX9H6SN
    Am I missing a profile parameter for those generation limits, or something else?
    Thanks in advance!

    Hi,
    Check whether your SAP GUI is activated and installed correctly.
    Check the following threads:
    Runtime error RAISE_EXCEPTION has occurred
    Dump when activating Business Content 7.35
    Thanks and regards
    Kiran

  • How to query opening balance for all customers or vendors for a specific date

    Hi,
    How can I query the opening balance for all customers or vendors for a specific date?
    Example:
    Enter any date and the query will show every customer's/vendor's opening/current balance as of that date.
    Regards,
    Mizan

    Hi mizan700 ,
    Try this
    SELECT T0.[DocNum] As 'Doc No.', T0.[CardCode] As 'Customer Code',
    T0.[CardName] As 'Customer Name',(T0.[DocTotal]-T0.[PaidSys]) As 'O/S Balance'
    FROM OINV T0 INNER JOIN OCRD T1 ON T0.CardCode = T1.CardCode
    INNER JOIN OCRG T2 on T1.GroupCode = T2.GroupCode
    INNER JOIN INV1 T3 ON T0.DocEntry = T3.DocEntry
    WHERE T0.[DocStatus] ='O'
    AND
    (T0.[DocDate] >='[%0]' AND T0.[DocDate] <='[%1]')
    Regards:
    Balaji.S
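    If the goal is a true balance as of a date rather than only open invoices, a sketch against the journal entry rows could look like the following (this assumes the standard SAP Business One tables JDT1 and OCRD; verify the column names on your version):
    SELECT T1.CardCode AS 'BP Code',
           T1.CardName AS 'BP Name',
           SUM(T0.Debit - T0.Credit) AS 'Balance before selected date'
    FROM JDT1 T0
    INNER JOIN OCRD T1 ON T0.ShortName = T1.CardCode
    WHERE T0.RefDate < '[%0]'
      AND T1.CardType IN ('C', 'S')   -- customers and vendors
    GROUP BY T1.CardCode, T1.CardName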
