Invalid month error in SQL request for chart

Hi,
I'm trying to refresh a report with a dynamically generated SQL query like the following:
select NULL LINK, status_label LABEL, count(fcr.status_code) as VALUE
from table
where table.date > '05/06/2007'
This query is generated dynamically from a date picker.
My problem is that the chart which should be refreshed by this query isn't.
The following error message comes from the AJAX call:
chart Flash Chart error: ORA-20987: APEX - Flash Chart error:  - ORA-20001: Fetch error: ORA-01843: not a valid month
What puzzles me is that when I execute the query in PL/SQL I get the expected results.
Does anyone have an idea what causes this error?

Hi better,
Try:
where table.date > to_date('05/06/2007', 'dd/mm/yyyy')
Brgds,
Mini
Mark Answers Promptly
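
For reference, a minimal sketch (not from the original post) of what the generated chart query could look like once the date picker value is wrapped in TO_DATE with an explicit format mask; the table, alias and column names below are placeholders standing in for the poster's real objects:

select null                    as link,
       status_label            as label,
       count(fcr.status_code)  as value
  from my_status_table fcr     -- placeholder for the real table
 where fcr.status_date > to_date('05/06/2007', 'DD/MM/YYYY')
 group by status_label

The explicit mask matters because the chart is fetched by a separate AJAX call whose session NLS date settings may differ from the SQL session where the bare string '05/06/2007' happened to parse.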

Similar Messages

  • How to make a SQL query for a chart dynamically [I got it solved]

    Hi ,
    I'm looking for a way to go from a tabular form (clicking an item in a record) to a chart page where the FROM <table> part of the chart's SQL varies depending on the item selected.
    E.g. I click on emp and navigate to a page displaying a chart with a SQL like
    select dat, val from :p_item
    where p_item is the table name passed.
    Can I do that?
    Thanks
    Erwin

    For anybody who wants to do this too:
    Problem:
    Every time a new member joins the unit, a separate holiday calendar table is created.
    So I want a page to be loaded dynamically when the user clicks his name in the tabular form and navigates to the page where his calendar is displayed.
    I don't want to create a new page for every person (to minimize changes to the app).
    My solution:
    All the tables are created as cal_<userid>.
    I created a view on one table:
    create or replace view cal_individual_v as
    select * from cal_bbbbb   -- (for example)
    Then I create a procedure:
    CREATE OR REPLACE PROCEDURE prc$change_view(p_user_id IN VARCHAR2) IS
      t_sqlstr VARCHAR2(300);
    BEGIN
      t_sqlstr := 'CREATE OR REPLACE VIEW cal_individual_v AS SELECT * FROM cal_' || p_user_id;
      EXECUTE IMMEDIATE t_sqlstr;
    END;
    In the page rendering processes I added a PL/SQL process on Before Header with:
    begin
      prc$change_view(:P18_USERID); -- hidden item on the page
    end;
    That's all.
    It works well and is still fast.
    Hope it can help others.
    Erwin
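
    A small sketch of how the chart region can then stay static (my addition; the label/value column names are assumed, not from Erwin's post), since only the view's defining query changes, never its name:

    select null      as link,
           cal_date  as label,     -- assumed column names in the cal_* tables
           cal_value as value
      from cal_individual_v

    Because the user id is concatenated straight into DDL, it may also be worth validating it first (for example with sys.dbms_assert.simple_sql_name) before the EXECUTE IMMEDIATE; that check is my suggestion and is not part of the original solution.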

  • SQL Query for Chart does not work

    I have written a query that works fine outside of the app, but no data is returned (displayed) within the app. I suspect I could simply create a view containing the query and all would be fine, but I'm intrigued to know why it will not yield data as is. Any ideas? Here's the query:
    select * from (
    select null link, period, data from (
    select dp3.period, dp3.data-dp14.data data from
    (select period, data from ml_data_daily
    where data_point = 3
    and category = 'A. 0'
    and period_type = 'DAY'
    and period >= (select period from ml_period where period_number = :p1_date_start)
    and period <= (select period from ml_period where period_number = :p2_date_end)
    ) dp3,
    (select period, data from ml_data_daily
    where data_point = 14
    and category = 'Invalid Search'
    and period_type = 'DAY'
    and period >= (select period from ml_period where period_number = :p1_date_start)
    and period <= (select period from ml_period where period_number = :p2_date_end)
    ) dp14
    where dp3.period=dp14.period
    order by period)
    )

    Bruce - Make sure the items accessed via the bind variables have been set in session state. If those are just search fields, the chart query won't see them until the page has been submitted.
    Scott
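
    A quick way to confirm Scott's point (my addition, APEX only): check the chart query's binds with the v() function in SQL Workshop or a debug region to see what the chart actually receives before the page is submitted.

    -- If these come back NULL, the subqueries against ml_period return no rows
    -- and the chart has no data; submit the page (or set the items in a
    -- computation) before the chart region renders.
    select v('P1_DATE_START') as p1_date_start,
           v('P2_DATE_END')   as p2_date_end
      from dual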

  • Netflix viewed perfectly for over a month, then it didn't, with this error message: Invalid URL The requested URL "/home", is invalid. Reference #9.9d0bc841.1320231048.92bcc44 -- What's up with this now?

    I've been watching Netflix on my Verizon HP notebook; it's less than 3 months old and runs Windows XP... pretty standard stuff. Anyway, I tried to log in to Netflix and Firefox gave this error message -- Invalid URL
    The requested URL "/home", is invalid. Reference #9.9d0bc841.1320231048.92bcc44 -- I called Netflix and their system is running perfectly. My PC is running perfectly as well. I logged into Netflix using IE -- it worked just fine. So by process of elimination, it's Firefox's problem! I have no clue about it; I've never seen it before on any browser. I tried opening and closing Firefox, restarting the computer, etc. Don't know what else to tell you, except that Firefox now works on every other website EXCEPT Netflix. It's the darnedest thing I've seen -- it happened 10/2/11 at about 3am!

    As I understand:
    The procedure works.
    The link 'appears' to work (i.e. you saw the procedure run a second time at the database level).
    But it doesn't actually work (because you get a 404).
    I think I ran into an issue, on a completely different page type, that produced an 'error 404'. (I can't seem to reproduce it, though.)
    The only way I discovered what was really going on was with an HTTP sniffer (e.g. HTTPFox for Firefox).
    Through that, I noticed that the browser was (for some reason) calling a completely nonsensical APEX page (hence the PAGE NOT FOUND error).
    My solution was to add a "submit to self" branch point.
    If that doesn't work, I'm at a loss.
    MK

  • Concurrent manager encountered an error while running sql*plus for your concurrent request create internal order

    Hi
    We have a big problem. We can't create internal orders; when I run the Create Internal Orders request, it finishes with an ERROR:
    Concurrent Manager encountered an error while running SQL*Plus for your concurrent request 134980682.
    Review your concurrent request log and/or report output file for more detailed information.
    this is the log:
    +---------------------------------------------------------------------------+
    Purchasing: Version : 12.0.0
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    POCISO module: Create Internal Orders
    +---------------------------------------------------------------------------+
    Current system time is 26-JUL-2013 09:21:09
    +---------------------------------------------------------------------------+
    +-----------------------------
    | Starting concurrent program execution...
    +-----------------------------
    +---------------------------------------------------------------------------+
    Start of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    Begin create internal sales order
    Updating Req Headers
    14 Reqs selected for processing
    Top of Fetch Loop
    Source Operating Unit: 82
    Selecting Currency Code
    Currency Code : MXP
    Selecting Order Type
    Order Type ID:1001
    Selecting Price List from Order Type
    Deliver To Location Id: 196
    Inserting Header : 3908784
    Getting the customer id
    Getting the customer id: 15334
    Unhandled Exception : ORA-01403: no data found
    +---------------------------------------------------------------------------+
    End of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    Concurrent Manager encountered an error while running SQL*Plus for your concurrent request 134980682.
    Review your concurrent request log and/or report output file for more detailed information.
    +---------------------------------------------------------------------------+
    Executing request completion options...
    Output file size:
    78
    Output is not being printed because:
    The print option has been disabled for this report.
    Finished executing request completion options.
    +---------------------------------------------------------------------------+
    Concurrent request completed
    Current system time is 26-JUL-2013 09:21:14
    +---------------------------------------------------------------------------+
    Any suggestions for resolving it?
    Thanks & Regards.

    In document 294932.1, Section 4, there are no pre-installation patches or updates for OS RedHat Linux AS4.
    When I type echo $LD_ASSUME_KERNEL it doesn't display any value, so do I need to set the LD_ASSUME_KERNEL value manually?
    If yes, please let me know the path and command to set the kernel value.
    Thanks
    Amith

  • Invalid state in SQL query for a function that was created with no errors.

    SQL> CREATE OR REPLACE FUNCTION overlap(in_start1 IN TIMESTAMP, in_end1 IN TIMESTAMP, in_start2 IN TIMESTAMP, in_end2 IN TIMESTAMP) RETURN NUMBER
    2 IS
    3
    4 BEGIN
    5 IF (in_start1 BETWEEN in_start2 AND in_end2 OR in_end1 BETWEEN in_start2 AND in_end2 OR in_start2 BETWEEN in_start1 AND in_end1) THEN
    6 RETURN 0;
    7 ELSE
    8 RETURN 1;
    9 END IF;
    10 END;
    11 /
    Function created.
    SQL> show errors;
    No errors.
    SQL>
    SQL> SELECT * FROM tbl where overlaps(current_time,current_time+1,current_time-1,current_time+2) = 0;
    SELECT * FROM tbl where overlaps(current_time,current_time+1,current_time-1,current_time+2) = 0
    ERROR at line 1:
    ORA-06575: Package or function OVERLAPS is in an invalid state
    I do not understand why OVERLAPS is reported as being in an invalid state in the query, when it was created with no errors earlier. Could anyone help me?

    Marius
    Looking at the logic you are trying to create, it looks like you are checking for overlapping time periods.
    Consider two date/time ranges:
    Range 1 : T1 - T2
    Range 2 : T3 - T4
    Do they overlap?
    1) No: T1 < T4 (TRUE)  T2 > T3 (FALSE)
    T1 --- T2
               T3 --- T4
    2) Yes: T1 < T4 (TRUE)  T2 > T3 (TRUE)
    T1 ---------- T2
               T3 --- T4
    3) Yes: T1 < T4 (TRUE)  T2 > T3 (TRUE)
    T1 -------------------- T2
               T3 --- T4
    4) Yes: T1 < T4 (TRUE)  T2 > T3 (TRUE)
                   T1 ----- T2
               T3 --- T4
    5) Yes: T1 < T4 (TRUE)  T2 > T3 (TRUE)
               T1 --- T2
           T3 ------------ T4
    6) No: T1 < T4 (FALSE) T2 > T3 (TRUE)
                    T1 --- T2
           T3 --- T4
    Answer: Yes, they overlap if:
    T1 < T4 AND T2 > T3
    So you can code the logic in your SQL as simply:
    SELECT *
    FROM tbl
    WHERE range1_start < range2_end
    AND   range1_end   > range2_start
    If you go around implementing PL/SQL functions for simple logic that can be achieved in SQL alone, you cause context switching between the SQL and PL/SQL engines, which degrades performance. Wherever possible stick to plain SQL and only use PL/SQL when absolutely necessary.
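
    A worked example of that predicate (the sample timestamps are made up for illustration):

    with ranges as (
      select timestamp '2013-06-01 09:00:00' as t1,  -- range 1 start
             timestamp '2013-06-01 12:00:00' as t2,  -- range 1 end
             timestamp '2013-06-01 11:00:00' as t3,  -- range 2 start
             timestamp '2013-06-01 14:00:00' as t4   -- range 2 end
        from dual
    )
    select case when t1 < t4 and t2 > t3 then 'overlap' else 'no overlap' end as result
      from ranges;
    -- returns 'overlap', because 09:00-12:00 and 11:00-14:00 share 11:00-12:00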

  • Request for a SQL statement to scan for special characters in the tables

    Hi Gurus,
    I'm requesting a SQL statement to find the characters that will occupy multiple bytes (in a Unicode character set database) and to replace them with a single-byte character, in a database which is yet to be migrated to a Unicode (multibyte) character set.
    Any kind of help is highly appreciated.
    Thanks in advance

    The query below will find all multi-byte characters in a string column:
    select  string_column,
            substr(string_column,column_value,1)
      from  some_table,
            table(
                  cast(
                       multiset(
                                select  level
                                  from  dual
                                  connect by level <= length(string_column)
                               )
                       as sys.OdciNumberList
                      )
                 )
      where lengthb(substr(string_column,column_value,1)) > 1
      order by string_column,
               column_value
    /
    SY.
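
    As a hedged follow-up on the "replace with a single-byte character" part of the question (my sketch, not SY's; some_table and string_column are the same placeholders as above):

    -- rows that contain at least one multi-byte character
    select string_column
      from some_table
     where lengthb(string_column) > length(string_column);

    -- one possible cleanup: CONVERT to a single-byte character set; characters
    -- with no mapping become that character set's replacement character
    update some_table
       set string_column = convert(string_column, 'US7ASCII')
     where lengthb(string_column) > length(string_column);

    Test the CONVERT result carefully first; the target character set here is only an example.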

  • High invalidations in v$sqlarea for one query tagged with "SQL Analyze"

    Hi All,
    Hopefully I'm posting to the right forum; if not, please let me know. Thanks.
    I have one pre-production issue and still don't have any clue how to move forward.
    This is a 2-node RAC on the Linux platform with Oracle 11.2.0.2.
    In the beginning this environment had a lot of performance issues: huge "cursor: pin S wait on X", "latch: shared pool"
    and "library cache: mutex X" waits were causing very bad performance. After Oracle Support suggested disabling a few hidden parameters
    and adjusting some others, the environment stabilized (according to Oracle, the initial issue was caused by a high version count).
    But we can still find some "latch: shared pool" and "library cache: mutex X" in the top-5 wait event list of the hourly AWR report.
    This time Oracle says it might be caused by high reloads and high invalidations in the SQL area (not sure how true that is); luckily these events
    are not causing a performance issue at the moment, but we're asking Support how we can get rid of the mutex/latch waits.
    They gave me one query to check the invalidations in v$sqlarea, and they suspect the high invalidations are being caused by the application.
      select *
      from v$sqlarea
      order by invalidations DESC;
      The weird thing is, there is one SQL statement tagged with "SQL Analyze" that always causes high invalidations. But we're not able to get more detail (based on SQL_ID)
      from v$sql or v$session. This SQL appears in v$sqlarea and disappears within 1 or 2 seconds, so it is hard to get more information.
      And the statement is exactly the same every time, but I don't know why SQL Analyze keeps checking it.
      This SQL is triggered by the SYS user, and it inserts into an MLOG$ table (one of the application's materialized view logs):
      insert into "test"."MLOG$_test1" select * from "test"."MLOG$_test1"
      The v$sqlarea information is as below; sometimes the invalidations can hit more than 10,000:
      SQL_ID              SQL_TEXT                                                                                        LOADS  INVALIDATIONS
      0m6dhq90rg82x /* SQL Analyze(632,0) */ insert into "test"."MLOG$_test" select * from "test"."MLOG$_test  7981    7981
      Anyone have any idea how I can move forward on this issue? Oracle is asking me to use SQLTXPLAIN to get the details.
      Please share if you have any idea. Thanks in advance.
      Regards,
      Klng

    Hi Dom,
    We have checked and there is no SQL Tuning task enabled for this SQL_ID. Below are the optimizer parameters in this environment; the hidden parameters were changed as suggested by Oracle Support.
    NAME                                 TYPE        VALUE
    _optimizer_adaptive_cursor_sharing   boolean     FALSE
    _optimizer_extended_cursor_sharing_rel string   NONE
    object_cache_optimal_size            integer     102400
    optimizer_capture_sql_plan_baselines boolean     FALSE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      11.2.0.2
    optimizer_index_caching              integer     90
    optimizer_index_cost_adj             integer     10
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    optimizer_use_invisible_indexes      boolean     FALSE
    optimizer_use_pending_statistics     boolean     FALSE
    optimizer_use_sql_plan_baselines     boolean     TRUE
    plsql_optimize_level                 integer     2
    SQL> select * from dba_sql_plan_baselines;
    no rows selected
    SQL>
    Yeah, we did run ASH, but the high invalidations were not captured in the report. The SQL tagged with SQL Analyze disappears from v$sqlarea very quickly (within 1 or 2 seconds).
    Thanks.
    Regards,
    Klng
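
    One more place to look (my suggestion, assuming AWR snapshots retain the statement): since the cursor leaves v$sqlarea within seconds, the per-snapshot history may still show how often it is loaded and invalidated.

    select snap_id, plan_hash_value,
           loads_delta, invalidations_delta, parse_calls_delta
      from dba_hist_sqlstat
     where sql_id = '0m6dhq90rg82x'
     order by snap_id desc;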

  • Single SQL call for both Report & Chart

    Hi everyone,
    I am pretty new to APEX and I am loving it :)
    I need to create a page with two regions (SQL Report and Flash Chart), and both regions' data will come from an identical SQL statement.
    Could I make them both get their data from a single SQL call, instead of making two identical SQL calls, one for each region?
    Thanks in advance for any help.

    I can't think of any practical way to use only one call. The best I can come up with is to create the query as a Shared Component. At least that way you're using the identical SQL source.
    HTH
    Chris
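
    If two calls are acceptable but you want a single definition to maintain, a view is another hedged option (the names below are placeholders, not from the original post):

    create or replace view report_and_chart_v as
      select null as link, status_label as label, count(*) as value
        from some_status_table
       group by status_label;

    -- Report region source:  select label, value from report_and_chart_v
    -- Chart region source:   select link, label, value from report_and_chart_v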

  • Invalid month error sometimes in SQL Developer

    Hi
    Sometimes I get an 'Invalid Month' error in SQL Developer when I execute the following query:
    select TRUNC(TO_DATE('01-JUN-2013','DD-MON-YYYY')) from dual;
    But when I disconnect the session and reconnect, it works fine.
    Any suggestions on how to avoid this issue?
    Thanks

    872202 wrote:
    Hi
    sometimes I get 'Invalid Month error' in SQL developer when I execute the following query
    select TRUNC(TO_DATE('01-JUN-2013','DD-MON-YYYY')) from dual;
    But when I disconnect the session and reconnect, it works fine.
    Any suggestions on how to avoid this issue?
    Thanks
    There's absolutely no reason that that should result in that error.
    The SQL is sent to the database and it's perfectly valid SQL: it takes a string, converts it to a date with the correct date format mask, and then truncates the date (which by default is to the day, which in this case is pointless as it's already truncated), returning a DATE datatype, which SQL Developer will then render using its date display format.
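
    One thing worth checking all the same (my addition, not part of the reply above): if something resets the session's NLS_DATE_LANGUAGE to a non-English language, the literal 'JUN' stops parsing until you reconnect, which would match the symptom.

    -- see what the current session is using
    select parameter, value
      from nls_session_parameters
     where parameter in ('NLS_DATE_LANGUAGE', 'NLS_DATE_FORMAT');

    -- making the month language explicit removes the dependency on session state
    select trunc(to_date('01-JUN-2013', 'DD-MON-YYYY', 'NLS_DATE_LANGUAGE = ENGLISH'))
      from dual;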

  • Invalid request for a change in window state

    Ok...
    Creating a JSR168 portlet and it works fine in Pluto, but I get the following when I deploy to P7:
    ERROR: Content not available. REASONE: Invalid request for a change in window state
    Here is what the portlet is supposed to do:
    * Display a form
    * User clicks find
    * Lookup is performed
    * Window state in the processAction() method of the portlet is changed to MAXIMIZED
    * I'm guessing this is where the error is occurring
    I also think that for the TableContainer, I have some sort of attribute incorrect having to do with one of the channelsIs values.
    I tried toggling them one way or another without any effect.
    Suggestions?

    I've run into this issue also. I need a way for a portlet to change to its maximized state for the purposes of user registration. Is there any way this can be done? I know that the Sun JSP "Providers", which are shown on their example desktop, have some sort of maximized state they can go into, but I'm not sure how to use this in a portlet.
    There seems to be very inconsistent support between the proprietary Sun provider API and the portlet API.

  • Java.sql.SQLException: Invalid SQL type for column

    Hi guys!
    We are migrating from TOMCAT to WebLogic and we are getting the following error:
    java.sql.SQLException: Invalid SQL type for column
         at javax.sql.rowset.RowSetMetaDataImpl.checkColType(RowSetMetaDataImpl.java:94)
         at javax.sql.rowset.RowSetMetaDataImpl.setColumnType(RowSetMetaDataImpl.java:439)
         at      .initMetaData(CachedRowSetImpl.java:743)
         at com.sun.rowset.CachedRowSetImpl.populate(CachedRowSetImpl.java:621)
    In the Tomcat environment, rowset.jar is in the common/endorsed directory. Do I need to put rowset.jar in any specific location?
    Thanks.
    Best regards

    Hi!
    We are using WebLogic 11g (10.3.2). We know about datasource configuration, but for now we are just migrating from Tomcat to WebLogic without changing the application source code.
    Is there any option to solve this problem?
    Thanks a lot!
    Best Regards

  • Help asked for a sql request - thanks

    Hello,
    I'm not a SQL guru... Can anyone help with this SQL query?
    First I have this:
    SELECT ADDINFO_ID, INFO, LANGUAGE_FK, ENGLISH_NAME
    FROM V_ADDINFOS
    WHERE LANGUAGE_FK = 'EN' (which is very simple...-)
    But now it gets complicated... I have to add this to the same query:
    select sum(val) as nbrInfo
    from(
    select count(*) val from eccgis where addinfo1_fk = ADDINFO_ID
    union all
    select count(*) val from eccgis where addinfo2_fk = ADDINFO_ID
    union all
    select count(*) val from eccgis where addinfo3_fk = ADDINFO_ID
    union all
    select count(*) val from thirdgis where addinfo1_fk = ADDINFO_ID
    union all
    select count(*) val from thirdgis where addinfo2_fk = ADDINFO_ID
    )
    In other words, for each row of the first select, I need to know how many times it is linked in the tables eccgis and thirdgis...
    Hope it is clear... -)
    Thank you very very much,
    Michel

    Hi, Michel,
    Almost anywhere that SQL allows an expression (such as a column name, literal or function call), it also allows a scalar sub-query: a SELECT statement based on any table (or tables) that returns one column and (at most) one row. Like other sub-queries, scalar sub-queries can be correlated to the main query.
    To get the grand total you want on each row of your output:
    SELECT ADDINFO_ID, INFO, LANGUAGE_FK, ENGLISH_NAME
    , (select count(*) from eccgis where addinfo1_fk = ADDINFO_ID)
    + (select count(*) from eccgis where addinfo2_fk = ADDINFO_ID)
    + (select count(*) from eccgis where addinfo3_fk = ADDINFO_ID)
    + (select count(*) from thirdgis where addinfo1_fk = ADDINFO_ID)
    + (select count(*) from thirdgis where addinfo2_fk = ADDINFO_ID)
    AS nbrInfo
    FROM V_ADDINFOS
    WHERE LANGUAGE_FK = 'EN';
    VERY IMPORTANT: Each sub-query must be in parentheses. You'll get a run-time error if any scalar sub-query returns more than one row. (Returning no rows is okay: the value will be NULL.)
    By the way, this looks like a bad table design. If each row in eccgis or thirdgis can be associated with more than one foreign key, they should be kept in a separate table. That's the standard way to handle many-to-many relationships.

  • I got an iphone from three network Uk, I am planning go to have holidays abroad for one and a half month, I request to my mobile company if they can unlock my iphone to use abroad with another sim , after one week they unlock my mobile.

    I got an iPhone from the Three network in the UK. I am planning to go on holiday abroad for one and a half months, so I asked my mobile company if they could unlock my iPhone so I can use it abroad with another SIM; after one week they unlocked it. My basic question: if for any reason someone stole my iPhone, could they activate it in another country, given that my iPhone is unlocked with the original codes?

    Yes.

  • Performance issue on 1 SQL request

    Hi,
    We have a performance problem. We have 2 systems, PRD and QAS (QAS is a copy of PRD as of September 2nd).
    The SQL request is identical.
    The table structures are identical.
    The indexes are identical.
    The views are identical.
    DB stats have all been recalculated on both systems.
    initSID.ora values are almost identical; only memory-related parameters (and the SID) are different.
    Obviously, the data is different.
    For your info, the view ZBW_VIEW_EKPO fetches its data from the tables EIKP, LFA1, EKKO and EKPO.
    Starting on September 15th, a query that used to take 10 minutes started taking over 120 minutes.
    I compared the explain plans on both systems and they are really different:
    SQL request:
    SELECT
      "MANDT" , "EBELN" , "EBELP" , "SAISO" , "SAISJ" , "AEDAT" , "AUREL" , "LOEKZ" , "INCO2" ,
      "ZZTRANSPORT" , "PRODA" , "ZZPRDHA" , "ZZMEM_DATE" , "KDATE" , "ZZHERKL" , "KNUMV" , "KTOKK"
    FROM
      "ZBW_VIEW_EKPO"
    WHERE
      "MANDT" = :A0#
    Explain plan for PRD:
    SELECT STATEMENT ( Estimated Costs = 300,452 , Estimated #Rows = 0 )
            8 HASH JOIN
              ( Estim. Costs = 300,451 , Estim. #Rows = 4,592,525 )
              Estim. CPU-Costs = 9,619,870,571 Estim. IO-Costs = 299,921
              Access Predicates
                1 TABLE ACCESS FULL EIKP
                  ( Estim. Costs = 353 , Estim. #Rows = 54,830 )
                  Estim. CPU-Costs = 49,504,995 Estim. IO-Costs = 350
                  Filter Predicates
                7 HASH JOIN
                  ( Estim. Costs = 300,072 , Estim. #Rows = 4,592,525 )
                  Estim. CPU-Costs = 9,093,820,218 Estim. IO-Costs = 299,571
                  Access Predicates
                    2 TABLE ACCESS FULL LFA1
                      ( Estim. Costs = 63 , Estim. #Rows = 812 )
                      Estim. CPU-Costs = 7,478,316 Estim. IO-Costs = 63
                      Filter Predicates
                    6 HASH JOIN
                      ( Estim. Costs = 299,983 , Estim. #Rows = 4,592,525 )
                      Estim. CPU-Costs = 8,617,899,244 Estim. IO-Costs = 299,508
                      Access Predicates
                        3 TABLE ACCESS FULL EKKO
                          ( Estim. Costs = 2,209 , Estim. #Rows = 271,200 )
                          Estim. CPU-Costs = 561,938,609 Estim. IO-Costs = 2,178
                          Filter Predicates
                        5 TABLE ACCESS BY INDEX ROWID EKPO
                          ( Estim. Costs = 290,522 , Estim. #Rows = 4,592,525 )
                          Estim. CPU-Costs = 6,913,020,784 Estim. IO-Costs = 290,141
                            4 INDEX SKIP SCAN EKPO~Z02
                              ( Estim. Costs = 5,144 , Estim. #Rows = 4,592,525 )
                              Search Columns: 2
                              Estim. CPU-Costs = 789,224,817 Estim. IO-Costs = 5,101
                             Access Predicates Filter Predicates
    Explain plan for QAS:
    SELECT STATEMENT ( Estimated Costs = 263,249 , Estimated #Rows = 13,842,540 )
            7 HASH JOIN
              ( Estim. Costs = 263,249 , Estim. #Rows = 13,842,540 )
              Estim. CPU-Costs = 59,041,893,935 Estim. IO-Costs = 260,190
              Access Predicates
                1 TABLE ACCESS FULL LFA1
                  ( Estim. Costs = 63 , Estim. #Rows = 812 )
                  Estim. CPU-Costs = 7,478,316 Estim. IO-Costs = 63
                  Filter Predicates
                6 HASH JOIN
                  ( Estim. Costs = 263,113 , Estim. #Rows = 13,842,540 )
                  Estim. CPU-Costs = 57,640,387,953 Estim. IO-Costs = 260,127
                  Access Predicates
                    4 HASH JOIN
                      ( Estim. Costs = 2,127 , Estim. #Rows = 194,660 )
                      Estim. CPU-Costs = 513,706,489 Estim. IO-Costs = 2,100
                      Access Predicates
                        2 TABLE ACCESS FULL EIKP
                          ( Estim. Costs = 351 , Estim. #Rows = 54,830 )
                          Estim. CPU-Costs = 49,504,995 Estim. IO-Costs = 348
                          Filter Predicates
                        3 TABLE ACCESS FULL EKKO
                          ( Estim. Costs = 1,534 , Estim. #Rows = 194,660 )
                          Estim. CPU-Costs = 401,526,622 Estim. IO-Costs = 1,513
                          Filter Predicates
                    5 TABLE ACCESS FULL EKPO
                      ( Estim. Costs = 255,339 , Estim. #Rows = 3,631,800 )
                      Estim. CPU-Costs = 55,204,047,516 Estim. IO-Costs = 252,479
                      Filter Predicates
    One more bit of information: PRD was copied to TST about a month ago, and that system is also slow.
    I have tried almost everything I could think of.

    > DB stats have all been recalculated on both systems
    > initSID.ora values are almost identical. only memory related parameters (and SID) are different.
    > Obviously, data is different
    Ok, so you say: the parameters are different, the data is different and the statistics are different.
    I'm surprised that you still expect the plans to be the same...
    > For you info, view ZBW_VIEW_EKPO fetched its info from tables EIKP, LFA1, EKKO and EKPO.
    We will need to see the view definition !
    > Starting on September 15th, a query that used to take 10 minutes started taking over 120 minutes.
    Oh - Sep. 15th - that explains it ... just kiddin'.
    Ok, so it appears to be obvious that from that day on, the execution plan for the query was changed.
    If you're on Oracle 10g you may look it up again and also recall the CBO stats that had been used back then.
    > I compared explain plans on both system and they are really different:
    >
    > SQL request:
    >
    SELECT
    >   "MANDT" , "EBELN" , "EBELP" , "SAISO" , "SAISJ" , "AEDAT" , "AUREL" , "LOEKZ" , "INCO2" ,
    >   "ZZTRANSPORT" , "PRODA" , "ZZPRDHA" , "ZZMEM_DATE" , "KDATE" , "ZZHERKL" , "KNUMV" , "KTOKK"
    > FROM
    >   "ZBW_VIEW_EKPO"
    > WHERE
    >   "MANDT" = :A0#
    Ok - basically you fetch all rows from this view as MANDT is usually not a selection criteria at all.
    > Explain plan for PRD:

    SELECT STATEMENT ( Estimated Costs = 300,452 , Estimated #Rows = 0 )
    >
    >         8 HASH JOIN
    >           ( Estim. Costs = 300,451 , Estim. #Rows = 4,592,525 )
    >           Estim. CPU-Costs = 9,619,870,571 Estim. IO-Costs = 299,921
    >           Access Predicates
    >
    >             1 TABLE ACCESS FULL EIKP
    >               ( Estim. Costs = 353 , Estim. #Rows = 54,830 )
    >               Estim. CPU-Costs = 49,504,995 Estim. IO-Costs = 350
    >               Filter Predicates
    >             7 HASH JOIN
    >               ( Estim. Costs = 300,072 , Estim. #Rows = 4,592,525 )
    >               Estim. CPU-Costs = 9,093,820,218 Estim. IO-Costs = 299,571
    >               Access Predicates
    >
    >                 2 TABLE ACCESS FULL LFA1
    >                   ( Estim. Costs = 63 , Estim. #Rows = 812 )
    >                   Estim. CPU-Costs = 7,478,316 Estim. IO-Costs = 63
    >                   Filter Predicates
    >                 6 HASH JOIN
    >                   ( Estim. Costs = 299,983 , Estim. #Rows = 4,592,525 )
    >                   Estim. CPU-Costs = 8,617,899,244 Estim. IO-Costs = 299,508
    >                   Access Predicates
    >
    >                     3 TABLE ACCESS FULL EKKO
    >                       ( Estim. Costs = 2,209 , Estim. #Rows = 271,200 )
    >                       Estim. CPU-Costs = 561,938,609 Estim. IO-Costs = 2,178
    >                       Filter Predicates
    >                     5 TABLE ACCESS BY INDEX ROWID EKPO
    >                       ( Estim. Costs = 290,522 , Estim. #Rows = 4,592,525 )
    >                       Estim. CPU-Costs = 6,913,020,784 Estim. IO-Costs = 290,141
    >
    >                         4 INDEX SKIP SCAN EKPO~Z02
    >                           ( Estim. Costs = 5,144 , Estim. #Rows = 4,592,525 )
    >                           Search Columns: 2
    >                           Estim. CPU-Costs = 789,224,817 Estim. IO-Costs = 5,101
    >                          Access Predicates Filter Predicates
    Ok, we've no restriction to the data, so Oracle chooses the access methods it thinks are best for large volumes of data - Full table scans and HASH JOINS. The index skip scan is quite odd - maybe this is due to one of the join conditions.
    > Explain plan for QAS:

    SELECT STATEMENT ( Estimated Costs = 263,249 , Estimated #Rows = 13,842,540 )
    >
    >         7 HASH JOIN
    >           ( Estim. Costs = 263,249 , Estim. #Rows = 13,842,540 )
    >           Estim. CPU-Costs = 59,041,893,935 Estim. IO-Costs = 260,190
    >           Access Predicates
    >
    >             1 TABLE ACCESS FULL LFA1
    >               ( Estim. Costs = 63 , Estim. #Rows = 812 )
    >               Estim. CPU-Costs = 7,478,316 Estim. IO-Costs = 63
    >               Filter Predicates
    >             6 HASH JOIN
    >               ( Estim. Costs = 263,113 , Estim. #Rows = 13,842,540 )
    >               Estim. CPU-Costs = 57,640,387,953 Estim. IO-Costs = 260,127
    >               Access Predicates
    >
    >                 4 HASH JOIN
    >                   ( Estim. Costs = 2,127 , Estim. #Rows = 194,660 )
    >                   Estim. CPU-Costs = 513,706,489 Estim. IO-Costs = 2,100
    >                   Access Predicates
    >
    >                     2 TABLE ACCESS FULL EIKP
    >                       ( Estim. Costs = 351 , Estim. #Rows = 54,830 )
    >                       Estim. CPU-Costs = 49,504,995 Estim. IO-Costs = 348
    >                       Filter Predicates
    >                     3 TABLE ACCESS FULL EKKO
    >                       ( Estim. Costs = 1,534 , Estim. #Rows = 194,660 )
    >                       Estim. CPU-Costs = 401,526,622 Estim. IO-Costs = 1,513
    >                       Filter Predicates
    >
    >                 5 TABLE ACCESS FULL EKPO
    >                   ( Estim. Costs = 255,339 , Estim. #Rows = 3,631,800 )
    >                   Estim. CPU-Costs = 55,204,047,516 Estim. IO-Costs = 252,479
    >                   Filter Predicates
    Ok, we see significantly different table sizes here, but at least this second plan leaves out the superfluous Index Skip Scan.
    How to move on from here?
    1. Check whether you've installed all the current patches. Not all bugs that are in the system are hit all the time, so it may very well be that after the new CBO stats were calculated you just began to hit one of them.
    2. Make sure that all parameter recommendations are implemented on the systems. This is crucial for the CBO.
    3. Provide a description of the Indexes and the view definition.
    The easiest would be: perform an Oracle CBO trace and provide a download link to it.
    regards,
    Lars
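
    For reference, a rough sketch of the CBO trace Lars suggests (standard event 10053 syntax; run it in the session that issues the statement, then collect the trace file from the database's trace directory):

    alter session set tracefile_identifier = 'zbw_view_ekpo_cbo';
    alter session set events '10053 trace name context forever, level 1';
    -- ... execute the SELECT ... FROM "ZBW_VIEW_EKPO" WHERE "MANDT" = :A0 here ...
    alter session set events '10053 trace name context off';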
