Disable logging of SELECT statements

I would like to disable the logging of SELECT statements without disabling the logging of other SQL. From what I can see in the documentation, I am limited to logging either all SQL or none.
Reasoning: TopLink generates 100 times as many SELECT statements as INSERTs or UPDATEs for my app, and they bloat my log files enormously. I use the INSERT/UPDATE statements for debugging from time to time, so I still want those logged, but the SELECTs are never useful.
Can I define some custom logger class that handles this?
Thanks,
Stucco

You could write your own SessionLog subclass for this. You would need to override the log method, check the SessionLogEntry message for SELECT, and skip those entries while passing everything else through to the parent implementation.

Similar Messages

  • Redo log contains select statement ??

    If I issue a SELECT command, will it be stored in the redo log file?
    If it gets stored there, will that SELECT statement also be re-run during recovery and add time to the recovery?

    LGWR writes when more than 1 MB of changes has accumulated, when the log buffer is one-third full, or for a few other reasons, which means uncommitted data also gets flushed to the online redo log file.
    But some people say only committed SQL commands get stored in the log buffer, while a few others, including my Oracle faculty, say only committed changes are flushed from the redo log buffer to the online redo log file. Can anyone say with 100% certainty?

    I know reading the documentation is not your cup of tea; unfortunately, it is the most reliable source of information about Oracle. The Oracle Concepts guide mentions:
    Note:
    Sometimes, if more buffer space is needed, LGWR writes redo log entries before a transaction is committed. These entries become permanent only if the transaction is later committed.

  • How a select statement generates redo logs

    Can someone explain how a SELECT statement generates redo logs?
    Naveen

    Redo from a select statement happens when dirty blocks get written to the database and are then "cleaned out" on the next read of those blocks. This can happen when a large DML statement does not commit before the DB Writer needs to write the modified blocks to disk. The next time the blocks are read by a select statement they get modified (the delayed block cleanout), hence redo is generated by the select statement.
    Read the following for a better understanding.
    http://www.dbspecialists.com/specialists/specialist2003-10.htm
    Jaffar

  • Excessive flashback log generates with select statement

    Hi everyone;
    We have some extractions taken from a database that has flashback logging enabled ("flashback on").
    The extractions are just select statements, but when they are run, the database produces excessive flashback logs.
    What could be the reason the database produces flashback logs for plain select statements?
    (It is certain that there are no insert/update/delete operations.)
    Version: 10.2.0.4.3
    Thanks...

    Do you run heavy updates/deletes before the selects?
    I am not sure whether delayed block cleanout also has the same effect on flashback logs, but the output below leads me to think that way.
    HR@ORACOS> select * from v$flashback_database_stat;
    BEGIN_TIME        END_TIME          FLASHBACK_DATA    DB_DATA  REDO_DATA ESTIMATED_FLASHBACK_SIZE
    20100527 15:32:53 20100527 15:50:16      875266048 1207132160 2038729728                        0
    20100527 14:32:50 20100527 15:32:53      248160256  127295488  450139648               1.3215E+10
    20100527 13:32:48 20100527 14:32:50       10452992   15646720    4400640               1.5549E+10
    20100527 12:32:43 20100527 13:32:48      745693184  948461568 1311620608               2.2789E+10
    20100527 11:25:56 20100527 12:32:43     1262026752 1984741376 2358546432               2.7212E+10
    HR@ORACOS> set autotrace traceonly statistics
    HR@ORACOS>  update base_table_np set y='INVALID';
    commit;
    4021808 rows updated.
    Statistics
           2512  recursive calls
        8341430  db block gets
        4069140  consistent gets
         120569  physical reads
    1908471980  redo size
            848  bytes sent via SQL*Net to client
            793  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
        4021808  rows processed
    HR@ORACOS> set autotrace off;
    HR@ORACOS> select * from v$flashback_database_stat; 
    HR@ORACOS>
    BEGIN_TIME        END_TIME          FLASHBACK_DATA    DB_DATA  REDO_DATA ESTIMATED_FLASHBACK_SIZE
    20100527 15:32:53 20100527 16:00:36     1236664320 2021974016 4019910656                        0
    20100527 14:32:50 20100527 15:32:53      248160256  127295488  450139648               1.3215E+10
    20100527 13:32:48 20100527 14:32:50       10452992   15646720    4400640               1.5549E+10
    20100527 12:32:43 20100527 13:32:48      745693184  948461568 1311620608               2.2789E+10
    20100527 11:25:56 20100527 12:32:43     1262026752 1984741376 2358546432               2.7212E+10
    HR@ORACOS> set autotrace traceonly statistics
    HR@ORACOS> select * from base_table_np;
    4021808 rows selected.
    Statistics
            139  recursive calls
              0  db block gets
          53908  consistent gets
           4404  physical reads
        1652384  redo size                                                  ------->delayed block cleanout effect
      175008833  bytes sent via SQL*Net to client
          88996  bytes received via SQL*Net from client
           8045  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
        4021808  rows processed
    HR@ORACOS> set autotrace off
    HR@ORACOS> select * from v$flashback_database_stat;    ----flashback data size increases
    HR@ORACOS>
    BEGIN_TIME        END_TIME          FLASHBACK_DATA    DB_DATA  REDO_DATA ESTIMATED_FLASHBACK_SIZE
    20100527 15:32:53 20100527 16:01:11     1305264128 2054594560 4021728256                        0
    20100527 14:32:50 20100527 15:32:53      248160256  127295488  450139648               1.3215E+10
    20100527 13:32:48 20100527 14:32:50       10452992   15646720    4400640               1.5549E+10
    20100527 12:32:43 20100527 13:32:48      745693184  948461568 1311620608               2.2789E+10
    20100527 11:25:56 20100527 12:32:43     1262026752 1984741376 2358546432               2.7212E+10
    Basically, what I do is update a 4-million-row table, which generates big redo along with flashback logs.
    When I select from it after the update, I still see redo generated because of delayed block cleanout, but I also see a slight increase in flashback data size (check the first row of v$flashback_database_stat), which matches what you are asking about: the select statement generates flashback log.
    Tested on 11.2.0.1 with a single active session on the DB.
    Coskan Gundogar
    Blog: http://coskan.wordpress.com
    Twitter: http://www.twitter.com/coskan
    Linkedin: http://uk.linkedin.com/in/coskan
    ---------

  • Stop auditing select statements issued against SYS objects

    Hi,
    My current client has a requirement to track destructive updates (i.e. insert, update, delete) issued by users who can connect directly to the database. At the moment though, SELECT statements issued against SYS-owned objects are also being captured to the Oracle audit trail. For the time being at least these need to be disabled.
    I've issued NOAUDIT SELECT TABLE/SEQUENCE and NOAUDIT SELECT ANY TABLE/SEQUENCE commands, as has a user with the SYSDBA privilege, and they're still being logged. Is there any way to switch these off? I don't know if it's significant (I'm not a DBA by trade) but the audit_sys_operations parameter is set to True.
    My client is currently running Oracle Database 10.2.0.5.0 standard edition.
    If anyone has any suggestions I'd be grateful.
    Thanks in advance,
    Steve

    Hi,
    Thanks for the input so far ...
    @Eduardo and KarK ...
    show parameter audit
    audit_file_dest string D:\ORACLE\PRODUCT\10.2.0\ADMIN\USSUPM2\ADUMP
    audit_sys_operations boolean TRUE
    audit_trail string DB, EXTENDED
    If we set audit_sys_operations to FALSE, won't that stop auditing of all actions carried out by, for example, someone who connects as SYSDBA? That is something that still needs to be captured. Unfortunately those records go to the Windows Event Log, but at least they're captured somewhere.
    @Hemant
    This auditing was in place before my client took me on, so unfortunately I can't say what was used to initiate it. What I can say, though, is that they absolutely don't want to turn off auditing of SYS-type users, just the auditing of SELECTs against SYS-owned objects.
    Thinking simplistically, could I just write a script which trawls dba_objects for SYS-owned tables, views and sequences and explicitly issues a NOAUDIT SELECT against what's found (something like the sketch below), and get one of the SYSDBA-type people we have access to to run it?
    Thanks in advance (again)
    Steve
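
    For what it's worth, a rough, untested sketch of the kind of script described above. It only generates NOAUDIT SELECT statements for SYS-owned tables, views and sequences from dba_objects, for review and execution by one of the SYSDBA users; whether object-level NOAUDIT actually silences these entries on this system is exactly what would need testing.
    -- Sketch only: generate NOAUDIT SELECT statements for SYS-owned objects.
    -- Review the spooled file before running it with suitable privileges.
    SET PAGESIZE 0
    SET LINESIZE 200
    SET FEEDBACK OFF
    SPOOL noaudit_sys_selects.sql
    SELECT 'NOAUDIT SELECT ON ' || owner || '.' || object_name || ';'
      FROM dba_objects
     WHERE owner = 'SYS'
       AND object_type IN ('TABLE', 'VIEW', 'SEQUENCE');
    SPOOL OFF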

  • DML Error logging for Update statement

    Hello,
    I am facing a problem with regard to DML Error logging with Update statement .
    oracle : 10.2
    I am executing following DML update:
    BEGIN
    UPDATE
    table_1  a
    SET a.Exp_DATE =a.EFF_DATE
    WHERE  a.col_a1 != (SELECT b.colb1
                         FROM table_2  b
                         WHERE  a.msisdn =b.msisdn )
    LOG ERRORS INTO table_1_ERR REJECT LIMIT UNLIMITED;                        
    END;
    I was expecting that "ORA-01427: single-row subquery returns more than one row" would be captured in the error log table "table_1_err",
    but instead I got a run-time error and the whole DML statement was rolled back.
    Please let me know whether this exception is simply not captured by DML error logging.
    Thanks,
    Abhishek

    Oracle logs the following errors during DML operations:
    * Column values that are too large.
    * Constraint violations (NOT NULL, unique, referential, and check constraints).
    * Errors raised during trigger execution.
    * Errors resulting from type conversion between a column in a subquery and the corresponding column of the table.
    * Partition mapping errors.
    >
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/transform.htm#sthref777
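
    As a point of reference, here is a minimal sketch of the usual error-logging setup (table names taken from the question; DBMS_ERRLOG is the standard package for creating the log table). Only the row-level errors listed above end up in the log table; other errors, including, as you saw, the ORA-01427 raised by the subquery, still fail the whole statement.
    -- Create the error log table for TABLE_1 with the standard DBMS_ERRLOG package.
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name     => 'TABLE_1',
                                   err_log_table_name => 'TABLE_1_ERR');
    END;
    /
    -- Row-level errors from the list above (e.g. constraint violations) are logged
    -- and the statement continues; anything else still aborts the statement.
    UPDATE table_1 a
       SET a.exp_date = a.eff_date
      LOG ERRORS INTO table_1_err ('run 1') REJECT LIMIT UNLIMITED;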

  • ABAP select statements takes too long

    Hi,
    I have a select statement as shown below.
    SELECT * FROM BSEG INTO TABLE ITAB_BSEG
                         WHERE  BUKRS = CO_CODE
                         AND    BELNR IN W_DOCNO
                         AND    GJAHR = THISYEAR
                         AND    AUGBL NE SPACE.
    This select statement runs fine in all of our R/3 systems except one. The problem with this particular system is that the query takes very long (up to an hour if W_DOCNO consists of 5 entries). Sometimes, before the query can complete, an ABAP runtime error is encountered as shown below:
    Database error text........: "ORA-01555: snapshot too old: rollback segment
    number 7 with name "PRS_5" too small"
    Internal call code.........: "[RSQL/FTCH/BSEG ]"
    Please check the entries in the system log (Transaction SM21).
    Please help me with this issue. However, please do not suggest selecting from a smaller table (BSIK, BSAK), as my situation does not permit it.
    I will reward points.

    Don't use SELECT *. Instead, declare your internal table with only the fields you actually need and list those fields in the SELECT, for example (using the fields from your WHERE clause):
    data: begin of itab occurs 0,
            bukrs type bseg-bukrs,
            belnr type bseg-belnr,
            gjahr type bseg-gjahr,
            augbl type bseg-augbl,   " list only the fields your program really uses
          end of itab.
    select bukrs belnr gjahr augbl
      into table itab
      from bseg
      where bukrs = co_code
        and belnr in w_docno
        and gjahr = thisyear
        and augbl ne space.
    This improves performance; SELECT * is not advised.
    regards,
    vijay

  • Getting Username to pass into LOV select statement

    Hello!
    I'm wondering if it's possible to get the username of the currently logged-in user and pass it as a variable into a select statement used in a dynamic LOV in Oracle AS Portal.
    What I'm attempting to do is pull all the values from a table that match the current user's username, for use on a portal report,
    so (as a rough example):
    select color from mytable where username = 'whatever the user name is would be here'
    The current user would then get a list of values from which to select, based on the values entered in this table.
    The issue I'm having is determining how to fill the 'whatever the user name is would be here' portion with the actual logged-in user's username (or even whether it's possible). I know that on the portal itself one can use #USER.FULLNAME# to display the username; is there a similar "variable" one may use to get the username for an LOV SQL call?
    I can get it to work if I statically fix the username to a particular value (e.g. where username = 'Joe.Hacker'), but I'm unsure whether there's a variable or bind value (for lack of a better term) to grab the username on the fly, dynamically.

    portal.wwctx_api.get_user can be used in the SQL query of your portal report to get the user_name of the currently logged-in portal user. For more info on wwctx_api, see the 10.1.2 or 10.1.4 portal API docs at http://www.oracle.com/technology/products/ias/portal/html/plsqldoc/pldoc1012/index.html or http://www.oracle.com/technology/products/ias/portal/html/plsqldoc/pldoc1014/index.html
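
    For example, a minimal sketch of the LOV query using that call (mytable, username and color are the illustrative names from the question; most LOV definitions expect a display value and a return value):
    -- Restrict the LOV to rows belonging to the currently logged-in portal user.
    SELECT color AS display_value,
           color AS return_value
      FROM mytable
     WHERE username = portal.wwctx_api.get_user
     ORDER BY 1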

  • Can I disable logging for session in Oracle 10g?

    I use a procedure to repeatedly delete a lot of rows for an application. Because the DELETE statement is time consuming and the data don't need to be archived, I would like to use a nologging option.
    How can I do that?
    What is the best choice? Can I disable logging for a session in Oracle 10g?
    Thank you
    Edited by: jetq on Jul 23, 2009 9:46 AM

    Hi,
    "Delete" without generating redo-log is not possible.
    If you are on 10g, one way of making this thing efficiant is partition the table, with range-list partitioning. Partitioning existing table will be an excercise in itself, but that will be one time activity.
    In partitioned (or sub-partitioned) tables, you can truncate a partition (or subpartition). That won't generate any redo log (or very very less redo log) and that runs in seconds.
    In your case if you range partition INCOMING table by datetime (1 partition per day) and list sub-partition it by STATUS, that would help.
    Another approach is, if you are deleteing, say 80% records every day and leaving 20% (or very less) records. What you can do is, partition the table only by range on datetime. Then, every time you want to delete data, copy the rows you want to keep in some other table (or temporary table), truncate partition for that day and insert rows back (which you want to keep).
    I have done a similar thing and it works very quickly and generates very less redo log. Redo log generated in case of truncating partition or creating new partitions is just for Oracle internal commands (like data dictionary update etc).
    Have fun.
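
    A minimal sketch of the simpler range-only variant described above (all object names, dates and the 'OPEN' status are illustrative; DATETIME and STATUS are the column names mentioned in the reply):
    -- Illustrative daily range partitions on a hypothetical INCOMING table.
    CREATE TABLE incoming (
      datetime DATE,
      status   VARCHAR2(10),
      payload  VARCHAR2(100)
    )
    PARTITION BY RANGE (datetime) (
      PARTITION p20090722 VALUES LESS THAN (DATE '2009-07-23'),
      PARTITION p20090723 VALUES LESS THAN (DATE '2009-07-24')
    );
    -- Keep the few rows still needed, then drop the day's data almost instantly:
    CREATE TABLE keep_rows AS
      SELECT * FROM incoming PARTITION (p20090722) WHERE status = 'OPEN';
    ALTER TABLE incoming TRUNCATE PARTITION p20090722;
    INSERT INTO incoming SELECT * FROM keep_rows;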

  • Error logging in merge statement

    how to handle error logging in merge statement??
    thanks in advance!!!

    Welcome to the forum!
    Whenever you post please provide your 4 digit Oracle version (result of SELECT * FROM V$VERSION).
    >
    how to handle error logging in merge statement??
    >
    Do it the way the documentation tells you to.
    See the error_logging_clause of the MERGE statement in the SQL Language doc
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm
    It contains an example of using error logging with MERGE
    >
    error_logging_clause
    The error_logging_clause has the same behavior in a MERGE statement as in an INSERT statement. Refer to the INSERT statement error_logging_clause for more information.
    See Also:
    "Inserting Into a Table with Error Logging: Example"

  • Toplink Essentials creates not usable select statement

    My problem is the following:
    I have the following NamedQuery statement in an JPA Entity Class:
    @NamedQuery(name = "Leasingteilvertrag.findSearch",
    query = "select distinct o " +
    " from Leasingteilvertrag o " +
    " left outer join o.sachbearbeiterList s " +
    " where (:wtvStatusBearb1 is null or :wtvStatusBearb2 = -1 or o.wtvStatusBearb =
    :wtvStatusBearb3)" +
    " and (:wtvStatusVerwert1 is null or :wtvStatusVerwert2 = -1 or o.wtvStatusVerwert = :wtvStatusVerwert3)" +
    " and (:wtvAdressNr1 is null or :wtvAdressNr2 = -1 or o.wtvAdressNr =
    :wtvAdressNr3)" +
    " and (:wtvEingangsdatum1 is null or o.wtvEingangsdatum >= :wtvEingangsdatum2)" +
    " and (:wtvEingangsdatumBis1 is null or o.wtvEingangsdatum <= :wtvEingangsdatumBis2)"
    +
    " and (:wtvLlvNr1 is null or o.wtvLlvNr = :wtvLlvNr2)" +
    " and (:wtvFirma1 is null or o.wtvFirma = :wtvFirma2)" +
    " and (:wsbId1 is null or :wsbId2 = -1 or s.wsbSbId = :wsbId3)")
    Oracle TopLink translates this (according to the opmn log of the Application Server) to:
    SELECT DISTINCT t0.WTV_ID, t0.WTV_SL_PLUS_KNZ, t0.WTV_ABGESCHLOSSENDATUM, t0.WTV_SL_TECHNIK_DATE, t0.WTV_ADRESS_POOL, t0.WTV_SL_TECHNIK_KNZ,
    t0.WTV_AKTENZEICHEN_RA, t0.WTV_SONDERAFA_OBJEKTE_AKTUELL,
    t0.WTV_ANZAHLRUECKSTAENDIGERRATEN, t0.WTV_SONDERAFA_OBJEKTE_GEBUCHT,
    t0.WTV_BANKAUSKUNFT_KNZ, t0.WTV_STATUS_BEARB, t0.WTV_EINGANGSDATUM,
    t0.WTV_STATUS_BEARB_DATUM, t0.WTV_EINSCHAETZUNG_BONI, t0.WTV_STATUS_VERWERT,
    t0.WTV_EWB_DATUM_ERFASSUNG, t0.WTV_STATUS_VERWERT_DATUM, t0.WTV_EWB_GEBUCHT,
    t0.WTV_STATUS_FREIGABE, t0.WTV_EWB_SB_ERFASSUNG, t0.WTV_STATUS_FREIGABE_DATUM,
    t0.WTV_FIRMA, t0.WTV_VERBLEIB_AKTE, t0.WTV_WAEHRUNG_AUSFALL,
    t0.WTV_KUENDIGUNGSFORDERUNG, t0.WTV_WAEHRUNG_EWB, t0.WTV_LLV_NR,
    t0.WTV_WAEHRUNG_RUECKST_ANR, t0.WTV_LTV_NR, t0.WTV_WAEHRUNG_SONDERAFA_OBJEKTE,
    t0.WTV_PROZESSKOSTEN_RISIKO, t0.WTV_WAE_EINSCHAETZUNG_BONI,
    t0.WTV_RUECKST_ANRECHNUNG_GEBUCHT, t0.WTV_WAE_KUENDIGUNGSFORDERUNG,
    t0.WTV_SL_KASKO_DATE, t0.WTV_WIEDERGESUNDUNGSDATUM, t0.WTV_SL_PLUS_DATE,
    t0.WTV_ABGESCHLOSSEN_KNZ, t0.WTV_AKTENZEICHEN_FAV, t0.WTV_TEILRISIKO_KNZ,
    t0.WTV_AUSFALL, t0.WTV_BETRUG_KNZ, t0.WTV_EINGANGSDATUM_ALT,
    t0.WTV_CHANGE_USER, t0.WTV_EWB_DATUM_FREIGABE, t0.WTV_CHANGE_DATE,
    t0.WTV_EWB_SB_FREIGABE, t0.WTV_FREIGABE_KOMMENTAR, t0.WTV_KUENDIGUNGSDATUM,
    t0.WTV_ADRESS_NR, t0.WTV_ALTFALL_KNZ, t0.WTV_OPERATIONELLES_RISIKO,
    t0.WTV_BEMERKUNG, t0.WTV_SACHSTAND, t0.WTV_EWB_AKTUELL,
    t0.WTV_LLV_NR_UMFINANZIERUNG, t0.WTV_EWB_KORREKTUR, t0.WTV_SL_KASKO_KNZ,
    t0.WTV_FIRMA_UMFINANZIERUNG, t0.WTV_RUECKST_ANRECHNUNG_AKTUELL,
    t0.WTV_KUENDIGUNGSFORDERUNG_ALT, t0.WTV_LEASINGVERTRAG_ID,
    t0.WTV_RUECKST_ANRECHNUNG_BUC_ID, t0.WTV_EWB_BUC_ID,
    t0.WTV_SONDERAFA_OBJEKTE_BUC_ID FROM VWDB_LEASINGTEILVERTRAG t0,
    VWDB_LEASINGTEILVERTRAG t2, VWDB_SACHBEARBEITER t1 WHERE (((((((((((? IS NULL)
    OR (? = (? - ?))) OR (t0.WTV_STATUS_BEARB = ?)) AND (((? IS NULL) OR (? = (? -
    ?))) OR (t0.WTV_STATUS_VERWERT = ?))) AND (((? IS NULL) OR (? = (? - ?))) OR
    (t0.WTV_ADRESS_NR = ?))) AND ((? IS NULL) OR (t0.WTV_EINGANGSDATUM > ?))) AND
    ((? IS NULL) OR (t0.WTV_EINGANGSDATUM < ?))) AND ((? IS NULL) OR (t0.WTV_LLV_NR
    = ?))) AND ((? IS NULL) OR (t0.WTV_FIRMA = ?))) AND (((? IS NULL) OR (? = (? -
    ?))) OR (t1.WSB_SB_ID = ?))) AND (t1.WSB_LTV_ID (+) = t0.WTV_ID))
    The problem is the "VWDB_LEASINGTEILVERTRAG t2" entry in the FROM clause of the generated select statement. This causes the select to produce a cartesian product.
    Has anyone had such a problem before? How can it be solved?

    Hello,
    I have exactly the same problem (with a simpler query though). I'm running my webapp on a GlassFish V2 (build b09d-fcs), Toplink Essentials JPA impl. and a MySQL 6.0.4 database server.
    I'm trying to run the following JPQL query: select f from Foo f where (1=1) and f.title = :title
    After having set persistence log levels to FINE, the following SQL is displayed:
    select t0.XXX, t0.YYY from Foo t0, Foo t1 where ((?=?) and (t0.title= ?))
    bind => [1, 1, Bar]
    (1 = 1) is used because of dynamic query generation (application code)
    The problem is that the additional FROM clause entry is generating a cartesian product on my Foo table, which causes many duplicated results to be returned.
    I have simplified the select part of the query, but the actual query is of the same kind: no join, only one entity, no inheritance, a single N:1 lazy-initialized relationship (many Foo to one FooParent). The only "exotic" facet of my Foo mapping is the use of an @Enumerated column.
    Is it the expected behavior?
    -Titix

  • Select statement is slow

    Hi Folks,
    I thought maybe you could help me here -
    Oracle version is 9.2.0.7
    I am running a select statement against a table. There are no filter conditions
    The statement is
    SELECT * from Table1
    This statement takes 500 sec to execute before I see any data. The table has around 1 million records.
    There is no VPD on the table; there are no locks or latches on the table when the query is executed.
    Other issues with the table are:
    1. SQL*Loader takes 2-3 hours to load data
    2. A simple delete of 1 record takes 1 hour (there are no constraints on this table).
    I monitored the WAIT events: I see db file scattered read and a lot of time is spent on db_single_file_read.
    This happens in production. The server has 8 CPUs and I am the only user logged in.
    These issues are not observed in UAT environment.
    In production, workarea_size_policy is set to AUTO and db_cache_advice is ON.
    The same parameters are set to MANUAL and READY, respectively, in UAT.
    Any suggestions on why getting the first record using a straight SELECT statement would take 500 sec?
    Thanks

    Justin, here are the trace details:
    The wait is on db file scattered read: 700+ sec.
    The SELECT statement executed is:
    SELECT * from tmpfeedsettlement where rownum < 10
    I used event 10046 at level 12.
    TKPROF: Release 9.2.0.1.0 - Production on Thu May 24 12:44:19 2007
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Trace file: v:\shakti\o01scb3_ora_14197.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    alter session set sql_trace=true
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer goal: CHOOSE
    Parsing user id: 296  (P468707)
    alter session set events '10046 trace name context forever,level 12'
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 296  (P468707)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1       22.45         22.45
    select *
    from
    tmpfeedsettlement where rownum < 10
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2    102.42     765.63     977140     977202          0           9
    total        4    102.43     765.63     977140     977202          0           9
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 296  (P468707)
    Rows     Row Source Operation
          9  COUNT STOPKEY
          9   TABLE ACCESS FULL TMPFEEDSETTLEMENT
    Rows     Execution Plan
          0  SELECT STATEMENT   GOAL: CHOOSE
          9   COUNT (STOPKEY)
          9    TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TMPFEEDSETTLEMENT'
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net more data to client                     2        0.00          0.00
      db file scattered read                      61181        5.62        719.27
      db file sequential read                         7        0.00          0.00
      SQL*Net message from client                     2      336.11        336.12
    alter session set sql_trace=false
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 296  (P468707)
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        3      0.01       0.00          0          0          0           0
    Execute      4      0.00       0.00          0          0          0           0
    Fetch        2    102.42     765.63     977140     977202          0           9
    total        9    102.43     765.64     977140     977202          0           9
    Misses in library cache during parse: 3
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       3        0.00          0.00
      SQL*Net message from client                     3      336.11        358.57
      SQL*Net more data to client                     2        0.00          0.00
      db file scattered read                      61181        5.62        719.27
      db file sequential read                         7        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        0      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
        4  user  SQL statements in session.
        0  internal SQL statements in session.
        4  SQL statements in session.
        1  statement EXPLAINed in this session.
    Trace file: v:\shakti\o01scb3_ora_14197.trc
    Trace file compatibility: 9.00.01
    Sort options: default
           1  session in tracefile.
           4  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           4  SQL statements in trace file.
           4  unique SQL statements in trace file.
           1  SQL statements EXPLAINed using schema:
               P468707.prof$plan_table
                 Default table was used.
                 Table was created.
                 Table was dropped.
       61246  lines in trace file.

  • Checking conditions in SELECT statement

    Hi All,
    I am relatively new to ABAP and I would like to ask a question about checking conditions in the WHERE part of a SELECT statement.
    There are two checkboxes on the selection screen, and each should disable one of the conditions in the SELECT below.
    My question is whether there is a way to solve this without resorting to something like:
    IF checkbox1.
      SELECT ... (without one condition)
    ELSEIF checkbox2.
      SELECT ... (without the other condition)
    ELSE.
      SELECT ... (with both conditions)
    ENDIF.
    The SELECT in question is:
    SELECT q~prueflos q~herkunft q~aufnr q~sa_aufnr q~matnr q~werkvorg
           q~pastrterm q~paendterm
           q~verid q~objnr v~objnr AS objnr_fa v~auart
      FROM qals AS q INNER JOIN vkaufk AS v
        ON q~aufnr = v~aufnr
        INTO CORRESPONDING FIELDS OF TABLE gt_qals
        WHERE q~prueflos IN s_pruefl
          AND q~stat35     EQ space
          AND q~werk       EQ loswk
          AND q~herkunft IN s_herk
          AND q~offennlzmk EQ 0
          AND q~offen_lzmk EQ 0
          AND q~pastrterm IN s_startt
          AND q~paendterm LE s_endt
          AND v~auart IN s_auart.    "('ZCPA', 'ZCPK', 'ZCBA').

    Hi,
    With this, I think you can pass the condition directly into the WHERE clause:
    IF checkbox1.
        v_where = '& BETWEEN ''&'' AND ''&'' '.
        REPLACE '&' WITH key_field INTO v_where.
        REPLACE '&' WITH field_LOW INTO v_where.
        REPLACE '&' WITH field_HIGH INTO v_where.
        CONDENSE v_where.
    ELSEIF  checkbox2.
        v_where = '& BETWEEN ''&'' AND ''&'' '.
        REPLACE '&' WITH key_field INTO v_where.
        REPLACE '&' WITH field_LOW INTO v_where.
        REPLACE '&' WITH field_HIGH INTO v_where.
        CONDENSE v_where.
    ENDIF.
    select * into corresponding fields of table ITAB
                 from (table_name)
                where (v_where).
    In this key_field is your fieldname in the where clause and field_low, field_high are range of values.
    If I write a static query, it looks like this:
    RANGES: MATNR1 FOR MARA-MATNR.
      MATNR1-LOW = MATNR_LOW.
      MATNR1-HIGH = MATNR_HIGH.
      MATNR1-SIGN = 'I'.
      MATNR1-OPTION = 'BT'.
      APPEND MATNR1.
    select * into corresponding fields of table itab
    from mara where matnr BETWEEN 'M100' AND 'M200'.
    Hope it helps you.
    Thanks,
    Mahesh
    Edited by: Mahesh Reddy on Jan 30, 2009 11:23 AM

  • How to disable a standard selection screen of LDB?

    Hi Friends,
    My requirement is to disable the standard selection screen of a standard LDB and use my own selection screen instead. How do I go about it?
    Prompt replies would be rewarded.
    Regards,
    Tamilarasan.

    Hi Tamilarasan,
    You can hide LDB fields in the following ways:
    1. In the TABLES statement, remove the table name for the fields you don't require.
    2. Modify the screen fields with LOOP AT SCREEN.
    3. In the program attributes you can choose an SAP-defined selection screen, if one is provided.
    And you can add new fields in the following ways:
    1. If it is a customer program, then in the normal way, e.g.:
    SELECTION-SCREEN BEGIN OF BLOCK b1.
    SELECT-OPTIONS:
    SELECTION-SCREEN END OF BLOCK b1.
    2. Go to SE36 and modify the selection views by creating a 'CUS' view.
    Not all LDBs have dynamic selections. If you want, you can copy the LDB to a Z* version and add the following statement to get dynamic selections:
    SELECTION-SCREEN DYNAMIC SELECTIONS FOR TABLE xxxx.
    Regards,
    Prabhu Rajesh.

  • Cascading Select Statements - problem with blank drop-downs

    Hello,
    I have posted a number of questions about Cascading Select Statements in APEX and though I've received some good information, I still get a blank drop-down when I select the first LOV.
    I also found "How to test an On-Demand Process used for AJAX" on the web. Here is the link to the web page:
    http://www.inside-oracle-apex.com/2006/12/how-to-test-on-demand-process.html
    When I try to test the ON-DEMAND Application Process in the Address Bar of my browser by typing the following, I get an error:
    http://beta.biztech.net:2020/pls/apex/f?p=4000:0:211233229176642:APPLICATION_PROCESS=CASCADING_SELECT_LIST:::P6_PROJECT_ID:CASCADING_SELECTLIST_ITEM_1
    The error I get is:
    Unexpected error, unable to find item name at application or page level.
    ERR-1002 Unable to find item ID for item "P6_PROJECT_ID" in application "4000".
    As perhaps a last ditch effort, I will post all the steps, all the code and a link to my application.
    Here is a link you can visit to view my application:
    http://beta.biztech.net:2020/pls/apex/f?p=112:1
    You can log in with the following ID and Password
    ID: tsimkiss
    PW: TS92
    Here are the steps that I have followed and the code that I have used.
    ++++++++++++++++++++++++++++++++++++++++++++++++++
    1. Create an application process in Shared Components
    - On Demand CASCADING_SELECT_LIST - like this:
    Process Point: On Demand
    Name: CASCADING_SELECT_LIST
    TYPE: PL/SQL Anonymous Block
    BEGIN
      OWA_UTIL.mime_header ('text/xml', FALSE);
      HTP.p ('Cache-Control: no-cache');
      HTP.p ('Pragma: no-cache');
      OWA_UTIL.http_header_close;
      HTP.prn ('<select>');
      HTP.prn ('<option value="' || 1 || '">' || '- select tasks -' || '</option>');
      FOR c IN (SELECT newops.task_name AS task_name,
                       newops.task_id AS task_id
                  FROM NEW_OPPORTUNITIES newops
                UNION
                SELECT DISTINCT pt.task_name AS task_name,
                       pt.task_id AS task_id
                  FROM pa_tasks@bizdev pt,
                       pa.pa_projects_all@bizdev prj
                 WHERE prj.project_id = pt.project_id
                   AND prj.project_id =
                       CASE
                         WHEN TO_NUMBER(:cascading_selectlist_item_1) = 1
                           THEN prj.project_id
                         ELSE TO_NUMBER(:cascading_selectlist_item_1)
                       END)
      LOOP
        HTP.prn ('<option value="' || c.task_id || '">' || c.task_name || '</option>');
      END LOOP;
      HTP.prn ('</select>');
    END;
    2. Create an application item in Shared Components:
    Name: CASCADING_SELECTLIST_ITEM_1
    3. Create an LOV in Shared Components
    - This is the Primary LOV (name it similar to it's select list page item):
    List of Values Name: PROJECT_ID
    Source: Lists of Values Query
    SELECT newops.CLIENT AS project_name, newops.PROJECT_ID AS project_id FROM NEW_OPPORTUNITIES newops
    UNION
    SELECT ppa.NAME AS project_name, ppa.PROJECT_ID AS project_id FROM pa.pa_projects_all@bizdev ppa
    WHERE ppa.project_status_code='APPROVED'
    AND (ppa.COMPLETION_DATE IS NULL or ppa.completion_date > sysdate)
    AND (ppa.CLOSED_DATE IS NULL or ppa.closed_date > sysdate)
    ORDER BY project_name asc
    4. Create a javascript and put it in the header of the page where cascading drop-downs are:
    <script>
    function get_select_list_xml(pThis,pSelect){
      var l_Return = null;
      var l_Select = html_GetElement(pSelect);
      var get = new htmldb_Get(null,html_GetElement('pFlowId').value,
        'APPLICATION_PROCESS=CASCADING_SELECT_LIST',0);
      get.add('CASCADING_SELECTLIST_ITEM_1',pThis.value);
      gReturn = get.get('XML');
      if(gReturn && l_Select){
        var l_Count = gReturn.getElementsByTagName("option").length;
        l_Select.length = 0;
        for(var i=0;i<l_Count;i++){
          var l_Opt_Xml = gReturn.getElementsByTagName("option")[i];
          appendToSelect(l_Select, l_Opt_Xml.getAttribute('value'),
            l_Opt_Xml.firstChild.nodeValue);
        }
      }
      get = null;
    }
    function appendToSelect(pSelect, pValue, pContent) {
      var l_Opt = document.createElement("option");
      l_Opt.value = pValue;
      if(document.all){
        pSelect.options.add(l_Opt);
        l_Opt.innerText = pContent;
      }else{
        l_Opt.appendChild(document.createTextNode(pContent));
        pSelect.appendChild(l_Opt);
      }
    }
    </script>
    5. Create two Select List page items:
    P6_PROJECT_ID <-- This is the primary drop-down
    P6_TASK_ID <-- This is the secondary drop-down
    6. In your primary select list, put the following into HTML Form Element Attributes:
    HTML Form Element Attributes: onchange="get_select_list_xml(this,'P6_TASK_ID')"
    Other settings on the page:
    Name: P6_PROJECT_ID
    Display As: Select List
    Source Used: Always, replacing any existing values in session state
    Source Type: Database Column
    Source value or expression: PROJECT_ID
    Named LOV: PROJECT_ID <--- Choose from drop-down (this is the Application LOV created earlier)
    Null display values: - select project -
    Display Null: Yes
    7. The second select list is based on an LOV and depends on the value of the first select list:
    Name: P6_TASK_ID
    Display As: Select List
    Source Used: Always, replacing any existing values in session state
    Source Type: Database Column
    Source value or expression: TASK_ID
    Null display values: - select project -
    Display Null: Yes
    List of values definition:
    SELECT newops.task_name AS task_name,
    newops.task_id AS task_id
    FROM NEW_OPPORTUNITIES newops
    UNION
    SELECT DISTINCT pt.task_name AS task_name,
    pt.task_id AS task_id
    FROM pa_tasks@bizdev pt,
    pa.pa_projects_all@bizdev prj
    WHERE prj.project_id=pt.project_id
    AND prj.project_id=:P6_PROJECT_ID
    ORDER BY task_name asc
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    If you need an actual running copy of my application, I'm not sure I can upload it to the Oracle APEX website since it uses database links to some tables. If necessary, I will give you my login info if you email me directly.
    If someone could just straighten out my code, especially the On-Demand application process, I think that would really help me out.
    Hope someone out there can help me.
    LEH

    Sorry, looking at your code, that testing URL is still incorrect. You should be passing name/value pairs in the last arguments, and you're passing P6_PROJECT_ID as the name part and CASCADING_SELECTLIST_ITEM_1 as the value part. In your application process you are using CASCADING_SELECTLIST_ITEM_1 as the parent value for the P6_TASK_ID drop-down, so it is this name/value pair that you'll need to test. So your URL should look something like this...
    http://beta.biztech.net:2020/pls/apex/f?p=112:0:211233229176642:APPLICATION_PROCESS=CASCADING_SELECT_LIST:::CASCADING_SELECTLIST_ITEM_1:[some project id]
    (Note: Where [some project id] should be an ID for a project in your database, that has tasks.)
    And I'm with Dan here, I still can't access that link you provided. apex.oracle.com should be your next move if you can't resolve it, as you've got at least two people willing to go and have a look at your code.
    Hope it helps,
    Anthony.
