Dump for a select query

Hi Gurus,
I have a select query as follows:
    SELECT vbap~vbeln
           vbap~posnr
           vbap~matnr
           vbap~arktx
           vbap~pstyv
           vbap~abgru
           vbap~prodh
           vbap~netwr
           vbap~werks
           vbap~kwmeng
           vbap~prctr
           vbap~ps_psp_pnr
      FROM vbap
      INTO TABLE i_item_vbap
      FOR ALL ENTRIES IN i_reg_vbak
      WHERE vbap~vbeln EQ i_reg_vbak-vbeln
        AND vbap~matnr IN s_matnr
        AND vbap~werks IN s_werks.
However, when I execute the program, I get a short dump:
the exception "DBIF_RSQL_INVALID_RSQL", assigned to class "CX_SY_OPEN_SQL_DB".
Please help me understand this.
Also, there are only 85 entries in i_reg_vbak, and there are not many values in s_matnr and s_werks either.
I tried executing this query without FOR ALL ENTRIES, but it still gave a dump.
Waiting for your reply, thanks!

Hello,
SELECT vbap~vbeln
       vbap~posnr
       vbap~matnr
       vbap~arktx
       vbap~pstyv
       vbap~abgru
       vbap~prodh
       vbap~netwr
       vbap~werks
       vbap~kwmeng
       vbap~prctr
       vbap~ps_psp_pnr
  FROM vbap
  INTO CORRESPONDING FIELDS OF TABLE it_vbap.
If you use any fields of VBAP on the selection screen, add them to the WHERE clause, e.g. WHERE vbeln IN s_vbeln.
With regards,
Sumanth Reddy
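
For what it's worth, two checks that are usually suggested for this FOR ALL ENTRIES pattern are (a) that the line type of i_item_vbap has exactly the selected fields, in the same order and with the same types, and (b) that the driver table is not empty, because an empty FOR ALL ENTRIES table makes the whole WHERE clause be ignored. The sketch below is only an illustration under those assumptions (the real declaration of i_item_vbap is not shown in the post), not a confirmed fix for the DBIF_RSQL_INVALID_RSQL dump; the ST22 long text usually names the part of the statement the database interface rejected.

TYPES: BEGIN OF ty_item,
         vbeln      TYPE vbap-vbeln,
         posnr      TYPE vbap-posnr,
         matnr      TYPE vbap-matnr,
         arktx      TYPE vbap-arktx,
         pstyv      TYPE vbap-pstyv,
         abgru      TYPE vbap-abgru,
         prodh      TYPE vbap-prodh,
         netwr      TYPE vbap-netwr,
         werks      TYPE vbap-werks,
         kwmeng     TYPE vbap-kwmeng,
         prctr      TYPE vbap-prctr,
         ps_psp_pnr TYPE vbap-ps_psp_pnr,
       END OF ty_item.

DATA: i_item_vbap TYPE STANDARD TABLE OF ty_item.

" With an empty driver table, FOR ALL ENTRIES would read the whole VBAP.
IF NOT i_reg_vbak[] IS INITIAL.
  SELECT vbeln posnr matnr arktx pstyv abgru prodh netwr
         werks kwmeng prctr ps_psp_pnr
    FROM vbap
    INTO TABLE i_item_vbap
    FOR ALL ENTRIES IN i_reg_vbak
    WHERE vbeln EQ i_reg_vbak-vbeln
      AND matnr IN s_matnr
      AND werks IN s_werks.
ENDIF.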

Similar Messages

  • Short Dump for Dynamic Select Query

    Hello all,
    I get a short dump for my dynamic select query at the end of the code. The error is "The types of operands "dbtab" and "itab" cannot be converted into one another."
    My code looks like below.
    FORM get_ccnum_2  USING    p_tabname TYPE dd03l-tabname.
    DATA: p_table(30)  TYPE c.
      FIELD-SYMBOLS: <dyn_wa>,
                     <t> TYPE table.
      DATA: it_fldcat    TYPE lvc_t_fcat.
      TYPE-POOLS : abap.
      DATA: it_details   TYPE abap_compdescr_tab,
            wa_details   TYPE abap_compdescr.
      DATA: ref_descr    TYPE REF TO cl_abap_structdescr.
      DATA: new_table    TYPE REF TO data,
            new_line     TYPE REF TO data,
            wa_it_fldcat TYPE lvc_s_fcat.
    p_table = p_tabname.
      ref_descr ?= cl_abap_typedescr=>describe_by_name( p_table ).
      it_details[] = ref_descr->components[].
      LOOP AT it_details INTO wa_details.
        CLEAR wa_it_fldcat.
        wa_it_fldcat-fieldname = wa_details-name .
        wa_it_fldcat-datatype  = wa_details-type_kind.
        wa_it_fldcat-intlen    = wa_details-length.
        wa_it_fldcat-decimals  = wa_details-decimals.
        APPEND wa_it_fldcat TO it_fldcat .
      ENDLOOP.
      CALL METHOD cl_alv_table_create=>create_dynamic_table
        EXPORTING
          it_fieldcatalog = it_fldcat
        IMPORTING
          ep_table        = new_table.
    ASSIGN new_table->* TO <t>.
    CREATE DATA new_line LIKE LINE OF <t>.
      ASSIGN new_line->* TO <dyn_wa>.
    wa_cond = 'CCNUM <> '' '' '.
    APPEND wa_cond TO tab_cond.
          SELECT * INTO TABLE <t>
                   FROM     (p_table)
                   WHERE    (tab_cond)
                   ORDER BY (tab_ord).
    ENDFORM.                    " GET_CCNUM_2

    Hi,
    I tried to execute the code using table BSEGC and it gave a short dump.
    The actual exception shown in ST22 is UNICODE_TYPES_NOT_CONVERTIBLE.
    I think there is something wrong in the internal table creation.
    Instead of using the method cl_alv_table_create=>create_dynamic_table to create the dynamic table, I used the following and it worked:
    CREATE DATA new_table TYPE TABLE OF (p_table).
    * Comment begin  " Naren
    *  ref_descr ?= cl_abap_typedescr=>describe_by_name( p_table ).
    *  it_details[] = ref_descr->components[].
    *  LOOP AT it_details INTO wa_details.
    *    CLEAR wa_it_fldcat.
    *    wa_it_fldcat-fieldname = wa_details-name .
    *    wa_it_fldcat-datatype  = wa_details-type_kind.
    *    wa_it_fldcat-intlen    = wa_details-length.
    *    wa_it_fldcat-decimals  = wa_details-decimals.
    *    APPEND wa_it_fldcat TO it_fldcat .
    *  ENDLOOP.
    *  CALL METHOD cl_alv_table_create=>create_dynamic_table
    *    EXPORTING
    *      it_fieldcatalog = it_fldcat
    *    IMPORTING
    *      ep_table        = new_table.
    * Comment End.  " Naren
    CREATE DATA new_table TYPE TABLE OF (p_table).   " New code by naren
    Please try this.
    Thanks
    Naren
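
    In compact form, the working variant boils down to letting the dictionary provide the line type, so it always matches what the SELECT delivers. A minimal sketch of the FORM with that change (tab_cond is assumed to be filled elsewhere, as in the original code; the dynamic ORDER BY would be added back the same way as before):

    FORM get_ccnum_2 USING p_tabname TYPE dd03l-tabname.
      DATA: new_table TYPE REF TO data.
      FIELD-SYMBOLS: <t> TYPE STANDARD TABLE.

      " Build the internal table type straight from the dictionary,
      " so the line type is identical to the database table's structure.
      CREATE DATA new_table TYPE TABLE OF (p_tabname).
      ASSIGN new_table->* TO <t>.

      SELECT * FROM (p_tabname)
               INTO TABLE <t>
               WHERE (tab_cond).
    ENDFORM.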

  • Dump error in select query

    Hey Gurus,
    I am working on a requirement in which a SELECT query fetches 8 fields from a Z-table:
    IF NOT IT_ZQAPP1[] IS INITIAL.
      SELECT AUFNR VORNR PROBNR PIPENO NVORNR SHIFT
             PSTAT PRODAT
        FROM ZQAPP
        INTO CORRESPONDING FIELDS OF TABLE IT_ZQAPP_B
        FOR ALL ENTRIES IN IT_ZQAPP1
        WHERE AUFNR = IT_ZQAPP1-AUFNR
          AND WERKS IN P_WERKS
          AND PRODAT LE P_PRODAT.
    ENDIF.
    It works fine for smaller data volumes but throws the dump TSV_TNEW_PAGE_ALLOC_FAILED for more than 70,000 entries, and my requirement is to fetch more than a lakh (100,000) records.
    Kindly suggest the corrections.
    Thanks in Advance...

    Hi!
    This error typically occurs when there is no more memory available for your ABAP session. This means you have to do one of the following:
    - restrict your report to smaller intervals, e.g. process only 1 month instead of 6 months together
    - rewrite your program and eliminate/refresh/free the unnecessary internal tables, or unnecessary columns from internal tables; you might even try to remove unnecessary lines from internal tables
    - use SELECT ... ENDSELECT instead of a SELECT ... INTO TABLE statement; this may slow down your program, but it needs less memory
    Always check your memory usage with transaction SM04 (Goto - Memory menu).
    Regards
    Tamás
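
    To make the SELECT ... ENDSELECT idea concrete: the PACKAGE SIZE addition reads the result in blocks, so only one block has to fit into memory at a time. A rough sketch with the names from the question (the block size of 10,000 is arbitrary; the FOR ALL ENTRIES / AUFNR restriction is left out here to keep the sketch simple, and whether it combines with PACKAGE SIZE on your release is worth checking in the documentation):

    DATA: lt_block TYPE STANDARD TABLE OF zqapp,
          ls_block TYPE zqapp.

    SELECT aufnr vornr probnr pipeno nvornr shift pstat prodat
      FROM zqapp
      INTO CORRESPONDING FIELDS OF TABLE lt_block
      PACKAGE SIZE 10000
      WHERE werks  IN p_werks
        AND prodat LE p_prodat.

      " lt_block holds only the current block of up to 10,000 rows here.
      " Process or aggregate it before the next package replaces it.
      LOOP AT lt_block INTO ls_block.
        " ... per-record processing ...
      ENDLOOP.

    ENDSELECT.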

  • Dump in a Select query

    Hi All,
    I am getting a dump that says "TIME LIMIT EXCEEDED". The code is as below:
    LOOP AT it_gen1 INTO wa_gen1.
            SELECT  vbelv posnv
              FROM  vbfa
        INTO TABLE  li_vbfa
             WHERE  vbeln = wa_gen1-bil_number
               AND  posnn = wa_gen1-itm_number.
            DESCRIBE TABLE li_vbfa  LINES gv_count .
            READ TABLE  li_vbfa  INDEX gv_count  .
            IF sy-subrc =  0 .
              MOVE: li_vbfa-vbelv TO gv_vbelv1,
                    li_vbfa-posnv TO gv_posnv.
            ENDIF .
            CALL FUNCTION 'SD_DOCUMENT_PARTNER_READ'
              EXPORTING
                i_parvw   = 'ZS'
                i_posnr   = gv_posnv
                i_vbeln   = gv_vbelv1
              IMPORTING
                e_vbpa    = li_vbpa
                e_vbadr   = li_vbadr
              EXCEPTIONS
                not_found = 1
                OTHERS    = 2.
            IF sy-subrc <> 0.
              MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
            ENDIF.
            MOVE  li_vbpa-kunnr  TO  wa_gen2-kunnr .
            MOVE-CORRESPONDING wa_gen1 TO wa_gen2.
            APPEND  wa_gen2 TO it_gen2  .
          ENDLOOP.
    One of my friends says that calling the function module 'SAPGUI_PROGRESS_INDICATOR' before ENDLOOP will overcome the error.
    Is this a feasible solution for this error?
    Thanks in advance...

    Hi All,
    Thanks for all your inputs. This issue is resolved now. I have used ranges in the select query and removed the FOR ALL ENTRIES construct. The select query now runs very fast and returns correct data.
    The code with the changes as below :-
            LOOP AT it_gen1 INTO wa_gen1.
              CLEAR r_bil_number.
              CLEAR r_itm_number.
              r_bil_number-option = 'EQ'.
              r_bil_number-sign = 'I'.
              r_bil_number-low = wa_gen1-bil_number.
              COLLECT r_bil_number.
              r_itm_number-option = 'EQ' .
              r_itm_number-sign = 'I'.
              r_itm_number-low = wa_gen1-itm_number.
              COLLECT r_itm_number.
            ENDLOOP.
            SELECT vbelv
                   posnv
                   vbeln
                   posnn
                   vbtyp_n
                   vbtyp_v                     
                   FROM vbfa
                   INTO TABLE li_vbfa
             WHERE vbeln IN r_bil_number
             AND posnn   IN r_itm_number
             AND vbtyp_n = c_m.
    Regards
    Abhii.
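
    A small caveat on the ranges approach: with one 'EQ' line per billing document, a very large it_gen1 can make the generated database statement huge. The other common pattern is a single FOR ALL ENTRIES select followed by a sorted READ inside the loop. This is only a sketch reusing the names from the posts above; wa_vbfa is a helper work area introduced here and assumed to have the same line type as li_vbfa.

    IF NOT it_gen1[] IS INITIAL.
      SELECT vbelv posnv vbeln posnn
        FROM vbfa
        INTO TABLE li_vbfa
        FOR ALL ENTRIES IN it_gen1
        WHERE vbeln = it_gen1-bil_number
          AND posnn = it_gen1-itm_number.
      " The sort order must match the key order of the BINARY SEARCH below.
      SORT li_vbfa BY vbeln posnn.
    ENDIF.

    LOOP AT it_gen1 INTO wa_gen1.
      READ TABLE li_vbfa INTO wa_vbfa
           WITH KEY vbeln = wa_gen1-bil_number
                    posnn = wa_gen1-itm_number
           BINARY SEARCH.
      IF sy-subrc = 0.
        " wa_vbfa-vbelv / wa_vbfa-posnv take the role of gv_vbelv1 / gv_posnv here.
      ENDIF.
    ENDLOOP.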

  • Getting Dump from a select query

    Dear All,
    I am selecting records from KEKO and storing them into E1KEKO, but I am getting a dump error. I need to send those E1KEKO records as an IDoc. Kindly help me figure out how to fix this dump.
      SELECT * INTO CORRESPONDING FIELDS OF e1keko
        FROM keko
        WHERE keko~matnr = p_matnr
          AND keko~werks = p_werks
          AND keko~bwtar = 'Z06'
          AND keko~bwvar = 'Z06'.
        CLEAR t_idoc_data.
        t_idoc_data-segnam = c_segnam_e1keko.
        t_idoc_data-mandt = sy-mandt.
        t_idoc_data-sdata  = e1keko.
        APPEND t_idoc_data.
        CLEAR e1keph.
    ENDSELECT.
    Thanks in advance.
    Anandhan

    Hi,
    If I use the select query below in a test program, I get the same dump.
    DATA: e1keph TYPE e1keph,
          e1keko TYPE e1keko.
       SELECT * INTO CORRESPONDING FIELDS OF e1keko
        FROM keko
        WHERE keko~matnr = '000000000010801071'.
      ENDSELECT.
    The dump shows the following message:
      The reason for the exception is:
      In a SELECT access, the read file could not be placed in the target
      field provided.
      Either the conversion is not supported for the type of the target field,
      the target field is too small to include the value, or the data does not
      have the format required for the target field.
    Thanks in advance
    Anandhan
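
    The dump text points at a field-by-field conversion problem between the database fields of KEKO and the target fields of E1KEKO. One approach that is often suggested in IDoc-filling code (only a sketch, not verified against this particular segment): read into a work area typed like the database table, then move into the segment structure, so each assignment gets an ordinary ABAP conversion. ls_keko is a helper work area introduced here for illustration; the other names are taken from the original post.

    DATA: ls_keko TYPE keko.

    SELECT * FROM keko
             INTO ls_keko
             WHERE matnr = p_matnr
               AND werks = p_werks
               AND bwtar = 'Z06'
               AND bwvar = 'Z06'.

      CLEAR e1keko.
      MOVE-CORRESPONDING ls_keko TO e1keko.
      " If a single field still refuses to convert, move it explicitly
      " (e.g. WRITE ls_keko-<field> TO e1keko-<field>), so the formatting
      " is done per field instead of during the database fetch.

      CLEAR t_idoc_data.
      t_idoc_data-segnam = c_segnam_e1keko.
      t_idoc_data-mandt  = sy-mandt.
      t_idoc_data-sdata  = e1keko.
      APPEND t_idoc_data.
    ENDSELECT.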

  • Need help in optimisation for a select query on a large table

    Hi Gurus
    Please help me optimise the code below. It takes 1 hour for 3000-4000 records; it's very slow.
    My SELECT reads from a table which contains 10 million records.
    I am writing the select on the large table and retrieving values from it by comparing against my table, which has 3000-4000 records.
    I am pasting the code below; please help.
    DATA: wa_i_tab1 TYPE tys_tg_1.
    DATA: i_tab TYPE STANDARD TABLE OF tys_tg_1.
    DATA: wa_result_pkg  TYPE tys_tg_1,
          wa_result_pkg1 TYPE tys_tg_1.

    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
      FROM /BIC/PZREB_SDAT                      " this table contains 10 million records
      INTO CORRESPONDING FIELDS OF TABLE i_tab
      FOR ALL ENTRIES IN RESULT_PACKAGE         " contains 3000-4000 records
      WHERE /BIC/ZREB_SDAT = RESULT_PACKAGE-/BIC/ZREB_SDAT
        AND AGREEMENT      = RESULT_PACKAGE-AGREEMENT
        AND /BIC/ZLITEM1   = RESULT_PACKAGE-/BIC/ZLITEM1.

    SORT RESULT_PACKAGE BY AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    SORT i_tab          BY AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.

    LOOP AT RESULT_PACKAGE INTO wa_result_pkg.
      READ TABLE i_tab INTO wa_i_tab1
           WITH KEY /BIC/ZREB_SDAT = wa_result_pkg-/BIC/ZREB_SDAT
                    AGREEMENT      = wa_result_pkg-AGREEMENT
                    /BIC/ZLITEM1   = wa_result_pkg-/BIC/ZLITEM1.
      IF sy-subrc = 0.
        MOVE wa_i_tab1-/BIC/ZSETLRUN TO wa_result_pkg-/BIC/ZSETLRUN.
        wa_result_pkg1-/BIC/ZSETLRUN = wa_result_pkg-/BIC/ZSETLRUN.
        MODIFY RESULT_PACKAGE FROM wa_result_pkg1
               TRANSPORTING /BIC/ZSETLRUN.
      ENDIF.
      CLEAR: wa_i_tab1, wa_result_pkg1, wa_result_pkg.
    ENDLOOP.

    Hi,
    1) Check whether the RESULT_PACKAGE internal table contains duplicate records with respect to the WHERE condition fields, as below.
    2) Remove INTO CORRESPONDING FIELDS OF TABLE and use INTO TABLE instead.
    Refer to the code below:
    RESULT_PACKAGE1[] = RESULT_PACKAGE[].
    SORT RESULT_PACKAGE1 BY /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE1 COMPARING /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
      FROM /BIC/PZREB_SDAT
      INTO TABLE i_tab
      FOR ALL ENTRIES IN RESULT_PACKAGE1
      WHERE /BIC/ZREB_SDAT = RESULT_PACKAGE1-/BIC/ZREB_SDAT
        AND AGREEMENT      = RESULT_PACKAGE1-AGREEMENT
        AND /BIC/ZLITEM1   = RESULT_PACKAGE1-/BIC/ZLITEM1.
    And one more thing: you are reading from 10 million records, so use PACKAGE SIZE in your select query.
    Also refer to the following link: For All Entries for 1 Million Records
    Regards,
    Dhina..
    Edited by: Dhina DMD on Sep 15, 2011 7:17 AM
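
    One more point the reply does not spell out: the READ TABLE ... WITH KEY inside the loop is a linear scan of i_tab for every row of RESULT_PACKAGE. Since i_tab is sorted anyway, adding BINARY SEARCH (with the sort order matching the key order) or using a SORTED TABLE avoids that. A sketch with the names from the original post:

    " The sort order must match the key order used in the READ below.
    SORT i_tab BY /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.

    LOOP AT RESULT_PACKAGE INTO wa_result_pkg.
      READ TABLE i_tab INTO wa_i_tab1
           WITH KEY /BIC/ZREB_SDAT = wa_result_pkg-/BIC/ZREB_SDAT
                    AGREEMENT      = wa_result_pkg-AGREEMENT
                    /BIC/ZLITEM1   = wa_result_pkg-/BIC/ZLITEM1
           BINARY SEARCH.
      IF sy-subrc = 0.
        wa_result_pkg-/BIC/ZSETLRUN = wa_i_tab1-/BIC/ZSETLRUN.
        MODIFY RESULT_PACKAGE FROM wa_result_pkg
               TRANSPORTING /BIC/ZSETLRUN.
      ENDIF.
    ENDLOOP.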

  • I need to know the proper syntax for my SELECT query, please.

    Hello All,
    Quick one for you:
    Let's say that I have several columns in a table with names such as subject_1, subject_2, subject_3, etc. The table's name is subject_names.
    The number in each of the three column name examples is also a value passed along a query string, the user can select choices, 1, 2 or 3. That query string's variable is $qs.
    So, what I want is a SELECT query that uses the query string value as follows (KEEP IN MIND, I know this is not the proper syntax):
    "SELECT subject_[$qs]
    FROM subject_names";
    I have tried all sorts of combinations of quotes (single and double), dots, brackets, braces and parentheses. I just want to know how to include such a variable within this code.
    Any and all help is sincerely appreciated!
    Cheers,
    wordman

    Well, I did give you the syntax though.
    $query = 'SELECT ' . $qs . ' FROM tbl_name';
    I put spaces around the periods this time to make it clearer.
    If you put the actual word 'subject' in there and just want your form to name its options as the numbers available, you could do this:
    $query = 'SELECT subject_' . $qs . ' FROM tbl_name';
    In PHP you can use either single or double quotes around your query string. I always just use single quotes; I see a lot of others use double quotes.
    Double quotes would look like:
    $query = "SELECT subject_" . $qs . " FROM tbl_name";
    Or, when using double quotes, you can actually just place the variable right in the string without having to concatenate multiple strings like above.
    Since you mentioned that you are good with passing variables I probably don't have to mention that you need to set the value attribute of your option tags (if you are using a select) to the value you want them to pass.
    Ex:
    <select name="choices">
         <option value="1">1</option>
         <option value="2">2</option>
         <option value="3">3</option>
    </select>
    If you have that part all figured out then you can use the syntax above for your query string.
    Good luck.

  • How can replace IF condition for a Select query in my reports?

    IF s_bukrs-LOW = '4312' OR s_bukrs-LOW = '4313' OR s_bukrs-LOW = '4349'  .
    ELSEIF s_bukrs-LOW = '4310' OR s_bukrs-LOW = '4311' OR s_bukrs-LOW = '4587'.
    ENDIF.
    What if I want to use a SELECT query in place of the IF condition in my report, against a Z-table in which I have made entries of ZZUSEREXIT (my program name), VAR1 = 4310, VAR2 = 4311, VAR3 = 4312, VAR4 = 4313, VAR5 = 4349, VAR6 = 4587?

    Hi,
    You can do this in two ways:
    (1) Use two SELECT statements, i.e. one for the IF branch and a second for the ELSEIF branch, like:
        SELECT <field name> ... WHERE bukrs = '4312' OR bukrs = '4313' OR bukrs = '4349'.
        SELECT <field name> ... WHERE bukrs = '4310' OR bukrs = '4311' OR bukrs = '4587'.
    (2) Or keep the IF/ELSEIF and put a SELECT statement inside each branch.
    Hope this helps.
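
    If the goal is to drive those checks from the Z-table instead of hard-coding the company codes, one hedged sketch of the idea is below. The Z-table name ZZUSEREXIT_TAB and its field names (ZZUSEREXIT for the program name, VAR1-VAR6 for the values) are assumptions based on the description in the question; the real definition has to be checked.

    DATA: ls_zexit   TYPE zzuserexit_tab,      " hypothetical Z-table
          r_bukrs_if TYPE RANGE OF bukrs,
          ls_range   LIKE LINE OF r_bukrs_if.

    " Read the row maintained for this report.
    SELECT SINGLE * FROM zzuserexit_tab
           INTO ls_zexit
           WHERE zzuserexit = sy-repid.

    " Build a range from the maintained values. VAR3/VAR4/VAR5 (4312,
    " 4313, 4349) feed the former IF branch; a second range for the
    " ELSEIF branch would be filled the same way from VAR1/VAR2/VAR6.
    ls_range-sign   = 'I'.
    ls_range-option = 'EQ'.
    ls_range-low = ls_zexit-var3. APPEND ls_range TO r_bukrs_if.
    ls_range-low = ls_zexit-var4. APPEND ls_range TO r_bukrs_if.
    ls_range-low = ls_zexit-var5. APPEND ls_range TO r_bukrs_if.

    IF s_bukrs-low IN r_bukrs_if.
      " former IF branch
    ENDIF.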

  • "System Resource Exceeded" for simple select query in Access 2013

    Using Access 2013 32-bit on Windows Server 2008 R2 Enterprise. This computer has 8 GB of RAM.
    I am getting "System Resource Exceeded" errors in two different databases for simple queries like:
    SELECT FROM .... GROUP BY ...
    UPDATE... SET ... WHERE ...
    I compacted the databases several times, with no result. One database is approx. 1 GB in size, the other one approx. 600 MB.
    I didn't have any problems in Office 2010, so I had to revert to that version.
    Please advise.
    Regards,
    M.R.

    Hi Greg. I too am running Access on an RDP server. Checking Task Manager, I can see many copies of MSACCESS running in the process list, from all users on the server. We typically have 40-60 users on that server. I am only changing the Processor Affinity
    for MY copy, and only when I run into this problem. Restarting Access daily, I always get back to multi-processor mode soon thereafter.
    As this problem only seems to happen on very large Access table updates, and as there are only three of us performing those kind of updates, we have good control on who might want to change the affinity setting to solve this problem. However, I
    understand that in other environments this might not be a good solution. In my case, we have 16 processors on the server, so I always take #1, my co-worker here in the US always takes #2, etc. This works for us, and I am only describing it here in case it
    works for someone else.
    The big question in my mind is what multi-threading methods are employed by Microsoft for Access that would cause this problem for very large datasets. Processing time for an update query on, say, 2 million records is massively improved by going down
    to 1 processor. The problem is easily reproduced, and so far I have not seen it in Excel even when working with very large worksheets. Also have not seen it in MS SQL. It is just happening in Access.

  • Need help for SQL SELECT query to fetch XML records from Oracle tables having CLOB field

    Hello,
    I have a scenario wherein I need to fetch records from several Oracle tables having CLOB fields (which hold XML) and then merge them logically to form a hierarchy XML. All these tables are related with a PK-FK relationship. This XML hierarchy has 'OP' as the top-most root node and 'DE' as its bottom-most node, with a one-to-many relationship. Hence, each OP can have multiple GM, each GM can have multiple DM and so on.
    Table structures are mentioned below:
    OP:
    Name             Null        Type
    OP_NBR           NOT NULL    NUMBER(4)      (Primary Key)
    OP_DESC                      VARCHAR2(50)
    OP_PAYLOD_XML                CLOB

    GM:
    Name             Null        Type
    GM_NBR           NOT NULL    NUMBER(4)      (Primary Key)
    GM_DESC                      VARCHAR2(40)
    OP_NBR           NOT NULL    NUMBER(4)      (Foreign Key)
    GM_PAYLOD_XML                CLOB

    DM:
    Name             Null        Type
    DM_NBR           NOT NULL    NUMBER(4)      (Primary Key)
    DM_DESC                      VARCHAR2(40)
    GM_NBR           NOT NULL    NUMBER(4)      (Foreign Key)
    DM_PAYLOD_XML                CLOB

    DE:
    Name             Null        Type
    DE_NBR           NOT NULL    NUMBER(4)      (Primary Key)
    DE_DESC          NOT NULL    VARCHAR2(40)
    DM_NBR           NOT NULL    NUMBER(4)      (Foreign Key)
    DE_PAYLOD_XML                CLOB
    +++++++++++++++++++++++++++++++++++++++++++++++++++++
    SELECT
    j.op_nbr||'||'||j.op_desc||'||'||j.op_paylod_xml AS op_paylod_xml,
    i.gm_nbr||'||'||i.gm_desc||'||'||i.gm_paylod_xml AS gm_paylod_xml,
    h.dm_nbr||'||'||h.dm_desc||'||'||h.dm_paylod_xml AS dm_paylod_xml,
    g.de_nbr||'||'||g.de_desc||'||'||g.de_paylod_xml AS de_paylod_xml
    FROM
    DE g, DM h, GM i, OP j
    WHERE
    h.dm_nbr = g.dm_nbr(+) and
    i.gm_nbr = h.gm_nbr(+) and
    j.op_nbr = i.op_nbr(+)
    +++++++++++++++++++++++++++++++++++++++++++++++++++++
    I am using the above SQL SELECT statement to fetch the XML records, and it gives me all the related XMLs for each entity in a single record (OP, GM, DM, DE). The output of this SQL query is as below:
    Current O/P:
    <resultSet>
         <Record1>
              <OP_PAYLOD_XML1>
              <GM_PAYLOD_XML1>
              <DM_PAYLOD_XML1>
              <DE_PAYLOD_XML1>
         </Record1>
         <Record2>
              <OP_PAYLOD_XML2>
              <GM_PAYLOD_XML2>
              <DM_PAYLOD_XML2>
              <DE_PAYLOD_XML2>
         </Record2>
         <RecordN>
              <OP_PAYLOD_XMLN>
              <GM_PAYLOD_XMLN>
              <DM_PAYLOD_XMLN>
              <DE_PAYLOD_XMLN>
         </RecordN>
    </resultSet>
    Now I want to change my SQL query so that I get the following output structure:
    <resultSet>
         <Record>
              <OP_PAYLOD_XML1>
              <GM_PAYLOD_XML1>
              <GM_PAYLOD_XML2> .......
              <GM_PAYLOD_XMLN>
              <DM_PAYLOD_XML1>
              <DM_PAYLOD_XML2> .......
              <DM_PAYLOD_XMLN>
              <DE_PAYLOD_XML1>
              <DE_PAYLOD_XML2> .......
              <DE_PAYLOD_XMLN>
         </Record>
         <Record>
              <OP_PAYLOD_XML2>
              <GM_PAYLOD_XML1'>
              <GM_PAYLOD_XML2'> .......
              <GM_PAYLOD_XMLN'>
              <DM_PAYLOD_XML1'>
              <DM_PAYLOD_XML2'> .......
              <DM_PAYLOD_XMLN'>
              <DE_PAYLOD_XML1'>
              <DE_PAYLOD_XML2'> .......
              <DE_PAYLOD_XMLN'>
         </Record>
    </resultSet>
    Appreciate your help in this regard!

    Hi,
    A few questions:
    How is your first query supposed to give you an XML output like you show?
    Is there something you're not telling us?
    What's the content of, for example, <OP_PAYLOD_XML1>?
    I don't think it's a good idea to embed the node level in the tag name; it would make more sense to expose that as an attribute.
    What's the DB version, BTW?

  • High # fetches for a select query

    Following is the tkprof output. Can anybody tell me why the number of fetches is so high and what should be done? Thanks.
    SELECT COUNT (*)
    FROM
    ACA_PI_ALLOC_CVE A WHERE A.PRGRM_INDEX = :B2 AND A.ALCTN_CODE = :B1 AND
    TRUNC (SYSDATE) BETWEEN TRUNC (A.FROM_DATE) AND TRUNC (A.TO_DATE) AND
    A.STATUS_CID = 2 AND A.OPRTNL_FLAG = 'A' AND A.FISCAL_MNTH =
    FN_FISCAL_MONTH (SYSDATE)
    call     count    cpu elapsed disk    query current  rows
    Parse     6494   0.32    0.31    0        0       0     0
    Execute  18760   1.69    1.67    0        0       0     0
    Fetch    18760 604.39  593.07  801 22993642       0 18760
    total    44014 606.40  595.05  801 22993642       0 18760
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Optimizer mode: ALL_ROWS
    Parsing user id: 277 (recursive depth: 1)
    Rows  Row Source Operation
       5  SORT AGGREGATE (cr=2242 pr=17 pw=0 time=302663 us)
       5   TABLE ACCESS BY INDEX ROWID ACA_PI_ALLOC_CVE (cr=2242 pr=17 pw=0 time=302541 us)
    3265    INDEX RANGE SCAN TUNE_ACA_PI_ALLOC_CVE_01 (cr=30 pr=0 pw=0 time=26431 us)(object id 1331914)

    FeNiCrC_Neil wrote:
    "Thanks Enrique. Is there a way to decrease the elapsed time for the fetches?"
    I think that Mark pointed you in the right direction. Reposting what you previously posted:
    SELECT
      COUNT (*)
    FROM
      ACA_PI_ALLOC_CVE A
    WHERE
      A.PRGRM_INDEX = :B2
      AND A.ALCTN_CODE = :B1
      AND TRUNC (SYSDATE) BETWEEN TRUNC (A.FROM_DATE) AND TRUNC (A.TO_DATE)
      AND A.STATUS_CID = 2
      AND A.OPRTNL_FLAG = 'A'
      AND A.FISCAL_MNTH = FN_FISCAL_MONTH (SYSDATE);
    call     count    cpu elapsed disk    query current  rows
    Parse     6494   0.32    0.31    0        0       0     0
    Execute  18760   1.69    1.67    0        0       0     0
    Fetch    18760 604.39  593.07  801 22993642       0 18760
    total    44014 606.40  595.05  801 22993642       0 18760
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Optimizer mode: ALL_ROWS
    Parsing user id: 277 (recursive depth: 1)
    Rows Row Source Operation
       5 SORT AGGREGATE (cr=2242 pr=17 pw=0 time=302663 us)
       5  TABLE ACCESS BY INDEX ROWID ACA_PI_ALLOC_CVE (cr=2242 pr=17 pw=0 time=302541 us)
    3265   INDEX RANGE SCAN TUNE_ACA_PI_ALLOC_CVE_01 (cr=30 pr=0 pw=0 time=26431 us)(object id 1331914)
    The query, which is executed 18,760 times, is performing 22,993,642 logical IOs and is experiencing CPU limitations (606.40 CPU seconds, about 1/3 of a second per execution) due to the number of logical IOs. The execution plan that you posted is for a single execution, using a very unselective index. In this case, the unselective index returned 3,265 rows, of which only 5 survived the other restrictions specified by the WHERE clause.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Concurrent access issue with JPA  for a Select query?

    Hi All,
    I have been trying to understand why this code,
    public <T> List<T> findManyNativeSql(String queryString, Class<T> resultClass)
        Query aQuery = getEntityManager().createNativeQuery(queryString, resultClass); // Throwing the following exception
    is causing this exception:
    <openjpa-1.1.0-r422266:657916 fatal general error> org.apache.openjpa.persistence.PersistenceException: Multiple concurrent th
    reads attempted to access a single broker. By default brokers are not thread safe; if you require and/or intend a broker to be
    accessed by more than one thread, set the openjpa.Multithreaded property to true to override the default behavior.
            at org.apache.openjpa.kernel.BrokerImpl.endOperation(BrokerImpl.java:1789)
            at org.apache.openjpa.kernel.BrokerImpl.isActive(BrokerImpl.java:1737)
            at org.apache.openjpa.kernel.DelegatingBroker.isActive(DelegatingBroker.java:428)
            at org.apache.openjpa.persistence.EntityManagerImpl.isActive(EntityManagerImpl.java:606)
            at org.apache.openjpa.persistence.PersistenceExceptions$2.translate(PersistenceExceptions.java:66)
            at org.apache.openjpa.kernel.DelegatingBroker.translate(DelegatingBroker.java:102)
            at org.apache.openjpa.kernel.DelegatingBroker.newQuery(DelegatingBroker.java:1227)
    I have tried looking at the query which gets printed in the logs when the exception is thrown:
    [[ACTIVE] ExecuteThread: '32' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR jpa - ID: 133  queryString= select * from Details where cust_name='SETH'
    Any suggestions on the following would be very helpful.
    Also, AppServer: WL10.3 is being used.
    VR

    I'm not sure what a Broker is in OpenJPA, so you may want to post in an OpenJPA forum. I would suspect, though, that a broker sits underneath the EntityManager, and the error might suggest that this EntityManager instance is being shared among threads. Verify that the EntityManager returned is not being used in multiple threads; if it is used in multiple threads concurrently, this needs to be changed to obtain a new one and release it when done, as they are not thread safe. You might also try using EclipseLink as the JPA provider to see if you get a different error message that might point out the problem.
    Best Regards,
    Chris

  • How to reduce logical count and scan count for a select query

    Hi,
    I have two tables, one is master and the other is history. I need to combine these two tables into one temporary table.
    I am using the below query to create the temp table:
    Select * into temporders
    from
      (select * from orders
       union
       select * from ordershistory) b
    where updateon = (select max(updateon)
                      from (select updateon, name, units, subunits from orders
                            union
                            select updateon, name, units, subunits from ordershistory) a
                      where updateon <= '11/08/2008 11:18 AM'
                        and a.name = b.name and a.units = b.units and a.subunits = b.subunits
                      group by name, units, subunits)
    order by report, subunitsorder
    the statistics for this query:
    SQL Server parse and compile time:
    CPU time = 47 ms, elapsed time = 62 ms.
    Table 'Worktable'. Scan count 556, logical reads 1569, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'ORDERSHISTORY'. Scan count 116, logical reads 339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'ORDERS'. Scan count 116, logical reads 285, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 32 ms, elapsed time = 63 ms.
    (115 row(s) affected)
    As you can see, the logical reads and scan count for the worktable (temporary) are quite high.
    Can anyone give a solution to reduce the scan count and logical reads?
    NOTE: the name, units, subunits and updateon columns have a primary key.

    "SQL Server"
    Am I reading it properly? :(
    This is the Oracle forum and not SQL Server.
    Regards.
    Satyaki De.

  • Displaying different no. of  records for a select Query for the same dbuser

    Hi,
    While querying from my machine after connecting to a custom schema, it displays 3000 rows.
    When the same query is executed from a different machine, it displays just 31 records.
    1. Connect as the xxyy user and execute the below command:
    select count(*) from xxyy.XXYY_APAR_SUPP_CUST_MAST_V;
    The output shows some 3000 records.
    2. Connect as the xxto user and execute the below command for a specific user and specific machine:
    select count(*) from xxyy.XXYY_APAR_SUPP_CUST_MAST_V;
    The output shows just 31 records.
    Please help...!!

    Hi,
    I hope you have resolved the problem by now. If not, the next step could be to check the TNS on both machines, because I suspect that the TNS on the other machine points to a different database.
    What do you mean by the following: "2. Connect as the xxto user and execute the below command for a specific user and specific machine"? So you connect with different users? If yes, then if you connect with the same user from both machines, is the result the same?
    You also wrote: "When the same query is executed from a different machine, it displays just 31 records." Please paste the TNS entry for the database from both of your client machines from where you run the query.
    Salman

  • Simple SELECT query to generate  INSERT for a table

    Hi,
    I am looking for a SELECT query which generates INSERT statements for a table. Please guide.
    Thanks
    PG.

    Like this?
    SQL> SELECT * from kons;
         COL1      COL2
            1         1
            2         2
    SQL> SELECT 'INSERT INTO kons VALUES('||col1||','||col2||');' statement FROM kons;
    STATEMENT
    INSERT INTO kons VALUES(1,1);
    INSERT INTO kons VALUES(2,2);
    SQL>
    If you have character and/or date columns you will have to change my example, but you might get the idea from it.
    You can spool the result to a file.
    Regards,
    Guido
    Edit: ';' added to end of statement...
