Huge number of records failed in PSA

Hi,
I got an error in the PSA: a large number of records (around 1000) failed for the ZQCUSTNM InfoObject because of an alpha conversion problem. I know I can edit them manually one by one, but that will take a lot of time, so I am looking for a quicker solution. If there is one, please let me know.
Thanks,
Reddy

Go to the transfer rules for this data load's InfoSource and open the 'Transfer rules' tab.
For this InfoObject, scroll to the right and tick the last checkbox, 'Apply conversion automatically'. This automatically applies the alpha conversion to every value of that field coming in with the source records.
Save the transfer rules, reactivate them, and then reload.
cheers,
Vishvesh
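
For reference, the 'Apply conversion automatically' flag corresponds to the standard ALPHA conversion exit. A minimal ABAP sketch of the same conversion, e.g. for checking or correcting a single value in a routine; the variable names and field length here are illustrative assumptions, not taken from the original posts:

DATA: lv_custnm_ext TYPE c LENGTH 10,   " value as delivered by the source system
      lv_custnm_int TYPE c LENGTH 10.   " alpha-converted internal value

lv_custnm_ext = '4711'.

" The ALPHA exit left-pads purely numeric values with zeros
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = lv_custnm_ext
  IMPORTING
    output = lv_custnm_int.

" lv_custnm_int now holds '0000004711', the internal format the InfoObject expects.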

Similar Messages

  • Query using system parameter LEVEL returns incorrect huge number of records

    We migrated our database from Oracle 9.2.0.6 to 11.2.0.1.
    The query below throws "ORA-01788: CONNECT BY clause required in this query block".
    select * from (
    select a.BOARD_ID, code, description, is_displayable, order_seq,  board_parent_id, short_description, IS_SUB_BOARD_DISPLAYABLE, LEVEL child_level, sp_board.get_parent_id(a.board_id) top_parent_id, is_top_selected isTopSelected
    from boards a, ALERT_MESSAGE_BOARD_TARGETS b
    where a.board_id = b.board_id and is_displayable = 'Y' and alert_message_id = 5202) temp
    start with board_parent_id = 0
    connect by prior board_id = board_parent_id
    ORDER SIBLINGS BY order_seq;
    Based on online resources, we modified the hidden parameter "_allow_level_without_connect_by" by executing the statement:
    alter system set "_allow_level_without_connect_by"=true scope=spfile;
    After doing this, ORA-01788 is resolved.
    The new issue is that the same query returns 9,015,853 records in 11g, whereas in 9i it returns 64 records. 9i returns the correct number of records. The cause of 11g returning the larger number of records is the LEVEL pseudocolumn used in the query.
    Why is 11g returning such an incorrectly huge number of records?
    Any assistance to address this is greatly appreciated. Thanks!

    The problem lies in the query.
    The Oracle LEVEL pseudocolumn should not be used inside the subquery here. After LEVEL is moved to the main query, the number of returned records is the same as in 9i.
    select c.BOARD_ID, c.code, c.description, c.is_displayable, c.order_seq, c.board_parent_id, c.short_description, c.IS_SUB_BOARD_DISPLAYABLE, LEVEL child_level, c.top_parent_id, c.isTopSelected
    from (
    select a.BOARD_ID, code, description, is_displayable, order_seq, board_parent_id, short_description, IS_SUB_BOARD_DISPLAYABLE, sp_board.get_parent_id(a.board_id) top_parent_id, is_top_selected isTopSelected
    from boards a, ALERT_MESSAGE_BOARD_TARGETS b
    where a.board_id = b.board_id and is_displayable = 'Y' and alert_message_id = 5202
    ) c
    start with c.board_parent_id = 0
    connect by prior c.board_id = c.board_parent_id
    ORDER SIBLINGS BY c.order_seq

  • Process huge number of records in XI

    Hello
    I have a simple scenario where I send details from SAP to a legacy system on a weekly basis, with an SAP ABAP proxy as the sender and a file as the receiver.
    My problem is that when the data is huge (around 600,000 records) I get an RFC timeout error on the SAP ECC side. I tried sending a limited number of records per submit to XI, but that creates multiple files on the receiving FTP server.
    Instead of that, can I use the collect pattern of BPM to gather the messages? Or would using BPM be too much overhead in this case, given that the data is so huge?
    If someone has already tackled this problem, please respond.
    Thanks in advance.
    Regards
    Rajeev

    Hi Rajeev,
    Don't use BPM to solve this; BPM itself still has a lot of performance problems.
    If possible, split the 600,000 records on the R/3 side into several proxy requests (see the sketch after this reply). If required, use the "Add Lines" mode of the file adapter to collect the messages into one file again. You should also consider sending the messages "Exactly Once In Order" to avoid processing more than one message at the same time.
    Regards,
    Udo
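
    As an illustration of the splitting idea above, a minimal ABAP sketch that sends the data in packages, one proxy request per package. The proxy class zco_legacy_out with its execute_asynchronous method, the structure and table types, and the package size are hypothetical placeholders, not objects from the original thread:

    DATA: lt_all     TYPE STANDARD TABLE OF zlegacy_record,   " hypothetical line type
          lt_package TYPE STANDARD TABLE OF zlegacy_record,
          ls_record  TYPE zlegacy_record,
          ls_output  TYPE zlegacy_out_msg,                    " hypothetical proxy request structure
          lo_proxy   TYPE REF TO zco_legacy_out,              " hypothetical generated proxy class
          lv_lines   TYPE i.

    CONSTANTS lc_package_size TYPE i VALUE 50000.

    CREATE OBJECT lo_proxy.

    LOOP AT lt_all INTO ls_record.
      APPEND ls_record TO lt_package.
      " Hand over one proxy request as soon as a full package has been collected
      DESCRIBE TABLE lt_package LINES lv_lines.
      IF lv_lines >= lc_package_size.
        ls_output-records = lt_package.
        lo_proxy->execute_asynchronous( output = ls_output ).
        CLEAR lt_package.
      ENDIF.
    ENDLOOP.

    " Send the remaining records, if any
    IF lt_package IS NOT INITIAL.
      ls_output-records = lt_package.
      lo_proxy->execute_asynchronous( output = ls_output ).
    ENDIF.

    COMMIT WORK.   " asynchronous proxy messages are passed to XI on commit

    With "Exactly Once In Order" as suggested, the receiving file adapter can then append each package to the same target file.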

  • During Import if one record got failed so all records will not import?

    Hi All,
    I want to clarify the following situation:
    I have 10 records in a file, and the import fails for 2 of them for some structural reason. Will MDIS import the 8 good records and create a file with the two failed records in the error folder? Or will it not import anything at all and create a file in the error folder?
    How does it work? Basically, I want to know: if 1-2 records out of a 1000-record file fail for any reason, does the entire file fail to import, or are only the errored records left out?
    Thanks
    Rajeev

    Hi Rajeev,
    > I have 10 records in a file, and the import fails for 2 of them for some structural reason. Will MDIS import the 8 good records and put the two failed records in the error folder, or will it not import at all and put the whole file in the error folder?
    This depends entirely on your business scenario:
    1. If you want that, when there is an exception for even one record, none of the records are imported, that is possible.
    2. If you want that, when there is an exception, only the exception records are sent to the Exception folder and the rest of the records are imported by the MDIS service, that is also possible.
    For the second scenario you just have to set the following property in Console -> Admin -> Ports:
    Block on Structural Exception = No
    And if you want the file not to be imported at all when even one record has an exception, set:
    Block on Structural Exception = Yes
    Hope this is helpful.
    Best Regards,
    Parul Malhotra

  • Time Limit exceeded Error while updating huge number of records in MARC

    Hi experts,
    I have an interface requirement in which a third-party system sends a big file (say 3 to 4 MB) into SAP. In the proxy we
    use the BAPI BAPI_MATERIAL_SAVEDATA to save the material/plant data. Because of the huge amount of data, the SAP queues are
    getting blocked, causing the time limit exceeded issue. As the BAPI can only update a single material at a time, it is called once for every
    material we want to update.
    Below is the relevant part of the code in my proxy:
        " Call the BAPI to update the safety stock value
        CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
          EXPORTING
            headdata    = gs_headdata
            " clientdata  =
            " clientdatax =
            plantdata   = gs_plantdata
            plantdatax  = gs_plantdatax
          IMPORTING
            return      = ls_return.
        IF ls_return-type <> 'S'.
          CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
          MOVE ls_return-message TO lv_message.
          " Populate the error table and process the next record
          CALL METHOD me->populate_error
            EXPORTING
              message = lv_message.
          CONTINUE.
        ENDIF.
    Can anyone please let me know the best possible approach for this issue?
    Thanks in Advance,
    Jitender
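
    Not from the original thread, but one common way to keep a single LUW small enough to avoid the time limit is to commit in packages instead of once at the end. A hedged sketch along those lines; the input table gt_materials, the package size, and the way gs_plantdata/gs_plantdatax get filled are assumptions for illustration:

    CONSTANTS lc_commit_every TYPE i VALUE 500.      " assumed package size
    DATA lv_counter TYPE i.

    LOOP AT gt_materials INTO gs_headdata.           " gt_materials: assumed table of header lines
      " ... fill gs_plantdata / gs_plantdatax for this material ...
      CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
        EXPORTING
          headdata   = gs_headdata
          plantdata  = gs_plantdata
          plantdatax = gs_plantdatax
        IMPORTING
          return     = ls_return.

      IF ls_return-type <> 'S'.
        CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
        CONTINUE.
      ENDIF.

      lv_counter = lv_counter + 1.
      " Commit in small packages so no single LUW runs into the time limit
      IF lv_counter >= lc_commit_every.
        CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
          EXPORTING
            wait = 'X'.
        lv_counter = 0.
      ENDIF.
    ENDLOOP.

    " Commit whatever is left over at the end
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = 'X'.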

    Hi Raju,
    Use the following routine to get fiscal year/period using calday.
    *Data definition:
    DATA: l_Arg1 TYPE RSFISCPER ,
          l_Arg2 TYPE RSFO_DATE ,
          l_Arg3 TYPE T009B-PERIV .
    *Calculation:
    l_Arg2  = TRAN_STRUCTURE-POST_DATE.   " this is the date that you have to supply
    l_Arg3  = 'V3'.
    CALL METHOD CL_RSAR_FUNCTION=>DATE_FISCPER(
      EXPORTING I_DATE = l_Arg2
                I_PER = l_Arg3
      IMPORTING E_FISCPER = l_Arg1  ).
    RESULT = l_Arg1 .
    Hope it will solve your problem!
    Best Regards,
    SG

  • Performance issue fetching huge number of record with "FOR ALL ENTRIES"

    Hello,
    We need to extract a huge amount of data (about 1,000,000 records) from the VBEP table, whose overall size is about 120 million records.
    We currently use this statement:
    CHECK NOT ( it_massive_vbep[] IS INITIAL ) .
    SELECT (list of fields) FROM vbep JOIN vbap
                 ON vbep~vbeln = vbap~vbeln AND
                  vbep~posnr = vbap~posnr
                 INTO CORRESPONDING FIELDS OF  w_sched
                 FOR ALL ENTRIES IN it_massive_vbep
                 WHERE    vbep~vbeln   = it_massive_vbep-tabkey-vbeln
                    AND    vbep~posnr   = it_massive_vbep-tabkey-posnr
                    AND    vbep~etenr   = it_massive_vbep-tabkey-etenr.
    Note that the internal table it_massive_vbep always contains records with a fully specified key.
    Do you think this query could be optimized further?
    many thanks,
    -Enrico

    There are 2 options to improve performance:
    + you should work in blocks of 10,000 to 50,000
    + you should check archiving options; does it really make sense to keep
    > the VBEP table, which has an overall size of about 120 million records?
    Split it_massive_vbep into blocks, copy each block into it_vbep_notsomassive (it_vbep_2), and run:
    CHECK NOT ( it_vbep_2[] IS INITIAL ) .
      GET RUN TIME FIELD start.
    SELECT (list of fields)
                  INTO CORRESPONDING FIELDS OF TABLE w_sched
                  FROM vbep JOIN vbap
                  ON vbep~vbeln = vbap~vbeln AND
                       vbep~posnr = vbap~posnr
                  FOR ALL ENTRIES IN it_vbep_2
                  WHERE vbep~vbeln = it_vbep_2-vbeln
                  AND      vbep~posnr = it_vbep_2-posnr
                  AND      vbep~etenr  = it_vbep_2-etenr.
      GET RUN TIME FIELD stop.
    t = stop - start.
    write: / t.
    Be aware that even 10,000 will take some time.
    Another question: how did you get the 1,000,000 records into it_massive_vbep? They are not typed in, but selected somehow.
    Change that FOR ALL ENTRIES into a JOIN as well and it will be much faster.
    Siegfried
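
    To illustrate the blocking suggestion above, a minimal ABAP sketch that runs the FOR ALL ENTRIES selection in packages of 10,000 keys. The helper table lt_sched_block, the loop variables, and the (list of fields) placeholder carried over from the thread are assumptions for illustration, not code from the original reply:

    CONSTANTS lc_block_size TYPE i VALUE 10000.
    DATA: lv_from        TYPE i VALUE 1,
          lv_to          TYPE i,
          lv_total       TYPE i,
          lt_sched_block LIKE w_sched.               " collects one block's result

    DESCRIBE TABLE it_massive_vbep LINES lv_total.

    WHILE lv_from <= lv_total.
      lv_to = lv_from + lc_block_size - 1.
      IF lv_to > lv_total.
        lv_to = lv_total.
      ENDIF.

      " Copy the current block of keys into the small driver table
      CLEAR it_vbep_2.
      APPEND LINES OF it_massive_vbep FROM lv_from TO lv_to TO it_vbep_2.

      SELECT (list of fields)
             INTO CORRESPONDING FIELDS OF TABLE lt_sched_block
             FROM vbep JOIN vbap
             ON vbep~vbeln = vbap~vbeln AND
                vbep~posnr = vbap~posnr
             FOR ALL ENTRIES IN it_vbep_2
             WHERE vbep~vbeln = it_vbep_2-vbeln
               AND vbep~posnr = it_vbep_2-posnr
               AND vbep~etenr = it_vbep_2-etenr.

      " Collect the block result and move on to the next package
      APPEND LINES OF lt_sched_block TO w_sched.
      lv_from = lv_to + 1.
    ENDWHILE.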

  • Rows to column for huge number of records

    My database version is 10gR2.
    I want to transpose rows to columns. I have seen examples for a small number of records, but how can it be done if there are more than 1000 records in a table?
    Here is the sample data that I would like to transpose into columns:
    [Wrapped SQL*Plus output, hard to read as posted. Columns: NE, RAISED, CLEARED, RTTS_NO, RING. RAISED and CLEARED hold slash-separated lists of timestamps per NE; the full values for the first two NEs appear again in the WITH clause of the reply below. NEs listed: 10100000-1LU, 10100000-2LU, 10100000-3LU, 10100000/10MU, 10100000/11MU, 10100000/12MU, 10100000/13MU.]

    could you do something like this?
    with t as (select '10100000-1LU' NE,   '22-FEB-2011 22:01:04/28-FEB-2011 01:25:22/' raised ,  '22-FEB-2011 22:12:27/28-FEB-2011 02:40:06/' cleared from dual union
                  select '10100000-2LU', '01-FEB-2011 12:15:58/06-FEB-2011 07:00:53/18-FEB-2011 22:04:56/19-FEB-2011 10:36:12/19-FEB-2011 11:03:13/19-FEB-2011 11:16:14/28-FEB-2011 01:25:22/',
                  '05-FEB-2011 10:05:48/06-FEB-2011 10:49:18/18-FEB-2011 22:15:17/19-FEB-2011 10:41:35/19-FEB-2011 11:08:18/19-FEB-2011 11:21:35/28-FEB-2011 02:40:13/' from dual)
    select * from(
    select NE,   regexp_substr( raised,'[^/]+',1,1) raised, regexp_substr( cleared,'[^/]+',1,1) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,2) , regexp_substr( cleared,'[^/]+',1,2) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,3) , regexp_substr( cleared,'[^/]+',1,3) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,4) , regexp_substr( cleared,'[^/]+',1,4) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,5) , regexp_substr( cleared,'[^/]+',1,5) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,6) , regexp_substr( cleared,'[^/]+',1,6) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,7) , regexp_substr( cleared,'[^/]+',1,7) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,8) , regexp_substr( cleared,'[^/]+',1,8) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,9) , regexp_substr( cleared,'[^/]+',1,9) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,10) , regexp_substr( cleared,'[^/]+',1,10) cleared  from t
    union
    select NE,   regexp_substr( raised,'[^/]+',1,11) , regexp_substr( cleared,'[^/]+',1,11) cleared  from t
    )
    where nvl(raised,cleared) is not null
    order by ne
    NE     RAISED     CLEARED
    10100000-1LU     28-FEB-2011 01:25:22     28-FEB-2011 02:40:06
    10100000-1LU     22-FEB-2011 22:01:04     22-FEB-2011 22:12:27
    10100000-2LU     28-FEB-2011 01:25:22     28-FEB-2011 02:40:13
    10100000-2LU     19-FEB-2011 10:36:12     19-FEB-2011 10:41:35
    10100000-2LU     19-FEB-2011 11:03:13     19-FEB-2011 11:08:18
    10100000-2LU     19-FEB-2011 11:16:14     19-FEB-2011 11:21:35
    10100000-2LU     06-FEB-2011 07:00:53     06-FEB-2011 10:49:18
    10100000-2LU     01-FEB-2011 12:15:58     05-FEB-2011 10:05:48
    10100000-2LU     18-FEB-2011 22:04:56     18-FEB-2011 22:15:17
    You should be able to do it without all those unions using a CONNECT BY, but I can't quite get it to work;
    the following doesn't work but maybe someone can answer.
    select NE,   regexp_substr( raised,'[^/]+',1,level) raised, regexp_substr( cleared,'[^/]+',1,level) cleared from t
    connect by  prior  NE = NE and   regexp_substr( raised,'[^/]+',1,level) = prior regexp_substr( raised,'[^/]+',1,level + 1)
    Here it is with the MODEL clause, which gets rid of all the unions.
    WITH t
            AS (SELECT '10100000-1LU' NE,
                       '22-FEB-2011 22:01:04/28-FEB-2011 01:25:22/' raised,
                       '22-FEB-2011 22:12:27/28-FEB-2011 02:40:06/' cleared
                  FROM DUAL
                UNION
                SELECT '10100000-2LU',
                       '01-FEB-2011 12:15:58/06-FEB-2011 07:00:53/18-FEB-2011 22:04:56/19-FEB-2011 10:36:12/19-FEB-2011 11:03:13/19-FEB-2011 11:16:14/28-FEB-2011 01:25:22/',
                       '05-FEB-2011 10:05:48/06-FEB-2011 10:49:18/18-FEB-2011 22:15:17/19-FEB-2011 10:41:35/19-FEB-2011 11:08:18/19-FEB-2011 11:21:35/28-FEB-2011 02:40:13/'
                  FROM DUAL)
      SELECT *
        FROM (SELECT NE, raised, cleared
                FROM t
              MODEL RETURN UPDATED ROWS
                 PARTITION BY (NE)
                 DIMENSION BY (0 d)
                 MEASURES (raised, cleared)
                 RULES
                    ITERATE (1000) UNTIL raised[ITERATION_NUMBER] IS NULL
                    (raised [ITERATION_NUMBER + 1] =
                          REGEXP_SUBSTR (raised[0],
                                         '[^/]+',
                                         1,
                                         ITERATION_NUMBER + 1),
                    cleared [ITERATION_NUMBER + 1] =
                          REGEXP_SUBSTR (cleared[0],
                                         '[^/]+',
                                         1,
                                         ITERATION_NUMBER + 1)))
       WHERE raised IS NOT NULL
    ORDER BY NE

  • Max number of records in MDM workflow

    Hi All
    Need urgent recommendations.
    We have a scenario where we need to launch a workflow upon import of records. The challenge is that the source file contains 80k records and it is always a FULL load (on a daily basis) in MDM. Is there any limitation in MDM workflow on the maximum number of records? Will there be significant performance issues if we have a workflow with such a huge number of records in MDM?
    Please share your inputs.
    Thanks-Ravi

    Hi Ravi,
    Yes, it can cause performance overhead, and you will also have to optimise the MDIS parameters for this.
    Regarding the WF, I think it is normally 100 records per WF. I think you can set a particular threshold of records after which the WF will auto-launch.
    It is difficult to say what the optimum value for Max Records per WF is, so I would suggest a test run with 100/1000 records per WF. The Import Manager guide says there are several performance implications of importing records within a WF, so it is better to try different ranges.
    Thanks,
    Ravi

  • Create huge number of IDOC

    Hi All,
    I have been asked to create a file-to-IDoc scenario in PI. The problem is that the file will have around 200,000 records (96 MB). That means I have to read the 200,000 records from the file and create 200,000 PO IDocs at once. I know this is not feasible. Does anyone have experience with this? How did you solve the problem?
    Thanks a lot!
    Charles

    There are a few ways to implement this.
    Although the file has a huge number of records, you can tweak or control the number of IDocs created on the receiver side.
    Refer to Michal's blog on editing the occurrence of the target IDoc structure to send the number of IDocs as needed:
    https://wiki.sdn.sap.com/wiki/display/XI/File%20to%20Multiple%20IDOC%20Splitting%20without%20BPM
    If your sender side is a flat file, then in the content conversion set the parameter 'Recordsets per Message' to something like 100, so that you create 100 IDocs each time from the sender message structure. Refer to the SDN forum for FCC parameters and sender FCC adapter scenarios.
    Refer to this thread:
    Recordsets per Message in File adapter

  • Huge number of unprocessed logging table records found

    Hello Experts,
    I am facing an issue where a huge number of unprocessed logging table records were found in the SLT system for one table. I have checked all settings and error logs but found no evidence of what is causing the unprocessed records. In the HANA system the table also shows as being in 'replicated' status. Could you please suggest something other than replicating the same table again, as that option is not possible at the moment?

    Hi Nilesh,
    > What are the performance impacts on the SAP ECC system when multiple large SAP tables like BSEG are replicated at the same time? Is there a guideline for a specific volume or kind of tables?
    There is no explicit guideline, since aspects such as server performance as well as the change rate of the tables are also relevant. As a rule of thumb, one dedicated replication job per large table is recommended.
    From the SLT documentation, go through:
    How to enable parallel replication before DMIS 2011 SP6 (do not ignore it; it applies to SP06)
    How to improve the initial load
    Regards,
    V Srinivasan

  • Master Data loading got failed: error "Update mode R is not supported by th

    Hello Experts,
    I load master data for 0Customer_Attr through a daily process chain, and it has been running successfully.
    For the last 2 days the master data load for 0Customer_Attr has failed with the following error message:
    "Update mode R is not supported by the extraction API"
    Can anyone tell me what this error means and how to resolve the issue?
    Regards,
    Nirav

    Hi
    The update mode R error comes up in the following case:
    You run a delta (for master data) which fails due to some error. To resolve that error, you set the request to red and try to repeat the load.
    This time the load fails with update mode R, because a repeat delta is not supported.
    So the only thing you can do now is to re-init the delta (as described in the posts above) and then proceed. The earlier problem has nothing to do with update mode R.
    For example, suppose your first delta failed with a replication issue: replicating and repeating alone will not get rid of the update mode R error. You will have to both replicate the DataSource and re-init to clear the update mode R.
    One more thing I would like to add:
    If the delta that failed the first time (not with update mode R) had already picked up records, you have to do an init with data transfer; if it failed without picking any records, do an init without data transfer.
    Hope this helps
    Regards
    Shilpa

  • How to find total number of records in a BDoc?

    Dear all,
    I have replicated about 1088 BP records from IS-U into the CRM system with block size 100. Technically, in SMW01, each successfully processed BDoc should contain 100 records (corresponding to the block size of 100). But due to some failed BDocs, not all "successful" BDocs will have 100 records each; some may have only 1 record inside, or 30, or 88, for example. So may I know how to find, or is there a report I can look at to find, the total number of records clearly shown for each of the successfully processed green-status BDocs?
    Please help; points will be rewarded!
    Thank You
    Best Regards,
    CK

    I am just showing how to get the row count along with the cursor, in case the program has a large gap between verifying the count(*) and opening the cursor.
    Justin actually covered this: he said Oracle would have to spend resources to build this functionality, and since it is not often required it does not make much sense as a built-in feature. However, if we must see the row count when we open the cursor, here is a way, though it is a little bit expensive.
    SQL> create table emp_crap as select * from emp where 1 = 2;
    Table created.
    SQL> declare
           v_cnt     number := 0;
           zero_rows exception;
         begin
           for rec in (select * from (select rownum rn, e.ename from emp_crap e) order by 1 desc)
           loop
             if v_cnt = 0 then
               v_cnt := rec.rn;
             end if;
           end loop;
           if v_cnt = 0 then
             raise zero_rows;
           end if;
         exception
           when zero_rows then
             dbms_output.put_line('No rows');
         end;
         /
    No rows
    PL/SQL procedure successfully completed.
    -- Now, let us use the table which has the data
    SQL> declare
           v_cnt     number := 0;
           zero_rows exception;
         begin
           for rec in (select * from
                         (select rownum rn, e.ename from emp e)
                        order by 1 desc)
           loop
             if v_cnt = 0 then
               v_cnt := rec.rn;
               dbms_output.put_line(v_cnt);
             end if;
           end loop;
           if v_cnt = 0 then
             raise zero_rows;
           end if;
         exception
           when zero_rows then
             dbms_output.put_line('No rows');
         end;
         /
    14
    PL/SQL procedure successfully completed.
    Thx,
    Sri

  • CLIENT_TEXT_IO - Hanging on "PUT" for large number of records

    I have successfully used CLIENT_TEXT_IO, but my users have run into an error where the form hangs and spits out details such as:
    "oracle.forms.net.HTTPNStream.doFlush"
    etc....
    This happens when the number of records in the datablock is high (ex: 70,000 recs). So my question is: Is there a limit on how many lines you can write to a file?
    I'm just creating a CSV file on the client's machine using CLIENT_TEXT_IO.PUT_LINE. It works fine on say a few thousand recs but after that it hangs.
    I'm on Oracle Application Server 10g, Release 9.0.4 on Windows Server 2003, and forms compiled using Oracle Developer Suite 9.0.4.
    Thanks,
    Gio

    Hello,
    When working with huge amounts of data, it is better to generate the file on the application server and then transfer it back to the client.
    Read this article.
    Francois

  • How to print new main report page based on number of records in subreport?

    Hi,
    I have a main report which prints on pre-printed stationery, with 2 subreports in the detail section. The subreports print multiple records. If there are more than 6 records in the subreports, a new page should be printed with all the contents of the main report, and the subreports should continue with the remaining records. How can I do this? I have tried incrementing a shared variable counter in the subreport and accessing that variable in the detail section of the main report to check the counter and reset it to 0 if it exceeds 6, but this does not work. All the lines that fit before the page footer print on the first page, and the second page prints only the remaining subreport lines without the other details of the main report page.
    The main report page uses a view which returns only 1 record. The subreport returns multiple records. 
    I would certainly appreciate it if anyone could provide a solution to this problem.
    Thanks,
    Ram

    hi Ram,
    sorry, i misunderstood exactly what you wanted.
    let me get this straight...
    1) if there's more than 6 records in the subreport then show only the first 6 records
    2) then (if there's more than 6 records) do a new page after the subreport
    3) then (if there's more than 6 records) show the subreport's remaining records and the rest of the report
    Let me know if I've got that straight.
    If that is correct, you will have received another sample report that does this. In total 3 subreports are used; one is a dummy subreport that adds up the number of records, which is used later. There's more info in the report's design mode.
    cheers,
    jamie

  • How to determine number of records in recordset returned by stored procedure?

    In TestStand 3.0 I am calling an SQL stored procedure where the stored
    procedure returns a recordset. Everything appears to work (I can
    iterate through the recordset and see that the data is valid).
    However, I can not figure out how to easilly determine how many
    records are actually in the recordset. Unlike the 'Open SQL
    Statement' step, in the 'Data Operation' step that actually invokes
    the stored procedure, there is no 'Number of Records Selected' option
    to specify a TestStand variable to accept this value. I know I could
    iterate through the returned recordset incrementing a counter until a
    Fetch fails, but for larger recordsets, traversing the table multiple
    times would be quite time consuming. I am hoping to avoid this if
    possible. Is there an easier way to get the number of records in a
    recordset returned from a stored procedure call?
    Bob

    Bob -
    The cursor type of the ADO Recordset object affects whether the number of records can be determined. The Recordset.RecordCount property will return -1 for a forward-only cursor; the actual count for a static or keyset cursor; and either -1 or the actual count for a dynamic cursor, depending on the data source.
    Because ADO does not let me set the cursor type for command objects, which is what a stored procedure requires, it is up to the data source to determine the type of cursor and the support for record count.
    Scott Richardson (NI)
    Scott Richardson
    National Instruments
