Problems processing a large batch of data within a WebLogic 5.1 cluster

I have about exhausted my sources of information on this, so I ask the experts:

The project I am involved with is attempting to build a scheduled process on a 5.1 cluster which will retrieve hundreds to thousands of database rows from a small group of Oracle tables and then submit them into a legacy system for processing. The problems we face are:

1. We cannot have duplicate data submitted from startup classes on different servers in the cluster.
2. We need the process to run even if only one machine in the cluster is available, and therefore do not want to tie the process to a single machine.
3. We would prefer to distribute the workload across all servers currently available in the cluster, but this is not mandatory.

One solution is to retrieve each record as an Entity bean, attempting to update it to "in process" and rejecting any failures, thus utilizing database locking; but due to the large number of records being processed this is much too resource intensive as well as time consuming.

We have looked into JMS-based solutions, but it appears that JMS under 5.1 is tied to a specific server (failing case 2).

One acceptable approach that we have yet to find a way of implementing is to have a "singleton" session bean, only one of which exists on the cluster (though it may exist anywhere on the cluster). I have run across a number of other applications for just this sort of EJB, but is it possible to implement?

Sadly, WebLogic 6.0 cannot be part of the solution, as our company will not be adopting it until well after this project's delivery date.

Are we missing something obvious?
Many thanks!
-Steve
          

You have to have all the servers in the cluster polling for work out of the database. Use the database as a scheduling / routing mechanism. Use timers on each server in the cluster to kick off a "find me one thing to do" process every n minutes or seconds.

Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com
+1.617.623.5782
WebLogic Consulting Available
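
For illustration, here is a minimal sketch of the "find me one thing to do" claim step described above, assuming a hypothetical WORK_QUEUE table with a STATUS column (the real table and column names would come from the existing Oracle schema). Each server's timer would run something like this in its own transaction and then process only the row it successfully claimed:

-- Hypothetical table/columns; each timer tick claims at most one pending row.
UPDATE work_queue
SET    status     = 'IN_PROCESS',
       claimed_by = 'server_a',   -- this server's identifier
       claimed_at = SYSDATE
WHERE  status = 'NEW'
AND    ROWNUM = 1;
-- If no row was updated (SQL%ROWCOUNT = 0 in PL/SQL), there is nothing to do;
-- otherwise select the row WHERE claimed_by = 'server_a' AND status = 'IN_PROCESS',
-- submit it to the legacy system, mark it 'DONE' and commit.

Because every server runs the same claim statement, duplicates are prevented by the database's row locking, any single surviving server keeps the work flowing, and the load spreads across whichever servers are up, which covers all three requirements without needing a cluster-wide singleton.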
          "Steven Wicklund" <[email protected]> wrote in message
          news:[email protected]...
          > I have about exhausted my sources of information on this, so I ask the
          > experts:
          > The project I am involved with is attempting to build a scheduled
          > process on a 5.1 cluster which will retrieve hundreds to thousands of
          > database rows from a small group of Oracle tables and then submit them
          > into a legacy system for processing. The problems we face are:
          > 1. We cannot have duplicate data submitted from startup classes on
          > different servers in the cluster
          > 2. We need the process to run if even only one machine in the cluster is
          > available and therefore do not want to tie the process to a single
          > machine
          > 3. We would prefer to distribute the workload across all servers
          > currently available in the cluster, but this is not mandatory.
          >
          > One solution is to retrieve each record as an Entity bean, attempting to
          > update it to "in process" and rejecting any failures thus utilizing
          > database locking, but due to the large number of records being processed
          > this is much too resource intensive as well as time consuming.
          >
          > We have looked into JMS based solutions, but it appears that JMS under
          > 5.1 is tied to a specific server (failing case 2).
          >
          > One acceptable approach that we have yet to find a way of implementing
          > is to have a "singleton" session bean, only one of which exists on the
          > cluster (though it may exist anywhere on the cluster). I have run
          > across a number of other applications for just this sort of EJB, but is
          > it possible to implement?
          >
          > Sadly, Weblogic 6.0 cannot be part of the solution as our company will
          > not be adopting it until well after this project's delivery date.
          >
          > Are we missing something obvious?
          > Many thanks!
          > -Steve
          >
          

Similar Messages

  • DSS problems when publishing large amount of data fast

    Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
    There are several loops publishing data. One publishes approximately 50 items at a rate of 50ms, another about 40 items at a 100ms publishing rate.
    I send a command to a subprogram (125ms) that reads and publishes the answer on a DSS URL (approx. 125 ms). So that is one item on the DSS for about 250ms. But this data is not seen on my main GUI window that reads the DSS URL.
    My questions are:
    1. Is there any limit in speed (frequency) for data publishing in DSS?
    2. Can DSS become unstable if loaded too much?
    3. Can I lose/miss data in any situation?
    4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
    5. When I run my full application I have experienced the following error: Fatal Internal Error: "memory.ccp", line 638. Can this be a result of my large application and the heavy load on DSS? (see attached picture)
    Regards
    Idriz Zogaj
    Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
    Memory Professional
    direct: +46 (0) - 734 32 00 10
    http://www.zogaj.se

    LuI wrote:
    >
    > Hi all,
    >
    > I am frustrated with VISA serial comm. It looks so neat and it's
    > fantastic what it's supposed to do for a developer, but sometimes one
    > runs into very deep trouble.
    > I have an app where I have to read large amounts of data streamed by
    > 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
    > same time.)
    > I use either a Moxa multiport adapter C320 with 16 serial ports or -
    > for test purposes - a Keyspan serial-2-USB adapter with 4 serial
    > ports.
    Does it work better if you use the serial port(s) on your motherboard?
    If so, then get a better serial adapter. If not, look more closely at
    VISA.
    Some programs have some issues on serial adapters but run fine on a
    regular serial port. We've had that problem recently.
    Best, Mark

  • Bex Report Designer - Large amount of data issue

    Hi Experts,
    I am trying to execute (on the Portal) a report made in BEx Report Designer, with about 30 000 pages, and the only thing I am getting is a blank page. Everything works fine at about 3000 pages. Do I need to set something to allow processing such a large amount of data?
    Regards
    Vladimir

    Hi Sauro,
    I have not seen this behavior, but it has been a while since I tried to send an input schedule that large. I think the last time was on a BPC NW 7.0 SP06 system and it worked OK. If you are on a recent support package, then you should search for relevant notes (none come to mind for me, but searching yourself is always a good idea) and if you don't find one then you should open a support message with SAP, with very specific instructions for recreating the problem from a clean input-schedule.
    Good luck,
    Ethan

  • BAPI_GOODSMVT_CREATE - Batch Expiry Date......

    Hi,
    I'm using this BAPI to post a goods movement from one location to another in the same plant.
    The stock is being posted to a specific Batch.  When I do this the material document gets created okay and the stock moves.
    The problem is that the batch expiry date is not updating.  The stock I'm moving has a later expiry date than the current date.  If I do this manually in MIGO it works fine and updates the batch expiry date with the later date.
    Has anyone else experienced this or know of a workaround?

    Hi
    I am populating the item data like:
    *- Populate item data
      LOOP AT i_items_trans.
        CLEAR ibapigm_item.
    *- Convert the matnr backto 18 char form (External)
        CALL FUNCTION 'CONVERSION_EXIT_MATN2_INPUT'
          EXPORTING
            input            = i_items_trans-matnr
          IMPORTING
            output           = i_items_trans-matnr
          EXCEPTIONS
            number_not_found = 1
            length_error     = 2
            OTHERS           = 3.
        IF sy-subrc <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        ENDIF.
        ibapigm_item-material   = i_items_trans-matnr.
        ibapigm_item-plant      = x_user-werks.
        ibapigm_item-stge_loc   = x_user-lgort.
        ibapigm_item-move_type  = '101'.   "Goods Receipt
        ibapigm_item-mvt_ind    = 'B'.     "Goods Movement for PO
        ibapigm_item-po_number  = i_items_trans-ebeln.
        ibapigm_item-po_item    = i_items_trans-ebelp.
        ibapigm_item-gr_rcpt    = sy-uname.
        ibapigm_item-quantity   = i_items_trans-ktmng.
        ibapigm_item-base_uom   = i_items_trans-meins.
        ibapigm_item-entry_qnt  = i_items_trans-ktmng.
        ibapigm_item-entry_uom  = i_items_trans-meins.
        ibapigm_item-batch      = i_items_trans-charg.
        APPEND ibapigm_item.
      ENDLOOP.
    Header data like:
      MOVE: sy-datum TO bapigm_head-pstng_date,
            sy-datum TO bapigm_head-doc_date,
            sy-uname TO bapigm_head-pr_uname,
            v_mblnr  TO bapigm_head-ref_doc_no,
            con_bfwms_bestand TO bapigm_head-ext_wms.
    *- Document Header Text
      IF NOT v_bktxt IS INITIAL.
    *- Preceed "INV=" to the Invoice number entered
        CONCATENATE 'INV='(003) v_bktxt INTO v_bktxt
        SEPARATED BY space.
        bapigm_head-header_txt = v_bktxt.
      ENDIF.
      MOVE gmcode_01 TO bapigm_code-gm_code.
    And calling the BAPI as:
      CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
        EXPORTING
          goodsmvt_header  = bapigm_head
          goodsmvt_code    = bapigm_code
        IMPORTING
          goodsmvt_headret = bapigm_headret
        TABLES
          goodsmvt_item    = ibapigm_item
          return           = ibapigm_ret.
    *- Commit on Success
      IF NOT bapigm_headret-mat_doc IS INITIAL.
        CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
      ENDIF.
    Haven't faced any problems so far...
    Did you populate <b>EXPIRYDATE</b> in structure <b>BAPI2017_GM_ITEM_CREATE</b>? If yes, check in debugging what is happening to this value.
    Regards,
    Raj

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc to the corresponding variable definition (for validation etc) at runtime.
    CASE_ID VARCHAR2(13)
    COL001  VARCHAR2(10)
    ...
    COL250  VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris
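    As an illustration only, here is a minimal static sketch of the kind of generated pivot described above, assuming hypothetical staging and mapping table names; the real statement is built dynamically from the runtime metadata and also embeds the per-variable validation calls:
    -- Hypothetical names; the real pivot SQL is generated at runtime from the metadata.
    INSERT INTO long_thin_stage (case_num_id, variable_id, variable_value, status)
    SELECT sf.case_num_id,
           m.variable_id,
           CASE m.col_pos
             WHEN 1 THEN sf.col001
             WHEN 2 THEN sf.col002
             WHEN 3 THEN sf.col003
           END,
           'OK'                              -- real code derives this via the validation functions
    FROM   short_fat_stage sf
           CROSS JOIN colpos_variable_map m  -- maps COLnnn position to VARIABLE_ID for this load
    WHERE  m.col_pos <= 3;
    Done this way, one pass over the short-fat table produces all the long-thin rows, which is essentially what the dynamic SQL in the current approach generates.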

  • Problem while having a large set of data to work on!

    Hi,
    I am facing a great problem processing a large set of data. I have a requirement in which I'm supposed to generate a report.
    I have a table and an MView, which I have joined to reduce the number of records to process. The MView holds 200,00,000 records while the table holds 18,00,000. Based on the join conditions and where clause I'm able to break down the useful data to approx 4,50,000 records, and I'm getting 8 of my report columns from this join. I'm dumping these records into the table from where I'll be generating the report by spooling.
    Below is the block which takes 12mins to insert into the report table MY_ACCOUNT_PHOTON_DUMP:
    begin
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    insert into MY_ACCOUNT_PHOTON_DUMP --- Report table
    (SUBSCR_NO, ACCOUNT_NO, AREA_CODE, DEL_NO, CIRCLE, REGISTRATION_DT, EMAIL_ID, ALT_CNTCT_NO)
    select crm.SUBSCR_NO, crm.ACCOUNT_NO, crm.AREA_CODE, crm.DEL_NO, crm.CIRCLE_ID,
    aa.CREATED_DATE, aa.EMAIL_ID, aa.ALTERNATE_CONTACT
    from MV_CRM_SUBS_DTLS crm, --- MView
    (select /*+ ALL_ROWS */ A.ALTERNATE_CONTACT, A.CREATED_DATE, A.EMAIL_ID, B.SUBSCR_NO
    from MCCI_PROFILE_DTLS a, MCCI_PROFILE_SUBSCR_DTLS b
    where A.PROFILE_ID = B.PROFILE_ID
    and B.ACE_STATUS = 'N'
    ) aa --- Join of two tables giving me 18,00,000 recs
    where crm.SUBSCR_NO = aa.SUBSCR_NO
    and crm.SRVC_TYPE_ID = '125'
    and crm.END_DT IS NULL;
    INTERNET_METER_TABLE_PROC_1('MCCIPRD','MY_ACCOUNT_PHOTON_DUMP'); --- calling procedure to analyze the report table
    COMMIT;
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    end; --- 12 min 04 sec
    For the rest of the 13 columns required I am running a block which has a FOR UPDATE cursor on the report table:
    declare
    cursor cur is
    select SUBSCR_NO, ACCOUNT_NO, AREA_CODE, DEL_NO,
    CIRCLE, REGISTRATION_DT, EMAIL_ID, ALT_CNTCT_NO
    from MCCIPRD.MY_ACCOUNT_PHOTON_DUMP --where ACCOUNT_NO = 901237064
    for update of
    MRKT_SEGMNT, AON, ONLINE_PAY, PAID_AMNT, E_BILL, ECS, BILLED_AMNT,
    SRVC_TAX, BILL_PLAN, USAGE_IN_MB, USAGE_IN_MIN, NO_OF_LOGIN, PHOTON_TYPE;
    v_aon VARCHAR2(10) := NULL;
    v_online_pay VARCHAR2(10) := NULL;
    v_ebill VARCHAR2(10) := NULL;
    v_mkt_sgmnt VARCHAR2(50) := NULL;
    v_phtn_type VARCHAR2(50) := NULL;
    v_login NUMBER(10) := 0;
    v_paid_amnt VARCHAR2(50) := NULL;
    v_ecs VARCHAR2(10) := NULL;
    v_bill_plan VARCHAR2(100):= NULL;
    v_billed_amnt VARCHAR2(10) := NULL;
    v_srvc_tx_amnt VARCHAR2(10) := NULL;
    v_usg_mb NUMBER(10) := NULL;
    v_usg_min NUMBER(10) := NULL;
    begin
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    for rec in cur loop
    begin
    select apps.TTL_GET_DEL_AON@MCCI_TO_PRD591(rec.ACCOUNT_NO, rec.DEL_NO, rec.CIRCLE)
    into v_aon from dual;
    exception
    when others then
    v_aon := 'NA';
    end;
    SELECT DECODE(COUNT(*),0,'NO','YES') into v_online_pay
    FROM TTL_DESCRIPTIONS@MCCI_TO_PRD591
    WHERE DESCRIPTION_CODE IN(SELECT DESCRIPTION_CODE FROM TTL_BMF_TRANS_DESCR@MCCI_TO_PRD591
    WHERE BMF_TRANS_TYPE
    IN (SELECT BMF_TRANS_TYPE FROM
    TTL_BMF@MCCI_TO_PRD591 WHERE ACCOUNT_NO = rec.ACCOUNT_NO
    AND POST_DATE BETWEEN
    TO_DATE('01-'||TO_CHAR(SYSDATE,'MM-YYYY'),'DD-MM-YYYY') AND SYSDATE
    AND DESCRIPTION_TEXT IN (select DESCRIPTION from fnd_lookup_values@MCCI_TO_PRD591 where
    LOOKUP_TYPE='TTL_ONLINE_PAYMENT');
    SELECT decode(count( *),0,'NO','YES') into v_ebill
    FROM TTL_CUST_ADD_DTLS@MCCI_TO_PRD591
    WHERE CUST_ACCT_NBR = rec.ACCOUNT_NO
    AND UPPER(CUSTOMER_PREF_MODE) ='EMAIL';
    begin
    select ACC_SUB_CAT_DESC into v_mkt_sgmnt
    from ttl_cust_dtls@MCCI_TO_PRD591 a, TTL_ACCOUNT_CATEGORIES@MCCI_TO_PRD591 b
    where a.CUST_ACCT_NBR = rec.ACCOUNT_NO
    and a.market_code = b.ACC_SUB_CAT;
    exception
    when others then
    v_mkt_sgmnt := 'NA';
    end;
    begin
    select nvl(sum(TRANS_AMOUNT),0) into v_paid_amnt
    from ttl_bmf@MCCI_TO_PRD591
    where account_no = rec.ACCOUNT_NO
    AND POST_DATE
    BETWEEN TO_DATE('01-'||TO_CHAR(SYSDATE,'MM-YYYY'),'DD-MM-YYYY')
    AND SYSDATE;
    exception
    when others then
    v_paid_amnt := 'NA';
    end;
    SELECT decode(count(1),0,'NO','YES') into v_ecs
    from ts.Billdesk_Registration_MV@MCCI_TO_PRD591 where ACCOUNT_NO = rec.ACCOUNT_NO
    and UPPER(REGISTRATION_TYPE ) = 'ECS';
    SELECT decode(COUNT(*),0,'PHOTON WHIZ','PHOTON PLUS') into v_phtn_type
    FROM ts.ttl_cust_ord_prdt_dtls@MCCI_TO_PRD591 A, ttl_product_mstr@MCCI_TO_PRD591 b
    WHERE A.SUBSCRIBER_NBR = rec.SUBSCR_NO
    and (A.prdt_disconnection_date IS NULL OR A.prdt_disconnection_date > SYSDATE )
    AND A.prdt_disc_flag = 'N'
    AND A.prdt_nbr = b.product_number
    AND A.prdt_type_id = b.prouduct_type_id
    AND b.first_level LIKE 'Feature%'
    AND UPPER (b.product_desc) LIKE '%HSIA%';
    SELECT count(1) into v_login
    FROM MCCIPRD.MYACCOUNT_SESSION_INFO a
    WHERE (A.DEL_NO = rec.DEL_NO or A.DEL_NO = ltrim(rec.AREA_CODE,'0')||rec.DEL_NO)
    AND to_char(A.LOGIN_TIME,'Mon-YYYY') = to_char(sysdate-5,'Mon-YYYY');
    begin
    select PACKAGE_NAME, BILLED_AMOUNT, SERVICE_TAX_AMOUNT, USAGE_IN_MB, USAGE_IN_MIN
    into v_bill_plan, v_billed_amnt, v_srvc_tx_amnt, v_usg_mb, v_usg_min from
    (select rank() over(order by STATEMENT_DATE desc) rk,
    PACKAGE_NAME, USAGE_IN_MB, USAGE_IN_MIN
    nvl(BILLED_AMOUNT,'0') BILLED_AMOUNT, NVL(SRVC_TAX_AMNT,'0') SERVICE_TAX_AMOUNT
    from MCCIPRD.MCCI_IM_BILLED_DATA
    where (DEL_NUM = rec.DEL_NO or DEL_NUM = ltrim(rec.AREA_CODE,'0')||rec.DEL_NO)
    and STATEMENT_DATE like '%'||to_char(SYSDATE,'Mon-YY')||'%')
    where rk = 1;
    exception
    when others then
    v_bill_plan := 'NA';
    v_billed_amnt := '0';
    v_srvc_tx_amnt := '0';
    v_usg_mb := 0;
    v_usg_min := 0;
    end;
    -- UPDATE THE DUMP TABLE --
    update MCCIPRD.MY_ACCOUNT_PHOTON_DUMP
    set MRKT_SEGMNT = v_mkt_sgmnt, AON = v_aon, ONLINE_PAY = v_online_pay, PAID_AMNT = v_paid_amnt,
    E_BILL = v_ebill, ECS = v_ecs, BILLED_AMNT = v_billed_amnt, SRVC_TAX = v_srvc_tx_amnt,
    BILL_PLAN = v_bill_plan, USAGE_IN_MB = v_usg_mb, USAGE_IN_MIN = v_usg_min, NO_OF_LOGIN = v_login,
    PHOTON_TYPE = v_phtn_type
    where current of cur;
    end loop;
    COMMIT;
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    exception when others then
    dbms_output.put_line(SQLCODE||'::'||SQLERRM);
    end;
    The report takes >6hrs. I know that most of the SELECT queries have ACCOUNT_NO in the WHERE clause and can be joined, but when I joined a few of these blocks with the initial INSERT query it was no better.
    The individual queries within the cursor loop don't take more than 0.3 sec to execute.
    I'm using the FOR UPDATE as I know that the report table is being used solely for this purpose.
    Can somebody please help me with this? I'm in desperate need of good advice here.
    Thanks!!
    Edited by: user11089213 on Aug 30, 2011 12:01 AM
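    To illustrate the "join the lookups into one statement" idea mentioned above, here is a minimal, hypothetical sketch for just one of the derived columns (E_BILL), reusing the same remote query from the loop but as a correlated subquery in a single set-based UPDATE; the other columns could be folded in the same way, or computed directly in the initial INSERT's SELECT:
    -- Sketch only: derive E_BILL for every dump row in one statement instead of per-row SELECTs.
    UPDATE mcciprd.my_account_photon_dump d
    SET    d.e_bill = (SELECT DECODE(COUNT(*), 0, 'NO', 'YES')
                       FROM   ttl_cust_add_dtls@mcci_to_prd591 c
                       WHERE  c.cust_acct_nbr = d.account_no
                       AND    UPPER(c.customer_pref_mode) = 'EMAIL');
    Whether this helps depends mostly on how the remote database executes the correlated lookup over the DB link, so treat it as a direction to test rather than a guaranteed fix.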

    Hi,
    Below is the explain plan for the original query:
    select /*+ ALL_ROWS */  crm.SUBSCR_NO, crm.ACCOUNT_NO, ltrim(crm.AREA_CODE,'0'), crm.DEL_NO, crm.CIRCLE_ID
    from MV_CRM_SUBS_DTLS crm,
            (select /*+ ALL_ROWS */  A.ALTERNATE_CONTACT, A.CREATED_DATE, A.EMAIL_ID, B.SUBSCR_NO
            from MCCIPRD.MCCI_PROFILE_DTLS a, MCCIPRD.MCCI_PROFILE_SUBSCR_DTLS b
            where A.PROFILE_ID = B.PROFILE_ID
            and   B.ACE_STATUS = 'N'
            ) aa
    where crm.SUBSCR_NO    = aa.SUBSCR_NO
    and   crm.SRVC_TYPE_ID = '125'
    and   crm.END_DT IS NULL
    | Id  | Operation              | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                          |  1481K|   100M|       |   245K  (5)| 00:49:09 |
    |*  1 |  HASH JOIN             |                          |  1481K|   100M|    46M|   245K  (5)| 00:49:09 |
    |*  2 |   HASH JOIN            |                          |  1480K|    29M|    38M| 13884   (9)| 00:02:47 |
    |*  3 |    TABLE ACCESS FULL   | MCCI_PROFILE_SUBSCR_DTLS |  1480K|    21M|       |  3383  (13)| 00:00:41 |
    |   4 |    INDEX FAST FULL SCAN| SYS_C002680              |  2513K|    14M|       |  6024   (5)| 00:01:13 |
    |*  5 |   MAT_VIEW ACCESS FULL | MV_CRM_SUBS_DTLS_08AUG   |  1740K|    82M|       |   224K  (5)| 00:44:49 |
    Predicate Information (identified by operation id):
       1 - access("CRM"."SUBSCR_NO"="B"."SUBSCR_NO")
       2 - access("A"."PROFILE_ID"="B"."PROFILE_ID")
       3 - filter("B"."ACE_STATUS"='N')
       5 - filter("CRM"."END_DT" IS NULL AND "CRM"."SRVC_TYPE_ID"='125')
    Whereas for the modified MView query, the plan remains the same:
    select /*+ ALL_ROWS */ crm.SUBSCR_NO, crm.ACCOUNT_NO, ltrim(crm.AREA_CODE,'0'), crm.DEL_NO, crm.CIRCLE_ID
    from    (select * from MV_CRM_SUBS_DTLS
             where SRVC_TYPE_ID = '125'
             and   END_DT IS NULL) crm,
            (select /*+ ALL_ROWS */  A.ALTERNATE_CONTACT, A.CREATED_DATE, A.EMAIL_ID, B.SUBSCR_NO
            from MCCIPRD.MCCI_PROFILE_DTLS a, MCCIPRD.MCCI_PROFILE_SUBSCR_DTLS b
            where A.PROFILE_ID = B.PROFILE_ID
            and   B.ACE_STATUS = 'N'
            ) aa
    where crm.SUBSCR_NO  = aa.SUBSCR_NO
    | Id  | Operation              | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                          |  1481K|   100M|       |   245K  (5)| 00:49:09 |
    |*  1 |  HASH JOIN             |                          |  1481K|   100M|    46M|   245K  (5)| 00:49:09 |
    |*  2 |   HASH JOIN            |                          |  1480K|    29M|    38M| 13884   (9)| 00:02:47 |
    |*  3 |    TABLE ACCESS FULL   | MCCI_PROFILE_SUBSCR_DTLS |  1480K|    21M|       |  3383  (13)| 00:00:41 |
    |   4 |    INDEX FAST FULL SCAN| SYS_C002680              |  2513K|    14M|       |  6024   (5)| 00:01:13 |
    |*  5 |   MAT_VIEW ACCESS FULL | MV_CRM_SUBS_DTLS_08AUG   |  1740K|    82M|       |   224K  (5)| 00:44:49 |
    Predicate Information (identified by operation id):
       1 - access("CRM"."SUBSCR_NO"="B"."SUBSCR_NO")
       2 - access("A"."PROFILE_ID"="B"."PROFILE_ID")
       3 - filter("B"."ACE_STATUS"='N')
       5 - filter("CRM"."END_DT" IS NULL AND "CRM"."SRVC_TYPE_ID"='125')
    I also took your advice and tried to merge all the queries into a single INSERT SQL; I will be posting the results shortly.
    Edited by: BluShadow on 30-Aug-2011 10:21
    added {noformat}{noformat} tags. Please read {message:id=9360002} to learn to do this yourself

  • Process order release date as batch manufacturing date

    Hi All,
    We have automatic batch creation during the release of the process order. Now the requirement is that the process order release date should be updated as the batch manufacturing date in the corresponding batch data (MSC3N). How can this be done?
    Regards
    Vinamrath

    Hi Vinamrath,
    Hope your process order release date and the batch creation date are the same.
    In such a case go to txn CT04 (Create Characteristic) and create a characteristic, say "Batch manufacturing date". Then in the Addnl data tab maintain the table name as "MCH1 or MCHA" and the field name as "ERSDA" (created on - this is the batch creation date).
    In case your process order release date and the batch creation date are not the same, there is a field called "HSDAT" (date of manufacture); you can maintain this field name in Addnl data instead.
    Maintain this Characteristic in classification (CL02)
    Then this will get updated in the batch and same can be viewed in MSC3N
    Regards
    Hari
    Edited by: Harikris_83 on Sep 29, 2011 3:05 PM

  • Retrive SQL from Webi report and process it for large volume of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse, which we usually call 'Extracts'. But the requirement is such that business users want to build their own 'Adhoc Extracts'. The only way I thought of to achieve this is: build a universe, create the query, save the report and do not run it. Then write a RAS SDK to retrieve the SQL code from the reports, save it into a .txt file and process it directly in Teradata?
    Is there any predefined Solution available with SAP BO or any other tool for this kind of Scenarios?

    Hi Shawn,
    Do we have some VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information or even direction where I can get information will be helpful.
    Thanks in advance.
    Ashesh

  • Problem processing large message using dbadapter.

    I have a process which is initiated by a DB adapter fetch from a table.
    It works fine when there are few records, but when the number of records
    is more than 6000 (more than 4 MB) I get the errors below.
    The process goes to the off state after these errors.
    Does anybody have any suggestions on how to process large messages?
    <2006-08-02 11:55:25,172> <ERROR> <default.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "cube delivery": Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:55:36,473> <ERROR> <default.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "delivery": Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:55:42,689> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound> [OracleDB_ptt::receive(HccIauHdrCollection)] - JCA Activation Agent was unable to perform delivery of inbound message to BPEL Process 'bpel://localhost/default/IAUProcess~1.0/' due to: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:56:22,573> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound>
    com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPostAnyType(DeliveryHandler.java:327)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPost(DeliveryHandler.java:218)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.post(DeliveryHandler.java:82)
         at com.collaxa.cube.ejb.impl.DeliveryBean.post(DeliveryBean.java:181)
         at IDeliveryBean_StatelessSessionBeanWrapper22.post(IDeliveryBean_StatelessSessionBeanWrapper22.java:1052)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:161)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase$DeliveryServiceMonitor.send(AdapterFrameworkListenerBase.java:2358)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.executeDeliveryServiceSend(AdapterFrameworkListenerBase.java:487)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.deliveryServiceSend(AdapterFrameworkListenerBase.java:545)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.performSingleActivation(AdapterFrameworkListenerImpl.java:746)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:614)
         at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:121)
         at oracle.tip.adapter.db.InboundWork.onMessageImpl(InboundWork.java:370)
         at oracle.tip.adapter.db.InboundWork.onMessage(InboundWork.java:332)
         at oracle.tip.adapter.db.InboundWork.transactionalUnit(InboundWork.java:301)
         at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:255)
         at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:189)
         at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
         at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:267)
         at java.lang.Thread.run(Thread.java:534)
    <2006-08-02 11:57:52,341> <ERROR> <default.collaxa.cube.ws> <Database Adapter::Outbound> <oracle.tip.adapter.db.InboundWork runOnce> Non retriable exception during polling of the database ORABPEL-11624 DBActivationSpec Polling Exception.
    Query name: [OracleDB], Descriptor name: [IAUProcess.HccIauHdr]. Polling the database for events failed on this iteration.
    If the cause is something like a database being down successful polling will resume once conditions change. Caused by javax.resource.ResourceException: ORABPEL-12509 Unable to post inbound message to BPEL business process.
    The JCA Activation Agent of the Adapter Framework was unsuccessful in delivering an inbound message from the endpoint [OracleDB_ptt::receive(HccIauHdrCollection)] - due to the following reason: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    Please examine the log file for any reasons. Make sure the inbound XML messages sent by the Resource Adapter comply to the XML schema definition of the corresponding inbound WSDL message element.
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:684)
         at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:121)
         at oracle.tip.adapter.db.InboundWork.onMessageImpl(InboundWork.java:370)
         at oracle.tip.adapter.db.InboundWork.onMessage(InboundWork.java:332)
         at oracle.tip.adapter.db.InboundWork.transactionalUnit(InboundWork.java:301)
         at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:255)
         at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:189)
         at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
         at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:267)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: ORABPEL-12509
    Unable to post inbound message to BPEL business process.
    The JCA Activation Agent of the Adapter Framework was unsuccessful in delivering an inbound message from the endpoint [OracleDB_ptt::receive(HccIauHdrCollection)] - due to the following reason: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    Please examine the log file for any reasons. Make sure the inbound XML messages sent by the Resource Adapter comply to the XML schema definition of the corresponding inbound WSDL message element.
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:628)
         ... 9 more
    Caused by: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPostAnyType(DeliveryHandler.java:327)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPost(DeliveryHandler.java:218)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.post(DeliveryHandler.java:82)
         at com.collaxa.cube.ejb.impl.DeliveryBean.post(DeliveryBean.java:181)
         at IDeliveryBean_StatelessSessionBeanWrapper22.post(IDeliveryBean_StatelessSessionBeanWrapper22.java:1052)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:161)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase$DeliveryServiceMonitor.send(AdapterFrameworkListenerBase.java:2358)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.executeDeliveryServiceSend(AdapterFrameworkListenerBase.java:487)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.deliveryServiceSend(AdapterFrameworkListenerBase.java:545)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.performSingleActivation(AdapterFrameworkListenerImpl.java:746)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:614)
         ... 9 more
    .

    Processing 6000 messages in one shot is not best practice in BPEL; for that you would have to choose concepts like a data warehouse.
    But you might want to process them in batch mode, so think of using the batching option in the DB adapter and try to define MaxRaiseSize and MaxTransactionSize for your DB adapter. Further explanation is here:
    http://download-west.oracle.com/docs/cd/B14099_19/integrate.1012/b25307/adptr_db.htm#CHDHAIHA

  • Population of batch manufacturing date automatically through process messages

    Hi gurus,
    We are using a control recipe to download data into the MES system and uploading data through process messages to do the confirmation and goods receipt. We are using batch management and calculating the shelf life expiration date at the time of confirmation using manufacturing date + shelf life. Now the issue is that through process messages we are not able to populate the batch manufacturing date automatically, and we get a confirmation error after sending the process message to SAP. In the process message category we are using the characteristics PPPI_END_DATE, PPPI_EVENT_DATE, PPPI_START_DATE as date fields.
    Can anybody suggest how we can populate the batch manufacturing date automatically through process messages.
    With thanks
    Rajib Pathak

    thanks

  • Problem to update very large volume of data for 2LIS_04* extr.

    Hi
    I have problem with jobs for 2LIS_04* extractors using Queued Delta.
    There is an interface between the R3 system and another production system, and 3 or 4 times a month a very large volume of data is sent to R3.
    Then the job runs very long and does not pull data to RSA7.
    How can we resolve this problem?
    Our R3 system is PI_BASIS 2005_1_620.
    Thanks
    Adam

    You can check these SAP Notes; they will help you:
    How can downtime be reduced for setup table update
    SAP Note Number: 753654
    Performance improvement for filling the setup tables
    SAP Note Number: 436393
    LBWE: Performance for setup of extract structures
    SAP Note Number: 437672

  • Email Optimization for Large batch

    I'm sending about 7,000 emails to clients each afternoon.
    This is all opt-in.
    The problem is the time it takes to process the emails out
    of the spool, and the effect it has on other emails from the
    system.
    Regular CF business emails get held up for 30 or 45 minutes
    waiting to get through the spool.
    I have multiple smtp servers so I can send the messages to
    different smtp servers, but I don't know how to assign a higher
    priority to emails that are not in the large batch (get them
    through the spool).
    Any ideas?
    Thanks.

    You could a) dynamically redefine the smtp server to be used within each iteration of the loop creating your emails, or b) write a cf process which only sends out messages in blocks of 200 every 30 seconds, or c) a combination of a and b (I would opt for 'c').

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with the issues, particularly when a large volume of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues.  The main point is that there will be no shortcut for major schema and index changes.  You will need at least 120% free space to create a clustered index and facilitate
    major schema changes.
    I suggest an incremental approach to address your biggest pain points.  You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process which require full scans of the 650 million row table.  Perhaps
    some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using?  You'll have more options with Enterprise (partitioning, row/page compression). 
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column.  Then create a new table (using SELECT INTO) that has strongly
    typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one.  You can follow up later to address column data corrections and/or transformations.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
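    A minimal sketch of that approach, assuming SQL Server 2012 or later (for TRY_CONVERT) and hypothetical table and column names:
    -- 1) Profile how many rows would fail conversion to the guessed type (hypothetical names).
    SELECT COUNT(*) AS total_rows,
           SUM(CASE WHEN Col017 IS NOT NULL
                     AND TRY_CONVERT(date, Col017) IS NULL THEN 1 ELSE 0 END) AS bad_dates
    FROM   dbo.BigStagingTable;
    -- 2) Materialize a strongly typed copy, leaving problem columns as varchar for now.
    SELECT TRY_CONVERT(date, Col017)          AS LoadDate,
           TRY_CONVERT(decimal(12,2), Col042) AS Amount,
           Col099                             AS RawNotes   -- still varchar; clean up later
    INTO   dbo.BigTable_Typed
    FROM   dbo.BigStagingTable;
    The counts from the first query tell you which columns need cleanup rules before they can be strongly typed.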

  • Uploading of large amount of data

    Hi all,
    I really hope you can help me. I have to upload quite a large amount of data from flat files to an ODS (via PSA, of course), but the process takes a very long time. I used the method of loading to the PSA and then packet by packet into the ODS. Loading circa 1,300,000 lines from a flat file takes about 6 or more hours, which seems strange to me. Is it normal or not? Or should I use another upload method or set up the ODS in some way? Thanks

    hi jj,
    welcome to the SDN!
    in my limited experience, 6hrs for 1.3M records is a bit too long. here are some things you could try and look into:
    - load from the application server, not from the client computer (meaning, move your file to the server where BW is running, to minimize network traffic).
    - check your transfer rules and any customer exits related to loading, as the smallest performance-inefficient bits of code can cause a lot of problems.
    - check the size of data packets you're transmitting, as it could also cause problems, via tcode RSCUSTA2 (i think, but i'm not 100% sure).
    hope this helps you out - please remember to give out points as a way of saying thanks to those that help you out, okay? =)
    ryan.

  • BDC - Session Method - No batch input data for screen  SAPMZ_TPSCREEN02 100

    Hi ABAP Experts,
    I have written a dialog program for a screen which contains 5 fields, namely:
    carrid,
    connid,
    fldate,
    price,
    planetype.
    I have written the PAI logic to insert whatever entries entered in the fields, into Database Table SFLIGHT.
    I created a transaction and tested whether the entries are successfully entered into the database table, and it works just fine.
    Now i planned to write a BDC program for the above Transaction so that i can upload data to the Database table from a flat file.
    I went to SHDB transaction and created a new recording and transferred the program to generate a source code.
    include bdcrecx1.
    start-of-selection.
    loop at itab.
    perform open_group.
    perform bdc_dynpro      using 'SAPMZ_TPSCREEN02' '1000'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '=CREA'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'SFLIGHT-PLANETYPE'.
    perform bdc_field       using 'SFLIGHT-CARRID'
                                  'AA'.
    perform bdc_field       using 'SFLIGHT-CONNID'
                                  '0017'.
    perform bdc_field       using 'SFLIGHT-FLDATE'
                                  '11/01/2007'.
    perform bdc_field       using 'SFLIGHT-PRICE'
                                  '767'.
    perform bdc_field       using 'SFLIGHT-PLANETYPE'
                                  'A310-200F'.
    perform bdc_transaction using 'Z_TPSCREEN02'.
    perform close_group.
    Then i defined an internal table which contains the same fields as those in my Screen and Transaction.
    I populated the internal table from a flat file using GUI_UPLOAD function module.
    I want to clarify - I got this flat file by using the GUI_DOWNLOAD module and later I uploaded the same file using GUI_UPLOAD.
    I tested whether the internal table is populated or not using LOOP  AT ITAB. WRITE Statements.
    Its working just fine.
    Finally my code look like this.
    report ZVMREC
           no standard page heading line-size 255.
    TABLES: sflight.
    DATA: BEGIN OF itab OCCURS 0,
    carrid LIKE sflight-carrid,
    connid LIKE sflight-connid,
    fldate LIKE sflight-fldate,
    price TYPE sflight-price,
    planetype TYPE sflight-planetype,
          END OF itab.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename                      = 'C:\users\vamc\documents\flightinfo.txt'
        FILETYPE                      = 'ASC'
        HAS_FIELD_SEPARATOR           = 'X'
      tables
        data_tab                      = itab.
    include bdcrecx1.
    start-of-selection.
    loop at itab.
    perform open_group.
    perform bdc_dynpro      using 'SAPMZ_TPSCREEN02' '1000'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '=CREA'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'SFLIGHT-PLANETYPE'.
    perform bdc_field       using 'SFLIGHT-CARRID'
                                  'AA'.
    perform bdc_field       using 'SFLIGHT-CONNID'
                                  '0017'.
    perform bdc_field       using 'SFLIGHT-FLDATE'
                                  '11/01/2007'.
    perform bdc_field       using 'SFLIGHT-PRICE'
                                  '767'.
    perform bdc_field       using 'SFLIGHT-PLANETYPE'
                                  'A310-200F'.
    perform bdc_transaction using 'Z_TPSCREEN02'.
    perform close_group.
    endloop.
    I checked for errors, activated and executed.
    I gave a session name and executed it; it generated the same number of sessions as records.
    I went to SM35 and processed one of the sessions in the foreground.
    It brought my screen up with all fields filled by the fields of the first record in the internal table and with the
    OK code popping up.
    I checked the OK code.
    Now the problem has come up; it says
    *No batch input data for screen SAPMZ_TPSCREEN02 1000 *
    My session was now incorrectly processed.
    Please help me to fix this problem.
    I have searched many forums and googled a lot,
    but I didn't find any clue.
    Kindly take your time to have a look at this problem and let me know how I can fix it.
    Thank you very much all.
    Shiv
    Edited by: Sivaram  Naga on Apr 15, 2008 5:57 AM

    I used this code to convert the date format. I'm still getting the short dump.
       DATA: v_yyyy(4) TYPE c,
           v_mm(2) TYPE c,
           v_dd(2) TYPE c,
           v_date(8) TYPE c.
    v_yyyy = itab-fldate(4).
    v_mm = itab-fldate+4(2).
    v_dd = itab-fldate+6(2).
    concatenate v_yyyy v_mm v_dd  into v_date. 
    I'm once again putting my code here:
    report ZVMREC
           no standard page heading line-size 255.
    TABLES: sflight.
    DATA: BEGIN OF itab OCCURS 0,
    carrid LIKE sflight-carrid,
    connid LIKE sflight-connid,
    fldate LIKE sflight-fldate,
    price TYPE sflight-price,
    planetype TYPE sflight-planetype,
          END OF itab.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename                      = 'C:\users\vamc\documents\flightinfo.txt'
        FILETYPE                      = 'ASC'
        HAS_FIELD_SEPARATOR           = 'X'
      tables
        data_tab                      = itab.
    include bdcrecx1.
    start-of-selection.
    loop at itab.
    DATA: v_yyyy(4) TYPE c,
           v_mm(2) TYPE c,
           v_dd(2) TYPE c,
           v_date(8) TYPE c.
    v_yyyy = itab-fldate(4).
    v_mm = itab-fldate+4(2).
    v_dd = itab-fldate+6(2).
    concatenate v_yyyy v_mm v_dd  into v_date.
    perform open_group.
    perform bdc_dynpro      using 'SAPMZ_TPSCREEN02' '1000'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '=CREA'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'SFLIGHT-PLANETYPE'.
    perform bdc_field       using 'SFLIGHT-CARRID'
                                   ITAB-CARRID.
    perform bdc_field       using 'SFLIGHT-CONNID'
                                   ITAB-CONNID.
    perform bdc_field       using 'SFLIGHT-FLDATE'
                                  V_DATE.
    perform bdc_field       using 'SFLIGHT-PRICE'
                                   ITAB-PRICE.
    perform bdc_field       using 'SFLIGHT-PLANETYPE'
                                  ITAB-PLANETYPE.
    perform bdc_transaction using 'Z_TPSCREEN02'.
    perform close_group.
    endloop.
    Kindly take a look at it and please help me out. I have tried very hard, but I don't understand why this happens.
    Thanks
    Shiv
    Edited by: Sivaram  Naga on Apr 15, 2008 5:46 PM
