Problem updating a very large volume of data for 2LIS_04* extractors

Hi
I have a problem with jobs for the 2LIS_04* extractors using queued delta.
There is an interface between the R/3 system and another production system, and 3 or 4 times a month a very large volume of data is sent to R/3.
The job then runs for a very long time and does not pull the data into RSA7.
How can I resolve this problem?
Our R3 system is PI_BASIS 2005_1_620.
Thanks
Adam

You can check these SAP Notes; they should help:
How can downtime be reduced for setup table update
SAP Note Number: 753654
Performance improvement for filling the setup tables
SAP Note Number: 436393
LBWE: Performance for setup of extract structures
SAP Note Number: 437672
Since you are on queued delta, it may also be worth checking that the collective run job that moves the extraction queue (LBWQ) into the delta queue (RSA7) is scheduled frequently enough during those peak loads, so the queue does not grow too large between runs.

Similar Messages

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc to the corresponding variable definition (for validation etc) at runtime.
    CASE_ID VARCHAR2(13)
    COL001  VARCHAR2(10)
    ...
    COL250  VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
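    For reference, a set-based shape for the pivot step in 9i (which has no UNPIVOT) is a cross join against a small row generator; a sketch with hypothetical names (short_fat_1, variable_map), where the DECODE branches stand in for the dynamically generated column mapping:

    -- Sketch only: unpivot COL001..COL250 via a row generator.
    INSERT /*+ APPEND */ INTO intermediate_long_thin
           (case_num_id, variable_id, variable_value, status)
    SELECT s.case_num_id,
           m.variable_id,
           DECODE(g.n, 1, s.col001,
                       2, s.col002,
                       -- branches 3..249 generated at runtime from the meta-data
                       250, s.col250),
           'PENDING'
    FROM   short_fat_1 s,
           (SELECT ROWNUM n FROM all_objects WHERE ROWNUM <= 250) g,
           variable_map m
    WHERE  m.col_num = g.n;

    The per-variable validation can be wrapped around the DECODE exactly as in the dynamic SQL version; the point is that one INSERT ... SELECT replaces row-by-row processing.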
    Chris

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with the issues, particularly when a large volume of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues.  The main point is that there will be no shortcut for major schema and index changes.  You will need at least 120% free space to create a clustered index and facilitate
    major schema changes.
    I suggest an incremental approach to address your biggest pain points.  You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process which require full scans of the 650 million row table.  Perhaps
    some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using?  You'll have more options with Enterprise (partitioning, row/page compression). 
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column.  Then create a new table (using SELECT INTO) that has strongly
    typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations. 
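    For example (table and column names made up):

    SELECT COUNT(*) AS total_rows,
           COUNT(TRY_CONVERT(int,  Col001)) AS col001_ok_as_int,
           COUNT(*) - COUNT(TRY_CONVERT(int, Col001)) AS col001_bad_as_int,
           COUNT(TRY_CONVERT(date, Col002)) AS col002_ok_as_date
    FROM dbo.BigClientFeed;

    COUNT(expr) skips NULLs and TRY_CONVERT returns NULL where the conversion fails, so the difference gives the non-conforming rows. Note that rows where the column is NULL to begin with also drop out of the converted count, so check those separately.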
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Retrive SQL from Webi report and process it for large volume of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'Extracts'. But the requirement is that business users want to build their own 'Adhoc Extracts'. The only way I can think of to achieve this is: build a universe, create the query, save the report and do not run it. Then write a RAS SDK program to retrieve the SQL code from the reports, save it into a .txt file and process it directly in Teradata.
    Is there any predefined solution available with SAP BO, or any other tool, for this kind of scenario?

    Hi Shawn,
    Is there a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information or even direction where I can get information will be helpful.
    Thanks in advance.
    Ashesh

  • Problem with updating oracle DB with java date thru resultset.updateDate()

    URGENT, please.
    I am facing a problem updating an Oracle database with a Java date through the resultset.updateDate() method. Can anybody help me please?
    The following code is saving the wrong date value (Dec 4, 2006 instead of the Java date Jul 4, 2007) in the database:
    ResultSet rs = stmt.executeQuery("SELECT myDate FROM myTable");
    rs.first();
    SimpleDateFormat sqlFormat = new SimpleDateFormat("yyyy-mm-dd");
    java.util.Date myDate = new Date();
    rs.updateDate("myDate", java.sql.Date.valueOf(sqlFormat.format(myDate)));
    rs.updateRow();

    I believe you should use yyyy-MM-dd instead of yyyy-mm-dd. I think MM stands for month while mm stands for minute as per
    http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html
    (If this works, after spending so much of your time trying to solve it, don't hit yourself in the head too hard. I find running out of the room laughing hysterically feels better).
    Here is a more standard(?) way of updating:
    String sqlStatement =
        "update myTable set myDate=? where personID=?";
    PreparedStatement p1 = connection.prepareStatement(sqlStatement);
    p1.setDate(1, new java.sql.Date(System.currentTimeMillis())); // today's date
    p1.setInt(2, personID);
    p1.executeUpdate();
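    For completeness, the original updatable-ResultSet approach also works once the formatting round-trip is dropped; a minimal sketch:

    // Build the java.sql.Date directly instead of formatting and re-parsing.
    rs.updateDate("myDate", new java.sql.Date(System.currentTimeMillis()));
    rs.updateRow();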

  • Problem while having a large set of data to work on!

    Hi,
    I am facing a great problem processing a large set of data. I have a requirement in which I'm supposed to generate a report.
    I have a table and an MView, which I have joined to reduce the number of records to process. The MView holds 20,000,000 records and the table 1,800,000. Based on the join conditions and where clause I'm able to break the useful data down to approx 450,000 records, and I'm getting 8 of my report columns from this join. I'm dumping these records into the table from which I'll be generating the report by spooling.
    Below is the block which takes 12mins to insert into the report table MY_ACCOUNT_PHOTON_DUMP:
    begin
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    insert into MY_ACCOUNT_PHOTON_DUMP --- Report table
    (SUBSCR_NO, ACCOUNT_NO, AREA_CODE, DEL_NO, CIRCLE, REGISTRATION_DT, EMAIL_ID, ALT_CNTCT_NO)
    select crm.SUBSCR_NO, crm.ACCOUNT_NO, crm.AREA_CODE, crm.DEL_NO, crm.CIRCLE_ID,
    aa.CREATED_DATE, aa.EMAIL_ID, aa.ALTERNATE_CONTACT
    from MV_CRM_SUBS_DTLS crm, --- MView
    (select /*+ ALL_ROWS */ A.ALTERNATE_CONTACT, A.CREATED_DATE, A.EMAIL_ID, B.SUBSCR_NO
    from MCCI_PROFILE_DTLS a, MCCI_PROFILE_SUBSCR_DTLS b
    where A.PROFILE_ID = B.PROFILE_ID
    and B.ACE_STATUS = 'N'
    ) aa --- Join of two tables giving me 1,800,000 recs
    where crm.SUBSCR_NO = aa.SUBSCR_NO
    and crm.SRVC_TYPE_ID = '125'
    and crm.END_DT IS NULL;
    INTERNET_METER_TABLE_PROC_1('MCCIPRD','MY_ACCOUNT_PHOTON_DUMP'); --- calling procedure to analyze the report table
    COMMIT;
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    end; --- 12 min 04 sec

    For the rest of the 13 columns required, I am running a block which has a FOR UPDATE cursor on the report table:
    declare
    cursor cur is
    select SUBSCR_NO, ACCOUNT_NO, AREA_CODE, DEL_NO,
    CIRCLE, REGISTRATION_DT, EMAIL_ID, ALT_CNTCT_NO
    from MCCIPRD.MY_ACCOUNT_PHOTON_DUMP --where ACCOUNT_NO = 901237064
    for update of
    MRKT_SEGMNT, AON, ONLINE_PAY, PAID_AMNT, E_BILL, ECS, BILLED_AMNT,
    SRVC_TAX, BILL_PLAN, USAGE_IN_MB, USAGE_IN_MIN, NO_OF_LOGIN, PHOTON_TYPE;
    v_aon VARCHAR2(10) := NULL;
    v_online_pay VARCHAR2(10) := NULL;
    v_ebill VARCHAR2(10) := NULL;
    v_mkt_sgmnt VARCHAR2(50) := NULL;
    v_phtn_type VARCHAR2(50) := NULL;
    v_login NUMBER(10) := 0;
    v_paid_amnt VARCHAR2(50) := NULL;
    v_ecs VARCHAR2(10) := NULL;
    v_bill_plan VARCHAR2(100):= NULL;
    v_billed_amnt VARCHAR2(10) := NULL;
    v_srvc_tx_amnt VARCHAR2(10) := NULL;
    v_usg_mb NUMBER(10) := NULL;
    v_usg_min NUMBER(10) := NULL;
    begin
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    for rec in cur loop
    begin
    select apps.TTL_GET_DEL_AON@MCCI_TO_PRD591(rec.ACCOUNT_NO, rec.DEL_NO, rec.CIRCLE)
    into v_aon from dual;
    exception
    when others then
    v_aon := 'NA';
    end;
    SELECT DECODE(COUNT(*),0,'NO','YES') into v_online_pay
    FROM TTL_DESCRIPTIONS@MCCI_TO_PRD591
    WHERE DESCRIPTION_CODE IN (SELECT DESCRIPTION_CODE FROM TTL_BMF_TRANS_DESCR@MCCI_TO_PRD591
    WHERE BMF_TRANS_TYPE
    IN (SELECT BMF_TRANS_TYPE FROM
    TTL_BMF@MCCI_TO_PRD591 WHERE ACCOUNT_NO = rec.ACCOUNT_NO
    AND POST_DATE BETWEEN
    TO_DATE('01-'||TO_CHAR(SYSDATE,'MM-YYYY'),'DD-MM-YYYY') AND SYSDATE))
    AND DESCRIPTION_TEXT IN (select DESCRIPTION from fnd_lookup_values@MCCI_TO_PRD591 where
    LOOKUP_TYPE='TTL_ONLINE_PAYMENT');
    SELECT decode(count( *),0,'NO','YES') into v_ebill
    FROM TTL_CUST_ADD_DTLS@MCCI_TO_PRD591
    WHERE CUST_ACCT_NBR = rec.ACCOUNT_NO
    AND UPPER(CUSTOMER_PREF_MODE) ='EMAIL';
    begin
    select ACC_SUB_CAT_DESC into v_mkt_sgmnt
    from ttl_cust_dtls@MCCI_TO_PRD591 a, TTL_ACCOUNT_CATEGORIES@MCCI_TO_PRD591 b
    where a.CUST_ACCT_NBR = rec.ACCOUNT_NO
    and a.market_code = b.ACC_SUB_CAT;
    exception
    when others then
    v_mkt_sgmnt := 'NA';
    end;
    begin
    select nvl(sum(TRANS_AMOUNT),0) into v_paid_amnt
    from ttl_bmf@MCCI_TO_PRD591
    where account_no = rec.ACCOUNT_NO
    AND POST_DATE
    BETWEEN TO_DATE('01-'||TO_CHAR(SYSDATE,'MM-YYYY'),'DD-MM-YYYY')
    AND SYSDATE;
    exception
    when others then
    v_paid_amnt := 'NA';
    end;
    SELECT decode(count(1),0,'NO','YES') into v_ecs
    from ts.Billdesk_Registration_MV@MCCI_TO_PRD591 where ACCOUNT_NO = rec.ACCOUNT_NO
    and UPPER(REGISTRATION_TYPE ) = 'ECS';
    SELECT decode(COUNT(*),0,'PHOTON WHIZ','PHOTON PLUS') into v_phtn_type
    FROM ts.ttl_cust_ord_prdt_dtls@MCCI_TO_PRD591 A, ttl_product_mstr@MCCI_TO_PRD591 b
    WHERE A.SUBSCRIBER_NBR = rec.SUBSCR_NO
    and (A.prdt_disconnection_date IS NULL OR A.prdt_disconnection_date > SYSDATE )
    AND A.prdt_disc_flag = 'N'
    AND A.prdt_nbr = b.product_number
    AND A.prdt_type_id = b.prouduct_type_id
    AND b.first_level LIKE 'Feature%'
    AND UPPER (b.product_desc) LIKE '%HSIA%';
    SELECT count(1) into v_login
    FROM MCCIPRD.MYACCOUNT_SESSION_INFO a
    WHERE (A.DEL_NO = rec.DEL_NO or A.DEL_NO = ltrim(rec.AREA_CODE,'0')||rec.DEL_NO)
    AND to_char(A.LOGIN_TIME,'Mon-YYYY') = to_char(sysdate-5,'Mon-YYYY');
    begin
    select PACKAGE_NAME, BILLED_AMOUNT, SERVICE_TAX_AMOUNT, USAGE_IN_MB, USAGE_IN_MIN
    into v_bill_plan, v_billed_amnt, v_srvc_tx_amnt, v_usg_mb, v_usg_min from
    (select rank() over(order by STATEMENT_DATE desc) rk,
    PACKAGE_NAME, USAGE_IN_MB, USAGE_IN_MIN,
    nvl(BILLED_AMOUNT,'0') BILLED_AMOUNT, NVL(SRVC_TAX_AMNT,'0') SERVICE_TAX_AMOUNT
    from MCCIPRD.MCCI_IM_BILLED_DATA
    where (DEL_NUM = rec.DEL_NO or DEL_NUM = ltrim(rec.AREA_CODE,'0')||rec.DEL_NO)
    and STATEMENT_DATE like '%'||to_char(SYSDATE,'Mon-YY')||'%')
    where rk = 1;
    exception
    when others then
    v_bill_plan := 'NA';
    v_billed_amnt := '0';
    v_srvc_tx_amnt := '0';
    v_usg_mb := 0;
    v_usg_min := 0;
    end;
    -- UPDATE THE DUMP TABLE --
    update MCCIPRD.MY_ACCOUNT_PHOTON_DUMP
    set MRKT_SEGMNT = v_mkt_sgmnt, AON = v_aon, ONLINE_PAY = v_online_pay, PAID_AMNT = v_paid_amnt,
    E_BILL = v_ebill, ECS = v_ecs, BILLED_AMNT = v_billed_amnt, SRVC_TAX = v_srvc_tx_amnt,
    BILL_PLAN = v_bill_plan, USAGE_IN_MB = v_usg_mb, USAGE_IN_MIN = v_usg_min, NO_OF_LOGIN = v_login,
    PHOTON_TYPE = v_phtn_type
    where current of cur;
    end loop;
    COMMIT;
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    exception when others then
    dbms_output.put_line(SQLCODE||'::'||SQLERRM);
    end;

    The report takes >6hrs. I know that most of the SELECT queries have ACCOUNT_NO in the WHERE clause and could be joined, but when I joined a few of these blocks with the initial INSERT query it was no better.
    The individual queries within the cursor loop don't take more than 0.3 sec to execute.
    I'm using the FOR UPDATE as I know that the report table is being used solely for this purpose.
    Can somebody please help me with this? I'm in desperate need of good advice here.
    Thanks!!
    Edited by: user11089213 on Aug 30, 2011 12:01 AM

    Hi,
    Below is the explain plan for the original query:
    select /*+ ALL_ROWS */  crm.SUBSCR_NO, crm.ACCOUNT_NO, ltrim(crm.AREA_CODE,'0'), crm.DEL_NO, crm.CIRCLE_ID
    from MV_CRM_SUBS_DTLS crm,
            (select /*+ ALL_ROWS */  A.ALTERNATE_CONTACT, A.CREATED_DATE, A.EMAIL_ID, B.SUBSCR_NO
            from MCCIPRD.MCCI_PROFILE_DTLS a, MCCIPRD.MCCI_PROFILE_SUBSCR_DTLS b
            where A.PROFILE_ID = B.PROFILE_ID
            and   B.ACE_STATUS = 'N'
            ) aa
    where crm.SUBSCR_NO    = aa.SUBSCR_NO
    and   crm.SRVC_TYPE_ID = '125'
    and   crm.END_DT IS NULL
    | Id  | Operation              | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                          |  1481K|   100M|       |   245K  (5)| 00:49:09 |
    |*  1 |  HASH JOIN             |                          |  1481K|   100M|    46M|   245K  (5)| 00:49:09 |
    |*  2 |   HASH JOIN            |                          |  1480K|    29M|    38M| 13884   (9)| 00:02:47 |
    |*  3 |    TABLE ACCESS FULL   | MCCI_PROFILE_SUBSCR_DTLS |  1480K|    21M|       |  3383  (13)| 00:00:41 |
    |   4 |    INDEX FAST FULL SCAN| SYS_C002680              |  2513K|    14M|       |  6024   (5)| 00:01:13 |
    |*  5 |   MAT_VIEW ACCESS FULL | MV_CRM_SUBS_DTLS_08AUG   |  1740K|    82M|       |   224K  (5)| 00:44:49 |
    Predicate Information (identified by operation id):
       1 - access("CRM"."SUBSCR_NO"="B"."SUBSCR_NO")
       2 - access("A"."PROFILE_ID"="B"."PROFILE_ID")
       3 - filter("B"."ACE_STATUS"='N')
       5 - filter("CRM"."END_DT" IS NULL AND "CRM"."SRVC_TYPE_ID"='125')Whereas for the modified MView query, the plane remains the same:
    select /*+ ALL_ROWS */ crm.SUBSCR_NO, crm.ACCOUNT_NO, ltrim(crm.AREA_CODE,'0'), crm.DEL_NO, crm.CIRCLE_ID
    from    (select * from MV_CRM_SUBS_DTLS
             where SRVC_TYPE_ID = '125'
             and   END_DT IS NULL) crm,
            (select /*+ ALL_ROWS */  A.ALTERNATE_CONTACT, A.CREATED_DATE, A.EMAIL_ID, B.SUBSCR_NO
            from MCCIPRD.MCCI_PROFILE_DTLS a, MCCIPRD.MCCI_PROFILE_SUBSCR_DTLS b
            where A.PROFILE_ID = B.PROFILE_ID
            and   B.ACE_STATUS = 'N'
            ) aa
    where crm.SUBSCR_NO  = aa.SUBSCR_NO
    | Id  | Operation              | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                          |  1481K|   100M|       |   245K  (5)| 00:49:09 |
    |*  1 |  HASH JOIN             |                          |  1481K|   100M|    46M|   245K  (5)| 00:49:09 |
    |*  2 |   HASH JOIN            |                          |  1480K|    29M|    38M| 13884   (9)| 00:02:47 |
    |*  3 |    TABLE ACCESS FULL   | MCCI_PROFILE_SUBSCR_DTLS |  1480K|    21M|       |  3383  (13)| 00:00:41 |
    |   4 |    INDEX FAST FULL SCAN| SYS_C002680              |  2513K|    14M|       |  6024   (5)| 00:01:13 |
    |*  5 |   MAT_VIEW ACCESS FULL | MV_CRM_SUBS_DTLS_08AUG   |  1740K|    82M|       |   224K  (5)| 00:44:49 |
    Predicate Information (identified by operation id):
       1 - access("CRM"."SUBSCR_NO"="B"."SUBSCR_NO")
       2 - access("A"."PROFILE_ID"="B"."PROFILE_ID")
       3 - filter("B"."ACE_STATUS"='N')
       5 - filter("CRM"."END_DT" IS NULL AND "CRM"."SRVC_TYPE_ID"='125')Also took your advice and tried to merge all the queries into single INSERT SQL, will be posting the results shortly.
    Edited by: BluShadow on 30-Aug-2011 10:21
    added {noformat}{noformat} tags. Please read {message:id=9360002} to learn to do this yourself.

  • What is the best way to extract large volume of data from a BW InfoCube?

    Hello experts,
    Wondering if someone can suggest the best method available in SAP BI 7.0 to extract a large amount of data (approx 70 million records) from an InfoCube.  I've tried Open Hub and APD, but they are not working.  I always need to separate the extracts into small datasets.  Any advice is greatly appreciated.
    Thanks,
    David

    Hi David,
    We had the same issue, but that was loading from an ODS to a cube with over 50 million records. I think there is no option like parallel loading using DTPs. As suggested earlier in the forum, the best option is to split the load according to calendar year or fiscal year.
    But remember that even with the above criteria, some calendar years might hold a lot of data, and that again becomes a problem.
    What I can suggest is that, apart from just the calendar/fiscal year, you also include some other selection criterion such as company code or sales org.
    Yes, you will end up loading more requests, but the data loads will go smoothly with smaller volumes.
    Regards
    BN

  • Jtree - Problem creating a very large one

    Hi,
    I need to create a multi-level JTree that is very large (around 600-650 nodes all in all). At present I am doing this by creating DefaultMutableTreeNodes and adding them one by one to the tree. It takes a very long time to complete (obviously).
    Is there any way of doing this faster? The tree is being constructed from run-time data.
    Thanks in advance,
    Regards,
    Achyuth

    Two thoughts. First, I create 100's of nodes and it's pretty fast. It could be how you are creating/adding them. Make sure your code is not the bottleneck. If you are traversing the entire tree or doing it some other odd way, that could be the problem. I only say that because I am able to create an object structure, parse XML as I create that structure and create all the nodes, and it takes less than 1 second on a PIV 1.7GHz system. Maybe on slower systems it's a lot slower, but for me it's up immediately.
    Another way, however, is to keep a Map of "paths" to child items in memory. As you build your object from wherever, use a Map to keep path info. Now, add listeners to the tree for expand/collapse. Each time a node is about to expand, build up the child nodes and add them to the expanding node at that point. If it collapses, discard them. This does two things. First, for large trees, you aren't wasting tons of time building the tree at the start; more so, it's likely that all those nodes aren't going to be expanded/shown right away. Second, it conserves resources. If your tree MUST open in a fully expanded state, well, then you may be out of luck. But I would much prefer expanding and building the child nodes at that moment, rather than doing the whole thing first.
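    A minimal sketch of the expand-time approach (tree being the JTree; loadChildrenFor is a hypothetical helper standing in for whatever produces the runtime data):

    import javax.swing.event.TreeExpansionEvent;
    import javax.swing.event.TreeWillExpandListener;
    import javax.swing.tree.DefaultMutableTreeNode;

    // Build children only when a node is about to expand.
    tree.addTreeWillExpandListener(new TreeWillExpandListener() {
        public void treeWillExpand(TreeExpansionEvent e) {
            DefaultMutableTreeNode node =
                (DefaultMutableTreeNode) e.getPath().getLastPathComponent();
            if (node.getChildCount() == 0) {               // children not built yet
                for (String name : loadChildrenFor(node))  // hypothetical helper
                    node.add(new DefaultMutableTreeNode(name));
            }
        }
        public void treeWillCollapse(TreeExpansionEvent e) {
            // optionally discard the children here to conserve resources
        }
    });

    In practice you also give each unexpanded node a single dummy child so its expand handle shows, and replace the dummy with the real children inside treeWillExpand.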

  • Problem In Update Statement In Multiple Record Data Block

    Hi Friends,
    I have a problem with the UPDATE statement for updating records in a multiple-record data block.
    I have two data blocks: the master block is a single-record block, and the 2nd data block is a multiple-record data block.
    I insert fields like category and post_no for a particular job in the single-record block.
    In the second, multiple-record data block, I insert multiple records for the above fields, such as the number of employees working in the position.
    There is no problem with the INSERT statement, as it inserts all records. But whenever I want to update a particular record (in the multiple block) of an employee for that category and post_no,
    it updates all the records.
    my code is Bellow,
    IF v_count <> 0 THEN
    LOOP
    IF :SYSTEM.last_record <> 'TRUE' THEN
    UPDATE post_history
    SET idcode = :POST_HISTORY_MULTIPLE.idcode,
    joining_post_dt = :POST_HISTORY_MULTIPLE.joining_post_dt,
    leaving_post_dt = :POST_HISTORY_MULTIPLE.leaving_post_dt,
    entry_gp_stage = :POST_HISTORY_MULTIPLE.entry_gp_stage
    WHERE post_no = :POST_HISTORY_SINGLE.post_no
    AND category = :POST_HISTORY_SINGLE.category
    AND roster_no = :POST_HISTORY_SINGLE.roster_no;
    AND idcode = :POST_HISTORY_MULTIPLE.idcode;
    IF SQL%NOTFOUND THEN
    INSERT INTO post_history(post_no,roster_no,category,idcode,joining_post_dt,leaving_post_dt,entry_gp_stage)
    VALUES(g_post_no, g_roster_no, g_category, :POST_HISTORY_MULTIPLE.idcode, :POST_HISTORY_MULTIPLE.joining_post_dt,
    :POST_HISTORY_MULTIPLE.leaving_post_dt,:POST_HISTORY_MULTIPLE.entry_gp_stage);
    END IF;
    next_record;
    ELSIF :SYSTEM.last_record = 'TRUE' THEN
    UPDATE post_history
    SET idcode = :POST_HISTORY_MULTIPLE.idcode,
    joining_post_dt = :POST_HISTORY_MULTIPLE.joining_post_dt,
    leaving_post_dt = :POST_HISTORY_MULTIPLE.leaving_post_dt,
    entry_gp_stage = :POST_HISTORY_MULTIPLE.entry_gp_stage
    WHERE post_no = :POST_HISTORY_SINGLE.post_no
    AND category = :POST_HISTORY_SINGLE.category
    AND roster_no = :POST_HISTORY_SINGLE.roster_no;
    AND idcode = :POST_HISTORY_MULTIPLE.idcode;
    IF SQL%NOTFOUND THEN
    INSERT INTO post_history(post_no,roster_no,category,idcode,joining_post_dt,leaving_post_dt,entry_gp_stage)
         VALUES (g_post_no,g_roster_no,g_category,:POST_HISTORY_MULTIPLE.idcode,
              :POST_HISTORY_MULTIPLE.joining_post_dt,:POST_HISTORY_MULTIPLE.leaving_post_dt,:POST_HISTORY_MULTIPLE.entry_gp_stage);
    END IF;
    EXIT;
    END IF;
    END LOOP;
    SET_ALERT_PROPERTY('user_alert',ALERT_MESSAGE_TEXT, 'Record Updated successfuly' );
    v_button_no := SHOW_ALERT('user_alert');
    FORMS_DDL('COMMIT');
    CLEAR_FORM(no_validate);
    Please guide me.
    Thanks in advance

    UPDATE post_history
    SET idcode = :POST_HISTORY_MULTIPLE.idcode,
    joining_post_dt = :POST_HISTORY_MULTIPLE.joining_post_dt,
    leaving_post_dt = :POST_HISTORY_MULTIPLE.leaving_post_dt,
    entry_gp_stage = :POST_HISTORY_MULTIPLE.entry_gp_stage
    WHERE post_no = :POST_HISTORY_SINGLE.post_no
    AND category = :POST_HISTORY_SINGLE.category
    AND roster_no = :POST_HISTORY_SINGLE.roster_no;
    AND idcode = :POST_HISTORY_MULTIPLE.idcode;
    Look at the semicolon after roster_no: it terminates the UPDATE statement there, so the AND idcode condition on the next line never becomes part of the WHERE clause. The statement therefore updates every record matching post_no, category and roster_no, not just the current employee's record. Remove that stray semicolon (it appears in both UPDATE statements) so that idcode restricts the update.
    If it is specific to Oracle Forms then you may get better help in the Forms section.
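    With the semicolon moved to the true end of the statement, the corrected UPDATE reads:

    UPDATE post_history
    SET idcode = :POST_HISTORY_MULTIPLE.idcode,
    joining_post_dt = :POST_HISTORY_MULTIPLE.joining_post_dt,
    leaving_post_dt = :POST_HISTORY_MULTIPLE.leaving_post_dt,
    entry_gp_stage = :POST_HISTORY_MULTIPLE.entry_gp_stage
    WHERE post_no = :POST_HISTORY_SINGLE.post_no
    AND category = :POST_HISTORY_SINGLE.category
    AND roster_no = :POST_HISTORY_SINGLE.roster_no
    AND idcode = :POST_HISTORY_MULTIPLE.idcode;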

  • Receiving very large amounts of data via ibrd in Visual Basic

    I am trying to read anything from a single byte up to 2Mb of data back into a Visual Basic program. Until now I have been using something like
    Dim Response as string * 60000
    ibrda hDevice, Response
    which works fine up to the magic 65536ish string size limit, but I want more!
    Ideally reading into a very large byte array would suffice, e.g.
    Dim ByteBuffer(0 To 2000000 - 1) As Byte
    ibrd hDevice, ByteBuffer
    But I cannot find a way to get the gpib-32.dll to accept the ByteBuffer as a destination in Visual Basic, even though ibrd32 is declared in vbib-32.bas as accepting Any type as a destination, and a Long as a count, implying the the 65536 limit doesn't apply for that call.
    It may be possible to use repeated ibrd calls until one such call fails to fill the buffer, concatenating the results each time, but this seems a crude solution when the DLL would appear to be able to do the job in one go.
    Using ibrdf may work, but I would rather not use a file as intermediate storage.
    TIA

    I'm wondering if that 65536 figure is a VB cap for the string type. Referring to the language interface,
    Declare Function ibrd32 Lib "Gpib-32.dll" Alias "ibrd" (ByVal ud As Long, sstr As Any, ByVal cnt As Long) As Long
    Sub ibrd(ByVal ud As Integer, buf As String) <---
    Dim cnt As Long
    cnt = CLng(Len(buf))
    Call ibrd32(ud, ByVal buf, cnt)
    isn't buf a string type?
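    For what it's worth, a sketch of calling ibrd32 directly with a byte array, using the declaration quoted above; passing the first array element ByRef is the usual VB6 way to hand an array to an "As Any" parameter (hDevice as in the original post; the ibcntl count global is assumed from the same language interface):

    Dim ByteBuffer(0 To 2000000 - 1) As Byte
    ' The count parameter is a Long, so the 65536 string cap should not apply.
    Call ibrd32(hDevice, ByteBuffer(0), 2000000)
    ' ibcntl should then hold the number of bytes actually read.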

  • Populate large volume of data in the quickest way.

    In Oracle 9i, what are the best tools to populate a large amount of data in the minimum amount of time?
    I heard that SQL*Loader DIRECT PATH loading could work. Are there any other tools available? What I need to do is populate a large set of dummy data to stress test the performance of the database system.
    Any suggestion will help!
    Thank you very much.

    Hi,
    There are various options provided by Oracle like External tables, SQL Loader etc.
    SQL*Loader is the fastest of the lot. You may refer to the docs for full help on SQL*Loader.
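    A minimal direct-path example (all names made up) - control file load_dummy.ctl:

    LOAD DATA
    INFILE 'dummy_data.csv'
    APPEND
    INTO TABLE stress_test_data
    FIELDS TERMINATED BY ','
    (id, payload)

    invoked as:

    sqlldr userid=scott/tiger control=load_dummy.ctl direct=true

    Direct path bypasses most SQL processing and writes formatted blocks straight to the datafiles, which is why it is usually the fastest bulk-load option.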
    Regards.

  • Performance Issue in Large volume of data in report

    Hi,
    I have a report that processes a large amount of data, but it takes too long to process the data into the final ALV table. Currently I'm using this logic:
    Select ....
    Select for all entries...
    Loop at table into workarea...
    read table2 where key = workarea-key binary search.
    modify table.
    read table2 where key = workarea-key binary search.
    modify table.
    endloop.
    Currently I select all the data I need (only the necessary fields), create a big loop, and read the other tables to fill the fields of the final table.
    Edited by: Alvin Rosales on Apr 8, 2009 9:49 AM

    Hi ,
    You can use field symbols instead of work areas.
    If you use field symbols there is no need for a modify statement.
    Here are two equivalent pieces of code:
    1) using work areas :
    types: begin of  lty_example,
    col1 type char1,
    col2 type char1,
    col3 type char1,
    end of lty_example.
    data:lt_example type standard table of lty_example,
           lwa_example type lty_example.
    field-symbols : <lfs_example> type lty_example.
    Suppose you have the following information in your internal table:
    col1 col2 col3
    1      1    1
    1      2    2
    2      3    4
    Now you may use the modify statement using work areas
    loop at lt_example into lwa_example.
    lwa_example-col2 = '9'.
    modify lt_example index sy-tabix from lwa_example transporting col2.
    endloop.
    or better using field-symbols:
    loop at lt_example assigning <lfs_example>.
    <lfs_example>-col2 = '9'.
    *here there is no need of modify statement.
    endloop.
    The code using field-symbols is about 10 times faster than using work areas and the modify statement.
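    Also, for the READ ... BINARY SEARCH lookups in your big loop, a HASHED table gives constant-time access; a minimal sketch with made-up names:

    * Lookup via a hashed table instead of READ ... BINARY SEARCH.
    types: begin of lty_lookup,
             key   type char10,
             value type char10,
           end of lty_lookup.
    data: lt_lookup type hashed table of lty_lookup with unique key key,
          lt_final  type standard table of lty_lookup.
    field-symbols: <lfs_final>  type lty_lookup,
                   <lfs_lookup> type lty_lookup.

    * fill lt_lookup once from the "select for all entries" result, then:
    loop at lt_final assigning <lfs_final>.
      read table lt_lookup assigning <lfs_lookup>
           with table key key = <lfs_final>-key.
      if sy-subrc = 0.
        <lfs_final>-value = <lfs_lookup>-value.  "no modify needed
      endif.
    endloop.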

  • Best data provisioning tool for very large amount of data updated real time?

    About a few hundred million entries of data a day must be replicated to SAP HANA in real time; what would be the best option?

    Hi Wayne,
    If you are looking for real time replication, then SLT is the best option. What is the source system for this replication?
    Regards,
    Chandu.

  • Very high volumes of data transfer on ethernet.

    Since upgrading to Leopard the volume of network data transfer has gone through the roof! Enough to force me to go over my broadband limit and have to pay penalty charges. Just sitting here this morning for 4 hours the system clocked up 100MB out (Tx) and 300MB incoming (Rx).
    Does anyone know how to turn this off?

    As far as I can tell, there is nothing inherent in Leopard that makes it use more bandwidth than Tiger. You need to identify which applications are using the bandwidth and take appropriate action.
    Questions:
    1) Are you using a file syncing service such as DropBox or iDisk?
    2) Could it be iTunes?
    3) Did you download any software updates automatically?
    4) Do you have Mail set up with an IMAP account (like Gmail) with automatic polling?
    To identify which applications have a network connection established, open Terminal and enter:
    $ lsof | grep IPv | grep ESTABLISHED
    Don't enter the "$" - that represents the command prompt. The command will list all "open files" using an Internet Protocol connection that is ESTABLISHED.
    Regards,
    Rodney

  • Perl API: growing memory problem in loops over large sets of data

    Hi,
    When going through all XmlResults like this:
    while ($results->next($val)) {
        print $val->asString, "\n";
    }
    The process size keeps growing. It does not when I comment $val->asString method out, but then I have no way of getting the results.
    This becomes a significant problem when the number of results is huge. I am doing this on a database of over a million short XML documents (400-800 bytes each).
    The more complete code is here:
    eval {
        $env = new DbEnv();
        $env->open($dbDir, Db::DB_JOINENV | Db::DB_INIT_LOCK
            | Db::DB_INIT_MPOOL | Db::DB_CREATE, 0);
        my $mgr = new XmlManager($env, DbXml::DBXML_ADOPT_DBENV);
        my $db = $mgr->openContainer(undef, $dbName, Db::DB_RDONLY);
        my $context = $mgr->createQueryContext(XmlQueryContext::LiveValues,
            XmlQueryContext::Lazy);
        my $lookup = $mgr->createIndexLookup($db, "", $nodeName,
            "node-$nodeType-equality-$syntax",
            new XmlValue($types{$nodeType}, $value), XmlIndexLookup::GTE);
        my $results = $lookup->execute(undef, $context);
        my $val = new XmlValue();
        while ($results->next($val)) {
            print $val->asString, "\n";
        }
    };
    if (my $e = catch std::exception) {
        die $e->what() . "\n";
    }
    The process size just grows until the system limit is reached, then the process quits saying 'Out of memory'.
    I suspect the problem is with the std::string result returned by C++ XmlValue::asString() const.
    The (left-hand-side) result string is likely allocated by new std::string and receives the value by calling the string copy operator. Then the Perl scalar result is prepared, but when it gets returned to my code, the C++ string is not deleted.
    Moving the Sleepycat::XmlValue Perl object inside the loop does not help either:
    while ($results->hasNext()) {
        my $val = new XmlValue();
        $results->next($val);
        print $val->asString, "\n";
    }
    In fact, the process seems to grow faster, possibly because the old $val instances do not get destroyed by Perl at the end of the loop. Where is Perl's garbage collection?
    I am using DB XML version: 2.2.13; BDB version: 4.4.20.2; OS: FreeBSD 6-STABLE. However the problem seems to be common for any OS or BDB XML version as it involves Perl-to-C++ interface.
    Has anyone experienced similar problems?
    Thanks,
    Konstantin.
    Konstantin @ Chuguev.com

    Good catch - you found a memory leak. Luckily the fix is very straightforward. Edit the file
    dbxml/src/perl/common.h
    and find this line
    #define newSVfromString(str) newSVpvn(str.c_str(), str.length())
    Change it to this
    #define newSVfromString(str) sv_2mortal(newSVpvn(str.c_str(), str.length()))
    and recompile the module.
    Paul
