Advice needed on large volume of data coming from an RFC.

Hi Experts,
Using NWDS 7.0.18, EP 7.00 SPS 18
I am calling an RFC to bring a table back to the front end, and then I build a tree hierarchy out of it. Everything was fine with small numbers of records (100-200).
But in real life we will have approx 300,000 records coming in. As a test we tried it with just 50,000, but the application wasn't able to cope: I just got the Web Dynpro spinning wheel for almost an hour and then nothing!
Can anyone provide me with some advice on how this can be resolved? Is there a way to preload the data before the user accesses the app? Can I load small sections of data at a time?
Thanks in advance.
Marshall.

Hi,
My guess is that most of the time is spent in the recursive call and the inner for loop on the UI side.
Put timing statements around both the BAPI call and the logic on the WD side, and calculate the time taken like this:
long before = System.currentTimeMillis();
// ... code for model execution, e.g. getDatafromBackend() ...
long after = System.currentTimeMillis();
long totalTimeTaken = after - before;
Do the same around your recursive logic. Using these checks you can find out where the issue is.
Note: if you are facing the issue at 50,000 records, then check with 40,000 first to see how the time grows.
As per my analysis, if you are calling the method recursively over 50,000 records with an inner for loop over the same table, the loop body will execute on the order of 50,000 * 50,000 = 2.5 billion times, and that is a minimum: one parent can have many children, and those children can have children again, so the number increases drastically depending on the parent/child relationships in the data.
Better not to use recursion for this many records. Don't populate all the records at the beginning; populate the children only when the user expands the parent, as in the sketch below.
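A minimal, self-contained Java sketch of that idea (class and method names here are hypothetical, not the real Web Dynpro API): index the RFC rows by parent key in one pass, then fetch children with a map lookup from the expand handler instead of rescanning the table recursively.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: one linear pass builds a parent -> children index,
// so expanding a node is a map lookup, not a recursive scan of all rows.
public class LazyTreeIndex {

    public static class Row {
        final String key;
        final String parentKey; // null for root rows
        public Row(String key, String parentKey) {
            this.key = key;
            this.parentKey = parentKey;
        }
    }

    private final Map<String, List<Row>> childrenByParent =
        new HashMap<String, List<Row>>();

    public LazyTreeIndex(List<Row> rfcRows) {
        for (Row r : rfcRows) { // O(n), done once after the RFC call
            List<Row> siblings = childrenByParent.get(r.parentKey);
            if (siblings == null) {
                siblings = new ArrayList<Row>();
                childrenByParent.put(r.parentKey, siblings);
            }
            siblings.add(r);
        }
    }

    // Call this from the tree's expand/onLoadChildren action handler and
    // add only these rows to the context node.
    public List<Row> childrenOf(String parentKey) {
        List<Row> children = childrenByParent.get(parentKey);
        return children != null ? children : new ArrayList<Row>();
    }
}

With this, the initial render creates only the root nodes and each expand adds one level, so nothing is ever traversed twice.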
Regards,
Charan

Similar Messages

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table, i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of tablespace etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists of strings of characters, which we treat as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc to the corresponding variable definition (for validation etc) at runtime.
    CASE_ID VARCHAR2(13)
    COL001 VARCHAR2(10)
    ...
    COL250 VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris
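    For readers outside PL/SQL, here is a minimal JDBC sketch of the same short-fat to long-thin pivot (connection details, table names and the VALID status are placeholders; the thread's real implementation does this inside Oracle with dynamic SQL and BULK COLLECT precisely to avoid client-side loops like this one):

    import java.sql.*;

    // Illustrative only: pivot (CASE_NUM_ID, COL001..COL250) rows into
    // (CASE_NUM_ID, VARIABLE_ID, VARIABLE_VALUE, STATUS) rows.
    public class ShortFatToLongThin {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//host:1521/db", "user", "pw");
                 Statement st = con.createStatement();
                 PreparedStatement ins = con.prepareStatement(
                     "INSERT INTO long_thin (case_num_id, variable_id, variable_value, status) "
                     + "VALUES (?, ?, ?, ?)")) {
                ResultSet rs = st.executeQuery("SELECT * FROM short_fat_1");
                int cols = rs.getMetaData().getColumnCount();
                while (rs.next()) {
                    long caseId = rs.getLong("CASE_NUM_ID");
                    for (int c = 2; c <= cols; c++) { // column 1 is the case key
                        String value = rs.getString(c);
                        if (value == null) continue; // sparse values are skipped
                        ins.setLong(1, caseId);
                        ins.setInt(2, c - 1);        // stand-in for the runtime variable mapping
                        ins.setString(3, value);
                        ins.setString(4, "VALID");   // stand-in for per-variable validation
                        ins.addBatch();
                    }
                }
                ins.executeBatch();
            }
        }
    }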

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with these issues, particularly when a large volume of data is involved.  I'm also looking for information on how to load large volumes of data into the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues.  The main point is that there will be no shortcut for major schema and index changes.  You will need at least 120% free space to create a clustered index and facilitate
    major schema changes.
    I suggest an incremental approach to address your biggest pain points.  You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process which require full scans of the 650 million row table.  Perhaps
    some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using?  You'll have more options with Enterprise (partitioning, row/page compression). 
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column.  Then create a new table (using SELECT INTO) that has strongly
    typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one.  You can follow up later to address column data corrections and/or transformations.
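    A minimal JDBC sketch of that conformance count (table and column names are hypothetical, and TRY_CONVERT requires SQL Server 2012 or later):

    import java.sql.*;

    // Counts how many values in a varchar column survive conversion to a
    // target type; TRY_CONVERT returns NULL on failure and COUNT(expr)
    // skips NULLs, so "convertible" = rows whose value converts cleanly.
    public class TypeConformanceCheck {
        public static void main(String[] args) throws SQLException {
            String sql =
                "SELECT COUNT(*) AS total, "
                + "COUNT(TRY_CONVERT(int, Col001)) AS convertible "
                + "FROM dbo.BigStagingTable"; // hypothetical table/column
            try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://host;databaseName=db;user=u;password=p");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                if (rs.next()) {
                    long total = rs.getLong("total");
                    long ok = rs.getLong("convertible");
                    System.out.println("total=" + total + " convertible=" + ok
                        + " nonconforming=" + (total - ok));
                }
            }
        }
    }

    Note that NULL source values also land in the nonconforming count here; add a WHERE Col001 IS NOT NULL if that distinction matters.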
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Retrieve SQL from Webi report and process it for large volumes of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'Extracts'. But the requirement is that business users want to build their own 'Ad hoc Extracts'. The only way I can think of to achieve this is: build a universe, create the query, save the report and do not run it, then write a RAS SDK program to retrieve the SQL code from the report, save it into a .txt file and process it directly in Teradata.
    Is there any predefined solution available in SAP BO, or any other tool, for this kind of scenario?

    Hi Shawn,
    Is there a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information, or even a direction where I can find information, would be helpful.
    Thanks in advance.
    Ashesh

  • How to store the data coming from network analyser into a text or excel file

    Hi everyone,
    I'm using an Agilent 8719ET network analyser and wish to store the data coming from the network analyser in a text file / Excel file.
    Presently I'm able to get the data on a LabVIEW graph using GPIB. Can anyone suggest how to go ahead after the Collect Data sub-VI? How can the data be stored in a file, apart from being shown on the graph?
    Attached is the vi for kind consideration...
    Looking for help
    Regards
    Rohit
    Attachments:
    Agilent 87XX Series Exceed Max Meas.vi 43 KB

    First let me say that your code really looks pretty good. The data handling could be made more efficient by calculating the number of datapoints that are going to be in the completed dataset and preallocating the entire array -- but depending upon your answer to my questions, the logic in the lower shift register may be going away - so we won't worry about that right now.
    The thing I need to know before addressing the data storage question is: each time you call "Collect and Display Data.vi", how many elements are in the array? Are you reading single data points, or a group of data? (BTW: if the answer to that question is obvious based on the way the other VIs are set up, I don't have the drivers so I can't tell what the setup values are.) Second, how fast does the loop iterate? Are we talking msec per loop? Seconds? Fortnights?
    The issues here are two-fold: how much data? and how fast is it coming? The answer to these will tell you how to save the data.
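    The preallocation point, in general-purpose terms (a Java sketch with a made-up readSample(); the LabVIEW equivalent is Initialize Array up front plus Replace Array Subset in the loop):

    // Growing an array sample-by-sample reallocates repeatedly; allocating
    // once and writing by index avoids that cost when the final size is known.
    public class Preallocate {
        public static void main(String[] args) {
            int totalPoints = 100000;                 // known before acquisition
            double[] data = new double[totalPoints];  // allocate once
            for (int i = 0; i < totalPoints; i++) {
                data[i] = readSample();               // stand-in for the GPIB read
            }
            System.out.println("collected " + data.length + " points");
        }
        private static double readSample() { return Math.random(); }
    }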
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • How to insert data coming from 2 different file adapters into one DB adapter

    Hi
    I want to insert data into a database containing two different tables, so I imported the tables into a DB adapter by creating relationships. But the data for the two tables is in XML format, in two different locations, so I used 2 file adapters to get the data and a BPEL (Define Service Later) service. In BPEL I used a receive activity to receive the first file adapter's data (with create instance checked), then a transform activity, and finally an invoke activity to invoke the DB adapter. I then repeated this sequentially for the 2nd file adapter, placing the 2nd receive (create instance unchecked) after the invoke. The problem is that after deployment, only the data coming from the 1st receive is inserted into its table; the 2nd receive is not working, showing as Pending and as an asynchronous callback in the console.
    I configured all the adapters correctly. Can anyone help me get the 2nd receive to commit its data into the 2nd table?
    Regards,
    jay

    Thank you both for your replies.
    I am doing this in 11g; there is no problem regarding the transform activity.
    My requirement is:
    Two different files come from two different folders on a drive. We can't use one file adapter because the files have different columns (only a few are common), so we use two different XSDs. So I am using two file adapters, and one DB adapter to insert (both tables are in the same database, with relationships), with a BPEL (define service later) process.
    Now please suggest the flow in BPEL to insert both files into their respective tables in the database.
    The flow I built is: 1st file adapter ---> receive ---> transform ---> invoke ---> DB adapter, then the same again, keeping the 2nd receive below the 1st invoke:
    2nd file adapter ---> 2nd receive ---> 2nd invoke ---> same DB adapter
    My problem is that only the data coming from the 1st process is inserted; the 2nd one is not working, as discussed earlier. I used the Read File option, unchecked the Delete Files option, and set different polling frequencies for both file adapters.
    I tried to set up correlation but it is not working. I later set non-blocking invoke to TRUE on the DB adapter, which also didn't work. I also tried the transaction property in the BPEL component, <property name="bpel.config.transaction" many="false" type="xs:string">required</property> (and requiresNew), but no change.
    Regards,
    jay

  • How can I use an AO card (PCI 6723) to output data and to trigger an AI card (PCI 6254) to acquire the data coming from the AO card?

    Hello
    I am trying to perform AO (PCI 6723) and to trigger my AI card (PCI 6254) to read the data coming from the AO card. I am using LV 7.1 on Win2000.
    Is an RTSI cable necessary or I can connect the trigger signals externally ?
    I am using the LV example "multi function - synch ai-ao.vi" but I can't for some reason configure the trigger lines.
    thank you in advance for your time.
    Yiannis

    Hello Yiannis,
    If I understand you correctly, you want to synchronize your analog input and analog output, started by a trigger on the analog input board. If you don't want to use the RTSI lines, your best bet is to export the AI Sample Clock and then read it into the AO board. There is an example on ni.com called DAQmx - Synchronized AIAO Shared Clock. It appears to be having technical issues, so I have attached the example below. To export ai/sampleclock, use DAQmx Export Signal.vi after Get Terminal Name with Device Prefix.vi. Export the Sample Clock to PFI3 (there are only connections between the sample clock and PFI3/4/8/9). Connect PFI3 on your AI board to PFI0 on your AO board (with a wire). Then change the source for the AO Timing.vi to Devx/PFI0. If you want to do triggering, stick the DAQmx Trigger.vi between the Timing property node and the DAQmx Start.vi on the AI task. I have shown how to do this in the modified version below. Please take a look at it and let me know if you have any questions. If you still get an error, please take a screenshot of it and post it to the forum. Have a great day!
    Sincerely,
    Marni S.
    Attachments:
    Synchronized_AIAO_Shared_Clock[Modified].vi 140 KB

  • In the NI Library VI: Extract Numbers, the string control is automatically loaded each time the VI is opened with "Counting to five: one 2 three 4.0 five." Where is this string data coming from?

    Even after deleting the string "Counting to five: one 2 three 4. five." from the string control and replacing it with another string, the original string returns after the VI is saved and then reopened. Where is this string data coming from? I've attached a copy of the library function. In my application I've been able to get around the problem by replacing the string control with a string constant. But I'm still curious as to what's going on.
    Thanks,
    Chuck
    Attachments:
    Extract Numbers Test.vi 9 KB

    Chuck,
    The string control has been set to default with the string you are seeing.  To change this, enter the new string, right-click the control and select
    Data Operations>>Make Current Value Default
    Now save your VI.

  • Reading data coming from a port

    How can I read the data coming from a webcam?

    Reading from a parallel port. But even if you know how to read from a serial port, that would be helpful.

  • Fill a table with data coming from an RFC

    Hello everyone:
    I've followed the Weblog "How many lines of java code did i write for a simple Web Dynpro?"
    /people/durairaj.athavanraja/blog/2004/10/17/how-many-lines-of-java-code-did-i-write-for-a-simple-web-dynpro
    I've called an RFC and created a table with data coming from it (which is also a table). My question is: in this table there's a field named "UserType", with two possible values for this field:
    "userA"
    "userB"
    How can I get the table to show me only the "userA" records? The RFC does return all of the users, but when filling the table, can I put an if-else somewhere in my code?
    Thanks a lot
    Alejandro

    Hi Alejandro,
    Referring to the link provided: "The logic of the filter process is not implemented in Web Dynpro. The application developer must implement the action to be executed."
    We would have to implement the onFilter action in the controller implementation. Ideally, we fill the data retrieved from the backend into a List (java.util.List) (this could be done on init of the view) and then subset the list according to the criteria in the action handler, say
    onActionFilterData(com.sap.tc.webdynpro.progmodel.api.IWDCustomEvent wdEvent).
    Having done this, you may bind the output list back to the node shown in the table, as in the sketch below.
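    A minimal sketch of that subset step (the record type and field names are made up for illustration; in a real controller the returned list would be re-bound to the table's context node):

    import java.util.ArrayList;
    import java.util.List;

    // Keep the full backend result in a List, bind only the matching subset.
    public class UserFilter {

        public static class UserRecord {
            final String name;
            final String userType;
            public UserRecord(String name, String userType) {
                this.name = name;
                this.userType = userType;
            }
        }

        // Called from the action handler, e.g. onActionFilterData(...).
        public static List<UserRecord> filterByType(List<UserRecord> all, String type) {
            List<UserRecord> subset = new ArrayList<UserRecord>();
            for (UserRecord u : all) {
                if (type.equals(u.userType)) { // e.g. keep only "userA" records
                    subset.add(u);
                }
            }
            return subset; // bind this list back to the table's context node
        }
    }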
    Regards,
    Chaitanya

  • Data deleted in ODS, data coming from the data source : 2LIS_11_VASCL

    Friends,
    I have a situation:
    Data was deleted in an ODS whose data comes from the data source 2LIS_11_VASCL.
    For the above ODS and data source, when trying to delete a single request, the whole data got deleted.
    All the data was deleted in the foreground; no background job was generated for this.
    I am really worried about this issue. Can you please tell me the possible causes?
    Many Thanks
    VSM

    Hi,
    I suppose you want to know the possibility of getting the data back.
    If the entire data has been deleted, you can reload the data from the source system.
    Load the setup table for your application, then carry out an init request.
    Please note that you will have to take a transaction-free period for carrying out this activity so that no data is missed.
    Once this is done, the delta queues will again start filling up.

  • What is the best way to extract large volume of data from a BW InfoCube?

    Hello experts,
    Wondering if someone can suggest the best method available in SAP BI 7.0 to extract a large amount of data (approx 70 million records) from an InfoCube.  I've tried OpenHub and APD, but they are not working; I always need to separate the extracts into small datasets.  Any advice is greatly appreciated.
    Thanks,
    David

    Hi David,
    We had the same issue, but that was loading from an ODS to a cube with over 50 million records. I think there is no option like parallel loading using DTPs. As suggested earlier in the forum, the best option is to split according to the calendar year or fiscal year.
    But remember that even with this criterion, some calendar years may still have a lot of data, and that again becomes a problem.
    What I can suggest is: apart from just the calendar/fiscal year, also include some other selection criteria like company code or sales org.
    Yes, you will end up loading more requests, but the data loads will go smoothly with smaller volumes.
    Regards
    BN

  • Performance Issue in Large volume of data in report

    Hi,
    I have a report that processes a large amount of data, but it takes too long to fill the final ALV table. Currently I'm using this logic:
    Select ....
    Select for all entries...
    Loop at table into workarea...
    read table2 where key = workarea-key binary search.
    modify table.
    read table2 where key = workarea-key binary search.
    modify table.
    endloop.
    Currently I select all the data I need (only the necessary fields), create a big loop, and read the other table to fill the fields in the final table.

    Hi ,
    You can use field symbols instead of work area.
    If you use field symbols there is no need of modify statement.
    Here are two equivalent code:
    1) using work areas :
    types: begin of lty_example,
    col1 type char1,
    col2 type char1,
    col3 type char1,
    end of lty_example.
    data:lt_example type standard table of lty_example,
           lwa_example type lty_example.
    field-symbols : <lfs_example> type lty_example.
    Suppose you have the following information in your internal table:
    col1 col2 col3
    1      1    1
    1      2    2
    2      3    4
    Now you may use the modify statement using work areas
    loop at lt_example into lwa_example.
    lwa_example-col2 = '9'.
    modify lt_example index sy-tabix from lwa_example transporting col2.
    endloop.
    or better, using field symbols:
    loop at lt_example assigning <lfs_example>.
    <lfs_example>-col2 = '9'.
    * here there is no need of a modify statement.
    endloop.
    The code using field symbols is about 10 times faster than using work areas and a modify statement.

  • Data coming from Three Applications and needs to post into SAP

    Hi all,
    I have a scenario like this:
    XI needs to take data from 3 applications, do some validation such as comparing the common records, and post the data common to all three to SAP.
    Application1 will send a text file, Application2 will send a CSV file and Application3 will send an XML file.
    Ex:
    Application1:
    Emp No
    Emp Name
    Sal
    Location
    Application2:
    Emp No
    Emp Name
    Desgination
    Application3:
    Emp No
    Emp name
    Designation
    Location
    Now the Target data should be
    Emp No
    Emp Name
    Designation
    That is, the target should contain only the records common to all 3 applications.
    Regards

    Hi vamsi,
    As the experts mentioned above, BPM is the way to achieve your business requirement. When you use BPM there is a step that waits until all 3 messages have come into XI before processing further. Refer to the following links and choose the one that is useful to you.
    Check this link for more information...
    http://help.sap.com/saphelp_nw04/helpdata/en/0e/56373f7853494fe10000000a114084/content.htm
    take a look at this blog..
    /people/alexander.bundschuh/blog/2006/01/04/scheduling-messages-in-sap-xi
    In your case, it is not mandatory to create a communication channel for each step if the input files are the same and come from the same system. Otherwise you obviously need a different communication channel for each different file; moreover, you mentioned that the files come from 3 different applications and should be merged, so 3 communication channels are required.
    Refer the following link too to get better understanding on message merge
    Re: BPM - Message merge
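    The comparison step itself is simple once the three payloads are parsed; a minimal Java sketch of keeping only the employees present in all three feeds (map names and sample data are hypothetical):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Keep only Emp Nos present in all three feeds, then emit the common
    // target fields (Emp No, Emp Name, Designation).
    public class CommonRecords {
        public static void main(String[] args) {
            Map<String, String> namesApp1 = new HashMap<String, String>(); // EmpNo -> EmpName
            Map<String, String> desigApp2 = new HashMap<String, String>(); // EmpNo -> Designation
            Map<String, String> desigApp3 = new HashMap<String, String>(); // EmpNo -> Designation
            namesApp1.put("1001", "Smith");  namesApp1.put("1002", "Jones");
            desigApp2.put("1001", "Clerk");  desigApp3.put("1001", "Clerk");

            Set<String> common = new HashSet<String>(namesApp1.keySet());
            common.retainAll(desigApp2.keySet()); // intersect with Application2
            common.retainAll(desigApp3.keySet()); // intersect with Application3

            for (String empNo : common) {         // only 1001 survives here
                System.out.println(empNo + " " + namesApp1.get(empNo)
                    + " " + desigApp2.get(empNo));
            }
        }
    }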
    All the best,
    Ram.

  • Populate large volume of data in the quickest way.

    In Oracle 9i, what is the best tool to populate a large amount of data in the minimum amount of time?
    I heard that SQL*Loader direct path loading could work. Are there any other tools available? What I need to do is populate a large set of dummy data to stress test the performance of the database system.
    Any suggestion will help!
    Thank you very much.

    Hi,
    There are various options provided by Oracle, like external tables, SQL*Loader etc.
    SQL*Loader is the fastest of the lot. You may refer to the docs for full help on SQL*Loader.
    Regards.
