Web Interfaces for millions of records - Transactional InfoCube

Hi Gerd,
Could you please suggest which one I should use when I'm dealing with millions of records - a large amount of data?
(Displaying data from planning folders or the Web Interface Builder.)
Right now I'm using the Web Interface Builder for planning where the user is allowed to enter values - for millions of records, e.g. revenue forecast planning on sales orders.
Thanks in advance for your time,
Saritha.

Hello Saritha,
Well - technically there is no big difference whether you are using Web interfaces or planning folders. All data has to be selected from the database, processed by the BPS, transmitted to the PC, and displayed there. So both front ends should have roughly the same speed.
Sorry, but one question - is it really necessary to work with millions of data records online? The philosophy of the BPS is that you should limit the number of records you use online as much as possible - it should be an amount the user can also handle online, i.e. manually working with every record (which is probably not possible with a million records). If a large number of records has to be calculated or manipulated, this should be done in a batch job, i.e. a planning sequence that runs in the background. This prevents the system from terminating the operation due to a long run time (the usual time until a time-out occurs for an online transaction is about 20 minutes) and also gives you more options for controlling memory use or parallelizing processes (see note 645454).
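For illustration, below is a minimal ABAP sketch of scheduling such a calculation as a background job. JOB_OPEN / JOB_CLOSE and the SUBMIT ... VIA JOB pattern are standard; the program name UPC_BUNDLE_EXECUTE and its selection parameter for the planning sequence are assumptions you should verify in your BPS release.

* Sketch: run a BPS planning sequence as a background job instead of online.
DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'BPS_PLAN_SEQ',
      lv_jobcount TYPE tbtcjob-jobcount.

* Open a new background job
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname          = lv_jobname
  IMPORTING
    jobcount         = lv_jobcount
  EXCEPTIONS
    cant_create_job  = 1
    invalid_job_data = 2
    jobname_missing  = 3
    OTHERS           = 4.
IF sy-subrc <> 0.
  MESSAGE 'Could not create background job' TYPE 'E'.
ENDIF.

* Put the planning sequence report into the job
* (program and parameter names are assumptions - check your release)
SUBMIT upc_bundle_execute
  WITH p_seq = 'ZREV_FORECAST'          "hypothetical planning sequence
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

* Close the job and release it for immediate execution
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'
  EXCEPTIONS
    OTHERS    = 1.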
Best regards,
Gerd Schoeffl
NetWeaver RIG BI

Similar Messages

  • Row to Column Transformation for Millions of records

    Hi Members,
    I need to transform data from two staging tables, which hold data in a PIM structure (data stored as separate records), into a target table with a flat structure (data stored in a single record). One of the staging tables has a volume of 45M records and the other has 5M records. The challenge I am seeing here is transforming such huge data into a single table with reasonable performance. What would be the ideal way to transform this much data? Also, can we run multiple programs at the same time to finish the transformation of such a huge data load more quickly?
    Just to add: my Oracle version is 10g.
    Thanks

    Still no version number, still no code, and no explain plan report.
    Is there a better, faster way to do it? I don't know - how could I or anyone else, when you've not told us what you are doing beyond the level of "my car won't start, tell me why?"
    This should help you understand the issue from my keyboard.
    http://www.builderau.com.au/strategy/developmentprocess/soa/How-developers-should-ask-for-help/0,339028278,339299138,00.htm

  • Transaction used for maintaining condition record for contract

    Hi All,
    In one of our issues the user is not able to print a contract; we have noticed that a default printer is set for the user. I need to check what condition record is maintained for the user.
    Can anyone let me know which transaction is used for maintaining condition records for contract output?
    Regards,
    Dharmesh

    Hi
    MN01 Create Message: RFQ
    MN04 Create Message: PO
    MN07 Create Message: Outline Agreement
    MN10 Create Message: Schd. Agmt. Schedule
    MN13 Create Message: Service Entry Sheet
    MN21 Create Condition: Inventory Mgmt
    MN24 Create Message: Shipping Notif.
    MN27 Create message: rough goods receipt
    MN10 / MN11 - Scheduling Agreement Delivery Schedule
    Vijay

  • Easy way to copy transactional InfoCubes

    Hi Experts!
    Does anybody know a quick and easy way to copy a transactional InfoCube?
    (I need the data as well.)
    Thank you in advance

    In terms of copying all data from one cube to another, I would create an export DataSource and schedule a load from one cube to the other.
    If it is in an IP environment, I would write FOX code or use the standard copy function to do the same, depending on the complexity.

  • FM to read yellow status requests from transactional InfoCube

    Hi,
    I am using the FM RSDRI_INFOPROV_READ to read data from a transactional InfoCube. My problem is that my code using this FM should be able to read data from requests that still have yellow status in the transactional InfoCube.
    My code using this FM is able to read data from the green requests only, but not the yellow requests. I was told that in BPS the yellow request turns green automatically after 50K records have been queued up in it.
    Has anyone come across this, and can you help me solve this issue of reading yellow request data with FM RSDRI_INFOPROV_READ? Is there another FM to read both the green and yellow requests?
    Please advise. Thanks
    RT

    Hi Rob,
    The function module you mention is correct, and it will read the yellow requests. Please try this: by default the parameter i_rollup_only is set to true; set it to false (blank) when calling the function module.
    Please see the code below.
    " Read green AND yellow requests from the transactional InfoCube:
    " i_rollup_only = rs_c_false also includes the open (yellow) request.
    CALL FUNCTION 'RSDRI_INFOPROV_READ'
      EXPORTING
        i_infoprov             = p_infocube        " transactional InfoCube
        i_th_sfc               = t_sfc             " characteristics to read
        i_th_sfk               = t_sfk             " key figures to read
        i_t_range              = lt_range          " selection restrictions
        i_packagesize          = p_pksize          " records per package
        i_rollup_only          = rs_c_false        " read yellow requests too
      IMPORTING
        e_t_data               = <fsl_data_read>   " result package
        e_end_of_data          = w_end_of_data     " 'X' after last package
      CHANGING
        c_first_call           = l_first_call      " 'X' only on first call
      EXCEPTIONS
        illegal_input          = 1
        illegal_input_sfc      = 2
        illegal_input_sfk      = 3
        illegal_input_range    = 4
        illegal_input_tablesel = 5
        no_authorization       = 6
        ncum_not_supported     = 7
        illegal_download       = 8
        illegal_tablename      = 9
        trans_no_write_mode    = 10
        inherited_error        = 11
        x_message              = 12
        OTHERS                 = 13.
    Hope this was helpful.
    thanks

  • Error in transactional InfoCube

    Hi all,
    We are using a transactional InfoCube for plan data in BPS. The user writes the data through a program directly into the InfoCube, and from BPS we send the data to R/3. For the current year 2007 each record has been entered twice. Can you please let me know how to delete the duplicate records? Everything was kept in a single request, so we cannot simply delete a request. Can you please suggest a solution for deleting the duplicates?
    regards
    phani

    Hi Dinesh,
    Thanks for the reply. In my case selective deletion will not work because, for example, we are using cost center, G/L account and amount, and every cost center / G/L account / amount combination has been populated twice. It is a BPS transactional cube and the user enters the transactions manually through a program. The problem has arisen for the 2007 plan data; up to 2006 it was coming in fine. So can you tell me how to remove each duplicate record? At the moment I am managing it by going into the cube contents and deleting each record for each cost center, as it is populated twice. Please let me know how to solve this.
    regards
    phani

  • Database table with potentially millions of records

    Hello,
    We want to keep track of users' transaction history from the performance database. The workload statistics contain the user transaction history information; however, since the workload performance statistics are intended for temporary purposes and data from these tables is deleted every few months, we lose all the users' historical records.
    We want to keep track of the following in a table that we can query later:
    User ID      - Length 12
    Transaction  - Length 20
    Date         - Length 8
    With over 20,000 end users in production this can translate into thousands of records to be inserted into this table daily.
    What is the best way to store this type of information? Is there a specific table type designed for storing massive quantities of data? Also, over time (a few years) this table can grow to millions or hundreds of millions of records. How can we manage that in terms of performance and storage space?
    If anyone has worked with database tables with very large numbers of records and would like to share their experience, please let us know how we could/should structure this function in our environment.
    Best Regards.

    Hi SS
    Alternatively, you can use a cluster table. For more help, refer to the F1 help on the "IMPORT FROM / EXPORT TO DATABASE" statements.
    Or you can store the data as a file on the application server using the "OPEN DATASET, TRANSFER, CLOSE DATASET" statements.
    You can also choose to archive data older than some cut-off date.
    You can also mix these alternatives for recent and archived data.
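    As a rough illustration of the cluster-table idea, here is a minimal ABAP sketch that buffers one day's history records and exports them to the standard cluster table INDX under a date-based key. The structure ty_hist, the area 'ZH' and the key prefix are assumptions for illustration only; in practice you would usually create your own INDX-like table, but the statement syntax is the same.

    " Sketch: store one day's user/transaction history as a data cluster.
    TYPES: BEGIN OF ty_hist,
             userid TYPE c LENGTH 12,
             tcode  TYPE c LENGTH 20,
             datum  TYPE d,
           END OF ty_hist.

    DATA: lt_hist TYPE STANDARD TABLE OF ty_hist,
          lv_key  TYPE indx-srtfd.

    " One cluster record per day keeps each cluster reasonably small.
    CONCATENATE 'USRHIST_' sy-datum INTO lv_key.

    " Write the day's records to cluster table INDX, area ZH.
    EXPORT hist = lt_hist TO DATABASE indx(zh) ID lv_key.

    " Later, read a given day back for reporting.
    IMPORT hist = lt_hist FROM DATABASE indx(zh) ID lv_key.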
    --Serdar

  • Best way to insert millions of records into the table

    Hi,
    From a performance point of view, I am looking for suggestions on the best way to insert millions of records into a table.
    Please also guide me on how to implement this in a simple way with good performance.
    Thanks,
    Orahar.

    Orahar wrote:
    It's distributed data. N number of clients fetch transaction data from the database based on different conditions and insert it into another transaction table, like a batch process.
    Sounds contradictory.
    If the source data is already in the database, it is centralised.
    In that case you ideally do not want the overhead of shipping that data to a client, the client processing it, and the client shipping the results back to the database to be stored (inserted).
    It is much faster and more scalable for the client to instruct the database (via a stored proc or package) what to do, and for that code (running on the database) to process the data.
    For a stored proc the same principle applies. It is faster for it to instruct the SQL engine what to do (via an INSERT..SELECT statement) than to pull the data from the SQL engine using a cursor fetch loop and then push that data back to the SQL engine using insert statements.
    An INSERT..SELECT can also be done as a direct path insert. This introduces some limitations, but is faster than a normal insert.
    If the data processing is too complex for an INSERT..SELECT, then pulling the data into PL/SQL, processing it there, and pushing it back into the database is the next best option. This should be done using bulk processing though in order to optimise the data transfer process between the PL/SQL and SQL engines.
    Other performance considerations are the constraints on the insert table, the triggers, the indexes and so on. Make sure that data integrity is guaranteed (e.g. via PKs and FKs) and optimal (e.g. FK columns should be indexed). Using triggers may not be the best approach (for example, using a trigger to assign a sequence value when that can be done faster in the insert SQL itself). Personally, I avoid triggers - I would rather have that code residing in a PL/SQL API for manipulating data in that table.
    The type of table also plays a role. Make sure that the decision about the table structure, hashed, indexed, partitioned, etc, is the optimal one for the data structure that is to reside in that table.

  • Virtual InfoProvider with transactional InfoCube DataSource?

    Hi Gurus,
    I need to use a Virtual InfoProvider which reads data from a transactional InfoCube.
    The Virtual InfoProvider does not read the yellow request in the transactional cube. Is there a way to read this yellow request too? When I display the transactional cube's content I can see the records that belong to the yellow request, so I think the Virtual InfoProvider should see these records as well.
    We cannot change the yellow status to green because the business is continuously entering data into the transactional cube.
    Thanks in advance,
    ergin

    Hi,
    Refer to this link for details about how to read yellow requests or change them to green:
    Re: FM to read yellow status requests from transactional InfoCube
    thnks.

  • Showing master data values in a report when there is no record in the InfoProvider

    Hi,
    We have the following reporting requirement: master data values, e.g. 0Customer values for which no sales have taken place (i.e. there is no record in the InfoCube), should still appear in the report.
    We have defined a MultiProvider using the InfoProvider and the 0Customer InfoObject, but the report seems to show only values from the InfoCube, not the values that exist in the InfoObject but are missing from the InfoProvider.
    Please suggest.
    Thanks & Regards
    Madhu
    00919830957798

    Madhu,
    There is a how-to paper on how to do this. It is available in the BW InfoIndex under MultiProviders and is called "Slow Moving Inventory Scenario".
    https://service.sap.com/~sapidb/011000358700008387152002
    This document details the process of reporting on master data values for which no transactions have been recorded. This is achieved by creating a MultiProvider on a cube and an InfoObject and then using the record count property.
    Cheers
    Aneesh

  • Extracting record / transaction lists into an Excel sheet

    Hi Experts,
    I am facing a problem while extracting data from the CRM Service Process Monitor.
    I tried to extract the data (records) from the executed Service Process Monitor through
    System -> List -> Save -> Local file -> Spreadsheet
    but I am not getting the actual record list in the Excel sheet.
    For example: actual records - 244, but after extracting to the Excel sheet it shows only 199.
    Why is it not extracting the remaining 45 records/transactions?
    Is there a setting for this, or is there another extraction process?
    How can I count the total records without extracting to an Excel sheet?
    I will definitely assign points
    Please help me..
    Regards
    Praveen Khot

    Praveen,
    There is no setting that limits the number of records downloaded...
    Probably there is some problem with the HTML code if it is a web template...
    Cheers,
    HVR.

  • 0FI_AR_4 Initialization - millions of records

    Hi,
    We are planning to initialize the 0FI_AR_4 DataSource, for which there are millions of records available in the source system.
    While checking in the quality system we have realised that extracting the data for just a single fiscal period takes hours, and in the production system we have data for the last 4 years (about 40 million records).
    The trace results (ST05) show that most of the time is spent fetching data from the BKPF_BSID / BKPF_BSAD views.
    I can see an index defined on tables BSID/BSAD - index 5, "Index for BW extraction" - which is not yet created on the database.
    This index has 2 fields - BUKRS & CPUDT.
    I am not sure whether this index will help in extracting the data.
    What can be done to improve the performance of this extraction so that the initialization of 0FI_AR_4 can be completed in a reasonable time?
    Appreciate your inputs experts.
    Regards,
    Vikram.

    We are planning to change the existing FI_AR line item load from a current-fiscal-year full load to delta. As of now FI_AR_4 is loaded full from R/3 for certain company codes and fiscal year/period 2013001 - 2013012. Now the business wants historical data, and going forward the extractor should bring only changes (delta).
    We would like to perform the steps below:
    1. Initialisation w/o data transfer on comp_code and FY/period 1998001 - 9999012
    2. Repair full loads for all the historical data, fiscal year/period by fiscal year/period (1998001-1998012, 1999001-1999012, ... up to the current year 2013001 - 2013011), up to the PSA
    3. Load these to DSO
    4. activate the requests
    5. Now do a delta load from R/3 to BW till PSA for the new selection 1998001-9999012
    6. load till DSO
    7. Activate the load
    Please let me know if the above steps will bring in all the data for FI_AR_4 line items, and that no data will be missing once I do the delta load after the repair full loads.
    Thanks

  • FM for creating Condition Records

    Is there an FM for creating condition records for output types, like transaction VV11 (Create Output - Condition Records: Sales)?

    Hi there,
    An FM for condition records? Why do you want an FM for creating condition records?
    Condition records are master data that has to be created manually as per the business requirement.
    What we can do, though, is: if the data already exists in an external legacy system, we can migrate it into SAP through a BAPI, BDC or LSMW, depending on the volume of data.
    But in other cases condition records are created manually in the SAP system.
    If you did have a function module to create condition records, how would you ensure that it feeds correct data into the condition records? What key combination would the system take, and what data would you want the FM to upload?
    How will you monitor what data is uploaded?
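    That said, if you do go the BDC route, a minimal sketch of the CALL TRANSACTION pattern is shown below. The module pool name, screen number, field name and output type used here are assumptions for illustration - record the real screen sequence for VV11 with transaction SHDB before building on this.

    " Sketch: migrate output condition records via a BDC call to VV11.
    DATA: lt_bdcdata TYPE STANDARD TABLE OF bdcdata,
          ls_bdcdata TYPE bdcdata.

    " First screen of VV11 (program/screen values are assumptions).
    CLEAR ls_bdcdata.
    ls_bdcdata-program  = 'SAPMV13B'.
    ls_bdcdata-dynpro   = '0100'.
    ls_bdcdata-dynbegin = 'X'.
    APPEND ls_bdcdata TO lt_bdcdata.

    " Output type on the initial screen (field name is an assumption).
    CLEAR ls_bdcdata.
    ls_bdcdata-fnam = 'RV13B-KSCHL'.
    ls_bdcdata-fval = 'BA00'.
    APPEND ls_bdcdata TO lt_bdcdata.

    " ... further screens and key-combination fields as recorded in SHDB ...

    " Run in background mode with synchronous update.
    CALL TRANSACTION 'VV11' USING lt_bdcdata MODE 'N' UPDATE 'S'.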
    Regards,
    Sivanand

  • Need help / advice: managing millions of records daily - please help

    Hi all,
    I have only 2 years of experience as an Oracle DBA, and I need advice from the experts. :)
    To begin: the company I work for has decided to save about 40 million records per day in a single table (a user table) in our Oracle database. These records are to be imported daily from CSV or XML feeds into that one table.
    This is a project that needs:
    - A performance study
    - A study of what is required in terms of hardware
    As a market leader, Oracle is the only DBMS that could support this volume of data, but what is Oracle's limit in this case? Can Oracle support and manage 40 million records per day, and for many years? We need all the data in this table; we cannot assume that after some period the history is no longer needed. We have to keep all the data, without ever purging the history, for many years. Imagine: 40 million records per day, for many years!
    Then we need to consolidate different views (or maybe materialized views) from this table for each department and business unit inside the company - another project that needs study!
    My questions are (using Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    1. Can Oracle support and manage 40 million records per day, for many years?
    2. Regarding performance: which solutions and techniques could I use to improve the performance of
    - daily loading of 40 million records from CSV or XML files?
    - daily consolidation / management of the different views / materialized views built on this big table?
    3. What is required in terms of hardware? Which features / technologies (maybe clusters...)?
    I hope the experts can help and advise me! Thank you very much for your attention. :)

    1. Can Oracle support and perfectly manage 40 million records per day, for many years?
    Yes.
    2. Regarding performance - which solutions and techniques could I use to improve it?
    Send me your email and I can send you a performance tuning methodology PDF. You can see my email in my profile.
    - Daily loading of 40 million records from CSV or XML files?
    Use direct-path loading.
    - Daily consolidation / management of different views / materialized views from this big table?
    You can use table partitioning, with one partition for each day.
    Regards,
    Francisco Munoz Alvarez

  • T-code for maintaining condition records in MM pricing

    Hi MM gurus,
    Can anyone let me know the transaction code for maintaining condition records for condition types in MM pricing?
    Regards
    SARFRAZ..

    MEK1, with the appropriate key combination.
