Code optimization for handling large volumes of data

Hi All,
We are facing a problem when executing a report: it takes a very long time to run, and the program is often terminated with the dump "Timeout: Program terminated because of endless loop".
The internal table that has to be looped over contains more than 850,000 (8.5 lakh) records,
and each pass of the loop executes two READ statements and one SELECT statement (unavoidable).
(We have already applied almost all of the usual optimization techniques.)
Please suggest any ideas as to what can be done in such a situation.
Thanks and Regards,
Sushil Hadge.

Hi Martin,
Here is the relevant piece of code:
SELECT bukrs gpart hkont waers
  FROM dfkkop
  INTO TABLE it_dfkkop
  WHERE bukrs = p_bukrs AND bldat IN so_bldat AND hkont IN so_hkont.

SORT it_dfkkop BY gpart.

LOOP AT it_dfkkop INTO wa_dfkkop.
* <Read statement>
* <Read statement>
  ON CHANGE OF wa_dfkkop-gpart.
    SELECT gpart hkont waers betrw
      FROM dfkkop
      INTO TABLE it_subtot
      WHERE hkont = wa_dfkkop-hkont AND gpart = wa_dfkkop-gpart.
    IF it_subtot IS NOT INITIAL.
      LOOP AT it_subtot INTO wa_subtot.
        v_sum = v_sum + wa_subtot-betrw.
      ENDLOOP.
    ENDIF.
  ENDON.
ENDLOOP.
Please suggest if this can be improved in some way (one possible restructuring is sketched below this message).
Thanks ,
Sushil
Edited by: Sushil Hadge on Jun 4, 2008 3:12 PM
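One restructuring that is often suggested for this pattern (a sketch only, not from the original thread: it reuses the names from the post, introduces new names where noted, and assumes the subtotal really depends only on GPART and HKONT, exactly as in the inner SELECT above). The idea is to fetch all subtotals once with an aggregate SELECT before the loop, then read them from a sorted internal table instead of executing one SELECT on DFKKOP per business partner:

* New declarations for the pre-aggregated subtotals (new names, to
* avoid clashing with the post's it_subtot)
TYPES: BEGIN OF ty_subtot,
         gpart TYPE dfkkop-gpart,
         hkont TYPE dfkkop-hkont,
         betrw TYPE dfkkop-betrw,
       END OF ty_subtot.

DATA: it_sum  TYPE SORTED TABLE OF ty_subtot
              WITH NON-UNIQUE KEY gpart hkont,
      wa_sum  TYPE ty_subtot,
      lv_prev TYPE dfkkop-gpart.

* One aggregated database access instead of one SELECT per partner.
* The WHERE clause mirrors the original inner SELECT (no BUKRS/BLDAT
* restriction); tighten it if the subtotals should follow the
* selection screen instead.
SELECT gpart hkont SUM( betrw )
  FROM dfkkop
  INTO TABLE it_sum
  WHERE hkont IN so_hkont
  GROUP BY gpart hkont.

LOOP AT it_dfkkop INTO wa_dfkkop.
* <the two READ statements as before>
  IF wa_dfkkop-gpart <> lv_prev.  " it_dfkkop is sorted by gpart,
    lv_prev = wa_dfkkop-gpart.    " so this replaces ON CHANGE OF
*   One indexed read on the aggregated table replaces the SELECT
*   plus the inner LOOP; v_sum accumulates as in the original.
    READ TABLE it_sum INTO wa_sum
         WITH TABLE KEY gpart = wa_dfkkop-gpart
                        hkont = wa_dfkkop-hkont.
    IF sy-subrc = 0.
      v_sum = v_sum + wa_sum-betrw.
    ENDIF.
  ENDIF.
ENDLOOP.

This removes the per-partner database round trips entirely; the two remaining READ statements should use BINARY SEARCH on sorted tables (or sorted/hashed table keys) so the loop over 850,000 records stays close to linear.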

Similar Messages

  • Retrieve SQL from a WebI report and process it for large volumes of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'Extracts'. But the requirement is that business users want to build their own 'Ad hoc Extracts'. The only way I can think of to achieve this is to build a universe, create the query, save the report and not run it, then write a RAS SDK program to retrieve the SQL code from the report, save it into a .txt file, and process it directly in Teradata.
    Is there any predefined solution available with SAP BO, or any other tool, for this kind of scenario?

    Hi Shawn,
    Do we have a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information, or even a direction where I can find information, would be helpful.
    Thanks in advance.
    Ashesh

  • Ways to handle large volumes of data (file size = 60 MB) in PI 7.0 file-to-file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up by FCC and then sent to XI. In XI it goes through message mapping and then an XSL transformation, in sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file is larger than 60 MB, XI shows lots of problems, like (1) JCo call errors, or (2) sometimes XI even stops and we have to start it manually again for it to function properly.
    Please suggest a way to handle large volumes (file sizes up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file is processed individually in a target system, you could split your source file into several messages by setting the Recordsets per Message parameter.
    However, you just want to convert your .txt file into an .xml file, so first try setting the EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    This may solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine, I mean the JCo call error...
    Take into account that the file is first processed in the Adapter Engine (File Content Conversion and so on) and then sent to the pipeline in the Integration Engine.
    Carlos

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table, i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists of strings of characters, which we treat as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
    CASE_ID VARCHAR2(13)
    COL001 VARCHAR2(10)
    ...
    COL250 VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory-level training in writing SQL queries and been turned loose with SQL Server Management Studio to develop and run queries to "mine" several databases that have been created for their use. The database design (if you can call it that) is absolutely horrible. All of the data, which we receive at defined intervals from our clients, is typically dumped into a single table consisting of 200+ varchar(x) fields. There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example, one table contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which add another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data. I have been asked to "fix" the problems.
    My experience is primarily in applications development. I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design with such a large volume of data. We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for recommendations on how best to implement changes to the table(s) housing such a large volume of data. In the short term I need to be able to perform a certain amount of data analysis so I can determine the proper data types for the fields (and whether any existing data would cause a problem when converting to the new data types), so I need to know what can be done to make such analysis possible without the process consuming entire days to analyze the data in one or two fields.
    I'm also looking for reference materials / information on how to deal with these issues, particularly when a large volume of data is involved, and on how to load large volumes of data into the database (current processing of a typical data file takes 10-12 hours to load 300,000 records). Any guidance that can be provided is appreciated. If more specific information is needed, I'll be happy to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues. The main point is that there is no shortcut for major schema and index changes. You will need at least 120% free space to create a clustered index and facilitate major schema changes.
    I suggest an incremental approach to address your biggest pain points. You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650-million-row table. Perhaps some indexes targeted at improving that process are a good first step.
    What SQL Server version and edition are you using? You'll have more options with Enterprise (partitioning, row/page compression).
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column. Then create a new table (using SELECT INTO) that has strongly typed columns for the columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Looking for ideas for transferring large amounts of data between systems

    Hello,
    I am looking for ideas based on best practices for transferring large amounts of data in and out of a NetWeaver-based application.
    We have a new system we are developing in NetWeaver that will utilize both the Java and ABAP stacks, and it will require integration with other SAP and 3rd-party systems. It is a standalone product that doesn't share any form of data store with other systems.
    We need to be able to support tens of millions of records of tabular data coming in and out of our system.
    Since we need to integrate with so many different systems, we are planning to use RFC as our primary interface in and out of the system. As it turns out, RFC is not good at dealing with such a large amount of data being pushed through a single call.
    We have considered a number of possible ideas; however, we are not very happy with any of them. I would like to see what the community has done in the past to solve problems like this, as well as how SAP currently solves this problem in other applications like XI, BI, ERP, etc.

    Primoz wrote: Do you use KDE (Dolphin) 4.6 RC or 4.5?
    Also, I've noticed that if I move/copy things with Dolphin they're substantially slower than if I use cp/mv. But cp/mv works fine for me...
    Also, run Dolphin from a terminal to try and see what the problem is.
    Hope that helps at least a bit.
    Could you explain why Dolphin should be slower? I'm not attacking you, I'm just asking.
    Because I thought that Dolphin is just a "little" wrapper around the cp/mv/cd/ls applications/commands.

  • Problem updating a very large volume of data for 2LIS_04* extractors

    Hi
    I have a problem with the jobs for the 2LIS_04* extractors using queued delta.
    There is an interface between the R/3 system and another production system, and 3 or 4 times a month a very large volume of data is sent to R/3.
    The job then runs for a very long time and does not pull the data to RSA7.
    How can this problem be resolved?
    Our R3 system is PI_BASIS 2005_1_620.
    Thanks
    Adam

    You can check these SAP Notes; they will help you:
    SAP Note 753654 - How can downtime be reduced for setup table update
    SAP Note 436393 - Performance improvement for filling the setup tables
    SAP Note 437672 - LBWE: Performance for setup of extract structures

  • What is the best way to extract large volume of data from a BW InfoCube?

    Hello experts,
    Wondering if someone can suggest the best method available in SAP BI 7.0 to extract a large amount of data (approx. 70 million records) from an InfoCube. I've tried Open Hub and APD, but they are not working; I always need to separate the extracts into small datasets. Any advice is greatly appreciated.
    Thanks,
    David

    Hi David,
    We had the same issue, but that was loading from an ODS to a cube. We have over 50 million records. I think there is no option for parallel loading using DTPs. As suggested earlier in the forum, the best option is to split the load according to calendar year or fiscal year.
    But remember, even with the above criteria, some calendar years might still have a lot of data, and even that becomes a problem.
    What I can suggest is that, apart from just the calendar/fiscal year, you also include some other selection criteria, like company code or sales organization.
    Yes, you will end up loading more requests, but the data loads will go smoothly with smaller volumes.
    Regards
    BN

  • Performance issue with a large volume of data in a report

    Hi,
    I have a report that processes a large amount of data, but it takes too long to fill the final ALV table. Currently I am using this logic:
    SELECT ...
    SELECT ... FOR ALL ENTRIES ...
    LOOP AT table INTO workarea.
      READ TABLE table2 INTO workarea2 WITH KEY key = workarea-key BINARY SEARCH.
      MODIFY table.
      READ TABLE table2 INTO workarea2 WITH KEY key = workarea-key BINARY SEARCH.
      MODIFY table.
    ENDLOOP.
    Currently I select all the data I need (only the necessary fields), create one big loop, and read the other table to fill the fields of the final table.
    Edited by: Alvin Rosales on Apr 8, 2009 9:49 AM

    Hi,
    You can use field symbols instead of a work area.
    If you use field symbols, there is no need for the MODIFY statement.
    Here are two equivalent pieces of code:
    1) Using work areas:
    TYPES: BEGIN OF lty_example,
             col1 TYPE char1,
             col2 TYPE char1,
             col3 TYPE char1,
           END OF lty_example.
    DATA: lt_example  TYPE STANDARD TABLE OF lty_example,
          lwa_example TYPE lty_example.
    FIELD-SYMBOLS: <lfs_example> TYPE lty_example.
    Suppose you have the following information in your internal table:
    col1 col2 col3
    1      1    1
    1      2    2
    2      3    4
    Now you may update col2 using the MODIFY statement with a work area:
    LOOP AT lt_example INTO lwa_example.
      lwa_example-col2 = '9'.
      MODIFY lt_example INDEX sy-tabix FROM lwa_example TRANSPORTING col2.
    ENDLOOP.
    2) Or, better, using field symbols:
    LOOP AT lt_example ASSIGNING <lfs_example>.
      <lfs_example>-col2 = '9'.
      " no MODIFY statement is needed here
    ENDLOOP.
    The code using field symbols is about 10 times faster than using work areas and the MODIFY statement.
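    To check the "about 10 times" figure on your own system, a self-contained test report can time both variants with GET RUN TIME (a sketch only; the report name is hypothetical, and the exact factor varies with table width and release):

    REPORT ztest_fs_vs_wa.

    TYPES: BEGIN OF lty_example,
             col1 TYPE char1,
             col2 TYPE char1,
             col3 TYPE char1,
           END OF lty_example.
    DATA: lt_example  TYPE STANDARD TABLE OF lty_example,
          lwa_example TYPE lty_example,
          lv_t0       TYPE i,
          lv_t1       TYPE i,
          lv_diff     TYPE i.
    FIELD-SYMBOLS: <lfs_example> TYPE lty_example.

    " Fill the table with one million test rows
    DO 1000000 TIMES.
      lwa_example-col1 = 'A'.
      lwa_example-col2 = 'B'.
      lwa_example-col3 = 'C'.
      APPEND lwa_example TO lt_example.
    ENDDO.

    " Variant 1: work area + MODIFY ... TRANSPORTING
    GET RUN TIME FIELD lv_t0.
    LOOP AT lt_example INTO lwa_example.
      lwa_example-col2 = '9'.
      MODIFY lt_example INDEX sy-tabix FROM lwa_example TRANSPORTING col2.
    ENDLOOP.
    GET RUN TIME FIELD lv_t1.
    lv_diff = lv_t1 - lv_t0.
    WRITE: / 'Work area + MODIFY:', lv_diff, 'microseconds'.

    " Variant 2: field symbol, direct in-place update
    GET RUN TIME FIELD lv_t0.
    LOOP AT lt_example ASSIGNING <lfs_example>.
      <lfs_example>-col2 = '9'.
    ENDLOOP.
    GET RUN TIME FIELD lv_t1.
    lv_diff = lv_t1 - lv_t0.
    WRITE: / 'Field symbol:', lv_diff, 'microseconds'.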

  • Best practices for handling large messages in JCAPS 5.1.3?

    Hi all,
    We have run into problems while processing large messages in JCAPS 5.1.3. Or, they are not really that large: only 10-20 MB.
    Our setup looks like this:
    We retrieve flat file messages from an FTP server. They are put onto a JMS queue and are then converted to and from different XML formats in several steps, using a couple of jcds with JMS queues between them.
    It seems that we can handle one message at a time, but as soon as we get two of these messages simultaneously the logicalhost freezes and crashes in one of the conversion steps without any error message reported in the logicalhost log. We can't relate the crashes to a specific jcd, and it seems that the memory consumption increases A LOT for the logicalhost process while handling the messages. After a restart of the server, the messages that are in the queues are usually converted OK. Sometimes, however, we have seen that some messages seem to disappear. Scary stuff!
    I have heard of two possible solutions to handle large messages in JCAPS so far; Splitting them into smaller chunks or streaming them. These solutions are however not an option in our setup.
    We have manipulated the JVM memory settings without any improvements and we have discussed the issue with Sun's support but they have not been able to help us yet.
    My questions:
    * Any ideas how to handle large messages most efficiently?
    * Any ideas why the crashes occur without error messages in the logs or nothing?
    * Any ideas why messages sometimes disappear?
    * Any other suggestions?
    Thanks
    /Alex

    * Any ideas how to handle large messages most efficiently?
    Strictly speaking, if you want to send the entire file content in a JMS message, then I don't have an answer to this question.
    Generally we use the following process: after reading the file from the FTP location, we archive it in a local directory and send a JMS message to the queue which contains the file name and file location. In most places we never send the file content in the JMS message.
    * Any ideas why the crashes occur without any error messages in the logs?
    Whenever the JMS IQ Manager's memory usage is too high, logicalhosts stop processing. I will not say it is down; it stops processing, or the processing might take a lot of time.
    * Any ideas why messages sometimes disappear?
    Unless persistence is enabled, I believe there is a high chance of losing a message when a logicalhost goes down. This is not always the case, but we have faced a similar issue when the IQ Manager was flooded with a lot of messages.
    * Any other suggestions?
    If the file size is large, then it is better to stream the file to a local directory from the FTP location and send only the file location in the JMS message.
    Hope this helps.

  • Seeking recommendations for handling large binary documents with security (preferably) for inbound and outbound scenarios from OSB to SOA and SOA to OSB

    Hi,
    I am currently working on a project with the following requirements
    1. A client transfers a binary document (between 1 and 20 MB in size) from an OSB proxy to a SOA composite to a content management system.
    2. A client retrieves a binary document (between 1 and 20 MB in size) from the content management system to a SOA composite to an OSB proxy.
    In other words, an inbound and an outbound integration.
    What I have tried so far and my results:
    Scenario A
    1. Enabled MTOM on SOA composite by attaching wsmtom policy
    2. Created an OSB business service and consumed the SOA composite application
    3. Enabled MTOM on OSB proxy and business service and configured it to pass by reference
    Scenario B
    1. Enabled MTOM and security on SOA composite by attaching wsmtom policy and SAML policy
    2. Created an OSB business service and consumed the SOA composite application
    3. Enabled MTOM on OSB proxy and business service and configured it to pass by reference
    I have a demo integration set up that writes a binary document to a file using the above steps. My SOA composite has a file adapter that writes the binary data to an external file, and it is exposed as a web service with a simple WSDL definition that has an inline XSD schema with a single element of base64Binary type. I have added a mediator that maps this base64Binary element node to the file adapter's input node.
    Result for Scenario A with file size less than 1 MB:
    Flawless execution with sub-second response times
    Result for Scenario A with a file size of 8 MB:
    First attempt: the SOA composite faults with a database-transaction-related error; solved by increasing the JTA timeout.
    Second attempt: flawless execution, but the file transfer took over 100 seconds to complete. This is very poor performance, and my suspicion is that this cannot be the expected behaviour, but I don't know the internal workings of the SOA composite or why it is taking this long.
    Result for Scenario B:
    The OSB business service does not accept/recognize the SAML policy in the WSDL and suggests configuring OWSM policies manually, but the OWSM policies in OSB do not include the wsmtom policy. Regardless of this, every permutation of MTOM + WSS security in this integration scenario either did not work outright, or MTOM optimization was not happening, i.e. the binary data was materializing in the message body.
    I have only about 3 weeks left to implement a viable solution, and the closest I've come to a solution is Scenario A, but that 100+ second response time for an 8 MB file is really worrying.
    I would appreciate any level of guidance, recommendations or suggestions as to how I go about tackling this problem.
    Thanks
    regards,
    Johnny

    I think this is due to the underlying mechanism of WebLogic classloading.
    You can contact Oracle Support at https://support.oracle.com to report issues. Roughly, this is the process:
    1. Get the Oracle Customer Support Identifier (CSI) for the client you are working for.
    2. Create a user profile quoting the CSI. This will send an approval request to the Oracle Support admins at your client.
    3. Get the Oracle Support admins at your client site to approve your request for support access.
    4. Once they approve, you can access the support site and raise service requests.

  • Best Practices for loading large sets of Data

    Just a general question regarding an initial load with a large set of data.
    Does it make sense to use a materialized view to help with load times for an initial load, or do I simply let the query run for as long as it takes?
    Just looking for advice on the common approach here.
    Thanks!

    Hi GK,
    What I have normally seen is:
    1) Data is extracted from the APO planning area to an APO cube (for backup purposes), weekly or monthly, depending on how much data change you expect and how critical it is for the business. Backups are mostly monthly for DP.
    2) Data is extracted from the APO planning area directly to a DSO in the staging layer in BW, and then to BW cubes for reporting. For DP this is monthly, for SNP daily.
    You can also use option 1 that you mentioned below. In this case the APO cube is the backup cube, while the BW cube is the one you could use for reporting, and this BW cube gets its data from the APO cube.
    The benefit in this case is that we have to extract data from the planning area only once, so the planning area is available to jobs/users for more of the time. However, backup and reporting extraction get mixed in this case, so issues in the flow could impact both the backup and the reporting. We have used this scenario recently and are yet to see the full impact.
    Thanks - Pawan

  • Populating a large volume of data in the quickest way

    In Oracle 9i, what is the best tool to populate a large amount of data in the minimum amount of time?
    I heard that SQL*Loader direct-path loading could work. Are there any other tools available? What I need to do is populate a large set of dummy data to stress-test the performance of the database system.
    Any suggestion will help!
    Thank you very much.

    Hi,
    There are various options provided by Oracle, like external tables, SQL*Loader, etc.
    SQL*Loader is the fastest of the lot. You may refer to the docs for full details on SQL*Loader.
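    For illustration, a minimal direct-path control file might look like the sketch below; the table, file, and column names here are made up, and the options should be checked against the Oracle 9i Utilities guide:

    -- load.ctl: direct-path load of comma-separated dummy data
    OPTIONS (DIRECT=TRUE, ROWS=50000)
    LOAD DATA
    INFILE 'dummy_data.dat'
    APPEND
    INTO TABLE stress_test_tab
    FIELDS TERMINATED BY ','
    (id, payload)

    It would then be invoked with something like: sqlldr userid=scott/tiger control=load.ctl log=load.log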
    Regards.

  • Java proxies for handling large files

    Dear all,
    Kindly explain how to handle this step by step, as I do not know much about Java.
    What is the advantage of using Java proxies here? Do we implement the split logic in Java code for handling a 600 MB file?
    Please mail me the same at [email protected]

    Hi Srinivas,
    Check out this blog for the large file handling issue:
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    This will help you. Please also see the documents below:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/a068cf2f-0401-0010-2aa9-f5ae4b2096f9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f272165e-0401-0010-b4a1-e7eb8903501d
    /people/prasad.ulagappan2/blog/2005/06/27/asynchronous-inbound-java-proxy
    /people/rashmi.ramalingam2/blog/2005/06/25/an-illustration-of-java-server-proxy
    We can also find them on your XI/PI server in these folders:
    aii_proxy_xirt.jar: j2ee\cluster\server0\bin\ext\com.sap.aii.proxy.xiruntime
    aii_msg_runtime.jar: j2ee\cluster\server0\bin\ext\com.sap.aii.messaging.runtime
    aii_utilxi_misc.jar: j2ee\cluster\server0\bin\ext\com.sap.xi.util.misc
    guidgenerator.jar: j2ee\cluster\server0\bin\ext\com.sap.guid
    Please reward if useful.

  • How to handle large library, limited data plan

    I've been using iTunes Match for about six months now, and I'm having problems...
    I live rurally, so I use an AT&T hotspot with a 10 GB/month data plan for my phone, iPad, and desktop Mac. My wireless connection speed is pretty good.
    I have about 15,000 tracks in my iTunes library, most not purchased through iTunes.
    When I signed up for Match and went through the initial process, it took about 24 hours and used up about six GB, which caused a significant overage on my data plan. Consequently, I just use Match on my Mac and decided not to use it on my other devices because of the data limitations. My motivation for using it currently is as a backup for my music. I connect my phone to my Mac manually to transfer some of my music to my phone.
    I figured that the data overuse problem was a one-time deal if I didn't use my other devices.
    But recently, when I purchase a song from eMusic or Amazon, the iCloud processing image pops up after the download purchase is complete. iTunes will then start the process of sending data to Amazon, which was still going at 4 hours yesterday when I manually stopped it as I saw my data plan being used up. It seemed to restart the sending process periodically and never got to the analyzing or returning-data stages.
    Now my recently purchased music shows up with both the iCloud processing symbol and a faded iCloud download icon, one for each separate track. I can play a "processing" track, but iTunes won't allow me to add it to a playlist. Further, if I'm online with my hotspot, sometimes the regular iTunes icon is visible, but at other times the icon with the thunderbolt through it. I'm guessing this means it's streaming with the first icon, and playing from my hard drive with the second.
    So a couple of questions, and sorry for the length; I'm a first-time support user.
    1) Does it make sense to use Match with such a large library and a limited data plan?
    2) "Where" is the music I've purchased - on my computer or in the cloud?
    3) Should it take all day to send data to iTunes when the only updates to my library are songs I've deleted and a handful of new albums I purchased?
    4) Am I using up my data plan when the regular iCloud icon appears and I'm online? Is there a way to manually play from the hard drive to reduce use of my data plan? I turned off Match once and paid with a half-day sending and receiving session when I turned it back on.
    5) Will I lose music if I unsubscribe from Match?
    Thanks to anybody who has the inclination to respond to any of these questions!

    This DataSource sends after-images in delta loads, which is compatible with loading to a standard DSO only. You cannot load from this DataSource directly to a cube.
    You can check with the business how many years of history they need. If they need 5 years, you could delete data older than that, or archive it.
    For GL, there would not be any changes to years that are closed for posting. There may be adjustments carried out for the previous fiscal year, but I guess there would not be any changes to years prior to that, so archiving old data should not affect the delta.
