How to improve large data loads?

Hello Gurus,
Large data loads at my client site take long hours. I have tried the recommendations from various blogs and SAP sites for the control parameters of DTPs and InfoPackages. I need some viewpoints on which parameters can be checked on the Oracle and Unix side. I would also need some insight on:
1) How to clear log files
2) How to clear any cached memory in SAP BW.
3) Control parameters in Oracle and Unix for any improvements.
Thanks in advance.

Hi
I think that work should be performed by the Basis team.
2) You can delete the cache memory using transaction RSRT: choose Cache Monitor and then delete the cache.
Thanks & Regards,
RaviChandra

Similar Messages

  • How to find the data loaded from R/3 to BW

    Hi,
    How can I verify that the data loaded from R/3 to BW is correct? I am not able to find which field in the query is connected to which field in R/3, i.e. where the data in the query is coming from in R/3. Is there any way to find out which field and table the data comes from? Please help.
    Thanks in advance to you all.

    Hi Veda ... the mapping between R/3 fields and BW InfoObjects should take place in the Transfer Rules. Other transformations could take place in the Update Rules.
    So you could proceed this way: look at the InfoProvider data model and see whether the query performs any calculations (even with virtual key figures / characteristics). Then go back to the Update Rules and search for other calculations / transformations. Finally there are the Transfer Rules and possibly DataSource / extraction enhancements.
    As you can easily see, there are many points you have to look at ... it's quite complex work, but very useful.
    Once you have identified all mappings / transformations, check whether the BW data matches R/3 (taking the calculations into account ...).
    Good job
    GFV

  • How to automate the data load process using data load file & task Scheduler

    Hi,
    I am trying to automate the data load into a Hyperion Planning application with the help of a Data_Load.bat file and Task Scheduler.
    I have created the Data_Load.bat file, but I am unable to complete the rest of the process.
    Could you help me automate the data load process using the Data_Load.bat file and Task Scheduler, or tell me which other files are required to achieve this?
    Thanks

    To follow up on your question: are you using MaxL scripts for the data load?
    If so, I have seen an issue where, if the batch file (e.g. load_data.bat) does not contain the full path to the MaxL script, running it through Task Scheduler will report the task as successful but the log and/or error file will not be created. In other words, the batch claims it ran from Task Scheduler although it didn't do what you needed it to.
    If you are using MaxL, use this as the batch:
    "essmsh C:\data\DataLoad.mxl"
    Or you can use the full path to essmsh as well; either way works. The only reason I could think of for the MaxL then not working is if the batch has not been updated with all the required MaxL PATH changes, or if you need to update your environment variables so that the essmsh command works in a command prompt.

  • How do we improve master data load performance

    Hi Experts,
    Could you please tell me how to identify master data load performance problems and what can be done to improve master data load performance.
    Thanks in Advance.
    Nitya

    Hi,
    -Alpha conversion is defined at InfoObject level for objects with data type CHAR.
    A characteristic in SAP NetWeaver BI can use a conversion routine such as the ALPHA routine. A conversion routine converts data that a user enters (in so-called external format) into an internal format before it is stored in the database.
    The most important conversion routine, due to its common use, is the ALPHA routine, which converts purely numeric user input like '4711' into '004711' (assuming the characteristic value is 6 characters long). If a value is not purely numeric, like '4711A', it is left unchanged (a small sketch of this behaviour follows at the end of this reply).
    We have found that in customers' systems there are quite often characteristics using a conversion routine like ALPHA that have values in the database which are not in internal format, e.g. one might find '4711' instead of '004711' in the database. It can even happen that there is also a value '04711', or ' 4711' (leading space).
    This can result in data inconsistencies, also for query selection; i.e. if you select '4711', this is converted into '004711', so '04711' won't be selected.
    -The check for referential integrity occurs for transaction data and master data if they are flexibly updated. You determine the valid InfoObject values.
    - SID generation is a must when loading transaction data with respect to master data, in order to call the master data at BEx level.
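    For illustration only, here is a minimal Java sketch of the ALPHA-style behaviour described above; it is not the actual SAP routine, just an illustration: purely numeric input is left-padded with zeros to the output length of the characteristic, anything else is returned unchanged.
    // Illustration of the ALPHA-style conversion described above.
    // Not the SAP routine itself: purely numeric input is left-padded
    // with zeros, any other input is returned unchanged.
    public class AlphaConversionSketch {
        static String toInternal(String external, int outputLength) {
            if (external.matches("\\d+")) {
                StringBuilder padded = new StringBuilder(external);
                while (padded.length() < outputLength) {
                    padded.insert(0, '0');   // '4711' -> '004711' for length 6
                }
                return padded.toString();
            }
            return external;                 // '4711A' stays '4711A'
        }

        public static void main(String[] args) {
            System.out.println(toInternal("4711", 6));   // 004711
            System.out.println(toInternal("4711A", 6));  // 4711A
        }
    }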
    Regards,
    rvc

  • How to best reduce data load on Mac due to duplicate Adobe files?

    I just got hired at a small business. I don't have a lot of experience with Macs, so I need to know some best practices here.
    I am working with CS3, Ai, Ps, Id, and later, Dw.
    It's a magazine publishing company. I have it organized so each magazine has its own folder, and I want to have "old editions" and "working edition" folders. Within each, I want to break it down into "Ads this issue", "Links", and "stories".
    The Ads and Links are where I'm concerned. I want to have a copy of each ad's file within that folder, and a copy of all the other files it's linked to, so that if the original ads/images get moved, the links won't be disturbed.
    I'm wondering if there is a way to do this without bogging down the machine's HD with duplicates of really large files. The machine moves slow enough as it is.
    I've theorized that I could:
    A) keep the Main "Ads" folder along with the subfolders compressed, and the "old editions" compressed, and have a regular copy in the working folder only. This also works because the ads get edited for different editions sometimes.
    or
    B) Is there a way to do this with aliases? Being unfamiliar with aliases, or even shortcuts, because I haven't worked in an actual production environment yet, I don't know the functionality of linking an alias into an ID file. I read a couple of previous posts and the outlook isn't very good for it.
    or
    C) Just place a PDF (or whatever you guys think is the best quality preserving filetype) in with the magazine itself? Then each company could have its own ad folder with all the rest of the files...
    What do you all think? If you can even link me to a post that goes into further detail on which option you think is best, or if  you have a different solution, that would be wonderful. I am open to answers.
    I want to be sure to leave a cleaner computer/work environment than the last few punks who were here... That's my "best practice". Documentation and file organization got drilled into me at Uni.

    Sorry, I am overcaffeinated today, so this response is kind of long.
    "Data load?" Do you mean that:
    a) handling lots of large files is too much for your computer to handle, or
    b) simply having lots of large files on your hard drive (even if they are not currently in use) slows your computer down?
    Because b) is pretty much impossible, unless you are almost out of space on your system drive. Which can be ameliorated by... buying another drive.
    I once set up an install of InDesign on a Mac for a friend of mine who is chipping away at a big-data math PhD, and who is sick to death of LaTeX. (Can't blame her, really.) Because we are both BSD nerds from way back, she wanted to do what you are suggesting - but instead of thinking about aliases, which you are correct to regard with dubiousness, she wanted to do it with hardlinks. Which worked, more or less. She liked it. Seemed like overkill to me.
    I suspect that this is because she is a highfalutin' academic whereas I am a production wonk in a business. I have to compare the cost of my time resolving a broken-link issue due to a complicated archiving scheme versus Just Buying Another Drive. Having clocked myself on solving problems induced by complicated archival schemes (or failure of overworked project managers to correctly follow the rules for same) I know that it doesn't take many hours of my work invested in combing through archives or rebuilding lost image files to equal Another Drive that I can go out and Just Buy.
    If you set up a reasonable method of file organization, and document it clearly, then you have already saved your organization (and your successors!) significant amounts of time and cash. Hard drive space is cheap. Don't spend your time on figuring out a way to save a few terabytes here and there. In fact, what I'd suggest for you is to try to figure out how many terabytes you've already spent on this question, by figuring out today's ratio of easily purchasable reliable external hard drives to your unit of preferred currency, then figuring out how many hours you've already spent on the question.
    The only reason I can make this argument is that price-per-unit-of-magnetic-data-storage has, with remarkably few exceptions, been constantly plummeting for decades, while the space requirements for documentation have been going up comparatively slowly. If you need a faster computer to do your job more efficiently, then price out an SSD for your OS, applications and jobs-on-deck, and then show your higher-ups the math that proves that the SSD pays for itself in your saved time within n weeks. My gut feeling these days is that, unless you are seriously underpaid, n is between two and six.
    Finally: I didn't really address your suggested possibilities. Procedure C (placing PDFs) usually works, but you do need to figure out how to make PDFs in such a way as to ensure they play nicely with your print method. Procedure A (compress stuff you don't need anymore) probably works okay, but I hope that you have some sort of command-line scripting ability to be able to quickly route stuff into and out of archives.

  • How to handle large data in file adapter

    We have a scenario Proxy -> PI -> File Server using the File adapter.
    File adapter is using FCC for conversion.
    Recently we went live with wave 2 products and the volume of messages for this interface has suddenly increased, due to which the File adapter is not performing well: PI slows down or frequently disconnects from the file server. As a result we end up with either duplicate records in the file or a wrongly formatted file.
    The file size is somewhere around 4.07 GB, which I also think is quite high for PI to handle.
    Can anybody suggest how we can handle such large data volumes?
    Regards,
    Vikrant

    Check this blog for huge file processing:
    Night Mare-Processing huge files in SAP XI
    You can also take a look at this blog about high-volume messages:
    Step-by-Step Guide in Processing High-Volume Messages Using PI 7.1's Message Packaging
    PI Performance Tuning Best Practice:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?QuickLink=index&overridelayout=true&45896020746271

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accomodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GC's, namely the parallel GC.
    Regards
    Marius

  • How to stop the data loads through process chains

    hi,
    I want to stop all the data loads to BI through process chains where the loads happen periodically.
    Kindly suggest how I can proceed.

    Hi,
    Go to RSPC, find your process chain and double-click on the START variant, then change the timing, i.e. set the start date to 01.01.9999. Save and activate the process chain; it won't start until 01.01.9999.
    Thanks
    Reddy

  • Java servlet: how to store large data result across multiple web session

    Hi, I am writing a Java servlet to process some large data.
    Here is the process:
    1) The user submits a query.
    2) The servlet returns a lot of results for the user to make a selection.
    3) The user submits their selections (with checkboxes).
    4) The servlet sends back the complete selected items in a file.
    The part I have trouble with (I am new to servlets) is how to store the results ArrayList (or Vector) after step 2 so I don't need to run the search again in step 4.
    I think the session may be helpful here, but from what I read in the tutorials, a session seems to store only small items rather than a large dataset. Is it possible for a session to store a large dataset? Can you point me to an example or provide some example code?
    Thanks for your attention.
    Mike

    I don't know whether you connect to a database; if so, you could store the result set there instead of in the session.
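    For illustration, here is a minimal sketch of the session approach Mike asks about (the servlet and helper names are made up): the result list from step 2 is kept in the HttpSession and read back in step 4, so the search does not have to run again. Note that the list then lives in server memory for the lifetime of the session, so for a very large result set, storing it in a database or temporary file as suggested above may be safer.
    import java.io.IOException;
    import java.util.Collections;
    import java.util.List;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Hypothetical sketch: step 2 stores the search results in the session,
    // step 4 reads them back instead of running the search again.
    public class SearchServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            HttpSession session = request.getSession();
            String query = request.getParameter("query");

            if (query != null) {
                // Step 1/2: run the search and keep the full result list in the session
                List<String> results = runSearch(query);
                session.setAttribute("searchResults", results);
                // ... render the selection page with one checkbox per result ...
            } else {
                // Step 3/4: read the stored results back and return only the selected rows
                @SuppressWarnings("unchecked")
                List<String> results = (List<String>) session.getAttribute("searchResults");
                String[] selectedIndexes = request.getParameterValues("selection");
                response.setContentType("text/plain");
                if (results != null && selectedIndexes != null) {
                    for (String index : selectedIndexes) {
                        response.getWriter().println(results.get(Integer.parseInt(index)));
                    }
                }
            }
        }

        // Placeholder for the real search logic
        private List<String> runSearch(String query) {
            return Collections.emptyList();
        }
    }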

  • How to delete the data loaded into MySQL target table using Scripts

    Hi Experts
    I created a job with a validation transformation. The data that passes the validation is loaded into the Pass table and the data that fails is loaded into the Fail table.
    My requirement is that if any data was loaded into the Fail table, then I have to delete the data loaded into the Pass table using a script.
    In the script I have written the code as
    sql('database','delete from <tablename>');
    but the SQL query execution raises an exception for the query.
    How can I delete the data loaded into the MySQL target table using scripts?
    Please guide me for this error
    Thanks in Advance
    PrasannaKumar

    Hi Dirk Venken
    I got the solution; my mistake was that the query was not correct for MySQL.
    Working query:
    sql('MySQL', 'truncate world.customer_salesfact_details')
    Erroneous query:
    sql('MySQL', 'delete table world.customer_salesfact_details')
    (In MySQL a row delete needs the FROM keyword, i.e. 'delete from world.customer_salesfact_details'; 'delete table' is not valid syntax.)
    Thanks for your concern
    PrasannaKumar

  • How to use incremental data load in OWB? can CDC be used?

    hi,
    I am using Oracle 10g Release 2 and OWB 10g Release 1.
    I want to know how I can implement incremental data loads in OWB.
    Does OWB have such a feature built in, like Informatica does?
    Can I use the CDC concept for this? Is it viable and compatible with my environment?
    What could be the other possible ways?

    Hi ,
    As such, the current version of OWB does not provide the functionality to use the CDC feature directly. You have to come up with your own strategy for incremental loading, e.g. use the update dates if available on your source systems, or use CDC packages to pick up the changed data from your source systems.
    rgds
    mahesh

  • How to trigger or run an iBot once the data load is done?

    Hi Experts,
    I have one requirement:
    1) Every day one workflow runs (i.e. data is loaded into the data warehouse).
    2) Afterwards, an iBot should run and deliver the report to users.
    3) We have scheduled the workflows in DAC for every morning.
    Requirement:
    Once the data is loaded, the iBot should run and send the report to users dynamically (without scheduling).
    If the workflow fails, the iBot should not be delivered.
    How can I configure the iBot to be triggered or run once the data load is done?
    I am using OBIEE 10g, Informatica 8 and Windows XP.
    Advance thanks..
    Thanks,
    Raja

    Hi,
    Below are the details for automating the OBIEE Scheduler.
    Create a batch file or shell script with the following command:
    D:\OracleBI\server\Bin\saschinvoke -u Administrator/udrbiee007 -j 8
    -u is the username/password for the Scheduler (the username/password you gave during configuration).
    -j is the job ID; when you create an iBot it is assigned a new job number, which you can find in the Job Manager.
    Refer to the thread below for more information:
    iBot scheduling after ETL load
    Or: what you have will also work, but the problem is that you need to specify a time, e.g. every day at 6.30 am.
    Note: if the condition report is true, the report will be delivered at 6.30 pm only; if the condition is false, the report will not be triggered.
    I also implemented this, but in a slightly different way.
    Hope this helps
    Thanks
    Satya

  • How to process large data files in XI? 100 MB files?

    Hi All,
    At present we have a File-to-IDoc scenario. The problem is the size of the file: we need to transfer a 100 MB file to the SAP R/3 system. How can we process such a huge amount of data?
    Thanks in advance and regards
    Rakesh

    Hi,
    In general, an extra sizing for XI memory consumption is not required. The total memory of the SAP Web Application Server should be sufficient except in the case of large messages (>1MB).
    To determine the memory consumption for processing large messages, you can use the following rules of thumb:
    Allocate 3 MB per process (for example, the number of parallel messages per second may be an indicator)
    Allocate 4 kB per 1kB of message size in the asynchronous case or 9 kB per 1kB message size in the synchronous case
    Example: asynchronous concurrent processing of 10 messages with a size of 1 MB each requires 70 MB of memory: (3 MB + 4 * 1 MB) * 10 = 70 MB.
    With mapping or content-based routing, where an internal representation of the message payload may be necessary, the memory requirements can be much higher (possibly exceeding 20 kB per 1 kB of message, depending on the type of mapping).
    The size of the largest message thus depends mainly on the size of the available main memory. On a normal 32Bit operating system, there is an upper boundary of approximately 1.5 to 2 GByte per process, limiting the respective largest message size.
    Please check these links:
    /community [original link is broken]:///people/michal.krawczyk2/blog/2006/06/08/xi-timeouts-timeouts-timeouts
    Input Flat File Size Determination
    /people/shabarish.vijayakumar/blog/2006/04/03/xi-in-the-role-of-a-ftp
    data packet size  - load from flat file
    How to upload a file of very huge size on to server.
    Please let me know whether your problem is solved or not.
    Regards
    Chilla..

  • Unexpected query results during large data loads from BCS into BI Cube

    Gurus
    We have had an issue occur twice in the last few months, and it is causing our business partners real difficulty. When they send a large load of data from BCS to the real-time BI cube, the queries show unexpected results. We have the queries enabled to report on yellow requests and that works fine; the issue seems to occur while the system is closing the current request and opening the next one. Has anyone encountered this issue, and if so, how did you fix it?
    Alex

    Hi Alex,
    There is not enough information to judge. BI queries in BCS may use different structures of real-time, basic and virtual cubes and MultiProviders:
    http://help.sap.com/erp2005_ehp_02/helpdata/en/0d/eac080c1ce4476974c6bb75eddc8d2/frameset.htm
    In your case, most likely, you have a bad design of the reporting structures.

  • SQLLOADER: Large Data Loads: 255 CHAR limit?

    Issue: I'm trying to load a delimited text file into an Oracle table
    with data that exceeds 255 characters, and it's not working correctly.
    The table fields are set to VARCHAR2 2000 - 4000. No problem. Some of
    the data fields in the file have over 1000 characters. Ok. When running
    a load, SQLLOADER seems to only want to handle 255 characters based on
    its use of the CHAR datatype as a default for delimited file text
    fields. Ok. So, I add VARCHAR(2000) in the .ctl file next to the fields
    that I want to take larger datasets. That does not seem to work.
    When I set a field in the control file to VARCHAR(2000), the data for
    that field will get into the table. That's fine, but the issue is
    SQLLOADER does not just put just that field's data into the table, but
    it puts the remainder of the record into the VARCHAR(2000) field.
    SQLLOADER seems to fix the length of the field and forgets I want
    delimiters to continue to work.
    Anyone know how to get SQLLOADER to handle multiple >255 data fields in
    a delimited file load?
    jk
    Here is my control file:
    load data
    infile 'BOOK2.csv'
    append into table PARTNER_CONTENT_TEMP
    fields terminated by ',' optionally enclosed by '^' TRAILING NULLCOLS
    (ctlo_id,
    partners_id,
    content2_byline ,
    content2 varchar(4000),
    content3 varchar(2000),
    content9 varchar(1000),
    submitted_by,
    pstr_id,
    csub_id)

    I have been successful using char instead of varchar, e.g. content2 char(4000). In SQL*Loader, VARCHAR is a length-prefixed binary datatype, whereas delimited text fields should be declared as CHAR with an explicit length (the default CHAR length is 255, which is where the limit comes from). Making sure the optionally enclosed by character is actually present in the data has also always solved the problem for me.
