Dataload performance

I'm currently experiencing performance trouble with Essbase 6.5 on an HP-UX server (4 processors, 8 GB RAM) on a very tiny database. Loading one line via dataload takes 5 seconds; on my laptop it takes 0.18 seconds. I have tried enabling and disabling the parallel dataload: same result. Can somebody help me?
Laurent PULCE - Vivendi Water Systems

A bit of a helpdesk line, but the first place to start would be to upgrade from 6.5 to 6.5.1. There were a number of bugs in 6.5, especially on Unix. Try this first and then let me know how you get on from there. Hope this helps.
Paul Armitage, Analitica Ltd., www.analitica.co.uk

Similar Messages

  • Dataloading Performance

    Hi All,
    What are the different ways to improve data loading performance?
    Helpful answers will be appreciated!
    Thanks
    Deepti

    Hi Deepti,
    There are many ways to improve data loading performance.
    A. First tune each individual load step, then the whole load process:
    eliminate unnecessary processes,
    reduce the data volume to be processed,
    deploy parallelism at all available levels.
    B. Extraction performance.
    C. Parallel processes: distribute them across different servers to avoid bottlenecks on one server; the relevant extraction parameters are configured in table ROIDOCPRMS.
    D. PSA, transfer rules, update rules.
    You have to take a few measures to optimize load performance; some are listed below:
    Consider the packet sizing
    Delete indexes before loading
    Table partitioning
    Data model
    Load sequencing
    Parallel processing
    To obtain good load performance for ODS objects:
    1. Activating data in the ODS object
    In the Implementation Guide in BW Customizing, you can make various settings under Business Information Warehouse -> General BW Settings -> Settings for the ODS object that improve performance when activating data in the ODS object.
    2. Creating SIDs
    The creation of SIDs is time-consuming and may be avoided in the following cases:
    a) You should not set the BEx Reporting indicator if you are only using the ODS object as a data store; otherwise, SIDs are created for all new characteristic values.
    b) If you are using line items (for example, document number, time stamp and so on) as characteristics in the ODS object, you should mark these as 'Attribute only' in the characteristics maintenance.
    SIDs are created at the same time if parallel activation is activated (see above). They are then created using the same number of parallel processes as set for the activation. However, if you specify a server group or a special server in Customizing, these specifications apply only to the activation, not to the creation of SIDs; the creation of SIDs runs on the application server on which the batch job is running.
    3. DB partitioning on the table for active data (technical name: /BIC/A<ODS name>00)
    The process of deleting data from the ODS object may be accelerated by partitioning at the database level. Select the characteristic by which you want deletion to occur as the partitioning criterion. For more details on partitioning database tables, see the database documentation (DBMS CD). Partitioning is supported for the following databases: Oracle, DB2/390, Informix.
    4. Indexing
    Selection criteria should be used for queries on ODS objects. The existing primary index is used if the key fields are specified. As a result, the characteristic that is accessed most frequently should be left-justified. If the key fields are only partially specified in the selection criteria (recognizable in the SQL trace), the query runtime may be optimized by creating additional indexes. You can create these secondary indexes in the ODS object maintenance.
    5. Loading unique data records
    If you only load unique data records (that is, data records with a one-time key combination) into the ODS object, the load performance will improve if you set the 'Unique data record' indicator in the ODS object maintenance.
    And for InfoCubes:
    Buffering the number range (InfoCube): activating the number range buffer for the dimension IDs reduces application server accesses to the database. For example, if you set the number range buffer for one dimension to 500, the system keeps 500 sequential numbers in memory.
    With high volumes of transaction data there is significant DB access (table NRIV) to fulfill number range requests. Expected result: accelerated data load performance per load request.
    Note: after the load, reset the number range buffer to its original state to minimize unnecessary memory allocation.
    Hope this info is helpful
    Atul

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards
    Guru

    BW back end
    Some tips:
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4) Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 - Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8) Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from one data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage, with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or one dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9) Build secondary indexes on the tables for the selection fields to optimize those tables for reading and reduce extraction time. If your selection fields are not key fields of the table, the primary index is not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11) Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably; it is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables once and stores it in memory for manipulation, improving performance. If you do not, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Extensive use of library transformations in the ABAP code also reduces performance, since these transformations are not compiled in advance but carried out at run-time. (A minimal ABAP sketch follows after this list.)
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
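    To illustrate tip 11, here is a minimal ABAP sketch of the single-select anti-pattern versus a buffered array read inside a start routine. It assumes a lookup of a customer's country from table KNA1; DATA_PACKAGE is the standard routine table, while the /BIC/ZCUSTNO and /BIC/ZCOUNTRY field names are purely hypothetical.
      " Anti-pattern: one SELECT SINGLE per record, i.e. one database
      " round trip for every row in the data package.
      LOOP AT DATA_PACKAGE.
        SELECT SINGLE land1 FROM kna1
          INTO DATA_PACKAGE-/bic/zcountry          " hypothetical field
          WHERE kunnr = DATA_PACKAGE-/bic/zcustno. " hypothetical field
        MODIFY DATA_PACKAGE.
      ENDLOOP.
      " Better: one array read into a sorted internal table, then pure
      " in-memory lookups while processing the package.
      TYPES: BEGIN OF ty_kna1,
               kunnr TYPE kna1-kunnr,
               land1 TYPE kna1-land1,
             END OF ty_kna1.
      DATA: lt_kna1 TYPE SORTED TABLE OF ty_kna1 WITH UNIQUE KEY kunnr,
            ls_kna1 TYPE ty_kna1.
      IF DATA_PACKAGE[] IS NOT INITIAL.
        SELECT kunnr land1 FROM kna1
          INTO TABLE lt_kna1
          FOR ALL ENTRIES IN DATA_PACKAGE
          WHERE kunnr = DATA_PACKAGE-/bic/zcustno.
      ENDIF.
      LOOP AT DATA_PACKAGE.
        READ TABLE lt_kna1 INTO ls_kna1
          WITH TABLE KEY kunnr = DATA_PACKAGE-/bic/zcustno.
        IF sy-subrc = 0.
          DATA_PACKAGE-/bic/zcountry = ls_kna1-land1.
          MODIFY DATA_PACKAGE.
        ENDIF.
      ENDLOOP.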
    Hope it Helps
    Chetan

  • Extractor design to improve the load performance

    Hi all,
    I am extracting data from the MM application. For this I am using the LO extractor 2LIS_02_ITM, and I have enhanced it with 32 fields, which is hampering my data load performance.
    Could you please let me know how I can improve the data load performance?
    Do I need to create separate generic extractors instead of enhancing the LO extractor?
    The DSO also has many fields in it. Should I split it into two and create a MultiProvider for reporting?
    Regards
    KK

    Hello,
    My suggestion would be to create another generic DataSource for the logical set of fields required in BI.
    Then you can load them separately to different DSOs, and from there to a single InfoCube or to two InfoCubes with a MultiProvider to report on them.
    Further, you can check the links below:
    Extraction-Enhancement-Performance problem
    Increase dataload performance
    Dataload Performance
    Performance Enhancement for Custom Data Extractor
    Regards,
    Dhanya

  • Master data process chain running a long time

    Dear SDN Team,
    Process chain: General Master Data - the run time is about 4.5 hours. How can we improve the running time?
    There are 15-25 InfoObjects loading in this process chain.
    Are there any steps to improve the performance?
    Thanks and kind regards,
    Lakshman Kumar G

    Hi,
    Use function module RSD_IOBJ_GET to find the number range object for each dimension. Go to SE37 and proceed as follows:
      I_IOBJNM = 'InfoObject name'
      I_OBJVERS = 'A'
      I_BYPASS_BUFFER = 'X'
    Then execute the FM.
    Double-click on E_S_VIOBJ and find the number in the field NUMBRANR,
    then prefix that number with BIM.
    Go to transaction SNRO, enter BIM plus the number, and choose Edit.
    From Edit -> Set Up Buffering -> Main Memory, select the buffering checkbox and enter approximately 500.
    This needs to be done for all the master data objects, and your dataload performance should improve.
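    For reference, the same lookup can be scripted instead of run manually in SE37. A minimal sketch under stated assumptions: the parameter and field names come from the steps above, while the result type RSD_S_VIOBJ and the InfoObject name ZCUSTOMER are assumptions.
      " Sketch: read the InfoObject definition and print the number
      " range object to buffer in SNRO (BIM + field NUMBRANR).
      DATA: ls_viobj TYPE rsd_s_viobj.  " type name is an assumption
      CALL FUNCTION 'RSD_IOBJ_GET'
        EXPORTING
          i_iobjnm        = 'ZCUSTOMER'   " hypothetical InfoObject
          i_objvers       = 'A'
          i_bypass_buffer = 'X'
        IMPORTING
          e_s_viobj       = ls_viobj.
      WRITE: / 'Buffer number range object: BIM', ls_viobj-numbranr.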
    Regards,
    Mani

  • Creation of index on DSO

    Hi,
    How to create an INDEX on a DSO, and how does it improve dataload performance?
    Thanks

    Hi,
    To create an index on a DSO:
    Go to the DSO -> right-click -> Change -> under the navigational attributes option you will find Indexes -> there you can create indexes.
    When you activate the DSO, the system automatically creates an index based on the key fields. This is called the primary index; it is created automatically when the table is created in the database.
    You can also create indexes on your own to improve performance; these are called secondary indexes. This is necessary if the table is frequently accessed in a way that does not take advantage of the sorting of the primary index. Different indexes on the same table are distinguished by a three-character index identifier.
    Let's take an example:
    Suppose we have a DSO whose primary key is customer number and calendar day, and let's say these are unique. Now suppose you frequently run queries that select data based on customer area and customer type.
    In this case we can create a secondary index on customer area and customer type.
    Now when we run the query, instead of having to read every record, the database uses the index to select only the records with the requested customer area and customer type.
    This saves query time; a hypothetical sketch follows below.
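    As a rough illustration of the example above, the selection on the DSO's active table might look like this sketch. The table name /BIC/AZSALES00 and the field names are hypothetical; with the secondary index in place, the database can use an index range scan instead of a full table scan.
      " Hypothetical read on the DSO's active table; neither field is
      " part of the primary key, so without the secondary index this
      " WHERE clause forces a full table scan.
      DATA: lt_result TYPE STANDARD TABLE OF /bic/azsales00.
      SELECT * FROM /bic/azsales00
        INTO TABLE lt_result
        WHERE /bic/custarea = 'NORTH'
          AND /bic/custtype = 'Z1'.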
    Regards,
    Marasa.

  • Key figures not populating in MultiProvider query

    Folks,
    I have created a MultiProvider on an Order Header provider and an Order Items provider with the following fields:
    Order Header: Sales Doc, Order_date, Promotion_Code
    Order Items: Sales Doc, Order_date, Sale_amt, Order_cnt, Line_Cnt
    My report should display:
    Order_date | Prom_Code | Order_Cnt | Lin_Cnt | Sale_amt
    Even though I have order date as the first column in my report (the common field in both providers), the report displays two different rows for each order date:
    the first row shows Promotion_Code from provider 1, and the second row shows all the key figures.
    How can I get all fields in one row? Please share your thoughts.
    Thanks,
    KK

    Thanks Mansi and Arun for your responses. The Promotion_Code InfoObject is only available in the Header cube. Is there any alternative way of doing this other than adding this field to Order Items and doing a lookup in the Header cube? I do not want to do a lookup, as it will affect my dataload performance.
    Thanks,
    KK
    Edited by: kumar K on Aug 21, 2009 12:14 PM

  • Which is better: indexing, aggregates, or partitioning?

    Hello BW Experts,
    If I need to choose one of the three - indexing, aggregates, or partitioning - for better performance, which one should I go for, and why?
    Please throw some light on this!
    Thanks & Regards,
    Sapster.

    Hi,
    For query performance and dataloading performance we need all three, so we cannot say that one is better than the others, as far as I know.
    Partition:
    By using partitioning you can split the whole dataset of an InfoCube into several smaller, physically independent and redundancy-free units. Due to this separation, performance is increased when reporting and when deleting data from the InfoCube.
    Aggregates:
    An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form on the database.
    Aggregates allow quick access to InfoCube data during reporting. Similar to database indexes, they serve to improve performance.
    Index :
    Indexes are created on the fact table for each dimension and allow you to find and select the data easily. When initially loading data into the InfoCube, you should not create the indexes at the same time as constructing the InfoCube, but only afterwards.
    The indexes displayed are the secondary indexes of the F and E fact tables for the InfoCube. The primary indexes and those defined by the user are not displayed. The aggregates area deals with the corresponding indexes of the fact table for all aggregates of an InfoCube.
    Have a look at the document below:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e7deb490-0201-0010-f882-ef40f8c9bb5c
    Regards,
    Siva.

  • Cache Settings - Data Load

    Hello All,
    Do we have to set caches while performing a data load?
    Defragmentation - no cache settings.
    Calculation - set caches to reduce the calculation time (max 2 GB: index cache + data cache).
    Data load - ???
    Amarnath

    Hi Amarnath,
    There are some configuration settings (in essbase.cfg) that can affect dataload performance:
    1. DLTHREADSPREPARE - specifies how many threads Essbase may use during the data load stage that codifies and organizes the data in preparation for being written to blocks in memory.
    2. DLTHREADSWRITE - specifies how many threads Essbase may use during the data load stage that writes data to the disk. High values may require allocation of additional cache.
    3. DLSINGLETHREADPERSTAGE - specifies that Essbase use a single thread per stage, ignoring the values of the DLTHREADSPREPARE and DLTHREADSWRITE settings.
    If you set a high value for the second setting, you need to increase the cache size too; a configuration sketch follows below.
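    For illustration, the settings above go in essbase.cfg on the server and need a restart to take effect. The application/database names and thread counts below are placeholder assumptions, not recommendations:
      ; essbase.cfg - illustrative values for an app/db named Sample Basic
      DLSINGLETHREADPERSTAGE Sample Basic FALSE
      DLTHREADSPREPARE Sample Basic 3
      DLTHREADSWRITE Sample Basic 4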
    Hope it answers your question.
    Regards,
    Atul Kushwaha

  • How do I use LSMW with a bespoke dataload program?

    Hello data migration gurus, I need your help.
    I need to migrate data from a legacy system into a suite of bespoke functionality we've written within SAP CRM. We are planning to write a program to perform the load. However we would like such a program to be reusable, so it makes sense to keep the data load separate from the data formatting, which may vary between customers.
    The obvious answer is to use LSMW for the data formatting, calling our load program as the last step. However in LSMW you can only choose from a list of standard dataload programs. Does anyone know how to use LSMW with a bespoke program? It looks like transaction SXDA might be involved, but it isn't too clear. If anyone has done this kind of thing elsewhere, advice would be much appreciated!
    Obviously as an alternative we could just write a separate program to format the data - it would just be a lot better if we could use LSMW.

    No longer required.

  • To improve the performance of the extractor

    Hi Team,
    Currently there is one dataload which takes 48 hours to extract data from the R/3 system to BI. It is based on an InfoSet query.
    The extractor uses the standard logical database PNP; the database driver for this LDB is SAPDBPNP.
    The extractor is based on infotype PA0008.
    One tricky part of this code is that the values are not stored in the PA0008 table but are dynamically calculated during the InfoSet extract.
    Could you please provide some inputs in improving the performance.
    Thanks in advance.
    Sunil Kumar.

    Since the values are dynamically calculated during the InfoSet extract, it will obviously take longer, because the calculation has to be repeated for every PERNR (in infotype 8).
    If possible, try to move those calculations to the transfer/update rules level.
    OR
    If it is a non-delta DataSource, try to add more InfoPackages (with PERNR selections) to this DataSource and run them in parallel in a process chain.

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
    Having successfully built my outline by member load through Essbase Studio, I tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
    In the Studio properties file I set oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am also not sure streaming mode would be a much faster alternative to non-streaming mode. What I'd like to know is which Essbase settings I can change (either in Essbase or on the Studio server) to speed up my data load. I am loading into a block storage database with 3 dense, 8 sparse and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, with only 153 blocks created at a block size of 24 B. Assuming that in my real application the number of blocks created will be at least 1,000 times more than this, I need to change some settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (a view) in the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size or multiple threads help to increase performance? What would you suggest I do?
    Thank you very much.

    Hello user13695196,
    (sorry, I no longer remember my system number here)
    Before attempting any optimisation in the Essbase (or Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. Create a view in your DB source schema from your SQL statement (the one behind your data load rule).
    2. Query this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete. Also count the number of rows returned, for your information and for future comparison of results.
    If your query runs longer than you think is acceptable, then:
    a) check DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, kindly ask your DBA for help. Usually they can help you very fast.
    (Don't be shy - a DBA is a human being like you and me :-) )
    Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
    One additional hint:
    We have often had problems when using views for dataload (not only performance but other strange behavior too). That is why I prefer to set up loads directly on (persistent) tables.
    Just to keep in mind: if nothing helps, create a table from your view and then query your data from this table for your Essbase data load. Normally, however, this should be your last option. (A SQL sketch of these steps follows below.)
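    A hedged SQL*Plus sketch of steps 1-2 and the last-resort table; all object names here are made up for illustration:
      -- 1. Wrap the data load rule's SQL in a view (body illustrative).
      CREATE OR REPLACE VIEW v_essbase_load AS
        SELECT f.*, p.member_name
          FROM fact_sales f
          JOIN dim_product p ON p.product_id = f.product_id;
      -- 2. Fetch from the view, measure the elapsed time, count the rows.
      SET TIMING ON
      SELECT COUNT(*) FROM v_essbase_load;
      -- Last resort: materialize the view as a table, refresh optimizer
      -- statistics, and point the Essbase data load at the table.
      CREATE TABLE t_essbase_load AS SELECT * FROM v_essbase_load;
      EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T_ESSBASE_LOAD')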
    Best Regards
    (also to you Torben :-) )
    Andre
    Edited by: andreml on Mar 17, 2012 4:31 AM

  • Performance tuning in extraction

    Hi friends,
    Can anyone please tell me about performance tuning in extraction, in reporting, and in dataloading? Please reply point-wise.
    Thanks
    Rosy
    Please search for available information before posting.
    Edited by: kishan P on Jan 24, 2012 10:57 AM

    Hi Rosy,
    Please check the documents below:
    http://www.tli-usa.com/download/Expert_Tips_and_New_Techniques_for_Optimizing_Data_Load_and_Query_Performance__Part_One_.pdf
    http://www.tli-usa.com/download/Expert_Tips_and_New_Techniques_for_Optimizing_Data_Load_and_Query_Performance__Part_Two_.pdf
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/08f1b622-0c01-0010-618c-cb41e12c72be?QuickLink=index&overridelayout=true
    Thanks,
    Vinod

  • Performance Measuring

    Hi,
    We finished the BW implementation for our client and built some BEx reports.
    Now we need to measure the query performance and the performance of the BW system.
    How can we measure performance in terms of queries, modeling and dataloading?
    Please let me know.
    kumar

    Hi,
    You can do that through ST03.
    But for that you need to activate the statistics cube and the Business Content query.
    It will give the time consumed by the queries and other details.
    Open the link below;
    the first search result is the how-to document - just save it to your desktop.
    https://www.sdn.sap.com/irj/sdn/advancedsearch?query=how%20to%20set%20up%20bw%20statistics&cat=sdn_all
    Thanks
    Message was edited by:
            Ajeet Singh

  • Query performance problem - events 2505-read cache and 2510-write cache

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11,000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s) and OLAP user exit (250 s) are the other events with noticeable times. All other events are very quick.
    The query settings (RSRT) are:
    persistent cache across each app server -> cluster table,
    update cache in delta process is checked -> group on InfoProvider type,
    use cache despite virtual characteristics/key figures is checked (one InfoCube has 1 virtual key figure, which should have a static result for a day).
    => Do you know how I can get more details than what is in 0TCT_C02 to break down the read and write cache event times, or do you have any recommendation?
    I have checked, and no dataloads were in progress on the InfoProviders, and no master data loads (change run) either. Overall system performance was acceptable for other queries.
    Thanks

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark
