Data Load Optimization

Hi,
I have a cube with the following dimension information, and its data load needs optimization. The data is cleared and reloaded every week from a SQL data source using a load rule. It loads 35 million records, and the load alone, excluding calculation, takes 10 hours. Is that common? Is there any structural change I should make to speed up the load, such as making Measures sparse or changing the position of the dimensions? Also, the block size is large, 52,920 B, which seems absurd. I have also included the cache settings below, so please take a look and give me your suggestions.
Dimension    Storage   Type       No. of Members
MEASURE      Dense     Accounts   245
PERIOD       Dense     Time       27
CALC         Sparse    None       1
SCENARIO     Sparse    None       7
GEO_NM       Sparse    None       50
PRODUCT      Sparse    None       8416
CAMPAIGN     Sparse    None       35
SEGMENT      Sparse    None       32
Cache settings:
Index Cache Setting: 1024
Index Cache Current Value: 1024
Data File Cache Setting: 32768
Data File Cache Current Value: 0
Data Cache Setting: 3072
Data Cache Current Value: 3049
I would appreciate any help on this. Thanks!

10 hours is not acceptable even for that many rows. For this discussion, I'll assume a BSO cube.
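(A quick note on the block size you mentioned: it follows directly from the two dense dimensions. Assuming the usual 8 bytes per cell, 245 Measures members x 27 Period members x 8 bytes = 52,920 bytes, which is exactly the figure you quoted, so the size is explained by the dense configuration rather than by anything being broken.)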
There are a few things to consider:
First, what is the order of the columns in your load rule? Can you post the SQL? Is the SQL sorted as it comes in? Optimal for a load would be to have your sparse dimensions first, followed by the dense dimensions (preferably with one of the dense dimensions as columns instead of rows), for example with your periods going across as Jan, Feb, Mar, and so on.
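For example, something along these lines in the source query would give you the sparse dimensions first, the dense Measures member as the last row column, the Period members going across, and pre-aggregated, sorted rows. The table and column names here are invented, so substitute your own; the single-member CALC dimension is easiest to handle with a fixed member name in the load rule.

SELECT product,                                          -- sparse
       geo_nm,                                           -- sparse
       campaign,                                         -- sparse
       segment,                                          -- sparse
       scenario,                                         -- sparse
       measure,                                          -- dense Measures member per row
       SUM(CASE WHEN period = 'Jan' THEN amount END) AS jan,
       SUM(CASE WHEN period = 'Feb' THEN amount END) AS feb,
       SUM(CASE WHEN period = 'Mar' THEN amount END) AS mar
       -- ...one column per remaining Period member...
  FROM weekly_sales_fact
 GROUP BY product, geo_nm, campaign, segment, scenario, measure
 ORDER BY product, geo_nm, campaign, segment, scenario;

The GROUP BY pre-aggregates duplicate intersections, the CASE pivot turns Period into columns (no ELSE, so empty months stay NULL rather than loading zeros), and the ORDER BY on the sparse columns keeps all the rows for one block together, so each block should only be touched once.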
Second, do you have parallel data loading turned on? Look in the config for DLTHREADSPREPARE and DLTHREADSWRITE. With multithreading you can get better throughput.
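A minimal sketch of what those entries might look like in essbase.cfg (the application/database names and thread counts below are placeholders, not a recommendation; size them to your CPUs and restart the Essbase agent after changing the file):

; essbase.cfg - hypothetical parallel data load settings
DLTHREADSPREPARE AppName DbName 3
DLTHREADSWRITE   AppName DbName 3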
Third, how does the data get loaded? Is there any summation of the data before it is loaded, or do you have the load rule set to additive? Doing the summation in SQL (a GROUP BY with SUM, as in the sketch above) would speed things up a lot, since each block would only get hit once.
I have also seen network issues cause this, as transferring this many rows can be slow (as Krishna said), and I have seen cases where the number of joins in the SQL caused massive delays in preparing the data. Out of interest, how long does the actual query take if you just execute it from a SQL tool?

Similar Messages

  • Optimize the data load process into BPC Cubes on BW

    Hello Gurus,
    We like to know how to optimize the data load process and our scenario for this is that we have ECC Classic Ledger,  and we are looking for the best way to load data into the BW Infocubes from an ECC source.
    To complement the question above, from what tables the data must be extracted and then parsed to BW so the consolidation it´s done ?  also, is there any other module that has to be considered from other modules like FI or EC-CS for this?
    Best Regards,
    Rodrigo

    Hi Rodrigo,
    Have you looked at the BW Business Content extractors available for the classic GL? If not, I suggest you take a look. BW business content provides all the business logic you will normally need to get data out of ECC and into BW for pretty much every ECC application component in existence: [http://help.sap.com/saphelp_nw70/helpdata/en/17/cdfb637ca5436fa07f1fdc0123aaf8/frameset.htm]
    Ethan

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Roll back data load

    Hi All,
    I am using essbase 7.1.5. Is there any way to rollback the database in case if it finds any error. For example if source file (Flat file) contains 1000 records where at 501 record essbase found an error and has rejected that record (501) and aborted the load. In this case is there a way to rollback the database before loading the 500 records.If so where should i do those settings?? Please advice.
    Thanks in advance.
    Hari

    A 6-hour data load? I've never heard of such a thing. It sounds to me like the data should be sorted for a more optimal data load.
    As for the two-stage approach, this assumes that you are loading variances (deltas) against the existing values, rather than replacing the data itself. It further assumes that you can create an input-level cube to handle the conversion. You have your base data in one scenario, and your variances/deltas in another. You can reload your deltas (as absolutes) any time, and derive the absolutes from the sum of the base and variance values.
    - Scenario
    -- Base (+) <--- this gets recalculated when the deltas are considered "good"
    -- Delta (+) <--- this gets loaded for changes only, and is reset to zero when the base is recalculated.
    You export the modified data from this cube to your existing/consolidation cube. If you can "pre-load" the changes (outside of your calc window for the main cube), you can optimize the calculation window -- although if it takes 6 hours to load the database your calc window is probably shot no matter what you do.
    However, if you mean that the load AND the calc together take 6 hours, and loading the data alone takes a relatively short time, this approach can be a performance enhancement: you can recalculate this "input cube" in seconds from a new, complete load, compared with the "in place" reset-and-reload-changes approach in your existing cube.
    You are essentially redirecting your data into a staging table, and the staging table handles the conversion of variances to absolutes so you can make the process more efficient overall (it is often more efficient to break the process up into smaller pieces).

  • Load Optimization

    Hi Experts,
    Can you please tell me the load optimization techniques in BW? I need to enhance the performance of a process chain which used to take less time for a particular load, but now takes more time even though only a few extra records are added on top of the previous ones.
    Please suggest what I can do.

    Hi,
    Please find below some of the performance optimization techniques which may help you:
    1) Increase the parallel processing during extraction.
    2) Selective loading.
    3) Every job has a priority (A, B, C - A being the highest and C being the lowest); choose this based on your scenario.
    4) Check with BASIS for the sizing of your server.
    5) We can increase the number of background processes during data loads by converting dialog processes into background processes.
    For this you need BASIS input; it is done in profile settings that make the system behave differently during loads (something like a day mode/night mode).
    6) There are some maintenance jobs that should run regularly in any SAP box to ensure proper functioning. Check them.
    7) Using start routines is preferred over update routines.
    Regards,
    KK.

  • Data load Tuning

    Hello All,
    What data load tuning can we possibly do from R/3 to BW? Please help.
    Thanks,
    Suman

    Hi,
    To improve the data load performance:
    1. If they are full loads, try to see if you can make them delta loads.
    2. Check if there are complex routines/transformations being performed in any layer. In that case, see if you can optimize that code with the help of an ABAPer.
    3. Ensure that you are following the standard procedures in the chain, such as deleting indexes/secondary indexes before loading, etc.
    For example:
    1) Create Index
    2) Delete Index
    3) Aggregate Creation on Info Cube
    4) Compressing Info Cube data
    5) Rollup Data to Aggregates
    6) Partitioning infoCube
    7) Load Master data before loading Transactional Data
    8) Adjusting Datapackage size
    https://forums.sdn.sap.com/click.jspa?searchID=10049032&messageID=4373697
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    4. Check whether the system processes are free when this load is running
    5. Try making the load as parallel as possible if the load is happening serially. Remove PSA if not needed.
    6. Go to Manage ODS -> Activate -> Activate in Parallel -> increase the number of processes from there. For direct access, try transaction RSODSO_SETTINGS.
    7. Uncheck the BEx Reporting check box in the ODS if it is not required.
    8. When the load is not getting processed due to a huge volume of data, or too many records per data packet, please try the options below.
    1) Reduce the IDoc size to 8000 and the number of data packets per IDoc to 10. This can be done in the InfoPackage settings.
    2) Run the load only to PSA.
    3) Once the load is successful, push the data to the targets.
    In this way you can overcome this issue.
    Also ensure proper data packet sizing, number range buffering, PSA partition size, and upload sequence, i.e. always load master data first, perform the change run, and then run the transaction data loads.
    Use InfoPackages with disjoint selection criteria to parallelize the data export.
    Complex database selections can be split to several less complex requests.
    Number Range Buffering Performance  
    /thread/754694
    Check OSS note 130253, and also review OSS note 857998; the first note tells you how to find the dimensions and InfoObjects that need number range buffering.
    Check this doc on BW data load performance optimization:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    BI Performance Tuning
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    SAP Business Intelligence Accelerator: A High-Performance Analytic Engine for SAP NetWeaver Business Intelligence
    http://www.sap.com/platform/netweaver/pdf/BWP_AR_IDC_BI_Accelerator.pdf
    BI Performance Audit
    http://www.xtivia.com/downloads/Xtivia_BIT_Performance%20Audit.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10564d5c-cf00-2a10-7b87-c94e38267742
    https://websmp206.sap-ag.de/~form/sapnet?_SHORTKEY=01100035870000689436&
    Thanks,
    JituK

  • Data load performance using infoset Vs View.

    Hi Guru,
    I am performing a generic extraction where I am loading data to a cube, but my DataSource is based on an InfoSet in R/3.
    It is taking 30 minutes to load ten lakh (1,000,000) records; ideally it should take 10 to 15 minutes, right?
    Can anybody suggest where I need to check to increase performance, or should I create the DataSource over a view and try to load the data? Will that help data load performance?
    thanks,
    ganesh.

    hi Ganesh,
    Primary Index ->
    When you create a database table in the ABAP Dictionary, you must specify the combination of fields that enables an entry within the table to be clearly identified. The key fields must be specified at the top of the table field list and marked as key fields. A minimum of 1 key field and a maximum of 16 key fields can be defined.
    When the table is activated, an index formed from all key fields is created on the database (with Oracle, Informix, DB2), in addition to the table itself. This index is called the primary index. The primary index is unique by definition.
    In addition to the primary index, you can define one or more secondary indexes for a table in the ABAP Dictionary and create them on the database. Secondary indexes can be unique or non-unique.
    If you dispatch an SQL statement from an ABAP program to the database, the program searches for the requested data records either in the database table itself (full table scan) or by using an index (index unique scan or index range scan). If all the requested fields are found in the index during an index scan, the table records do not need to be accessed.
    The index records are stored in the index tree and sorted by index field. This enables accelerated access using the index. The table records in the table blocks are not sorted.
    An index should not consist of too many fields. Having a few very selective fields increases the chance of reusability, and reduces the chance of the database optimizer selecting an unsuitable access path.
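    Purely as an illustration of what a secondary index is at the database level (the table and field names here are invented), it is just an ordinary non-unique index on the selection fields:

    -- illustrative only: a non-unique secondary index on two selective selection fields
    CREATE INDEX zsddata_sel ON zsddata (fiscper, werks);

    In an SAP system you would not normally run this directly; you create the index in the ABAP Dictionary, as described below, so that it is transported and administered properly.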
    To create an index:
    You have to use transaction SE11 in the Dev system.
    Enter the database table name and choose
    Display -> Indexes -> Create
    Enter the index name.
    Choose Maintain logon language.
    Enter a short description and the index fields.
    Then press Save and create the request to transport the index to QA and PRD. Then press Activate.
    Hope this helps,
    VA
    Edited by: Vishwa  Anand on Sep 29, 2010 12:50 PM

  • Data load problem - BW and Source System on the same AS

    Hi experts,
    I’m starting with BW (7.0) in a sandbox environment where BW and the source system are installed on the same server (same AS). The source system is the SRM (Supplier Relationship Management) 5.0.
    BW is working on client 001 while SRM is on client 100 and I want to load data from the SRM into BW.
    I’ve configured the RFC connections and the BWREMOTE users with their corresponding profiles in both clients, added a SAP source system (named SRMCLNT100), installed SRM Business Content, replicated the data sources from this source system and everything worked fine.
    Now I want to load data from SRM (client 100) into BW (client 001) using standard data sources and extractors. To do this, I’ve created an  InfoPackage in one standard metadata data source (with data, checked through RSA3 on client 100 – source system). I’ve started the data load process, but the monitor says that “no Idocs arrived from the source system” and keeps the status yellow forever.
    Additional information:
    BW Monitor Status:
    Request still running
    Diagnosis
    No errors could be found. The current process has probably not finished yet.
    System Response
    The ALE inbox of the SAP BW is identical to the ALE outbox of the source system
    and/or
    the maximum wait time for this request has not yet run out
    and/or
    the batch job in the source system has not yet ended.
    Current status
    No Idocs arrived from the source system.
    BW Monitor Details:
    0 from 0 records
    – but there are 2 records on RSA3 for this data source
    Overall status: Missing messages or warnings
    -     Requests (messages): Everything OK
    o     Data request arranged
    o     Confirmed with: OK
    -     Extraction (messages): Missing messages
    o     Missing message: Request received
    o     Missing message: Number of sent records
    o     Missing message: Selection completed
    -     Transfer (IDocs and TRFC): Missing messages or warnings
    o     Request IDoc: sent, not arrived ; Data passed to port OK
    -     Processing (data packet): No data
    Transactional RFC (SM58):
    Function Module: IDOC_INBOUND_ASYNCHRONOUS
    Target System: SRMCLNT100
    Date Time: 08.03.2006 14:55:56
    Status text: No service for system SAPSRM, client 001 in Integration Directory
    Transaction ID: C8C415C718DC440F1AAC064E
    Host: srm
    Program: SAPMSSY1
    Client: 001
    Rpts: 0000
    System Log (SM21):
    14:55:56 DIA  000 100 BWREMOTE  D0  1 Transaction Canceled IDOC_ADAPTER 601 ( SAPSRM 001 )
    Documentation for system log message D0 1 :
    The transaction has been terminated.  This may be caused by a termination message from the application (MESSAGE Axxx) or by an error detected by the SAP System due to which it makes no sense to proceed with the transaction.  The actual reason for the termination is indicated by the T100 message and the parameters.
    Additional documentation for message IDOC_ADAPTER        601 No service for system &1, client &2 in Integration Directory No documentation exists for message ID601
    RFC Destinations (SM59):
    Both RFC destinations look fine, with connection and authorization tests successful.
    RFC Users (SU01):
    BW: BWREMOTE with profile S_BI-WHM_RFC (plus SAP_ALL and SAP_NEW temporarily)
    Source System: BWREMOTE with profile S_BI-WX_RFCA (plus SAP_ALL and SAP_NEW temporarily)
    Could someone help?
    Thanks,
    Guilherme

    Guilherme
    I don't see any reason why it's not bringing the data in. Are you doing a full extraction or a delta? If a delta extraction, please check whether the extractor is delta-enabled or not. Sometimes this may cause problems.
    Also check this weblog on basic checks for data load errors; it may help:
    /people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
    Thanks
    Sat

  • BI 7.0 data load issue: InfoPackage can only load data to PSA?

    BI 7.0 backend extraction gurus,
    We created a generic datasource on R3 and replicated it to our BI system, created an InfoSource, the Transformation from the datasource to the InfoSource, an ODS, the transformation from the InfoSource to the ODS. 
    After the transformation between the InfoSource and the ODS is created on this BI system, a new folder called "Data Transfer Process" is also created under this ODS in the InfoProvider view.  In the Data Transfer Process, on the Extraction tab, we pick 'Full' in the Extraction Mode field; on the Execute tab there is an 'Execute' button. We click this button (note: so far we have not created an InfoPackage yet), which sounds like it should conduct the data load, but we find there is no data available even though all the statuses show green (we do have a couple of records in the R3 table).
    Then we tried to create an InfoPackage. On the Processing tab, we find the 'Only PSA' radio button is checked and all the others, like 'PSA and then into Data Targets (Package by Package)', are dimmed!  On the Data Target tab, the ODS cannot be selected as a target!  Also, there are some new columns in this tab: 'Maintain Old Update Rule' is marked with a red 'X', and under another column, 'DTP(S) are active and load to this target', there is an inactive icon, which is strange since we have already activated the Data Transfer Process!  Anyway, we started the data load in the InfoPackage, and the monitor shows the records are brought in, but since the 'Only PSA' radio button is checked on the Processing tab with all the others dimmed, no data goes into this ODS!  Why, in BI 7.0, can the 'Only PSA' radio button be checked with all the others dimmed?
    There are many new features in BI 7.0!  Anyone's ideas/experience on how to load data in BI 7.0 are greatly appreciated!

    You don't have to select anything.
    Once the data is loaded to the PSA, in the DTP you have the option of FULL or DELTA; FULL loads all the data from the PSA and DELTA loads only the last load of the PSA.
    Go through the links for lucid explanations:
    Infopackage -
    http://help.sap.com/saphelp_nw2004s/helpdata/en/43/03808225cf5167e10000000a1553f6/content.htm
    DTP
    http://help.sap.com/saphelp_nw2004s/helpdata/en/42/f98e07cc483255e10000000a1553f7/frameset.htm
    Creating DTP
    http://help.sap.com/saphelp_nw2004s/helpdata/en/42/fa50e40f501a77e10000000a422035/content.htm
    Prerequisite -
    You have used transformations to define the data flow between the source and target object.
    Creating transformations-
    http://help.sap.com/saphelp_nw2004s/helpdata/en/f8/7913426e48db2ce10000000a1550b0/content.htm
    Hope it Helps
    Chetan
    @CP..

  • Open Hub (SAP BW) to SAP HANA through DB Connection data loading; the "Delete data from table" option is not working. Please help, anyone from this forum

    Issue:
    I have a SAP BW system and a SAP HANA system.
    SAP BW connects to SAP HANA through a DB Connection (named HANA).
    Whenever I create an Open Hub destination of type DB Table using the DB Connection, the table is created at the HANA schema level (L_F50800_D).
    I executed the Open Hub service without checking the "Delete data from table" option.
    Data was loaded with 16 records from BW to HANA, the same as the source.
    The second time I executed it from BW to HANA, 32 records were there (it appends).
    Then I executed the Open Hub service with the "Delete data from table" option checked.
    Now I am getting a short dump: DBIF_RSQL_TABLE_KNOWN.
    If I check from SAP BW system to SAP BW system, it works fine.
    Is this option supported through a DB Connection or not?
    Please see the attachment along with this discussion and help me figure out how to resolve it.
    From
    Santhosh Kumar

    Hi Ramanjaneyulu ,
    First of all, thanks for the reply.
    Here the issue is at the OH level (definition level - Destination tab and Field Definition): there is a check box there which I have already selected, and that is exactly my issue - even though it is selected, the deletion is not performed at the target level.
    SAP BW to SAP HANA via DB connection:
    1. First time from BW, say 16 records - DTP executed - loaded to HANA - 16, the same.
    2. Second time executed again from BW - now the HANA side has appended, meaning 16 + 16 = 32.
    3. So I selected the check box at the OH level, i.e. "Delete data from table".
    4. Now when the DTP is executed, it throws a short dump - DBIF_RSQL_TABLE_KNOWN.
    Now please tell me how to resolve this. Is this option, i.e. deleting data from the table, applicable for HANA?
    Thanks
    Santhosh Kumar

  • Data load times

    Hi,
    I have a question regarding data loads. We have a process chain which includes 3 ODSs and a cube.
    Basically, ODS A gets loaded from R/3, and from ODS A it then loads into ODS B, ODS C, and CUBE A.
    So when I went to the monitor screen of this load (ODS A -> ODS B, ODS C, CUBE A), the total time shows as 24 minutes.
    We have some other steps in the process chain, such as ODS B -> ODS C and ODS C -> CUBE 1.
    When I go to the monitor screen of these data loads, the total time for the data loads shows as 40 minutes.
    I am surprised, because the total run time for the chain itself is 40 minutes, and the chain includes the data extraction from R/3, the ODS activations, indexes, and so on.
    Can anybody throw some light on this?
    Thank you all
    Edited by: amrutha pal on Sep 30, 2008 4:23 PM

    Hi All,
    I am not asking which steps need to be included in which chain.
    My question is: when I look at the process chain run time, it says the total time is 40 minutes, and when I go to RSMO to check the time taken for the data load from the ODS to the 3 other data targets, it also shows 40 minutes.
    The process chain also includes ODS activation, building indexes, and extracting data from R/3.
    So what are the times we see when we click on a step in the process chain and display its messages, and what is the time we see in RSMO?
    Let's take an example:
    In process chain A there is a step LOAD DATA from ODS A -> ODS B, ODS C, Cube A.
    When I right-click and display the messages for the successful load, it shows all the messages like:
    Job started.....
    Job ended.....
    The total time shown here is 15 minutes.
    When I go to RSMO for the same step, it shows 30 minutes....
    I am confused....
    Please help me???

  • Master data loading failed: error "Update mode R is not supported by the extraction API"

    Hello Experts,
    I load master data for 0Customer_Attr through a daily process chain, and it had been running successfully.
    For the last 2 days the master data load for 0Customer_Attr has failed, and it gives the following error message:
    "Update mode R is not supported by the extraction API"
    Can anyone tell me what that error means and how to resolve this issue?
    Regards,
    Nirav

    Hi
    The update mode R error comes up in the following case:
    You are running a delta (for master data) which fails due to some error. To resolve that error, you set the load to red and try to repeat it.
    This time the load fails with update mode R,
    because a repeat delta is not supported.
    So now the only thing you can do is re-init the delta (as described in the posts above) and then proceed. The earlier problem has nothing to do with update mode R.
    For example, say your first delta failed with a replication issue:
    replicating and repeating alone will not solve the update mode R error;
    you will have to do both, i.e. replicate the DataSource and re-init, to get past the update mode R error.
    One more thing I would like to add:
    if the delta that failed the first time (not with update mode R) had picked up records,
    then you have to do an init with data transfer;
    if it failed without picking up any records,
    then do an init without data transfer.
    Hope this helps
    Regards
    Shilpa
    Edited by: Shilpa Vinayak on Oct 14, 2008 12:48 PM

  • CALL_FUNCTION_CONFLICT_TYPE Standard Data loading

    Hi,
    I am facing a data loading problem using Business content on CPS_DATE infocube (0PS_DAT_MLS datasource).
    The R/3 extraction processes without any error, but the problem occurs in the update rules while updating the milestone date. Please find hereunder the log from the ST22.
    The really weird thing is that the process works perfectly in the development environment but not in the integration one (the patch levels are exactly the same: BW 3.5 Patch #16).
    I apologise for the long message below... this is a part of the system log.
    For information, routine_0004 is a standard one.
    Thanks a lot in advanced!
    Cheers.
       CALL_FUNCTION_CONFLICT_TYPE                                                 
    Except.                CX_SY_DYN_CALL_ILLEGAL_TYPE                             
    Symptoms.                                                Type conflict when calling a function module
    Causes                                                        Error in ABAP application program.   
         The current ABAP program "GP420EQ35FHFOCVEBCR6RWPVQBR" had to be terminated because one of the statements could not be executed.                                 
         This is probably due to an error in the ABAP program.                                 
         A function module was called incorrectly.
    Errors analysis
         An exception occurred. This exception is dealt with in more detail below                      
         . The exception, which is assigned to the class 'CX_SY_DYN_CALL_ILLEGAL_TYPE', was neither caught nor passed along using a RAISING clause, in the procedure "ROUTINE_0004"               
         "(FORM)" .                                    
        Since the caller of the procedure could not have expected this exception                      
         to occur, the running program was terminated.                                                  The reason for the exception is:     
        The call to the function module "RS_BCT_TIMCONV_PS_CONV" is incorrect:
    The function module interface allows you to specify only fields of a particular type under "E_FISCPER".
        The field "RESULT" specified here is a different field type.
    How to correct the error.
      You may able to find an interim solution to the problem in the SAP note system. If you have access to the note system yourself, use the following search criteria:                                
        "CALL_FUNCTION_CONFLICT_TYPE" CX_SY_DYN_CALL_ILLEGAL_TYPEC                                    
        "GP420EQ35FHFOCVEBCR6RWPVQBR" or "GP420EQ35FHFOCVEBCR6RWPVQBR"                                
        "ROUTINE_0004"                                                                               
        If you cannot solve the problem yourself and you wish to send                                 
        an error message to SAP, include the following documents:                                  
        1. A printout of the problem description (short dump)                                         
           To obtain this, select in the current display "System->List->                              
           Save->Local File (unconverted)".                                              2. A suitable printout of the system log To obtain this, call the system log through transaction SM21.  Limit the time interval to 10 minutes before and 5 minutes  after the short dump. In the display, then select the function                                    
           "System->List->Save->Local File (unconverted)".                                       
        3. If the programs are your own programs or modified SAP programs, supply the source code.               
           To do this, select the Editor function "Further Utilities->  Upload/Download->Download".                                        
        4. Details regarding the conditions under which the error occurred                            
           or which actions and input led to the error.                                               
        The exception must either be prevented, caught within the procedure                           
         "ROUTINE_0004"                                    
        "(FORM)", or declared in the procedure's RAISING clause.                                      
        To prevent the exception, note the following:                                    
    Environment system SAP Release.............. "640"
    Operating system......... "SunOS"   Release.................. "5.9"
    Hardware type............ "sun4u"
    Character length......... 8 Bits     
    Pointer length........... 64 Bits             
    Work process number...... 2        
    Short dump setting....... "full"   
    Database type............ "ORACLE" 
    Database name............ "BWI"  
    Database owner........... "SAPTB1"  
    Character set............ "fr" 
    SAP kernel............... "640"   
    Created on............... "Jan 15 2006 21:42:36"   Created in............... "SunOS 5.8 Generic_108528-16 sun4u" 
    Database version......... "OCI_920 " 
    Patch level.............. "109"    
    Patch text............... " "        
    Supported environment....     
    Database................. "ORACLE 9.2.0.., ORACLE 10.1.0.., ORACLE 10.2.0.."             
    SAP database version..... "640"
    Operating system......... "SunOS 5.8, SunOS 5.9, SunOS 5.10"
    SAP Release.............. "640"  
    The termination occurred in the ABAP program "GP420EQ35FHFOCVEBCR6RWPVQBR" in      
         "ROUTINE_0004". 
       The main program was "RSMO1_RSM2 ".  
    The termination occurred in line 702 of the source code of the (Include)           
         program "GP420EQ35FHFOCVEBCR6RWPVQBR"                                             
        of the source code of program "GP420EQ35FHFOCVEBCR6RWPVQBR" (when calling the editor 7020).
       Processing was terminated because the exception "CX_SY_DYN_CALL_ILLEGAL_TYPE" occurred in the procedure "ROUTINE_0004" "(FORM)" but was not handled locally, not declared in  the RAISING clause of the procedure. 
        The procedure is in the program "GP420EQ35FHFOCVEBCR6RWPVQBR ". Its source code starts in line 685 of the (Include) program "GP420EQ35FHFOCVEBCR6RWPVQBR ".
    672      'ROUTINE_0003' g_s_is-recno 
    673      rs_c_false rs_c_false g_s_is-recno  
    674      changing c_abort.   
    675     catch cx_foev_error_in_function. 
    676     perform error_message using 'RSAU' 'E' '510'  
    677             'ROUTINE_0003' g_s_is-recno
    678             rs_c_false rs_c_false g_s_is-recno
    679             changing c_abort.
    680   endtry.              
    681 endform.
    682 ************************************************************************ 
    683 * routine no.: 0004
    684 ************************************************************************ 
    685 form routine_0004 
    686   changing 
    687   result  type g_s_hashed_cube-FISCPER3
    688   returncode     like sy-subrc 
    689     c_t_idocstate  type rsarr_t_idocstate
    690     c_subrc        like sy-subrc 
    691     c_abort        like sy-subrc. "#EC *  
    692   data:
    693     l_t_rsmondata like rsmonview occurs 0 with header line. "#EC * 
    694                    
    695  try.             
    696 * init
    variables 
    697   move-corresponding g_s_is to comm_structure.
    698                     
    699 * fill the internal table "MONITOR", to make monitor entries  
    700                     
    701 * result value of the routine
    >>>>    CALL FUNCTION 'RS_BCT_TIMCONV_PS_CONV'  
    703          EXPORTING      
    704               I_TIMNM_FROM       = '0CALDAY'  
    705               I_TIMNM_TO         = '0FISCPER'  
    706               I_TIMVL            = COMM_STRUCTURE-CALDAY
    707               I_FISCVARNT        = gd_fiscvarnt
    708          IMPORTING 
    709               E_FISCPER          = RESULT.                               
    710 * if the returncode is not equal zero, the result will not be updated 
    711   RETURNCODE = 0. 
    712 * if abort is not equal zero, the update process will be canceled
    713   ABORT = 0.
    714              
    715   catch cx_sy_conversion_error   
    716         cx_sy_arithmetic_error.
    717     perform error_message using 'RSAU' 'E' '507'
    718             'ROUTINE_0004' g_s_is-recno
    719             rs_c_false rs_c_false g_s_is-recno
    720             changing c_abort.
    721   catch cx_foev_error_in_function.
    System zones content
    Name                Val.                                                                               
    SY-SUBRC           0                                         
    SY-INDEX           2                                         
    SY-TABIX           0                                         
    SY-DBCNT           0                                         
    SY-FDPOS           65                                        
    SY-LSIND           0                                         
    SY-PAGNO           0                                         
    SY-LINNO           1                                         
    SY-COLNO           1                                         
    SY-PFKEY           0400                                      
    SY-UCOMM           OK                                        
    SY-TITLE           Moniteur - Atelier d'administration       
    SY-MSGTY           E                                         
    SY-MSGID           RSAU                                      
    SY-MSGNO           583                                       
    SY-MSGV1           BATVC  0000000000                         
    SY-MSGV2           0PROJECT                                  
    SY-MSGV3                                           
    SY-MSGV4                                           
    Selected variables
    Nº       23 Tpe          FORM
    Name    ROUTINE_0004           
    GD_FISCVARNT                                 
        22
        00 RS_C_INFO                                                      I
          4
          9                                
    COMM_STRUCTURE-CALDAY
    20060303
    33333333
    20060303  
    SYST-REPID GP420EQ35FHFOCVEBCR6RWPVQBR   4533345334444454445355555452222222222222 704205135686F365232627061220000000000000
    RESULT
    000
    333
    00

    You have an update routine in which you are calling FM 'RS_BCT_TIMCONV_PS_CONV'. Parameter E_FISCPER must have the same type as the variable you use (you can see the data type in the FM definition, transaction SE37). You should do something like the following:
    DATA: var TYPE <the same type as E_FISCPER in the FM definition>.
    CALL FUNCTION 'RS_BCT_TIMCONV_PS_CONV'
      EXPORTING
        I_TIMNM_FROM = '0CALDAY'
        I_TIMNM_TO   = '0FISCPER'
        I_TIMVL      = COMM_STRUCTURE-CALDAY
        I_FISCVARNT  = gd_fiscvarnt
      IMPORTING
        E_FISCPER    = var.
    result = var.
    --- Assigning points is useful.

  • Data load stuck from DSO to Master data Infoobject

    Hello Experts,
    We have an issue where a data load is stuck between a DSO and a master data InfoObject.
    Data is uploaded from the DSO (standard) to the master data InfoObject.
    This InfoObject has display and navigational attributes, which are mapped from the DSO to the InfoObject.
    Now we have added a new InfoObject as an attribute of the master data InfoObject and made it a navigational attribute.
    Now when we do a full load via the DTP, the load is stuck and does not process.
    Earlier it took only 5 minutes to complete the full load.
    Please advise what could be the reason and cause behind this.
    Regards,
    santhosh.

    Hello guys,
    Thanks for the quick response.
    But it is not proceeding any further.
    The request is still running.
    Earlier this same data loaded in 5 minutes.
    Please find the screen shot.
    Master data for the InfoObjects is loaded as well.
    I can see in SM50 that the process is sitting at the P table of the InfoObject.
    Please advise.
    Please find the details:
    Updating attributes for InfoObject YCVGUID
    Start of Master Data Update
    Check Duplicate Key Values
    Check Data Values
    Process time dependent attributes- green.
    No Message: Process Time-Dependent Attributes- yellow
    No Message: Generates Navigation Data- yellow
    No Message: Update Master Data Attributes - yellow
    No Message: End of Master Data Update - yellow
    and nothing is going further in SM37.
    Thanks,
    Santhosh.

  • How to create a report in bex based on last data loaded in cube?

    I have to create a query with a predefined filter based on the "latest SAP date", i.e. the user only wants to see the very latest situation from the last load. The report should show only the latest inventory stock situation from the last load. As I'm new to BEx, I am not able to find a way to achieve this. Is there any time characteristic which holds the last update date of a cube? Please help and suggest how to achieve this.
    Thanks in advance.

    Hi Rajesh.
    Thanks for your suggestion.
    My requirement is a little different. I build the query on a MultiProvider, and I want to see the latest record in the report based only on the latest date (not the system date) on which data was last loaded into the cube. This date (when the cube was last loaded with data) is not populated from any data source. I guess I have to add the "0TCT_VC11" cube to my MultiProvider to fetch the date when my cube was last loaded with data. Please correct me if I'm wrong.
    Thanks in advance.
