Initial Load performs deletion TWICE!!

Hi All,
I am facing a very peculiar issue. I started an initial load on a condition object. On the R/3 side there are about 3 million records. The load starts:
1) First it deletes all the records in CRM (the count becomes 0).
2) Then it starts inserting the new records (the records get inserted and the count reaches 3 million).
In R3AM1 the status of this adapter object (DNL_COND_A006) changes to "DONE"!!
Now comes the problem:
There are still some queue entries which again start deleting the entries from the condition table, and the count starts reducing until the record count becomes 0 again in the condition table!
Then it starts inserting again, and the entire load stops after inserting 1.9 million records! This is very strange; any pointers will be helpful.
I also checked whether the mapping module is maintained twice in CRM, but that is not the case. Since the initial load takes more than a day, I checked whether there are any jobs scheduled, but there are no jobs scheduled either.
I am really confused as to why the deletion should happen twice. Any pointers will be highly appreciated.
Thanks,
Abishek

Hi Abishek,
This is really strange and I do not have any clue. What I can suggest is that before you start the load of DNL_COND_A006, load the CNDALL & CND objects again. Sometimes reloading CNDALL resolves this kind of issue.
Good luck.
Vikash.

Similar Messages

  • Initial Load Performance Decrease

    Hi colleagues,
    We noticed a huge decrease in initial load performance after installing an
    application on the PDA.
    In our first test we downloaded one data object of nearly 6.6 MB, which
    corresponds to 30,000 records with eight fields each. The initial load
    to the PDA took only 2 minutes.
    We performed a second test with the same PDA after a reinstallation and a
    new device ID. The difference here is that we installed an MI
    application related to the same data object. The same amount of data was sent
    to the PDA. It took 3 hours to download it.
    In a third test we changed the application so that it no longer had the
    related data object assigned to it. In this case, the download took 2
    minutes again.
    In other words, if we have an application with the data object
    assigned, it results in a huge decrease in initial load performance.
    In both cases we used a direct connection to our LAN.
    Here are our PDA specs:
    - Windows Mobile 6 Classic
    - Processor: Marvell PXA310, 624 MHz
    - 64 MB RAM, 256 MB flash ROM (190 MB available to the user)
    Any similar experiences?
    Thanks.
    Edited by: Renato Petrulis on Jun 1, 2010 4:15 PM

    I am confused about downloading a data object with no application.
    I thought you could only download data if it is associated with a Mobile Component; I guess you just assign the DMSCV manually?
    In any case, I have only experienced your second scenario when we were downloading an application with a mobile component without packaging of messages. We had maybe a few thousand records to download and process, and it would take an hour or more.
    When we enabled packaging, it would take 15-30 minutes.
    Then I went with Create Setup Package, because it was simpler to install the application and data together with no corruption and no failures where the DMSCV does not go operational and does not send data, etc. It was also a faster download, using either FTP or ActiveSync to transfer the install files.

  • Golden Gate Initial Load - Performance Problem

    Hello,
    I'm using the fastest method of initial load, Direct Bulk Load, with additional parameters:
    BULKLOAD NOLOGGING PARALLEL SKIPALLINDEXES
    Unfortunately, the load of a big table with 734 million rows (around 30 GB) takes about 7 hours. The same table loaded with a normal INSERT statement in parallel via DB link takes 1 hour 20 minutes.
    Why does it take so long using GoldenGate? Am I missing something?
    I've also noticed that the load time with and without the PARALLEL parameter for BULKLOAD is almost the same.
    Regards
    Pawel

    Hi Bobby,
    It's an Extract / Replicat using SQL*Loader.
    Created with the following commands:
    ADD EXTRACT initial-load_Extract, SOURCEISTABLE
    ADD REPLICAT initial-load_Replicat, SPECIALRUN
    The Extract parameter file:
    USERIDALIAS {:GGEXTADM}
    RMTHOST {:EXT_RMTHOST}, MGRPORT {:REP_MGR_PORT}
    RMTTASK replicat, GROUP {:REP_INIT_NAME}_0
    TABLE Schema.Table_name;
    The Replicat parameter file:
    REPLICAT {:REP_INIT_NAME}_0
    SETENV (ORACLE_SID='{:REPLICAT_SID}')
    USERIDALIAS {:GGREPADM}
    BULKLOAD NOLOGGING NOPARALLEL SKIPALLINDEXES
    ASSUMETARGETDEFS
    MAP Schema.Table_name, TARGET Schema.Table_tgt_name,
    COLMAP(USEDEFAULTS),
    KEYCOLS(PKEY),
    INSERTAPPEND;
    Regards,
    Pawel

  • Improving initial load performance.

    Hi ,
    Please let me know the setup and prerequisites required for running parallel requests, so as to speed up the connection object download.
    I need to download Connection Objects and Points of Delivery from IS-U to CRM. Is there any other way to improve the performance?
    Regards,
    Rahul

    Hello,
    Could you please tell us more about your scenario? Using the connection object ID, it may not be easy to start many requests in parallel, as this field is alphanumeric if I remember correctly... meaning that a range between 1 and 2 will include 10, 11, 100, etc.
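    (A tiny illustrative sketch, not part of the original reply and with made-up values, of why character values sort this way:)
    * Hypothetical example: connection object IDs handled as character values
    TYPES ty_id TYPE c LENGTH 10.
    DATA lt_ids TYPE STANDARD TABLE OF ty_id.
    APPEND '1'   TO lt_ids.
    APPEND '2'   TO lt_ids.
    APPEND '10'  TO lt_ids.
    APPEND '100' TO lt_ids.
    SORT lt_ids.
    * Sorted order: 1, 10, 100, 2 - so a selection range from '1' to '2'
    * also picks up 10, 11, 100, and so on.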
    That's why within a migration process SAP introduced a new concept (via table ECRM_TEMP_OBJ) to replicate into CRM only those connection objects that are not already there. This is explained on page 12 of the cookbook. Furthermore, as far as replication performance is concerned, I highly recommend reading these OSS notes carefully (they are valid for IS-U technical objects as well):
    Note 350176 - CRM/EBP: Performance improvement during exchange of data
    Note 426159 - Adapter: Running requests in parallel
    Regards,
    Nicolas Busson.

  • Initial Load Error - No generation performed. Call transaction GN_START

    Hi Folks,
    We are doing middleware configuration for data migration between R/3 and CRM. We have followed the "Best Practices" configuration guide.
    Systems used: CRM 2007 and ECC 6.0
    Issue
    While performing the initial load, the system throws the errors:
    001 - No generation performed. Call transaction GN_START
    002 - Due to system errors the Load is prohibited (check transaction MW_CHECK)!
    After calling transaction GN_START, the system asks for job scheduling, whereas I have already scheduled it:
    A job is already scheduled periodically.
    Clicking on 'Continue' will create another job
    that starts immediately.
    After checking (MW_CHECK), the message displayed is
    No generation performed. Call transaction GN_START.
    If anybody has encountered a similar issue and resolved it, their guidance will be greatly appreciated.
    Thanks in Advance
    VEERA B

    Veera,
    We also faced the same problem when we did the upgrade from CRM 4.0 to CRM 2007.
    Go to SMWP, where you can see all the errors related to the Middleware together with the error messages, and try to remove those errors.
    Also please check RZ20 and activate the middleware trace tree.
    Regards
    Vinod

  • Deletion of initial load

    Hi experts,
    I need to delete an initial load request. Where do I have to delete this request, and how? Can anyone guide me?
    Regards,
    Harikrishna N

    What exactly do you want to do?
    Delete the data loaded from the initial request, or redo the initialization by deleting the init request setting?
    In the first case, go to your data target, right-click, choose Manage, find the init request and delete it. There might be some impact if another data target is attached to this data target and a delta is running between the two.
    In the second case, open your InfoPackage and, in the menu, choose Scheduler -> Initialization Options for Source Systems; delete that request and re-run the init request.

  • Perform rollback occurs during initial load of material

    Hi Gurus,
    When we try to do the initial load of materials, only some of the materials are replicated to SRM. We have an R3AC1 filter to take only the materials with a Purchasing view. We have no other filter. Although there are 576 materials that match this filter, only 368 materials are replicated to SRM.
    One thing we have observed is that when we have a look at SM21 (System Log) we see "Perform rollback" actions. Below are the details of the log. Can anyone help with our issue?
    Details (Page 2, Line 30, System Log: Local Analysis of sapsrmt):
    23:52:59, type DIA, work process no. 003, client 013, user ALEREMOTE, message ID R6 8, text "Perform rollback"
    Recorded at local and central time 29.11.2006 23:52:59, task 87262, dialog work process no. 003, user ALEREMOTE, program SAPMSSY1, problem class W (Warning), package STSK
    Further details: module thxxhead, line 1300, caller ThIRoll, reason/call "roll ba"
    No documentation for syslog message R6 8 exists
    Technical details: file 4, offset 456660, record format m, variable message data "ThIRollroll bathxxhead1300"

    Hi,
    Some of our material groups were problematic. After removing these, the problem was resolved.
    FYI

  • Initial Load of Business Partner not running

    Dear SAP CRM gurus,
    We have been able to perform the initial download of Business Partners from ECC into our CRM system. We have done this many times. We do not know what is wrong, but since last week we have been unable to perform the initial download of our Business Partners. When we run the initial download using R3AS, no BDoc is created, and there are also no queues on the inbound/outbound side of either the CRM or the ECC system. There is also no error. R3AM1 shows the initial download as complete, but only with 1 block, and there is no BDoc created!! All other replication objects are fine; it is only for BUPA_MAIN that we are unable to perform the initial download. Delta download is fine as well.
    We have not changed anything in SMOEAC and it is all correct. The entries in CRMSUBTAB and CRMC_BUT_CALL_FU are also correct.
    Please help!!

    Hi,
    When you download CUSTOMER_MAIN through R3AS, do you get any warning or error,
    or do you get a pop-up with a green light?
    If you get a warning or error, go to transaction SMWP, then go to Runtime Information
    -> Adapter Status Information -> Initial Load Status.
    Under that, check the running objects and whether CUSTOMER_MAIN is there or not.
    If it is found, delete that entry and do the initial load again.
    Also check the outbound queue of R/3 and the inbound queue of CRM.
    If it is still not working, do a request download using R3AR2, R3AR3 and R3AR4 and check whether that works.
    If helpful, kindly reward me.
    Thanks & Regards,
    Anirban

  • GRC-IDM initial load job not enriching one system's privs

    Hi GRC Experts,
    We have integrated IDM 7.1 and GRC 5.3 and tested provisioning to one target system in DEV; this worked perfectly. When testing a similar configuration in Quality, we were setting up the system and had to run the IDM-GRC Initial Load job in order to enrich the imported privileges for use with GRC AC 5.3. In the Quality system, instead of connecting to just 1 target system, we have connected to 5 ABAP systems: ECC, PI, POSDM, BW & SRM. For some strange reason, when performing the GRC-IDM Initial Load job, the privileges of 4 of the target systems get enriched, while the ECC system's privileges are not enriched. Through random sampling I would say all ECC profiles are getting enriched, but none of the ECC privileges are. Why could this be happening? We've tried running the ECC Initial Load job and then the GRC-IDM Initial Load job about 8-10 times, but with no luck; the set of privileges we're investigating is still not enriched. We also ran the GRC CUP role load job, selecting the option to overwrite all existing roles in the system; via this method the CUP roles have been refreshed twice so far. But running the GRC-IDM Initial Load job even after refreshing the ECC system's privileges in CUP has had no effect whatsoever: all ECC privileges are still left to be enriched, yet strangely enough the ECC profiles have been enriched.
    Any clues as to why this could be happening? We've checked and re-checked, and there is no filtering or delta being applied to any of the passes, so it really makes no sense. Is there something we should be doing apart from what we've already done? Would greatly appreciate your help with this!
    Thanks a lot in advance!
    Best regards,
    Sandeep

    What you could do is simply add the attributes to the privileges via a background job. This works fine in most cases. You need to be sure that GRC knows the role, and then it is fine. The load only adds these two attributes and does nothing of any deeper complexity.
    MX_AC_ROLEID = <rolename>
    MX_APPLICATION_ID = <system name>

  • Improve data load performance using ABAP code

    Hi all,
    I want to improve my load performance using ABAP code; how can I do this? If I write ABAP code in SE38, how can I call it on the BW side? Sample code to improve load performance would be useful. Please guide me.

    There are several points that can improve performance of your ABAP code:
    1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
    2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
    3. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
    4. Avoid using nested SELECTs and SELECT statements within LOOPs.
    5. Avoid using INTO CORRESPONDING FIELDS OF. Instead use INTO TABLE.
    6. Avoid using SELECT * and select only the required fields from the table.
    7. Avoid Executing a SELECT multiple times in the program.
    8. Avoid nested loops when working with large internal tables.
    9. Whenever using READ TABLE, use the BINARY SEARCH addition to speed up the search.
    10. Use FIELD-SYMBOLS instead of a work area when there are more than 200 entries in an internal table where some fields are being manipulated.
    11. Use MOVE with individual variable/field moves instead of MOVE-CORRESPONDING.
    12. Use CASE instead of IF/ENDIF whenever possible.
    13. Runtime transaction code se30 can be used to measure the application performance.
    14. Transaction code st05 can be used to analyse the SQL trace and measure the performance of the select statements of the program.
    15. Start routines can be used when transformation is needed in the data package level. Field/individual routines can be used for a simple formula or calculation. End routines are used when you wish to populate data not present in the source but present in the target.
    16. Always use a WHERE clause for DELETE statement. To delete records for multiple values, use SELECT-OPTIONS.
    17. Always use IS INITIAL instead of comparing to '', because the initial value of a character field is '' (space) while that of an integer is 0.
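    For illustration, here is a minimal ABAP sketch of points 1, 2, 6, 9 and 10 above; the table VBAK, the field list and the selection values are only example assumptions, not part of the original reply:
    * Tips 1/2/6: one array fetch with a WHERE clause, selecting only the needed fields
    TYPES: BEGIN OF ty_order,
             vbeln TYPE vbak-vbeln,
             erdat TYPE vbak-erdat,
             auart TYPE vbak-auart,
           END OF ty_order.
    DATA: lt_orders TYPE STANDARD TABLE OF ty_order,
          ls_order  TYPE ty_order.
    FIELD-SYMBOLS: <fs_order> TYPE ty_order.
    SELECT vbeln erdat auart
      FROM vbak
      INTO TABLE lt_orders
      WHERE erdat >= '20100101'.
    * Tip 9: sort once, then read with BINARY SEARCH
    SORT lt_orders BY vbeln.
    READ TABLE lt_orders INTO ls_order
         WITH KEY vbeln = '0000012345' BINARY SEARCH.
    * Tip 10: change rows via a field symbol instead of copying to a work area
    LOOP AT lt_orders ASSIGNING <fs_order>.
      <fs_order>-auart = 'ZOR'.
    ENDLOOP.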
    Hope it helps.

  • Loading performance of the infocube & ODS ?

    Hi Experts,
    Do we need to turn off the aggregates on the InfoCubes before loading so that it decreases the loading time, or does it not matter at all? I mean, if we have aggregates created on the InfoCube, is that going to affect the loading of the cube in any way? Also, please let me know a few tips to increase the loading performance of a cube/ODS. Some of them are:
    1. Delete indexes and re-create them after loading.
    2. Run parallel processes.
    3. Compression of the InfoCube; how does compressing an InfoCube decrease the loading time?
    Please throw some light on the loading performance of the cube/ODS.
    Thanks,

    Hi Daniel,
    Aggregates do not affect data loading. Aggregates are just views, similar to an InfoCube.
    As you mentioned some performance tuning options while loading data:
    Compression is a bit like archiving the InfoCube data: once compressed, the data cannot be decompressed, so you need to ensure the data is correct before compressing. When you compress the data, you will have some free space available, which will improve data loading performance.
    Other than the above options:
    1. If you have routines written at the transformation level, check whether they are tuned properly.
    2.PSA partition size: In transaction RSCUSTV6 the size of each PSA partition can be defined. This size defines the number of records that must be exceeded to create a new PSA partition. One request is contained in one partition, even if its size exceeds the user-defined PSA size; several packages can be stored within one partition.
    The PSA is partitioned to enable fast deletion (DDL statement DROP PARTITION). Packages are not deleted physically until all packages in the same partition can be deleted.
    3. Export DataSource: The Export DataSource (or Data Mart interface) enables the data population of InfoCubes and ODS objects out of other InfoCubes.
    The read operations of the export DataSource are single threaded (i.e. sequential). Note that during the read operations, depending on the complexity of the source InfoCube, the initial time before data is retrieved (i.e. parsing, reading, sorting) can be significant.
    The posting to a subsequent DataTarget can be parallelized by ROIDOCPRMS settings for the "myself" system. But note that several DataTargets cannot be populated in parallel; there is only parallelism within one DataTarget.
    Hope it helps!!!
    Thanks,
    Lavanya.

  • JSF page 'Initial load' problem

    I've found several threads touching on this already, but none seem to have a solution.
    When JSF loads a JSP page for the first time, it goes through the restore view phase which creates an initial view (as there isn't a current one to restore). It then goes directly to the render response phase.
    My problem is that I have a JSP/JSF page that I pass parameters to via HTTP GET. For example:
    http://localhost:8080/jsf/region.jsp?locationForm:directorate=1&locationForm=locationForm
    Because the first load goes directly to the render response phase, the parsing of these parameters is never done and the page does not update as expected.
    The second time you perform the same request, JSF goes through the standard request processing lifecycle and works as you would expect, setting directorate to 1 in the backing bean and displaying an updated page.
    Is there any way to change JSF's default behaviour on a JSP initial load so that it runs the whole lifecycle? Is there another way to get around this, short of loading the page twice to ensure it has the right information in it (which would be quite a hack)?
    I need to use HTTP GET (as opposed to HTTP POST) because:
    I'm using a technique where a hidden iframe loads dynamically created JavaScript to update a dropdown list (DDL) on the main page without reloading the page in its entirety. This is to minimise network chatter, as the system will be run on a 56k network. I have an onchange event on my JSF DDL that calls JavaScript to reload the hidden iframe.

    Thanks for the replies.
    I tried both of the suggested options
    1. If your bean is managed (declared as a managed bean in faces-config.xml), you can set the initial value of the property as, for example, #{param.locationFor }.
    Unfortunately I can't use this option, as the backing bean I'm using has to be session scoped. This is because the DDL options are set by the iframe page, not the main page. There could be many requests/responses between client and server before the user finally presses the submit button. If I change the backing bean to request scope, I end up getting "Validation Error: Value is not valid" for the DDL, because the selected ID is not in the backing bean's list of possible values for the DDL. #{param} can't be used for session-scoped backing beans.
    2. If you don't want to use the managed bean properties, you can go get your parameters in your bean's constructor.
    I'm unable to use this option either. The backing bean is shared between the main page and the hidden iframe page. When the main page loads, the backing bean's constructor is called, but that isn't the time when the parameters need to be parsed. When the iframe page is loaded for the first time (via a JavaScript onchange on a DDL on the main page) using http://localhost/iframe.jsf?iframeForm:ddlId=1&iframeForm=iframeForm is when I need to parse the parameters, by which time the backing bean is already instantiated and the constructor has already been called.
    I'm looking at where else I could get the parameters other than the constructor. I might be able to do it elsewhere.
    My guess as to why the following code works is that it's not using a backing bean and isn't updating backing bean values on the first run:
    <f:view>
    <h:outputText value="param= #{param}"/>
    </f:view>
    To replicate the problem, create a simple backing bean, for example:
    public class sample {
        private Integer selectedId;
        public Integer getSelectedId() {
            return selectedId;
        }
        public void setSelectedId(Integer selectedId) {
            this.selectedId = selectedId;
        }
    }
    Then create the following sample.jsp:
    <!doctype html public "-//w3c//dtd html 4.01 transitional//en">
    <!--
      Copyright 2004 ArcMind, Inc. All Rights Reserved.
    -->
    <%@taglib uri="http://java.sun.com/jsf/html" prefix="h"%>
    <%@taglib uri="http://java.sun.com/jsf/core" prefix="f"%>
    <html>
    <head>
    <f:view>
      <h:form id="iframeForm">
        <h:panelGroup>
          <h:inputText id="selectedId" value="#{sample.selectedId}" />
        </h:panelGroup>
      </h:form>
    </f:view>
    </head>
    </html>
    Then try going to sample.jsp?iframeForm:selectedId=10&iframeForm=iframeForm (similar to the request my main page makes via JavaScript to populate the hidden iframe).
    The first time you do this, the text box will be populated with 0 (i.e. it skipped the JSF lifecycle and ignored your input of 10). The second and subsequent times it works as expected, with the text box containing the number 10.

  • How to Improve DSO loading performance

    Hello,
    I have a DSO with 3 InfoSources. This DSO is generic, meaning it is based on generic DataSources. Daily we have a full upload (the last 2 months of data). Initially it took around 55 minutes to load the data, but nowadays it takes 2.5 hours daily.
    Can you please tell me how I can improve the performance, in other words how I can reduce the time?
    Please give some solution or documents to resolve this.
    amit

    Hi,
    General tips you can try to improve the data load performance:
    1. If they are full loads, then see if you can make them delta loads.
    2. Check if there are complex routines/transformations being performed in any layer. In that case, see if you can optimize that code with the help of an ABAPer.
    3. Ensure that you are following the standard procedures in the chain like deleting Indices/secondary Indices before loading etc.
    4. Check whether the system processes are free when this load is running
    5. Try making the load as parallel as possible if the load is happening serially. Remove PSA if not needed.
    6. Go to Manage ODS -> Activate -> Activate in parallel, and increase the number of processes from there. For direct access, try transaction RSODSO_SETTINGS.
    7. Remove the BEx Reporting checkbox in the ODS if not required.
    Ensure proper data packet sizing and also the number range buffering, PSA partition size and upload sequence, i.e. always load master data first, perform the change run and then the transaction data loads.
    Use InfoPackages with disjoint selection criteria to parallelize the data export.
    Complex database selections can be split to several less complex requests.
    Check this doc on BW data load performance optimization:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    BI Performance Tuning
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    SAP Business Intelligence Accelerator: A High-Performance Analytic Engine for SAP NetWeaver Business Intelligence
    http://www.sap.com/platform/netweaver/pdf/BWP_AR_IDC_BI_Accelerator.pdf
    BI Performance Audit
    http://www.xtivia.com/downloads/Xtivia_BIT_Performance%20Audit.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10564d5c-cf00-2a10-7b87-c94e38267742
    ODS Query Performance  
    Thanks,
    JituK

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines (see the sketch after this list). Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
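    Picking up point 11 above, here is a minimal, hypothetical ABAP sketch of replacing per-record single selects in a transfer/update routine with one buffered array read. The lookup table ZMAT_ATTR and the MATERIAL/MATGROUP fields assumed in the DATA_PACKAGE structure are illustrative assumptions only, not part of the original reply:
    TYPES: BEGIN OF ty_attr,
             material TYPE c LENGTH 18,
             matgroup TYPE c LENGTH 9,
           END OF ty_attr.
    DATA: lt_attr TYPE STANDARD TABLE OF ty_attr,
          ls_attr TYPE ty_attr.
    FIELD-SYMBOLS: <fs_rec> LIKE LINE OF DATA_PACKAGE.
    * One array fetch for the whole data package instead of a SELECT SINGLE per record
    IF NOT DATA_PACKAGE[] IS INITIAL.
      SELECT material matgroup
        FROM zmat_attr
        INTO TABLE lt_attr
        FOR ALL ENTRIES IN DATA_PACKAGE
        WHERE material = DATA_PACKAGE-material.
      SORT lt_attr BY material.
    ENDIF.
    * Look up each record from the buffered table with a binary search
    LOOP AT DATA_PACKAGE ASSIGNING <fs_rec>.
      READ TABLE lt_attr INTO ls_attr
           WITH KEY material = <fs_rec>-material BINARY SEARCH.
      IF sy-subrc = 0.
        <fs_rec>-matgroup = ls_attr-matgroup.
      ENDIF.
    ENDLOOP.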
    Hope it Helps
    Chetan
    @CP..

  • Initial load of sales orders from R3 to CRM without statuses

    1) Some sales orders were uploaded into CRM without statuses in the headers or line items. 
    2) Some sales orders were uploaded without status, ship-to, sold-to, payer... If I delete them and use R3AR2 and R3AR4 to upload each one individually, then there is no problem.
    Any ideas or suggestions?
    Thanks.

    Hi,
    Request load of adapter objects uses different extractor modules for extracting the data from the external system to CRM, while your initial load of sales documents will use a different extraction logic based on the filter conditions specified in transaction R3AC1.
    There may be a problem in the extraction of data from the source system (I don't know if you are using an R/3). Can you please de-register the R/3 (I suppose) outbound queue using transaction SMQS, and then debug the extraction (R/3 outbound) before the data is sent to CRM using FM CRS_SEND_TO_SERVER.
    If this goes well, you may try debugging the mapper in CRM inbound and the validation module in CRM as a last resort. Also, please refer to transaction SMW01 to see whether the BDocs are fully processed.
    Hope this helps...Reward if helpful.
    Regards,
    Sudipta.
