Steps to reduce the time for loading data

Hi,
Could anyone tell me how to reduce the time for loading records into a particular cube or ODS? For example, I am loading some 1 lakh (100,000) records and it takes about 5 hours. I want to reduce that to 3 or 4 hours. What are the very first steps to consider to make the load faster?
Regards
Ajay

Hi Ajay,
Check the following:
1> Any routine you have in the transfer rules or update rules should not fire the same database SELECT more than once in the same code; buffer the lookup once per data package instead (see the sketch below).
2> Load master data before transaction data.
3> Reduce the data package size in the InfoPackage.
4> Delete old PSA requests, because you may run into space issues while loading.
5> If you are loading into an ODS and do not report on it, remove the BEx reporting flag in the ODS maintenance screen.
Hope this helps.
Suneel
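
As an illustration of point 1, here is a minimal ABAP sketch of a start routine that buffers a master data lookup once per data package instead of selecting inside the record loop (the table, fields, and DATA_PACKAGE layout here are hypothetical):

  * Buffer the active material attributes once per data package.
  TYPES: BEGIN OF ty_mat,
           material   TYPE /bi0/oimaterial,
           matl_group TYPE /bi0/oimatl_group,
         END OF ty_mat.
  DATA: lt_mat TYPE HASHED TABLE OF ty_mat WITH UNIQUE KEY material,
        ls_mat TYPE ty_mat.
  FIELD-SYMBOLS: <fs_rec> LIKE LINE OF data_package.

  IF data_package[] IS NOT INITIAL.
    SELECT material matl_group
      FROM /bi0/pmaterial
      INTO TABLE lt_mat
      FOR ALL ENTRIES IN data_package
      WHERE material = data_package-material
        AND objvers  = 'A'.            " active version only
  ENDIF.

  * One hashed-table read per record instead of one SELECT per record.
  LOOP AT data_package ASSIGNING <fs_rec>.
    READ TABLE lt_mat INTO ls_mat
         WITH TABLE KEY material = <fs_rec>-material.
    IF sy-subrc = 0.
      <fs_rec>-matl_group = ls_mat-matl_group.
    ENDIF.
  ENDLOOP.

For 100,000 records this turns 100,000 single SELECTs into one array SELECT plus fast internal table reads, which is often where most of the routine time goes.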

Similar Messages

  • What are the steps for loading master data

    Hello
    What are the steps for loading master data? I want to learn about loading all master data and how to choose the best way to load it.
    If anyone has documents, please send them to me; I would be really grateful.
    [email protected] Thanks, everyone.
    Evion

    Hi Heng,
    Download the data into a CSV file.
    Write a program using GUI_UPLOAD to upload the CSV file and insert the records. Check the link below for an example; a minimal sketch also follows this reply.
    http://www.sap-img.com/abap/vendor-master-upload-program.htm
    Reward Points for the useful solutions.
    Regards,
    Harini.S
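
    A minimal sketch of such an upload program (the file path and target structure are assumptions; note that GUI_UPLOAD with HAS_FIELD_SEPARATOR = 'X' splits fields at tab characters, so save the file tab-delimited or split comma-separated lines yourself):

      REPORT z_vendor_upload.

      * Hypothetical layout matching the columns of the file.
      TYPES: BEGIN OF ty_vendor,
               lifnr TYPE c LENGTH 10,
               name1 TYPE c LENGTH 35,
               ort01 TYPE c LENGTH 35,
             END OF ty_vendor.
      DATA lt_vendors TYPE STANDARD TABLE OF ty_vendor.

      START-OF-SELECTION.
        CALL FUNCTION 'GUI_UPLOAD'
          EXPORTING
            filename            = 'C:\vendors.txt'  " assumed path
            filetype            = 'ASC'
            has_field_separator = 'X'               " tab-delimited
          TABLES
            data_tab            = lt_vendors
          EXCEPTIONS
            file_open_error     = 1
            file_read_error     = 2
            OTHERS              = 3.
        IF sy-subrc <> 0.
          MESSAGE 'Error reading the file' TYPE 'E'.
        ENDIF.

      * Post lt_vendors through a BAPI or a batch input session
      * (e.g. transaction XK01) rather than inserting into the
      * master data tables directly.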

  • Load taking a long time

    Hi Experts,
    One of my data targets (data coming from another ODS) is taking a long time to load. It normally takes under 10 minutes, but today it has been running for the last 40 minutes…
    The Status tab shows:
    Job termination in source system
    Diagnosis
    The background job for data selection in the source system has been terminated. It is very likely that a short dump has been logged in the source system
    Procedure
    Read the job log in the source system. Additional information is displayed here.
    To access the job log, use the monitor wizard (step-by-step analysis) or the menu path Environment -> Job Overview -> In Source System
    Error correction:
    Follow the instructions in the job log messages.
    Can anyone please help me solve this problem?
    Thanks in advance
    David

    Hi Experts,
    Thanks for your answers. My load fails when the data moves from one ODS to another ODS. Please find the job log below:
    Job started
    Step 001 started (program SBIE0001, variant &0000000007169, user ID RCREMOTE)
    Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)
    DATASOURCE = 8ZPP_OP3
             Current Values for Selected Profile Parameters               *
    abap/heap_area_nondia......... 20006838008                             *
    abap/heap_area_total.......... 20006838008                             *
    abap/heaplimit................ 83886080                                *
    zcsa/installed_languages...... ED                                      *
    zcsa/system_language.......... E                                       *
    ztta/max_memreq_MB............ 2047                                    *
    ztta/roll_area................ 5000000                                 *
    ztta/roll_extension........... 4294967295                              *
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 1 in task 0002 (1 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:31:55, End = 06.09.2010 01:31:55
    Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 2 in task 0004 (2 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:00, End = 06.09.2010 01:32:00
    Asynchronous transmission of info IDoc 4 in task 0005 (2 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 3 in task 0006 (3 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:04, End = 06.09.2010 01:32:04
    Asynchronous transmission of info IDoc 5 in task 0007 (3 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 4 in task 0008 (4 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:08, End = 06.09.2010 01:32:08
    Asynchronous transmission of info IDoc 6 in task 0009 (4 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 5 in task 0010 (5 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:11, End = 06.09.2010 01:32:11
    Asynchronous transmission of info IDoc 7 in task 0011 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Asynchronous send of data package 13 in task 0026 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:01, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:44, End = 06.09.2010 01:32:45
    tRFC: Data Package = 8, TID = 0AEB465C00AE4C847CEA0070, Duration = 00:00:17,
    tRFC: Start = 06.09.2010 01:32:29, End = 06.09.2010 01:32:46
    Asynchronous transmission of info IDoc 15 in task 0027 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 14 in task 0028 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:48, End = 06.09.2010 01:32:48
    tRFC: Data Package = 9, TID = 0AEB465C00AE4C847CEF0071, Duration = 00:00:18,
    tRFC: Start = 06.09.2010 01:32:33, End = 06.09.2010 01:32:51
    Asynchronous transmission of info IDoc 16 in task 0029 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 15 in task 0030 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:52, End = 06.09.2010 01:32:52
    Asynchronous transmission of info IDoc 17 in task 0031 (6 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 16 in task 0032 (7 parallel tasks)
    tRFC: Data Package = 10, TID = 0AEB465C00684C847CF30070, Duration = 00:00:18,
    tRFC: Start = 06.09.2010 01:32:37, End = 06.09.2010 01:32:55
    tRFC: Data Package = 11, TID = 0AEB465C02E14C847CF70083, Duration = 00:00:17,
    tRFC: Start = 06.09.2010 01:32:42, End = 06.09.2010 01:32:59
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:56, End = 06.09.2010 01:32:56
    Asynchronous transmission of info IDoc 18 in task 0033 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 17 in task 0034 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:33:00, End = 06.09.2010 01:33:00
    tRFC: Data Package = 12, TID = 0AEB465C00AE4C847CFB0072, Duration = 00:00:16,
    tRFC: Start = 06.09.2010 01:32:46, End = 06.09.2010 01:33:02
    Asynchronous transmission of info IDoc 19 in task 0035 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 18 in task 0036 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:33:04, End = 06.09.2010 01:33:04
    Asynchronous transmission of info IDoc 20 in task 0037 (6 parallel tasks)
    ABAP/4 processor: DBIF_RSQL_SQL_ERROR
    Job cancelled
    Thanks
    David
    Edited by: david Rathod on Sep 6, 2010 12:04 PM

  • Best Practices for Loading Master Data via a Process Chain

    Currently, we load attributes, texts, and hierarchies before loading the transactional data. We have one meta chain. Loading the master data takes more than 2 hours, and most of the master data loads are full loads. We've noticed that a lot of the master data, especially texts, has not changed or has changed very little since we implemented 18 months ago. Is there a precedent or best practice to follow, such as removing these processes from the chain? If so, how often should they be run? We would really like to reduce the master data loading time. Is there any documentation I can refer to? What are other organizations doing to reduce the time to load master data?
    Thanks!
    Debby

    Hi Debby,
    I assume you're loading master data from a BI system? The forums here are related to SAP NetWeaver MDM, so maybe you should ask this question in a BI forum.
    Nevertheless, if your data doesn't change that much, maybe you could use a delta mechanism for extraction. This would send only the changed records instead of all the unchanged ones every time. But this depends on your master data and, of course, on your extractors.
    Cheers
    Michael

  • Why is the processing time for loading so long

    Hi All,
    Why is the processing time for loading taking so long? I would like to know the solution.
    Thanks,
    chandu

    To analyze the process chain and fix it, go through the below document:
    [SAP BW Data Load Performance Analysis and Tuning|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b]
    Hope it helps,
    Naveen

  • iCloud backup of my iPhone 4s says it needs 7 hours for 1 GB of data

    I backed up my iPhone 4s to iCloud on Jan 19. I am now trying to do another backup, but it says the time required is 7 hours. That seems too long for 1 GB of data stored in iCloud. Can someone help me, please?

    To be honest, that sounds about right.
    For example, on my 8 Mbps (megabits) down service I get around 0.4 Mbps upload. That is the equivalent of (very approximately) 3 MB (megabytes) per minute, or 180 MB per hour. Over 7 hours that would be just over 1 GB.
    Obviously it all depends on your connection speed, but that is certainly what I would expect, and it is why I use my computer for backing up, not iCloud. So much quicker.
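    Worked through explicitly with the figures above (0.4 Mbps upload, all values approximate):

      0.4 Mbit/s ÷ 8 bits/byte = 0.05 MB/s
      0.05 MB/s × 60 s/min     = 3 MB/min
      3 MB/min × 60 min/h      = 180 MB/h
      180 MB/h × 7 h           ≈ 1.26 GB

    So a 1 GB backup over a 0.4 Mbps uplink fits the quoted 7-hour estimate once protocol overhead is allowed for.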

  • Query taking a long time (more than 24 hours) to extract data

    Hi,
    This query is taking a long time to extract the data, more than 24 hours. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please suggest.
    SQL> explain plan for
      select a.account_id,
             round(a.account_balance, 2) account_balance,
             nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
             to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
             to_char(nvl(i.payment_due_date,
                         to_date('30-12-9999', 'dd-mm-yyyy')), 'DD-MON-YYYY') due_date,
             ah.current_balance - ah.previous_balance amount,
             decode(ah.invoice_id, null, 'A', 'I') transaction_type
        from account a, account_history ah, invoice i
       where a.account_id = ah.account_id
         and a.account_type_id = 1000002
         and round(a.account_balance, 2) > 0
         and (ah.invoice_id is not null or ah.adjustment_id is not null)
         and ah.current_balance > ah.previous_balance
         and ah.invoice_id = i.invoice_id(+)
         and a.account_balance > 0
       order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index Details:
    SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
    2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
    order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus, each selected column is indexed as well, so that we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;

  • Self Service Password Registration page taking a long time to load in FIM 2010 R2

    Hi,
    I have successfully installed FIM 2010 R2 SSPR and it is working fine,
    but the Self Service Password Registration page takes a long time to load after I provide my Windows credentials: approximately 50 to 60 seconds per page in FIM 2010 R2.
    This is a very urgent requirement.
    Regards
    Anil Kumar

    Double check that objectSid, accountname, and domain are populated for the users in the FIM portal, and that each user is connected to their AD counterpart.
    Check here for more info:
    http://social.technet.microsoft.com/wiki/contents/articles/20213.troubleshooting-fim-sspr-error-3003-the-current-user-account-is-not-recognized-by-forefront-identity-manager-please-contact-your-help-desk-or-system-administrator.aspx

  • HT4759 iCloud takes a long time to upload my data and I cannot disconnect my mobile during the upload; I want to cancel the service and get a refund

    Hello. I've subscribed to iCloud for $20 per year and I found it useless for many reasons: I cannot disconnect my mobile while the upload is in progress, and it takes a long time to upload my data. It is not a reliable system; that is why I need to deactivate the storage service and take my money back. Thanks.

    The "issues" you've raised are nothing to do with the iCloud service.
    No service that uploads data allows you to disconnect the device you are uploading from while uploading data. Doing so would prevent the upload from completing. It is a basic requirement for any uploading service that you remain connected to it for uploading to be possible.
    The time it takes to upload data to iCloud is entirely dependent on how fast your Internet connection is, and how much data you are uploading. Both of these things are completely out of Apple's control. Whichever upload service you use will be affected by the speed of your Internet connection.

  • ASCP test for loading legacy data

    We are loading legacy ASCP data into the test instance through Web ADI.
    When we check the plan, the result is not correct; the cause is that there is no data in msc_operation_components.
    Question 1: Should the msc_operation_components table also be populated through the Web ADI upload of BOM_Component and Routing_operation data, or is it produced by the Pre-Process Monitor or another concurrent program? Please tell us how it is produced.
    Question 2: If we create the data collection flat files through self-service, do we need to upload an OA template for msc_operation_components? There is none available; what should we do?

    I think you'll find the white paper published on Metalink, our main avenue for publishing documentation and providing support information to customers.
    Originally posted by JohnWorthington:
    Hello,
    The 11.0.2 implementation doc mentions a technical "White Paper about the Data Pump" for loading legacy data into HRMS.
    I cannot find this White Paper anywhere.
    Does it exist?
    What is the easiest way to load our legacy data into 11.0.3 HRMS?
    Thanks,
    John Worthington
    [email protected]

  • "No more virtual tiles can be allocated" error when using effects in quick editing mode in Elements 13

    The error message "No more virtual tiles can be allocated" appears when I try to use the effects in quick editing mode in my Elements 13. The OK button has to be pressed several times to load all the effect patterns, and the error returns when selecting a particular pattern.
    The problem does not appear if Photoshop Elements 13 is run in administrator mode.
    The available computer resources are large enough: Intel i7 CPU with 4 cores, 16 GB RAM, 1 TB HDD + 32 GB SSD, Windows 8.1.
    Please advise how to solve this problem. Maybe there is a patch or update available?

    Dear n_pane,
    Thank you for the quick answer. In the meantime I found another way to work around the problem: I increased the cache level up to the maximum value of 8.
    The "No more virtual tiles can be allocated" errors vanish, but I still do not understand why PSE 13 cannot work properly at lower cache levels, given that it has the maximum resources it needs available (10443 MB RAM and 26.53 GB SSD space for scratch), or why it cannot work properly with fast SSDs.
    I wish you all the best in the New Year 2015!

  • DTP error: Lock NOT set for: Loading master data attributes

    Hi,
    I have a custom DataSource from ECC which loads into 0CUST_SALES. I'm using a DTP and transformation which have worked for loading data into this InfoObject in the past. The InfoPackage loads with a green status, but when I try to load the data, the DTP fails with the error message "Updating attributes for InfoObject 0CUST_SALES Processing Terminated", and the job log says: "Lock NOT set for: Loading master data attributes". I've tried reactivating everything, but it didn't help. Does anyone know what this error means? We're on SP7. Thanks in advance!

    Hello Catherine,
    I have had this problem in the past (on 3.0B); the reason was that our system was too slow and could not crunch the data fast enough, so the packets were locking each other.
    The fix: load the data into the PSA only, and then send it in the background from the PSA to the InfoObject. This way only one background process runs, so locks cannot happen.
    Fix #2: buy a faster server (by faster, I mean more CPU power).
    Now, maybe you have another issue with NW2004s; this was only my quick 2 cents.
    Good luck!
    Ioan

  • Reduce time for RMAN backup

    Dear Experts,
    An RMAN level 0 backup is taking about 5:26 hours, and the backup size is now 312 GB. I have enabled block change tracking, and it reduced the time for an incremental level 1 backup from 2 hours to almost 3 minutes.
    The database's biggest tablespace is "users".
    I would like suggestions for reducing the level 0 time, or a way to break up the level 0 backup. I can allocate channels, but it will ultimately still take time when backing up the "users" tablespace.
    Right now I am taking the backup to a USB 2.0 drive.
    Regards

    As you are taking the backup to a USB drive, there is not much that can be done to improve the speed. If you are concerned about the backup being slow, you could take the backup on local disk (which would be faster and more efficient) and then move the backup files from disk to the USB drive.
    This can be done in a single backup script as a two-part operation:
    1) Take the backup to disk.
    2) Copy the backup to the USB drive and delete the backups from disk.
    There are many additional features you can add to enhance it, though.
    Regards,
    V

  • Taking a long time to load data

    Dear All,
    While loading data in the PRD system (for master data and transaction data), it is taking a long time. For example, for 2LIS_05_QOITM I am loading only delta data into the PSA from R/3. Sometimes (yesterday) it takes 2 minutes, and sometimes (today) it has been running for 5 hours and still has not completed (it is yellow). Yesterday we went to SM58 on the R/3 side and executed the LUW, as we have done for some other DataSources. We could do that again, but we don't want to keep doing it; we are expecting a permanent solution. Could you please advise me? I am getting the below message in the status tab:
    Errors while sending packages from OLTP to BI
    Diagnosis
    No IDocs could be sent to BI using RFC.
    System Response
    There are IDocs in the source system ALE outbox that did not arrive in the ALE inbox of BI.
    Further analysis:
    Check the TRFC log.
    You can access this log using the wizard or the menu path "Environment -> Transact. RFC -> In source system".
    Error handling:
    If the TRFC is incorrect, check whether the source system is fully connected to BI. In particular, check the authorizations of the background user in the source system.
    I am loading the data through a process chain, and the user is BWREMOTE (an authorized user).
    Please help me.
    Thanks a lot in advance
    Raja

    Dear Karthik,
    No, I could not resolve it till now.
    But everything else looks fine. The status is still yellow (209 from 209). What should I do now?
    I am getting the below message in the status tab:
    Missing data packages for PSA Table
    Diagnosis
    Data packets are missing from the PSA table. BI processing does not return any errors. The data transport from the source system to BI was probably incorrect.
    Procedure
    Check the tRFC overview in the source system.
    You access this log using the wizard or following the menu path "Environment -> Transact. RFC -> Source System".
    Error handling:
    If the tRFC is incorrect, resolve the errors listed there.
    Check that the source system is connected properly to BI. In particular, check the remote user authorizations in BI.
    In the Details tab, I am getting the below message:
    Info IDoc 2 : sent, not arrived ; IDoc ready for dispatch (ALE service)
    Thanks in advance
    Raja

  • Taking more time to load released cost estimates

    Dear Experts,
    It is taking more time to load data into the cube CO-PC: Product Cost Planning - Released Cost Estimates (0COPC_C09). The update mode is "Full Update". There are only 105,607 records; other areas have more records than this, but they load easily.
    I have this problem only with 0COPC_C09. Could anybody guide me?
    Rgds
    ACE

    suresh.ratnaji wrote:
    NAME                                 TYPE        VALUE
    _optimizer_cost_based_transformation string      OFF
    filesystemio_options                 string      asynch
    object_cache_optimal_size            integer     102400
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      choose
    optimizer_secure_view_merging        boolean     TRUE
    plsql_optimize_level                 integer     2
    please let me know why it is taking more time with an INDEX RANGE SCAN compared to the full table scan?
    Suresh,
    Any particular reason why you have a non-default value for a hidden parameter, _optimizer_cost_based_transformation?
    On my 10.2.0.1 database, its default value is "linear". What happens when you reset the hidden parameter to its default?
