HANA and DataServices - delta merge during big load

Hi Gurus
I'm going to load an Oracle table into HANA via DataServices. The main event is one big table.
During that load I want to execute a delta merge periodically, e.g. every 100,000 records or every 3 minutes.
I understand the performance risk, but I have run into memory shortage problems many times, caused by forgetting to execute the delta merge.
It looks like there is no delta merge setting in DataServices, though. Do I have to resort to executing SQL after the load?
I don't want to subdivide the big table, load the pieces, and execute a merge after every load.
My idea: execute 'MERGE DELTA OF BIGTABLE' every 3 minutes via cron or a timer during the load. Is that a good method?
Rgds,
Jim

Hi Jim
if your big table requires a merge, AUTOMERGE will pick it up. The mergedog process checks every 60 seconds, so that should be alright for your requirement.
If the table doesn't need to be merged, it won't be.
Manually handling the delta merge is a fine-tuning action that is usually neither required nor recommended.
- Lars
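
For reference, should you still want to watch or trigger the merge by hand, here is a minimal SQL sketch (schema and table names are placeholders):

-- Force a (hard) delta merge of one column table.
MERGE DELTA OF "MYSCHEMA"."BIGTABLE";
-- Check whether automerge is active for the table.
SELECT table_name, auto_merge_on
FROM tables
WHERE schema_name = 'MYSCHEMA' AND table_name = 'BIGTABLE';
-- Review recent merge activity and what motivated it (AUTO, HARD, ...).
SELECT start_time, table_name, success, motivation
FROM m_delta_merge_statistics
WHERE table_name = 'BIGTABLE'
ORDER BY start_time DESC;

Running the first statement every few minutes from cron, as suggested above, would work, but it largely duplicates what mergedog already does.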

Similar Messages

  • TS3212 I have a Windows 7 Ultimate PC. It had iTunes 11.1.62 on it and said I needed to update, so I did. Now during the load I get a message asking whether I have admin privileges.

    I have a Windows 7 Ultimate PC. It is 64-bit and had iTunes running on it. iTunes said I needed to update (it was at 11.1.62). During the load it reported that I needed system admin privileges, even though I am in an admin account. It then could not start the mobile device driver and reported that my Visual C++ was bad. I reloaded the C++ library.
    It still would not load, so I removed iTunes and downloaded a fresh copy. Same problem. I just use iTunes to load my iPads and iPhone, but this is driving me towards a different OS.

    Go to Control Panel > Add or Remove Programs (Win XP) or Programs and Features (later)
    Remove all of these items in the following order:
    iTunes
    Apple Software Update
    Apple Mobile Device Support (if this won't uninstall move on to the next item)
    Bonjour
    Apple Application Support
    Reboot, download iTunes, then reinstall, either using an account with administrative rights, or right-clicking the downloaded installer and selecting Run as Administrator.
    The uninstall and reinstall process will preserve your iTunes library and settings, but ideally you would back up the library and your other important personal documents and data on a regular basis. See this user tip for a suggested technique.
    Please note:
    Some users may need to follow all the steps in whichever of the following support documents applies to their system. These include some additional manual file and folder deletions not mentioned above.
    HT1925: Removing and Reinstalling iTunes for Windows XP
    HT1923: Removing and reinstalling iTunes for Windows Vista, Windows 7, or Windows 8
    tt2

  • Calling Delta Merge in DS after every commit

    Hi Folks,
    I am using a delta extraction logic in DS to extract a large table from ECC (50 million rows) to the HANA database. The commits in the DS job have been configured for every 10,000 records. Three questions:
    1) Should I disable the delta merge in the HANA database for this target table prior to the initial load, and then, once the initial load is complete, manually perform the delta merge in HANA? Is that the right approach, or
    2) should I manually perform the delta merge in the DS job to make sure the table is merged after every commit? If yes, how do I call the delta merge command in DS jobs, and how can I do it per commit?
    3) Can I invoke the delta merge in DS as part of the delta extraction logic after the initial load is completed in DS?
    Any advice will definitely be appreciated.
    Thanks,
    -Hari
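
    One common pattern (a sketch, not a built-in DS feature; 'DS_HANA' and the schema/table names are placeholders) is to answer 1) and 3) together: leave the delta store alone during the initial load and drive the merge from DS script objects around the data flow, sending pass-through SQL with the DS sql() function, e.g. sql('DS_HANA', '<statement>'), using statements like:

    -- Pre-load script: let rows accumulate in the delta store undisturbed.
    ALTER TABLE "MYSCHEMA"."BIGTABLE" DISABLE AUTOMERGE;
    -- Post-load script: merge once, then hand control back to automerge.
    MERGE DELTA OF "MYSCHEMA"."BIGTABLE";
    ALTER TABLE "MYSCHEMA"."BIGTABLE" ENABLE AUTOMERGE;

    Per-commit merging (question 2) is not exposed in DS, and since every merge rewrites the table's main store, doing it every 10,000 rows would mostly work against you.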


  • When are init entries done in source system?And early delta initialization?

    Hi Friends,
    Q1) When are the init entries done in source system?
    If INIT entries are done when I run INIT that means:
    If my setup job finishes at 6 AM and I run the INIT at 1 PM, will I lose all data between 6 AM and 1 PM?
    If INIT entries are done when the setup table load finishes, then:
    if my setup job finishes at 6 AM and I run the INIT at 1 PM, it will load data up to 6 AM,
    and after that the data will be in the delta queue. So this is OK, as there is no loss of data.
    Q2) I  have read on help.sap that "With early delta initialization, you have the option of writing the data into the
    delta queue or into the delta tables for the application during the initialization request in the source
    system. This means that you are able to execute the initialization of the delta process (the init request),
    without having to stop the updating of data in the source system."
    Does this mean that when I run the INIT at 1 PM, no V1 updates will happen for that DataSource in the source system?
    If yes, does that mean I should always run the INIT with early delta initialization when users are online?
    Thanks!

    Hello,
    You will find the answers to the questions that you ask and the correct procedure to follow in the SAP note 602260.
    Best Regards,
    Des

  • Getting short dump "TSV_TNEW_PAGE_ALLOC_FAILED" during the load

    Hi Experts,
    I am getting the short dump "TSV_TNEW_PAGE_ALLOC_FAILED" when loading data from one ODS to two cubes in a 3.1 system. We have only 12,000 records to load. This load is a delta update; we load about 14,000 records daily from this load, but today we are getting the short dump.
    Short dump: TSV_TNEW_PAGE_ALLOC_FAILED
    Description: No storage space available for extending the internal table. We attempted to extend an internal table, but the required space was not available.
    Thanks

    This is a memory issue whereby an internal table requires more memory than what is currently available. If you're executing this during processing of other ETL, then your memory is being consumed by all of the processes, and you would need to change your schedule so as to balance the load better.
    Another possibility is that you have an extremely inefficient SQL statement in a routine that is causing the memory to be overly consumed. Even though the output may be less than average, there is a possibility that it's reading more data in a SELECT statement and therefore requires more memory than normal.
    Finally, have your Basis team look at this issue to determine if there's anything that they can do to resolve it.
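
    To make the second point concrete, here is a sketch of the kind of difference that matters (table and field names are invented for the example):

    -- Reads the entire lookup table into memory, regardless of what the load needs.
    SELECT * FROM zsd_lookup;
    -- Restricts the read to the records relevant to the current request,
    -- which keeps the internal table small.
    SELECT * FROM zsd_lookup WHERE load_date = :request_date;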

  • Difference between COPA and Logistic Delta Mechanisam

    Dear All,
    May I know the difference between the COPA and logistics delta mechanisms and how each works?
    In our production system, whenever a logistics delta fails, we take action and repeat it, and then it will fetch the delta on the same day,
    whereas with the COPA delta, when it fails, we take action, and when we repeat it, it will fetch zero records; the delta then comes in the next day.
    What is the difference between the two?
    What exactly is happening in R/3 for both of these delta mechanisms?
    Thanks in advance,
    K Janardhan Kumar

    Hi Guru,
    I will explain timestamp-based delta extraction in general with an example:
    a timestamp is generally in yyyymmddhhmmss format.
    Let's assume the delta runs daily at 09:00 in the morning. The last delta ran at 09:00 yesterday, so when today's delta runs it picks up the data in the range
    09:01 (yesterday) to 09:00 (today).
    If a record is posted at 09:10 today, it will not be picked up by today's delta (because it is posted after 09:00).
    Hope the timestamp part is clear now.
    In the case of COPA, we use the timestamp as the tool to identify the delta.
    Now the COPA delta mechanism has one more concept, the "safety delta". Let's put a question to ourselves: why should we use this?
    SAP's answer is: "The reason for the selection of the safety delta is that there are possible level differences of the clocks on different application servers. If the delta is selected on a level that is too low, it is possible that records
    are not taken into account when uploading into the BW."
    The 'safety delta' is usually set to 30 minutes during the initialization/delta upload (the default).
    This means that only records that are already half an hour old at the starting point of the upload are loaded into BW.
    Ex:
    we have made the following settings for COPA:
    timestamp = 09:00
    safety delta = 30 mins
    Now when you run the daily delta at 09:00, it picks up the data range between (yesterday's timestamp - safety delta) and (today's timestamp - safety delta),
    i.e. 08:30 yesterday to 08:30 today, rather than 09:00 to 09:00.
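
    Expressed as an illustrative selection (the real CO-PA extractor keeps these timestamps in its own control tables; table and column names here are invented):

    -- Delta window with a 30-minute safety delta.
    SELECT *
    FROM ce1_example
    WHERE change_ts >  :last_upper_limit                -- upper limit of the previous run
      AND change_ts <= :run_ts - INTERVAL '30' MINUTE;  -- now minus the safety delta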
    Check OSS note 502380 for a better understanding of the COPA delta mechanism:
    Symptom
    There is some confusion about how the delta process works with CO-PA DataSources and the old logic (time stamp administration in the Profitability Analysis) or there are data inconsistencies between the BW and OLTP systems.
    As of PlugIn Release PI2004.1 (Release 4.0 and higher), a new logic (generic delta) is used during the delta process. Old DataSources can be converted to the new logic. New DataSources automatically use the new logic. With the new logic, the time stamp administration is located in the Service-API and no longer in the Profitability Analysis.
    This note refers only to DataSources with the old logic.
    Reason and Prerequisites
    Administration of the delta process for CO-PA DataSources partly occurs in the OLTP system. In particular, the time up to which the data was already extracted is stored in the DataSource control tables (old logic).
    Solution
    Since the control tables for the delta process for the extractor are managed in the OLTP, the following restrictions apply:
    1. There should only ever be one valid initialization package for a DataSource. Data inconsistencies may occur between BW and OLTP if, for example, you schedule an Init for various selections for the same DataSource and data is posted between the individual initializations to the Profitability Analysis. The reason for this is that each time the time stamp for the DataSource is initialized in the OLTP, the current value (minus the safety delta, see note 392876) is reset. Records from a previous selection are therefore no longer selected with the next delta upload if they were posted before the last initialization run with another selection.
    2. Initialization can always only be carried out from one system. Inconsistencies may occur if the same DataSource is used from several BW systems and if data is posted between the initialization runs. This is because the time stamp for the replication status is reset for every initialization or delta upload in the OLTP. Records may therefore be missing in the system that was first updated if updates were made in the result area before the Init or delta run. In the system that was the second one to be updated, the records that were loaded into the first system are missing for a delta upload.
    In the case of large datasets, you should therefore perform initialization either using several DataSources or with a combination of one or more full uploads and an init upload. Full uploads without errors are possible for closed periods/fiscal years because no additional changes are made to this data. Initialization should be performed, for example, from the current fiscal year. The full updates for the closed periods can also be split in time. If required, more characteristics, for example, the action type, can also be used for the selection. For information on the period selection, see note 425844
    Hope you are clear now!
    Cheers
    Swapna.G

  • Delta records are not loading from DSO to info cube

    My query is about delta loading from DSO to info cube. (Filter used in selection)
    Delta records are not loading from DSO to Info cube. I have tried all options available in DTP but no luck.
    Selected "Change log" and "Get one request only" and run the DTP, but 0 records got updated in info cube
    Selected "Change log" and "Get all new data request by request", but again 0 records got updated
    Selected "Change log" and "Only get the delta once", in that case all delta records loaded to info cube as it was in DSO and  gave error message "Lock Table Overflow" .
    When I run full load using same filter, data is loading from DSO to info cube.
    Can anyone please help me on this to get delta records from DSO to info cube?
    Thanks,
    Shamma

    Data is loading in case of full load with the same filter, so I don't think filter is an issue.
    When I follow the sequence below, I get the lock table overflow error:
    1. Full load from the active table, with or without archive.
    2. Then, with the same settings, if I run an init, the final status remains yellow, and when I change the status to green manually, it gives the lock table overflow error.
    When I change the settings of the DTP to an init run:
    1. Select change log and get only one request, and run the init; it completes successfully with green status.
    2. But when I run the same DTP for delta records, it does not load any data.
    Please help me to resolve this issue.

  • Oracle ODBC  - Internal Error - unable to initialize NLS during driver load

    I'm having some trouble with my ODBC connections which I hope someone can please help me with!
    About 6 weeks ago all was working as normal.
    As far as I know there have been no updates to the Oracle DB, the Windows XP operating system, or the ODBC drivers.
    Today when I opened Access and Visual Case 2 to connect to Oracle, I was at first greeted with a:
    unable to connect SQLState=IM004 SQL_HANDLE_ENV
    error. ODBC also kept crashing.
    I restarted the computer and was confronted with a different error:
    odbc SQLSTate 08004 ORA 12154 TNS could not resolve the connect identifier specified
    I was able to fix this error by setting the TNS_ADMIN environment variable in the Windows XP environment variables. I'm extremely confused about how this happened, though, as it was working and I don't think anything has changed.
    I was then able to connect to the database via Microsoft Access but when I opened Visual Case 2 and tried to make an update, I was confronted with the following error:
    Oracle ODBC Driver - internal error - unable to initialize NLS during driver load
    I looked in the registry at:
    HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\KEY_OraClient10g_home1
    HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\KEY_OraClient10g_home2
    HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\KEY_OraDb10g_home1
    and NLS_LANG was set to "AMERICAN_AMERICA.WE8MSWIN1252" in all 3 places.
    (Though KEY_OraClient10g_home2 only had 4 entries as opposed to KEY_OraClient10g_home1's 13 entries)...
    Since I made those changes I can no longer connect through Access.
    I just receive an "ODBC - connection to 'xxx' failed" message.
    Advice greatly appreciated!!!!

    Actually it sort of does...
    I switched the ODBC connection to use instant client and now it's all working again.
    The biggest mystery is what changed to make it suddenly stop working the old way...
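
    For anyone else chasing an NLS error like this: the registry NLS_LANG only affects the client side. As a sanity check (standard Oracle dictionary views, shown as a sketch), you can compare what the database and your session think the NLS settings are:

    -- What the database itself is configured with.
    SELECT * FROM nls_database_parameters;
    -- What the current session negotiated (reflects the client's NLS_LANG).
    SELECT * FROM nls_session_parameters;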

  • Strange Error During Page Load in Debug Mode (only) - Please Help!

    Hi All,
    Database version: Oracle 11g
    APEX version: 4.1.1
    Web server: Apache
    I need help troubleshooting a critical problem. The following error occurs only during page load in "Debug" mode, and only on one specific page within the application. A web page is served up containing the following message, and the application is blocked from running the page. The browser's (IE 8.0) back button must be clicked to proceed outside of "Debug" mode.
    "Error occurred while painting error page: ORA-06502: PL/SQL: numeric or value error: character string buffer too small ORA-06502: PL/SQL: numeric or value error: character string buffer too small"
    Debug log follows:
    "S H O W: application="2006" page="6" workspace="" request="" session="500549669426301"
    Computation point: Before Header
    ...Perform computation of item: APP_SERVER, type=FUNCTION_BODY
    ...Performing function body computation
    ...Execute Statement: declare function x return varchar2 is begin return owa_util.get_cgi_env('SERVER_NAME'); return null; end; begin wwv_flow.g_computation_result_vc := x; end;
    ......Result = 156.9.122.214
    ...Session State: Save "APP_SERVER" - saving same value: "156.9.122.214"
    Processes - point: BEFORE_HEADER
    ...Process "GET_POSITION" - Type: PLSQL
    ...Execute Statement: begin wwv_flow.g_boolean := :F109_POSITION_ID IS NULL and :APP_PAGE_ID != 101; end;
    ......Result = FALSE
    ......Skip because condition or authorization evaluates to FALSE
    ...Process "Get JARS Sifter Log File Record Count" - Type: PLSQL
    ...Execute Statement: begin DECLARE vcnt NUMBER := 0; BEGIN d('Get JARS Sifter Log File Record Count'); Select count(*) into vcnt From JARS.JARS_SIFTER_LOG Where moveid = to_number(:P6_MOVEID) and sifter_status IN ('F','J'); :F1000_P6_SIFTER_LOG_COUNT := to_char(vcnt); END; end;
    Custom: Get JARS Sifter Log File Record Count
    ...Process "Set PTM Planned Trip Status" - Type: PLSQL
    ......Skip because condition or authorization evaluates to FALSE
    ...compatibility mode - do not set mime type
    ...compatibility mode - do not set additional http headers
    ...close http header
    ...metadata, fetch item type settings
    ...metadata, fetch items
    Show page template header
    Rendering form open tag and internal values
    Add error onto error stack
    ...Error data:
    ......message: Error processing request.
    ......additional_info: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ......display_location: ON_ERROR_PAGE
    ......is_internal_error: true
    ......apex_error_code: APEX.UNHANDLED_ERROR
    ......ora_sqlcode: -6502
    ......ora_sqlerrm: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ......error_backtrace: ORA-06512: at "APEX_040100.WWV_FLOW", line 3027 ORA-06512: at "APEX_040100.WWV_FLOW", line 7867
    ...Show Error on Error Page
    ......Performing rollback
    Rendering form open tag and internal values
    ...Unhandled Error while painting error page: ORA-06502: PL/SQL: numeric or value error: character string buffer too small ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ...Error Backtrace: ORA-06512: at "APEX_040100.WWV_FLOW", line 2707 ORA-06512: at "APEX_040100.WWV_FLOW_ERROR", line 185
    End Page Rendering"
    Thanks!
    Bernard

    All,
    It appears that the page JavaScript maximum size limit was reached. The error stopped appearing after some of the page JavaScript code was moved out to Application Static Files. I wonder whether there is any "direct" indicator from the system when the size limit has been reached?
    Again, the run error only occurred when the page was loaded in "Debug" mode.
    Bernard

  • How to debug a transfer rule during data load?

    I am conducting a flat file (Excel sheet saved as a CSV file) data load. The flat file contains a date field whose value is '12/18/1988'. In the transfer rule for this field, I use a function call to transform this value to '19881218', which corresponds to the BW DATS format, but the InfoPackage monitor shows a red error:
    "Value '1981218' of characteristic 0DATE is not a number with 000008 spaces".
    Somehow, the last digit of the year 1988 was cut off, and the year grabbed is 198 rather than 1988. The function code is below:
    FUNCTION ZDM_CONVERT_DATE.
    ""Local Interface:
    *"  IMPORTING
    *"     REFERENCE(CHARDATE) TYPE  STRING
    *"  EXPORTING
    *"     REFERENCE(DATE) TYPE  D
    DATA:
    c_date(2) TYPE c,
    c_month(2) TYPE c,
    c_year(4) TYPE c,
    c_date_combined(8) TYPE c.
    data: text(10).
    text = chardate.
    search text for '/'.
    if sy-fdpos = 1.
      concatenate '0' text into text.
    endif.
    c_month = text(2).
    c_date = text+3(2).
    c_year = text+6(4).
    CONCATENATE c_year c_month c_date INTO c_date_combined.
    date = c_date_combined.
    ENDFUNCTION.
    Could the experts here tell me what's wrong, and also how to debug a transfer rule during a data load?
    Thanks

    hey Bhanu/AHP,
    I found the reason. Originally, I set the character length for the date InfoObject ZCHARDAT1 to 9; then I found that the date field value (12/18/1988) has length 10. So I modified the InfoObject ZCHARDAT1 length from 9 to 10 and activated it. But when defining the transfer rule for this field, before the code screen, I click the radio button "Selected Fields" and pick the field /BIC/ZCHARDAT1, then continue to the transfer rule code screen, and find the declaration lines for the InfoObject /BIC/ZCHARDAT1 are as follows:
      InfoObject ZCHARDAT1: CHAR - 000009
        /BIC/ZCHARDAT1(000009) TYPE C,
    That means even though I've modified the length to 10 for the InfoObject and activated it, somehow the transfer rule code screen still picks up the old length 9. Any idea how to get it to take the length 10 in the transfer rule code screen definition?
    Thanks

  • Safari pages not loading properly and certain pages don't even load

    Recently, I've been struggling with a lot of certificate warnings and some incomplete page loads in Safari. I don't know why, but my Mac is constantly asking me to verify certificates even though I've allowed them all for my most frequently visited sites. I've experienced this before, but I don't know why I'm encountering it again. This time I'm also sick and tired of Twitter refusing to load at all and of my Hotmail loading improperly: the font and scale of everything change, and there are no images when pages load improperly. How can I change the certificate settings so that my Mac will never again ask me for authentication every time I try to load a page? Please help with Safari's certificate issue and the improper loading.

    This could be a complicated problem to solve, as there are several possible causes for it.
    Back up all data, then take each of the following steps that you haven't already taken. Stop when the problem is resolved.
    Step 1
    From the menu bar, select
    Apple menu ▹ System Preferences... ▹ Date & Time
    Select the Time Zone tab in the preference pane that opens and check that the time zone matches your location. Then select the Date & Time tab. Check that the date and time shown (including the year) are correct, and correct them if not.
    Check the box marked
    Set date and time automatically
    if it's not already checked, and select one of the Apple time servers from the menu next to it.
    Step 2
    Triple-click anywhere in the line below on this page to select it:
    /System/Library/Keychains/SystemCACertificates.keychain
    Right-click or control-click the highlighted line and select
    Services ▹ Show Info
    from the contextual menu.* An Info dialog should open. The dialog should show "You can only read" in the Sharing & Permissions section.
    Repeat with this line:
    /System/Library/Keychains/SystemRootCertificates.keychain
    If instead of the Info dialog, you get a message that either file can't be found, reinstall OS X.
    *If you don't see the contextual menu item, copy the selected text to the Clipboard (command-C). Open a TextEdit window and paste into it (command-V). Select the line you just pasted and continue as above.
    Step 3
    Launch the Keychain Access application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Keychain Access in the icon grid.
    In the upper left corner of the window, you should see a list headed Keychains. If not, click the button in the lower left corner that looks like a triangle inside a square.
    In the Keychains list, there should be items named System and System Roots. If not, select
    File ▹ Add Keychain
    from the menu bar and add the following items:
    /Library/Keychains/System.keychain
    /System/Library/Keychains/SystemRootCertificates.keychain
    From the Category list in the lower left corner of the window, select Certificates. Look carefully at the list of certificates in the right side of the window. If any of them has a blue-and-white plus sign or a red "X" in the icon, double-click it. An inspection window will open. Click the disclosure triangle labeled Trust to disclose the trust settings for the certificate. From the menu at the top, select
    When using this certificate: Use System Defaults
    Close the inspection window. You'll be prompted for your administrator password to update the settings. Revert all the certificates with non-default trust settings. Never again change any of those settings.
    Step 4
    Select My Certificates from the Category list. From the list of certificates shown, delete any that are marked with a red X as expired or invalid.
    Export all remaining certificates, delete them from the keychain, and reimport. For instructions, select
    Help ▹ Keychain Access Help
    from the menu bar and search for the term "export" in the help window. Export each certificate as an individual file; don't combine them into one big file.
    Step 5
    From the menu bar, select
    Keychain Access ▹ Preferences ▹ Certificates
    There are three menus in the window. Change the selection in the top two to Best attempt, and in the bottom one to  CRL.
    Step 6
    Triple-click anywhere in the line of text below on this page to select it:
    /var/db/crls
    Copy the selected text to the Clipboard by pressing the key combination command-C. In the Finder, select
    Go ▹ Go to Folder...
    from the menu bar and paste into the box that opens (command-V). You won't see what you pasted because a line break is included. Press return.
    A folder named "crls" should open. Move all the files in that folder to the Trash. You’ll be prompted for your administrator login password.
    Step 7
    Reboot, empty the Trash, and test.

  • Acrobat PDFMaker Office COM Addin for Microsoft Office 365 does not work. When I check the box for COM Add-ins, it unchecks it. "A runtime error occurred during the loading of the COM Add-in." I use Adobe Acrobat X Professional. The add-in worked fine in Office 2010.

    I recently upgraded to Microsoft 365 Home and use Outlook 2013. The Acrobat PDFMaker add-in worked fine in Office 2010. Now I get an error message: "Not loaded. A runtime error occurred during the loading of the COM Add-in."
    I use Adobe Acrobat X Professional.
    I have restarted Outlook, restarted my computer, and nothing changes.
    Does anyone have a solution?
    Steve

    I do not think that Acrobat X is compatible with the newest versions of Office. Your only choices are to print to the Adobe PDF printer or to use the MS plugins to create PDFs.

  • Why we will go for Queue delta instead of Unserialized and Direct delta ?

    Hi Experts,
    Why would we go for queued delta instead of unserialized or direct delta? Please specify the reasons.
    What happens internally when we use queued delta or direct delta?
    I will allocate points to those who help me in detail. My advance thanks to whoever responds to my query.

    Hi,
    Direct Delta
    With this update mode, extraction data is transferred directly to the BW delta queues every time a document is posted. In this way, each document posted with delta extraction is converted to exactly one LUW in the related BW delta queues. If you are using this method, there is no need to schedule a job at regular intervals to transfer the data to the BW delta queues. On the other hand, the number of LUWs per DataSource increases significantly in the BW delta queues because the deltas of many documents are not summarized into one LUW in the BW delta queues as was previously the case for the V3 update.
    If you are using this update mode, note that you cannot post any documents during delta initialization in an application from the start of the recompilation run in the OLTP until all delta init requests have been successfully updated in BW. Otherwise, data from documents posted in the meantime is irretrievably lost. The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) A maximum of 10,000 document changes (creating, changing or deleting documents) are accrued between two delta extractions for the application in question. A (considerably) larger number of LUWs in the BW delta queue can result in terminations during extraction.
    b) With a future delta initialization, you can ensure that no documents are posted from the start of the recompilation run in R/3 until all delta-init requests have been successfully posted. This applies particularly if, for example, you want to include more organizational units such as another plant or sales organization in the extraction. Stopping the posting of documents always applies to the entire client.
    Queued Delta
    With this update mode, the extraction data for the affected application is compiled in an extraction queue (instead of in the update data) and can be transferred to the BW delta queues by an update collective run, as previously executed during the V3 update.
    Up to 10,000 delta extractions of documents to an LUW in the BW delta queues are cumulated in this way per DataSource, depending on the application.
    If you use this method, it is also necessary to schedule a job to regularly transfer the data to the BW delta queues ("update collective run"). However, you should note that reports delivered using the logistics extract structures Customizing cockpit are used during this scheduling. This scheduling is carried out with the same report which is used when you use the V3 updating (RMBWV311, RMBWV312 or RMBWV313). There is no point in scheduling with the RSM13005 report for this update method since this report only processes V3 update entries. The simplest way to perform scheduling is via the "Job control" function in the logistics extract structures Customizing Cockpit. We recommend that you schedule the job hourly during normal operation - that is, after successful delta initialization.
    In the case of a delta initialization, the document postings of the affected application can be included again after successful execution of the recompilation run in the OLTP (e.g OLI7BW, OLI8BW or OLI9BW), provided that you make sure that the update collective run is not started before all delta Init requests have been successfully updated in the BW.
    In the posting-free phase during the recompilation run in OLTP, you should execute the update collective run once (as before) to make sure that there is no old delta extraction data remaining in the extraction queues when you resume posting of documents.
    Using transaction SMQ1 and the queue names MCEX11, MCEX12 or MCEX13 you can get an overview of the data in the extraction queues.
    If you want to use the functions of the logistics extract structures Customizing cockpit to make changes to the extract structures of an application (for which you selected this update method), you should make absolutely sure that there is no data in the extraction queue before executing these changes in the affected systems. This applies in particular to the transfer of changes to a production system. You can perform a check when the V3 update is already in use in the respective target system using the RMCSBWCC check report.
    In the following cases, the extraction queues should never contain any data:
    - Importing an R/3 Support Package
    - Performing an R/3 upgrade
    For an overview of the data of all extraction queues of the logistics extract structures Customizing Cockpit, use transaction LBWQ. You may also obtain this overview via the "Log queue overview" function in the logistics extract structures Customizing cockpit. Only the extraction queues that currently contain extraction data are displayed in this case.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) More than 10,000 document changes (creating, changing or deleting a document) are performed each day for the application in question.
    b) In future delta initializations, you must reduce the posting-free phase to executing the recompilation run in R/3. The document postings should be included again when the delta Init requests are posted in BW. Of course, the conditions described above for the update collective run must be taken into account.
    Un-serialized V3 Update
    Note: Before PI Release 2002.1 the only update method available was V3 Update. As of PI 2002.1 three new update methods are available because the V3 update could lead to inconsistencies under certain circumstances. As of PI 2003.1 the old V3 update will not be supported anymore.
    With this update mode, the extraction data of the application in question continues to be written to the update tables using a V3 update module and is retained there until the data is read and processed by a collective update run.
    However, unlike the current default (serialized V3 update), the data is read in the update collective run (without taking the sequence in the update tables into account) and then transferred to the BW delta queues.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method since serialized data transfer is never the aim of this update method. However, you should note the following limitation of this update method:
    The extraction data of a document posting, where update terminations occurred in the V2 update, can only be processed by the V3 update when the V2 update has been successfully posted.
    This update method is recommended for the following general criteria:
    a) Due to the design of the data targets in BW and for the particular application in question, it is irrelevant whether or not the extraction data is transferred to BW in exactly the same sequence in which the data was generated in R/3.
    Thanks,
    JituK

  • Lock Object error during batch load

    The batch load was delayed about an hour. The first error message in the server log said "Object is already locked by user", then "Error 1053010 processing request [Lock Object] - Disconnecting". Then it went through a series of "Object locked by user admin" and "received client request: create Temporary Object (from user admin)" messages. It did this several times until connection. From time to time we have had problems with objects getting locked and preventing maintenance in the Application Manager, but this is the first time it has prevented a batch load. Does anyone know what could have triggered this during the load process?

    This happened to us a lot; I think it's due to people opening objects, then letting their session time out, leaving a phantom lock. We solved the problem by calling the UNLOCKOBJECT function before any dimbuilds. 99.9% of the time this raises an error, "..object x is not locked..", but the rare time that the otl is inadvertently locked, it prevents the script from failing.
    HTH,
    Jeff McAhren
    Dallas, Texas
    Example, to unlock the outline for app/db flash:
    UNLOCKOBJECT 1, "FLASH" "FLASH" "FLASH"

  • Server hangs or freezes during heavy load

    During peak times of the day, especially during heavy load on the Calendar Server,
    the application seems to hang. The client-side application will not respond on the user's desktop, and uni* commands on the server itself respond considerably slowly.
    There are two parameters in the server configuration file that are strongly believed to be a trigger of server hangs or freezes in large deployments and/or on busy servers. Here is a description of the problem:
    Large deployments tend to be 3000+ users per node. This could be a single- or multi-node environment.
    A lock manager fix was implemented in 4.0 to correct a problem that was found in 3.51 where the server would hang. At that time, the parameters called read/writelocktimeouts were introduced as a failover mechanism in case the database was not available, which would then trigger the client process to disconnect rather than hang the whole server.
    These timeouts effectively terminate a process whose read or write exceeds the specified periods. The default of 20 seconds is quite a large amount of time; however, it is not totally unlikely that such a value could be met on a very busy system. If this is the case, and there is some relation between a process being terminated by one of these timeouts and subsequent system instability, then the "solution" would not be to extend the values of the timeouts but rather to exclude them. This way, no process is terminated by a timeout, and each process is therefore allowed to continue until it has completed its job.
    The timeouts were not removed from the product, but under normal circumstances they probably won't be needed anymore anyhow. It seems that on a busy calendar server, setting the db timeout alarms may actually trigger the server to freeze. Below are some examples of errors that appear in the log files which show that the database is no longer accepting client requests:
    db_VISTA ERROR -920 -> cst_d_open: d_open
    db_SchedBaseOpen: unable to open database
    probable cause: unilckd is down or "/users/unison/tmp/unisonlckm" was removed
    uniengd: database lock timeout
    ITEM: "NA,NA" <0,0>
    CLIENT: "unises", "A.02.80"
    INET-NAME:
    INET-ADDR:
    CALL: "SessionsInfoGet"
    To make the fix:
    1. Using your favorite editor, edit the /users/unison/misc/unison.ini file. In the following section you will see these two parameters:
    [ENG]
    writelocktimeout = 20
    readlocktimeout = 20
    2. Place a "#" sign (or the appropriate comment symbol for your OS) in front of these two lines and save the file.
    3. Restart the server for the changes to take effect.

    This looks similar to what I'm seeing.
    DPM 2010: there's one backup set (for me, a file server disk) where every time I try to run the initial replica, the server hangs and needs to be rebooted via iLO. It doesn't just die suddenly; first the data stream on the backup stops, then the OS becomes less responsive, but there is no resource issue. Trying to open Event Viewer will cause a few things to lock up, and then over a few minutes the server is completely frozen, as if the disk drives have been locked.
    Suspecting McAfee, I added in all the exclusions; that didn't help, so I added the process exclusions, which are done by setting dpmra and csc to low risk, and that didn't help either. I could reproduce it just by kicking off a backup for this one file server's drive, so it's easy to test with.
    Tonight, I got some permissions in EPO to let me stop the scanning completely and disable the on-access scan, and for the first time it worked!
    There is definitely an issue between DPM and McAfee beyond what is on MS's web page for AV checks.
    I don't have a workaround yet other than stopping the AV completely... something to follow up on next week. For the moment I made some progress, though.

Maybe you are looking for

  • How to get the name of a (custom) component

    Hi, I would like to know whether there is a way to get the name Xcelsius uses, for instance, in the Object Browser. Getting the component's "name" property doesn't return this information. Thanks in advance.

  • I helped an engineer and i now need help with conn...

    Hi, I recently got BT Infinity installed in my home. The HomeHub3 was installed along with a white box which it connects into and it connects to my phone socket. Everything is working fine. But to cut a long story short, my Engineer was "Snowed Under

  • How can I view and choose photos before importing from a digital camera?

    Hey, I am using a Canon PowerShot A40 and I want to be able to (after I connect the camera and open iPhoto of course) view the pictures on the camera in iPhoto and choose individual images to import. I have 560 images, but I only want to import 3 (gr

  • Autofill in Safari search bar for iPod Touch

    I have read lots of threads, but none seem to answer this question... How do I delete previous search items in the Google/Yahoo search bar in Safari on my iPod Touch??? Autofill displays these previously entered searches with a drop down menu. There

  • Backing up a recently lost 3G Touch onto my new account

    I have a 32G third generation Ipod, which I thought I lost 3 months ago, and just found again yesterday. It has 20G of music on it and because I thought I had lost it I bought a 16G Iphone and have replaced 14G of music on the Iphone. The trouble is