System Downtime in BW

Hi All,
We are working on BW 3.5. We have to enhance one LO DataSource. I know we need source system (ECC/R/3) downtime for filling the setup tables and for re-initialization with data, but can you let me know whether we need any downtime on the BW system as well? We are adding one field to the procurement cube.
Is it that we need complete BW downtime during the transports and re-initialization?
regards,
Dola

Hi,
If your DataSource is already in production and you want data only from today onwards for the enhanced field, i.e. historical data is not required for the enhanced field:
1. Fix ECC downtime of 20 to 30 minutes.
2. Keep all objects ready in the quality systems of ECC and BW.
3. Run delta loads in BW 2 to 3 times. This step clears SMQ1 and RSA7. Check the entries in RSA7; if the count is zero, it is fine.
4. Move the DataSource from ECC quality to ECC production.
5. Replicate it in BW.
6. Move all BW objects from BW quality to BW production.
7. Delete the init load at InfoPackage level (not the data in the ODS/cube).
8. Load an init without data transfer.
9. Then run the delta.
10. From the next day onwards, deltas will come as usual.
If you need historical data as well:
1. Delete the data in the cube.
2. Fix downtime, load an init, then run deltas.
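The check in step 3 can be sketched in code. This is only an illustration of the control flow; `run_delta_load` and `rsa7_entry_count` are hypothetical stand-ins for the manual BW/ECC actions (triggering the delta InfoPackage and checking RSA7), not real APIs.

```python
# A sketch of step 3 above: run delta loads until the RSA7 delta queue is
# empty before transporting the DataSource. Both callables are hypothetical
# stand-ins for the manual BW/ECC actions.

def drain_delta_queue(run_delta_load, rsa7_entry_count, max_runs=3):
    """Return True once RSA7 is empty, i.e. it is safe to transport."""
    for attempt in range(1, max_runs + 1):
        run_delta_load()                 # trigger a delta InfoPackage
        remaining = rsa7_entry_count()   # check entries left in RSA7
        if remaining == 0:
            return True                  # queue drained: proceed to step 4
    return False                         # entries remain: do not transport yet
```

The point is that the transport (step 4) must only happen once the delta queue is confirmed empty; otherwise unextracted delta records would be lost.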
Thanks
Surendra Reddy

Similar Messages

  • System Downtime log

    Hi Forum,
    My Dev system was shut down a number of times over the last 3 days, so my superior asked me to provide a downtime log for the last 3 days showing when the system was up and when it was not.
    How can I obtain such a log from SAP, and also at OS level?
    Thanks & Regards,
    SAP CRM

    In your <sid>adm user's home directory there should be the log files:
    stopsap_<hostname>_<system_number>.log
    startsap_<hostname>_<system_number>.log
    Additionally you can check the DB logs:
    $ORACLE_HOME/saptrace/background/alert_<SID>.log
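    A rough way to turn those startsap/stopsap logs into an up/down timeline is to pair each log file with its modification time. The sketch below assumes the naming pattern shown above; the home directory path is whatever your <sid>adm user uses.

```python
# List startsap/stopsap logs with their modification times to build a rough
# up/down timeline. The naming pattern follows the reply above; pass the
# <sid>adm home directory for your host.
import glob
import os

def downtime_log_candidates(home):
    entries = []
    for pattern in ("stopsap_*.log", "startsap_*.log"):
        for path in glob.glob(os.path.join(home, pattern)):
            mtime = os.stat(path).st_mtime  # last write ~ stop/start time
            entries.append((mtime, path))
    return sorted(entries)                  # chronological (epoch secs, path)
```

    For an authoritative record you would still cross-check against the Oracle alert log, since the file timestamps only approximate the actual stop/start moments.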
    Rgds,
    Loukas

  • System downtime web message

    There are a few nodes being load-balanced by a Cisco CSS here. When all of these nodes need to be taken down for whole-system downtime, I'd like to provide an informative web page notifying end users that the system will be under maintenance for a few hours.
    BigIP has failover pool capability, how to do the same thing in Cisco CSS?
    Thanks very much,
    Hai

    The command is "primarysorryserver".
    Clients will be forwarded to a sorry server that can display any information you want.
    Gilles.

  • System Downtime

    Can we please have a CLEAR AND CORRECT statement as to the days of the week and the times (start to finish) that the website is going to be down? I was under the impression that the website would be down Sundays from about 1am to 6am.
    Now, 3 days in a row (weekdays), the system has gone down late.

    It's usually common practice to run updates/maintenance on websites in the wee hours of the night, since that's when they are least busy and fewer people are affected by the downtime. Of course, that does not apply if it's unexpected maintenance. The times I've had issues with the site were around 1am and 2am, in particular not being able to view usage.

  • System downtime during Statistical Set-up (OLI3BW)

    Hi Gurus,
    Please help me with this.
    Is there a way we can minimise R/3 downtime while doing the statistical setup for loading data into BW?
    We have the Purchasing cube active, and we are enhancing the cube and the extractor to add new fields. Since the requirement needs history for these new fields, we have to re-initialise the LO setup again; we are also upgrading from unserialized V3 to queued delta update. Historically it looks like we need downtime, meaning no transactions should happen while performing OLI3BW. I was wondering whether there is a way to perform this with no downtime; our client is a little apprehensive about a long downtime. Please share your thoughts.
    Thanks All.

    hi,
    try checking SAP Note 753654,
    and there is a 'how to' on minimizing downtime:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d51aa90-0201-0010-749e-d6b993c7a0d6

  • Periodic Alert scheduler runs during system downtime

    Hi,
    I have implemented a Periodic Alert and have activated the Periodic Alert Scheduler.
    The issue is that the 'Periodic Alert Scheduler' runs at 12 AM every day, and our Prod systems are down at that time for backup. Can it be scheduled to run at a different time, or will it resume once the systems are up?
    Thanks,
    R

    The alert will execute the scheduled run when the system is back up.

  • Full ETL Load into PRD working around system downtime

    The problem we have is that some of the Informatica ETL tasks run 30+ hours on Prod. But we have overnight backups and batches, which means we have only 18 hours max per day, and we have to stop the ETL at that point. When we stop the ETL, it sends a truncate signal and wipes all the data out.
    Is there a way to make a task not send the truncate signal, but instead commit up to that execution point and resume from there when the ETL is stopped and restarted around overnight activities on the production system? Or is there a way to say that a task will run at a particular time, so that we can cancel the overnight backups?
    Thanks in advance for any help you can offer

    Hi,
    The tables get truncated, or a rollback occurs, in both the SDEs and the SILs. Only particular tables get rolled back, not the entire warehouse. This happens in particular SDEs and SILs, as some of the tasks take quite a while to complete. Yes, we have gone through the Oracle performance recommendations and applied most of those that could be done within the timeframe we have.
    For example, say payroll data is being loaded, which might take 25+ hours, and we have a daily window of 17 hours. At that point, when I stop the ETL, it marks the task (either SDE or SIL) as failed. What we want is for the staging/target table to commit itself instead of doing a rollback; that would enable the task to pick up from that point rather than running all over again.
    I tried changing the commit interval in Workflow Manager to 5,000 rows instead of the default 10,000. I also enabled persistent cache for these mappings and changed the recovery strategy to 'restart from previous commit point'; neither seemed to make a difference.
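    The desired behavior can be sketched generically: commit every N rows and record a checkpoint, so a stopped load resumes from the last commit instead of being truncated and rerun. This illustrates the logic only; it is not the Informatica API, and `commit`/`checkpoint` are hypothetical callables.

```python
# Generic sketch of checkpointed loading: commit every `interval` rows and
# record a resume offset, so a stopped run restarts from the last commit
# rather than truncating and reloading everything.

def load_with_checkpoint(rows, commit, checkpoint, interval=5000, start=0):
    """Process `rows` from offset `start`, committing every `interval` rows."""
    buffered = 0
    for i in range(start, len(rows)):
        # ... transform rows[i] and write it to the staging/target table ...
        buffered += 1
        if buffered == interval:
            commit()            # make the batch durable
            checkpoint(i + 1)   # a restart resumes here, not from zero
            buffered = 0
    if buffered:
        commit()                # final partial batch
        checkpoint(len(rows))
```

    With this pattern, a 25-hour load interrupted at hour 17 resumes from the last checkpoint the next day instead of restarting from row zero.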
    Thanks.

  • Any performance impact on satellite systems?

    Hello,
    Our productive Solution Manager is set up to connect to all the ECC, XI and R/3 systems as the central monitoring system.
    So far we only use Solman for monitoring and EWA; not even BPM is configured yet.
    In SMSY, it is configured to use our XI SLD to automatically collect all the database, server and system data for the Solman system landscape.
    My question is: if there is downtime on the Solman system, will it have any performance impact (slow transactions) on the satellite systems, especially on my XI system (as Solman accesses the XI SLD)?
    The reason I post this question here is that we recently had downtime for Solution Manager due to an SP upgrade. During the downtime, we noticed thousands of IDoc processing messages to the R/3 system stuck in XI for one of our applications. Users started to complain, and we tried to find the root cause in XI and in R/3, involving our Basis guy and the XI admins, but no one was able to find out why.
    By the time Solman was up again (after 2-3 hours), ALL the stuck messages in XI were cleared/processed at once. Until now, no one has been able to explain why. They suspect that my Solution Manager downtime had an impact on those satellite systems.
    Solman is set up to use PXI's SLD.
    Does anyone have feedback on this: is it a coincidence, or not?
    Please share if you have knowledge on this topic.

    Hi,
    As per your description, I would say it is pure coincidence.
    When you configure Solution Manager for system monitoring and/or EWA, all Solution Manager does is collect the relevant data from the satellite systems. Just this.
    So, when Solution Manager is not available, there should be no reason for any impact on the satellite systems' side.
    Also, from your description, I would say your XI is getting SLD information from Solution Manager. That is the only reason I could think of for IDocs getting stuck in XI.
    Is it possible to ask your XI team to recheck which SLD XI is using?
    Regards
    Valdecir

  • How downtime can be reduced for setup table update.

    Hi;
    Can anyone tell me the various ways to reduce system downtime for setup table updates?
    thanks
    Warm Regards
    Sharebw

    Hi,
    You will need to fill the setup tables in a 'no postings' period, in other words when no transactions are posted for that area in R/3; otherwise those records will not come to BW. Discuss this with the end users and decide; weekends are a common choice for this activity.
    Alternatively, try early delta initialization.
    With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the application during the initialization request in the source system. This means that you are able to execute the initialization of the delta process (the init request), without having to stop the posting of data in the source system. The option of executing an early delta initialization is only available if the DataSource extractor called in the source system with this data request supports this.
    Extractors that support early delta initialization are delivered with Plug-Ins as of Plug-In (-A) 2002.1.
    You cannot run an initialization simulation together with an early delta initialization.
    I hope this link makes early delta init clearer:
    http://help.sap.com/saphelp_nw04s/helpdata/en/80/1a65dce07211d2acb80000e829fbfe/frameset.htm
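    The difference early delta init makes can be shown with a toy model: while the setup tables are being filled, new postings go straight to the delta queue instead of being blocked. The class and method names below are illustrative only, not SAP objects.

```python
# Toy model of early delta initialization: postings made during the
# statistical setup are written straight to the delta queue rather than
# being blocked, so the source system needs no posting-free downtime.

class SourceSystem:
    def __init__(self):
        self.setup_tables = []         # filled by the statistical setup
        self.delta_queue = []          # picked up by the first delta load
        self.early_delta_init = False

    def start_early_delta_init(self, historical_docs):
        self.early_delta_init = True
        self.setup_tables.extend(historical_docs)

    def post_document(self, doc):
        if self.early_delta_init:
            self.delta_queue.append(doc)  # posting continues, no downtime
        else:
            raise RuntimeError("classic init: postings must be stopped")
```

    In the classic init case the model raises instead, mirroring the need for a posting-free period; with early delta init the same posting simply lands in the delta queue.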
    thanks,
    JituK

  • Need for locking system for delta initialization (BW-R3-LO)

    Hello all,
    Could someone please help me with the following,
    Please be kind and bear with the lengthy post.
    1. Do we need to lock the system (or posting-free the system) to do early delta initialization or not? If yes, please explain me how that work?
    2. Someone somewhere mentioned that early delta initialization is not preferable compared to normal delta initialization, even though early delta initialization is advantageous. Why is that so?
    3. Imagine, while scheduling Queued Delta Init or un-serialized V3 delta Init, if I choose certain end date and time (say, 10pm, 12-31-2007) for the records, then all the records till that point would be loaded to setup tables and all the records after that point of time will be moved to extraction queue or update tables. My question is, if this is true why would we ever need to lock the system during setup tables filling?
        If you say the records after that particular time and date (10pm, 12-31-2007) will not be collected in extraction queue and update tables, why don't we do the following
                 a). say, I do full load with end date and time (10pm 12-31-2007)
                 b). Next, I do delta init with start date and time (10pm 12-31-2007) (this time I lock the postings)
             If we do this way, we can actually reduce the down time of the system. Is my conclusion right?
             If you say it's not practicable as some transactions' saving time differs from time stamp and some transactions' delta is even measured by pointer.......if this is the case we can use Delta offset. What do you say about this?
    4. How this scenario differs to early delta initialization?
    Your responses would be greatly appreciated
    Thank you

    Hello,

    >
    curious maven wrote:
    > Do we need to lock the system (or posting-free the system) to do early delta initialization or not? If yes, please explain me how that work?
    No, you don't need to lock the source system; that is the advantage of early delta init.
    With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the application during the initialization request in the source system. This means that you are able to execute the initialization of the delta process (the init request), without having to stop the posting of data in the source system. The option of executing an early delta initialization is only available if the DataSource extractor called in the source system with this data request supports this.
    [Business Information Warehouse 3.0 Overview Presentation|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9a0e9990-0201-0010-629b-dc2735cb9c81]
    2
    >
    curious maven wrote:
    > 2. Someone somewhere mentioned that early delta initialization is not preferable compared to normal delta initialization, even though early delta initialization is advantageous. Why is that so?
    Early delta is not supported by all the extractors. See Lo Extraction
    3
    >
    curious maven wrote:
    Imagine, while scheduling Queued Delta Init or un-serialized V3 delta Init, if I choose certain end date and time (say, 10pm, 12-31-2007) for the records, then all the records till that point would be loaded to setup tables and all the records after that point of time will be moved to extraction queue or update tables. My question is, if this is true why would we ever need to lock the system during setup tables filling?
    >
    >     If you say the records after that particular time and date (10pm, 12-31-2007) will not be collected in extraction queue and update tables, why don't we do the following
    >            
    >              a). say, I do full load with end date and time (10pm 12-31-2007)
    >              b). Next, I do delta init with start date and time (10pm 12-31-2007) (this time I lock the postings)
    >
    >          If we do this way, we can actually reduce the down time of the system. Is my conclusion right?
    >
    >          If you say it's not practicable as some transactions' saving time differs from time stamp and some transactions' delta is even measured by pointer.......if this is the case we can use Delta offset. What do you say about this?
    This is applicable to generic delta, where you can set a safety interval (upper limit and lower limit) for the extractor based on an ALE pointer, a timestamp, or a calendar day.
    If you execute an extractor with early delta init, the delta records are written directly to the delta queue while the setup tables are being filled. Delta management handles this.
    [How to Create Generic Delta|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/84bf4d68-0601-0010-13b5-b062adbb3e33]
    [SAP BI Generic Extraction Using a Function Module|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a0f46157-e1c4-2910-27aa-e3f4a9c8df33]
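    The way a generic-delta safety interval bounds each extraction can be sketched numerically: each run selects from (last pointer minus lower limit) up to (current value minus upper limit), then advances the pointer. The function below is an illustration of that arithmetic, not an SAP interface; the values could be timestamps, calendar days, or numeric pointers.

```python
# Sketch of a generic-delta safety interval: the lower limit re-reads a
# margin of older records (duplicates are tolerated), while the upper limit
# leaves very recent records for the next run (late arrivals not missed).

def delta_selection_window(last_pointer, current_value,
                           lower_limit=0, upper_limit=0):
    low = last_pointer - lower_limit     # re-read a safety margin
    high = current_value - upper_limit   # hold back the newest records
    return low, high, high               # third value: the new pointer
```

    For example, with a last pointer of 100, a current value of 200, a lower limit of 10 and an upper limit of 5, the run selects the range 90 to 195 and the pointer advances to 195.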
    4
    >
    curious maven wrote:
    How this scenario differs to early delta initialization?
    If you do an init without early delta init, then users cannot post new documents, and you have to either schedule it over the weekend or request source system downtime.
    But if you use early delta init, you don't need source system downtime: users can post new records during the init, and those records are written directly to the delta queue.
    [How to Minimize Effects of Planned Downtime (NW7.0)|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/901c5703-f197-2910-e290-a2851d1bf3bb]
    [How to Minimize Downtime for Delta Initialization (NW2004)|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5d51aa90-0201-0010-749e-d6b993c7a0d6]
    Thanks
    Chandran

  • Reducing time required for ABAP-only copyback (system copy) process

    Our company is investigating how to reduce the amount of time it takes to perform a copyback (system copy) from a production ABAP system to a QA system.  We use a similar process for all ABAP-only systems in our landscape, ranging from 3.1h systems to ECC6.0 ABAP-only systems on both DB2 and Oracle database platforms, and the process takes approximately two weeks of effort from end-to-end (this includes time required to resolve any issues encountered). 
    Here is an overview of the process we use:
    • Create and release backup transports of key system tables and IDs (via client copy) in the QA system to be overwritten (including RFC-related tables, partner profile and IDoc setup-related tables, scheduled background jobs, archiving configuration, etc.).
    • Reconfigure the landscape transport route to remove the QA system from the transport landscape.
      o Create a virtual import queue attached to the development system to capture all transports released from development during the QA downtime.
    • Take a backup of the target production database.
    • Overwrite the QA destination database with the production copy.
    • Localize the database (performed by the DBAs).
    • Overview of Basis tasks (for smaller systems this process can be completed in one or two days, but for larger systems it takes closer to 5 days because of the BDLS runtime and the time it takes to import larger transport requests and the user ID client copy transports):
      o Import the SAP license.
      o Execute SICK to check the system.
      o Execute BDLS to localize the system.
      o Clear out performance statistics and scheduled background jobs.
      o Import the backup transports.
      o Import the QA client copy of user IDs.
      o Import/reschedule background jobs.
      o Perform any system-specific localization (example: for a CRM system with TREX, delete the old indexes).
    • Restore the previous transport route to include the QA system back in the landscape.
    • Import all transports released from the development system during the QA system downtime.
    Our company's procedure is similar to the procedure demonstrated in this 2010 TechEd session:
    http://www.sapteched.com/10/usa/edu_sessions/session.htm?id=825
    Does anyone have experience with a more efficient process that minimizes the downtime of the QA system?
    Also, has anyone had a positive experience with the system copy automation tools offered by various companies (e.g., UC4, Tidal)?
    Thank you,
    Matt

    Hi,
    > One system that immediately comes to mind has a database size of 2TB.  While we have reduced the copyback time for this system by running multiple BDLS sessions in parallel, that process still takes a long time to complete.  Also, for the same system, importing the client copy transports of user ID's takes about 8 hours (one full workday) to complete.
    >
    For BDLS run, I agree with Olivier.
    > The 2 weeks time also factors in time to resolve any issues that are encountered, such as issues with the database restore/localization process or issues resulting from human error.  An example of human error could be forgetting to request temporary ID's to be created in the production system for use in the QA system after it has been initially restored (our standard production Basis role does not contain all authorizations required for the QA localization effort).
    >
    For the issues that you encounter because of the system copy itself, you can minimize this time, since you do the copy on a periodic basis: build a task list and note the issues you faced in previous runs. So normally I don't count that as system copy time.
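    The "multiple BDLS sessions in parallel" idea mentioned above can be sketched generically: run one logical-system rename task per independent table group concurrently, so total wall time approaches the slowest group rather than the sum. This only models the scheduling; `rename_task` is a hypothetical stand-in, not a BDLS interface.

```python
# Scheduling sketch of running independent localization tasks (one per
# table group) in parallel instead of sequentially.
from concurrent.futures import ThreadPoolExecutor

def localize_in_parallel(table_groups, rename_task, workers=4):
    """Run rename_task over each table group concurrently; order preserved."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(rename_task, table_groups))
```

    The same partition-and-parallelize idea applies to any per-table localization step, provided the table groups really are independent of one another.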
    Thanks
    Sunny

  • XPRA to minimize downtime during upgrade

    Hello Experts,
    I am looking for information about XPRAs and system downtime during an upgrade. Is it possible to modify the XPRAs that have to be run during the upgrade in order to reduce the downtime, and if so, how? Please also give me other suggestions for reducing the downtime related to XPRAs.
    Thanks...
    Viral

    Hi Viral,
    When you run PREPARE, it will tell you how many XPRA objects there are for conversion. You can work on the XPRA objects during the uptime period.
    Regarding parallel processing: when you run the upgrade, it will ask you for the parameters 'number of background processes' and 'number of parallel processes'. You can choose larger values to reduce the downtime, provided you have sufficient hardware.
    Regards
    Ashok Dalai
    Edited by: Ashok Dalai on Aug 20, 2009 11:30 AM

  • SAP ERP 6.0 Upgrade (Downtime)

    Dear Sir/Madam,
    Currently we are doing an SAP ERP 6.0 upgrade. Can you please guide me on the steps we need to take in order to minimize system downtime?
    Regards.
    Hubert.

    Hi Hubert,
    Regarding the upgrade times, there are a lot of factors, such as the table contents, DB size, and tp and R3trans versions, that can affect performance. In my experience, it depends mostly on database performance.
    Please refer to the following SAP notes:
    838725  - Oracle 10.2: Dictionary and system statistics
    830576 -  Parameter recommendations for Oracle 10g
    1171650 - Automated Oracle DB parameter check
    983548 -  Long runtimes during SAP Upgrade using Oracle 10
    558197 - upgrade hangs in PARCONV_UPG, XPRAS_UPG, SHADOW_IMPORT_UPG2
    Additional information:
    1. Whether the location of the datafiles (upgrade directory) is a local directory or a mounted remote directory; for a remote directory, the connection speed to it is important.
    2. Even if the upgrade directory is local, the network layer is used, and thus the network configuration is important. A typical difference between production and test systems is that production is often high-availability and may use e.g. virtual hostnames or other methods, so the network setup may differ from non-production systems.
    3. Hardware factors also matter, e.g. whether the database and the upgrade directory are on fast disks and whether caches are used.
    4. The database speed (parameters, memory, disks, ...).
    5. Whether the database is on the host where SAPup and R3trans run or on a different host may influence the speed; but even local database access uses the network layer, so the network configuration of the database host is also an issue.
    Best Regards
    Julia

  • System copy tools for SAP Basis 46C.

    Hi All,
    I need to do a system copy for SAP Basis 4.6C. I have a query regarding the tools used for the system copy. It is written in the SAP documents that we need to use the latest version of the system copy tools (i.e. R3load, R3check, R3ldctl, R3szchk, R3trans).
    I need to do a heterogeneous system copy:
    Source: Oracle 9.2.0.8, AIX 5.3
    Target: Oracle 9.2.0.8 (or 10.2.0.4 if possible), Sun Solaris
    So my questions are:
    1) Am I supposed to use only the tools (i.e. R3load, R3check, R3ldctl, R3szchk, R3trans) which are compatible with release 4.6C (kernel 46D_EXT, latest patch level 2541), or
    2) Can I download and use the latest version of the system copy tools (i.e. release 7.00 of R3load, R3check, R3ldctl, R3szchk, R3trans) to do the heterogeneous system copy of SAP Basis 4.6C?
    Please reply.
    Thanks

    > Yes we are aware about the certified consultant. But before going for such a big project we wanted to find out the possibilities how we can minimise system downtime and optimize the process therefore wanted to have some information earlier.
    Exactly, that is the use of that consultant. Those are experienced people who have done lots of migrations in the past; they know the possibilities of the tools and versions.
    > So now I am very sure that we can use 10.2 when installing the new target system while doing the heterogeneous system copy from Oracle 9.2.
    Correct.
    > 1) Migration Monitor (source :-System copy & Migration optimization.pdf)
    > The Migration Monitor is downward-compatible. As new features and bug fixes are only implemented in the latest version, SAP urgently recommends that you always use the latest version available.
    You can use it, of course, but support for it is not "built in"; there is no checkbox for it as in newer versions of sapinst. You will need to modify the R3S file at the correct place to make use of it.
    > 2) Distribution Monitor (source: Distribution Monitor (version 1.9.1) Users' Guide)
    > Restrictions: The following restrictions apply to the current implementation: the DM does not support system copies of releases lower than SAP_BASIS 6.20.
    So you can't use that option.
    > 3) PACKAGE SPLITTING (source :-System copy & Migration optimization.pdf)
    > The package splitting option is available and integrated into the system copy tools R3SETUP and SAPinst since SAP R/3 4.0B SR1.
    The "old version" is a Perl script; you'll need to make sure you have Perl installed.
    >
    > 4) TABLE SPLITTING  (source :-System copy & Migration optimization.pdf)
    > AVAILABILITY:
    > Table splitting can be used for ABAP systems with SAP kernel 6.40 or above
    So that is also not usable.
    > 6) R3LOAD OPTIONS - (source :-System copy & Migration optimization.pdf)
    > R3load option '-fast' (<= 4.6D) or '-loadprocedure fast': These options are available
    > • from SAP kernel release 6.40 for DB2/UDB, MSSQL, Oracle (see SAP Note 1045847 - Oracle Direct Path Load Support in R3load)
    So it's also not built in for kernel 46D_EXT.
    > Request your final comments on the above points. I also need to know whether parallel import and export can be done for the 4.6C (46D_EXT kernel) release using the Migration Monitor. Please reply.
    As I said before, and as is written in the documentation, it can be used, but not "by default"; you'll need to make sure you stop at the right place in your migration to configure it manually (or the consultant will do it).
    Markus

  • Shadow system during system upgrade

    Hi All,
    I am new to upgrades. I would like to know about the shadow system: how it is created, and what happens during the switch between the shadow and the original system.
    I read the following blog, which helped me get clarity on the introduction:
    SUM: introduction to shadow system
    Appreciate if anyone can explain and share some document on this.
    Cheers,
    Febin Roy

    Hi Febin,
    If you are working on an SAP upgrade project, you will come across the terms "shadow system" (or shadow instance) and "system switch". By upgrading, we want to get the newest functions that the provider offers in the software product we are currently using, and we want this done without interrupting the existing, running system. In reality there is always some interruption; we just try to minimize it as much as possible, and so does SAP. The purpose of the shadow system is to minimize system downtime during the upgrade.
    The shadow system is basically a copy of the system that is going to be upgraded. This copy is somewhat more limited than the original: it contains only the technical repository (SAP Basis objects), and you cannot run your business processes on it. Moreover, the technical repository is upgraded to the version of the target release. Technically speaking, it is used for modification adjustments (the famous transactions SPAU and SPDD) and for the activation of objects. All support packages and add-ons are imported into this instance as well; through these, the system is brought to the higher release. Note also that the shadow system runs in the same database as the original system; they run in parallel.
    During preparation of the shadow instance, you are still running your original SAP system, where all business processes execute normally. Once the shadow instance is prepared, the system switch takes place: the original system is essentially exchanged with the shadow one, and at that point the shadow system becomes the upgraded system.
    Regards,
    Deva
