LC ES3 Business Continuity Management / Disaster Recovery / Continuation of Operations

Has anyone else in this forum deployed a COOP / DR / site fail-over of their LC ES3 DRM system, and do you have lessons learned to share?

Similar Messages

  • Workflow Manager disaster recovery old instances not moving forward

    Dear All,
    I am facing an issue with Workflow Manager 1.0 disaster recovery.
    I followed all the steps to recover Workflow Manager 1.0 mentioned on MSDN. New instances created after the restoration work fine, but when I complete a task in one of the old workflow instances it does not move forward, even though the workflow is not getting suspended and there is no error. Please see the screenshot.
    Regards,
    Faraz Javaid

    Hi, I did not republish the workflow at first, and the behavior for old workflow instances was the same as mentioned in my question.
    Then I republished the workflow, thinking it might solve the problem, but the result remained the same. I have checked the logs but unfortunately did not find any noticeable error.
    Faraz Javaid

  • Disaster Recovery - Managed Server Startup

    We are testing some disaster recovery and want to be able to start our managed servers on our B node in MSI mode while the A node with the admin server is down. It tests fine and works with Node Manager running. However, we want to assume that there has been a JVM problem on one or more managed servers and they have ceased to run on the B node. When I run startManagedWebLogic.cmd and plug in my managed server name and admin URL (which is obviously unreachable), I get the message below. I copied down the managed servers' info (there is no 'config' folder as mentioned in the Oracle instructions, so I copied the six server folders into the \common\bin\config\acr2 folder). Any tips on getting this to work?
    <Apr 9, 2010 11:11:09 AM CDT> <Info> <Management> <BEA-141107> <Version: WebLogic Server Temporary Patch for 9216172 Tue Jan 12 02:07:27 PST 2010
    WebLogic Server 10.3.2.0 Tue Oct 20 12:16:15 PDT 2009 1267925 >
    <Apr 9, 2010 11:11:09 AM CDT> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason:
    There are 1 nested errors:
    weblogic.management.ManagementException: [Management:141247]The configuration directory C:\Oracle\Middleware\wlserver_10.3\common\bin\config does not exist and the admin server is not available.
    at weblogic.management.provider.internal.RuntimeAccessImpl.parseNewStyleConfig(RuntimeAccessImpl.java:200)
    at weblogic.management.provider.internal.RuntimeAccessImpl.<init>(RuntimeAccessImpl.java:115)
    at weblogic.management.provider.internal.RuntimeAccessService.start(RuntimeAccessService.java:41)
    at weblogic.t3.srvr.ServerServicesManager.startService(ServerServicesManager.java:461)
    at weblogic.t3.srvr.ServerServicesManager.startInStandbyState(ServerServicesManager.java:166)
    at weblogic.t3.srvr.T3Srvr.initializeStandby(T3Srvr.java:749)
    at weblogic.t3.srvr.T3Srvr.startup(T3Srvr.java:488)
    at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:446)
    at weblogic.Server.main(Server.java:67)
    >
    <Apr 9, 2010 11:11:09 AM CDT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED>
    <Apr 9, 2010 11:11:09 AM CDT> <Error> <WebLogicServer> <BEA-000383> <A critical service failed. The server will shut itself down>
    <Apr 9, 2010 11:11:09 AM CDT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>

    So has no one ever had to restart a managed server in a clustered environment while the admin server was down? This seems like a common disaster recovery item. If anyone has this working, please provide details. The details found in the Oracle documentation are a bit incomplete...
    Starting a Managed Server in MSI Mode
    To start up a Managed Server in MSI mode:
    1. Ensure that the Managed Server's root directory contains the config subdirectory.
    If the config directory does not exist, copy it from the Administration Server's root directory or from a backup to the Managed Server's root directory.
    Note: Alternatively, you can use the -Dweblogic.RootDirectory=path startup option to specify a root directory that already contains these files.
    2. Start the Managed Server at the command line or using a script.
    The Managed Server will run in MSI mode until it is contacted by its Administration Server. For information about restarting the Administration Server in this scenario, see Restarting a Failed Administration Server.
    I have used the root.directory switch to point to these locations as well but still get the message.....
    <Apr 13, 2010 11:28:05 AM CDT> <Critical> <WebLogicServer> <BEA-000386> <Server subsystem failed. Reason: weblogic.descriptor.ResourceUnavailableException: Missing SerializedSystemIni.dat
    weblogic.descriptor.ResourceUnavailableException: Missing SerializedSystemIni.dat
    at weblogic.descriptor.DescriptorManager$SecurityServiceImpl$SecurityProxy.<init>(DescriptorManager.java:153)
    at weblogic.descriptor.DescriptorManager$SecurityServiceImpl$SecurityProxy.instance(DescriptorManager.java:137)
    at weblogic.descriptor.DescriptorManager$SecurityServiceImpl.decrypt(DescriptorManager.java:114)
    at weblogic.descriptor.internal.AbstractDescriptorBean._decrypt(AbstractDescriptorBean.java:991)
    at weblogic.management.configuration.SecurityConfigurationMBeanImpl.getCredential(SecurityConfigurationMBeanImpl.java:709)
    Truncated. see log file for complete stacktrace
    >
    <Apr 13, 2010 11:28:05 AM CDT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED>
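
    For anyone hitting the same wall: in MSI mode the Managed Server's root directory needs both the domain's config directory and the domain encryption seed (SerializedSystemIni.dat), which is exactly what the second error complains about. A rough sketch of the copy-and-start steps follows; the staging directory, server name, and admin URL are placeholders, and Unix-style commands are shown (on the poster's Windows host the equivalent would use copy/xcopy and the .cmd scripts).
    # stage a root directory for the managed server (names are hypothetical)
    mkdir -p /servers/msi_root/config /servers/msi_root/security
    cp -r $DOMAIN_HOME/config/. /servers/msi_root/config/
    cp $DOMAIN_HOME/security/SerializedSystemIni.dat /servers/msi_root/security/
    # start in MSI mode, pointing the server at that root directory
    JAVA_OPTIONS="-Dweblogic.RootDirectory=/servers/msi_root" \
      ./startManagedWebLogic.sh my_managed_server t3://adminhost:7001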

  • Oracle Business/Disaster Recovery Test Plan ?

    I've been asked to create a business/disaster recovery plan (covering our Oracle environment) outlining what to test in the event of a DR/BR situation once the DB has been restored. Does anybody have a Word template, or any information that would assist me in this task?
    Thanks

    I cannot give you a template since that would be a site-specific, confidential document; however, you really do not need one.
    You might just consider adding a note that the exact steps required would depend on the exact nature of the disaster, but in all cases would basically follow the pattern of:
    restore the missing files
    forward-recover the restored database using the backed-up archive logs
    open the restored, recovered database
    Upon startup Oracle will check itself for consistency. If the database is found to be consistent, which it should be, then the database will be opened. The application can now be started. No additional checks by the DBA are required.
    In the event the database is not consistent, the steps above would be repeated for the file(s) not restored as part of the first attempt, and the process repeated. Error information will be logged to the database alert log file.
    HTH -- Mark D Powell --
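
    As a concrete illustration of that restore / forward-recover / open pattern, here is a minimal SQL*Plus sketch; the file names and paths are hypothetical, not taken from any particular site:
    -- restore the missing datafile(s) from backup at the OS level first, e.g.
    --   cp /backup/datafiles/users01.dbf /u01/oradata/PROD/users01.dbf
    STARTUP MOUNT
    -- forward recovery: apply the backed-up archive logs
    RECOVER AUTOMATIC DATABASE
    ALTER DATABASE OPEN;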

  • Disaster Recovery Test

    Hi,
    Can anybody please advise who plays the major role in a "Disaster Recovery Test" - functional / Basis / ABAP, etc.?

    Hello Mahesh
    Everybody has to play a major role: first Basis has to take action, then the ABAPers, and then the functional people need to put in their effort for testing. Here is a brief excerpt from an article regarding "Disaster recovery for SAP".
    It will give all of us an idea about disaster recovery.
    When you have your SAP system installed, you don't have a disaster recovery solution.
    "SAP has standard methodologies for doing backups and restoring the SAP environment, but there's nothing built into their application that specifically targets disaster recovery,"
    In other words, SAP tells you very explicitly what you need to protect, but you're on your own in figuring out how to make it happen. It is common practice among third-party solution providers to ask about disaster recovery, but if you're doing your own thing it is important to be aware of the need for a disaster recovery solution.
    Outsourcing vs. building a secondary site:
    There are two ways to go about setting up your disaster recovery solution: Outsource or build your own secondary site. Outsourcing may be more convenient and less expensive, especially for smaller companies on a tight budget. Simply approach the outsourcing company with your needs, and they will pretty much take it from there. Graap likens it to an insurance policy, where you pay a premium on an ongoing basis for the security.
    If you decide to outsource, ask colleagues for recommendations and spend some time researching prices, which can vary a lot. But make sure the outsourcer can step up to the plate in the unlikely event that you need their services.
    Building your own secondary site requires a larger investment up front, but it leaves you in full control of your contingency plans rather than at the mercy of an outsourcing company. If your outsourcing provider falls through for some reason -- such as being in the same disaster zone as your main office during an earthquake, for example -- you're in trouble. When building your own site, you can prepare for more scenarios and place it far enough away from your main office.
    High availability vs. cost:
    Specialists say one of the most important questions to consider is availability and how quickly you need to get your systems back online. The difference between getting back online in 10 minutes or three days could be millions of dollars, so you want to make sure you get just the right solution for your company.
    Around-the-clock availability will require mirroring content across two sites in real-time. This enables you to do an instant failover with little or no downtime, rather than force you to physically move from the office to a backup site with a stack of tapes.
    Regardless of whether you outsource or set up your own site, a high availability solution is expensive.
    "But if that is what it takes to keep your business from going under, it's worth every penny of it".
    An added benefit of having a high availability solution is that you can avoid maintenance downtime by working on one server while letting the other handle all traffic. In theory, this leaves a window of risk, but most maintenance tasks, such as backups, can be cancelled if need be.
    One consideration for mirroring data is the bandwidth to the secondary site. Replicating data in real-time requires enough capacity to handle it without hitches. Also, a secondary site will require the same disk space as your regular servers. You can probably get away with a smaller and cheaper system, but you still need enough storage space to match your primary servers.
    Whatever the choice for disaster recovery, it is vital that both the technology and the business departments know about the plan ahead of time.
    Testing your solution:
    Ok, so you have a disaster recovery solution in place. Great, you're home free, right? Not quite. It must be tested continuously to make sure it works in real life. Sometimes management can be reluctant to spend the money for a real test, or perhaps there are pressing deadlines to keep, but it should be tested once or twice a year.
    Many people who build good plans let them sit collecting dust for years, at which point half the key people in the plan have left or changed positions. Update the names, phone numbers and other vital information frequently and test them, he said. It is for the same reason you do fire drills: when the real thing strikes, there's no room for error.
    In testing, consider different scenarios and the physical steps needed to get the data center up and running. For example, many disaster recovery solutions require at least parts of a staff to get on a plane and physically move to the secondary location. But September 11 showed how that is not easy when all planes are grounded.
    Costly but vital:
    Disaster recovery is not cheap, and it requires lots of testing to stay current, but it could save your critical data.
    "Any customer who makes an investment in SAP is purchasing an enterprise-class application, and as such really should have this level of protection for their business". "I can't imagine why anybody would not have an interest in disaster recovery."
    Regards
    Yogesh

  • SharePoint 2010 backup and restore to test SharePoint environment - testing Disaster recovery

    We have a production SharePoint 2010 environment with one Web/App server and one SQL server.   
    We have a test SharePoint 2010 environment with one server (Sharepoint server and SQL server) and one AD (domain is different from prod environment).  
    Servers are Windows 2008 R2 and SQL 2008 R2.
    Versions are the same on prod and test servers.  
    We need to set up a test environment with the exact setup as production - we want to try disaster recovery.
    We have performed a backup of the farm from PROD and we wanted to restore it on our new server in the test environment. The backup completed successfully with no errors.
    We have tried to restore the whole farm from that backup onto the test environment using Central Administration, but we got the message: restore failed with errors.
    We chose the NEW CONFIGURATION option during restore, and we set new database names...
    Some of the errors are:
    FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: The specified user or domain group was not found.
    Warning: Cannot restore object User Profile Service Application because it failed on backup.
    FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    Verbose: Starting object: WSS_Content_IT_Portal.
    Warning: [WSS_Content_IT_Portal] A content database with the same ID already exists on the farm. The site collections may not be accessible.
    FatalError: Object WSS_Content_IT_Portal failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: The specified component exists. You must specify a name that does not exist.
    Warning: [WSS_Content_Portal] The operation did not proceed far enough to allow RESTART. Reissue the statement without the RESTART qualifier.
    RESTORE DATABASE is terminating abnormally.
    FatalError: Object Portal - 80 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    ArgumentException: The IIS Web Site you have selected is in use by SharePoint.  You must select another port or hostname.
    FatalError: Object Access Services failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: Object parent could not be found.  The restore operation cannot continue.
    FatalError: Object Secure Store Service failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: The specified component exists. You must specify a name that does not exist.
    FatalError: Object PerformancePoint Service Application failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: Object parent could not be found.  The restore operation cannot continue.
    FatalError: Object Search_Service_Application_DB_88e1980b96084de984de48fad8fa12c5 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    Aborted due to error in another component.
    Could you please help us to resolve these issues?  

    I'd totally agree with this. Full-fledged functionality isn't the aim of DR; the aim is getting the functional parts of your platform back up before too much time and money is lost.
    Anything I can add would be a repeat of what Jesper has wisely said, but I would very much encourage you to look at these two resources:
    DR & back-up book by John Ferringer for SharePoint 2010
    John's back-up PowerShell Script in the TechNet Gallery
    Steven Andrews
    SharePoint Business Analyst: LiveNation Entertainment
    Blog: baron72.wordpress.com
    Twitter: Follow @backpackerd00d
    My Wiki Articles:
    CodePlex Corner Series
    Please remember to mark your question as "answered" if this solves (or helps) your problem.

  • Datafiles Mismatch in production & disaster recovery server

    Datafile mismatch between production & disaster recovery server even though everything is in sync.
    We successfully performed all the necessary prerequisites and processes related to the DRS switchover. Our DRS server is also constantly updated through log application in mount state. When we opened the DRS database in mount state it opened successfully; however, when we issued the command
    Recover database
    it dumped the following errors:
    ORA-01122 database file 162 failed verification check
    ORA-01110 data file 162 </oracle/R3P/sapdata7/r3p*****.data>
    ORA-01251 unknown file header version read for file 162
    (I do not remember the exact file name now.)
    However, upon comparing the same file on production (which was now in a shutdown state), the file sizes were mismatched, so we went ahead and copied the datafile from production to DRS (with the database at DRS shut down) using the rcp utility of AIX 5.3, since our operating system is AIX 5.3.
    Although this remote copy was successful,
    we again started the database in mount state and issued the command
    Recover database
    but it still dumped the same error, now for another datafile on DRS.
    I would appreciate it if somebody could explain why the datafile sizes mismatched, despite our DRS being constantly updated from production through continuous log application. Before this activity the logs were in sync on both systems, and the origlogs, mirrorlogs, and controlfiles were pulled from the production system as-is; in fact the whole structure was a replica of the production system.
    Details
    SAP version :- 4.7
    WAS :- 620
    Database :- Oracle 9.2
    Operating system :- AIX 5.3
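
    For reference, the copy-and-retry step described above amounts to something like the following; the datafile name is a placeholder (the real name was not given) and the DR hostname is hypothetical:
    # on production, with the database shut down, push the flagged datafile to the DR host
    rcp /oracle/R3P/sapdata7/example_1.data drshost:/oracle/R3P/sapdata7/example_1.data
    # then on the DR host, in SQL*Plus: STARTUP MOUNT followed by RECOVER DATABASE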

    I am in the process of DR planning.
    My present setup:
    O/S: HP-UX 11.31
    SAP ECC 6
    Oracle 10g Database.
    I know about Data Guard and I have implemented the same for another of our applications, but not for SAP.
    I have some doubts and I request you to please guide me on the same.
    1. SAP license key -- it requires a hardware key. So will I have to generate a license key for another hardware key, i.e., for the machine where the standby database resides?
    2. The database on the standby site will be in managed recovery mode. Will I have to make any changes to any other SAP-related files on the standby host? (Some I know of, such as: when I upgrade the SAP kernel, I have to do the same upgrade on the standby host.)
    Could you give me some links to documentation for DR?

  • Looking for comment on disaster recovery plan

    Looking for some comment on where we are and where we’re planning on going with our Continuity of Operations / Disaster Recovery plan.
    Current setup is Oracle 10.2.0.2.0 on HP-UX Itanium 64-bit. OAS / Forms 9i on a separate HP-UX Itanium box. We do not have control over the version of Oracle or the hardware. I am pushing the powers that be on upgrading to 10.2.0.4, but am not holding my breath.
    Basic backup is threefold. DB is in archivelog mode. RMAN full backup is taken weekly. Incremental backup taken daily. These backups are to disk. In addition, a daily full-system backup (to tape) is done with the db shut down, providing a daily cold backup as well as backup of the online backups. Also daily exports – full and schema specific.
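
    (For comparison, a minimal sketch of such a weekly-full / daily-incremental RMAN scheme is shown below; the command files, backup directory, and cron schedule are assumptions for illustration, not the actual jobs described here.)
    # weekly_full.rcv -- hypothetical weekly level 0 backup to disk
    BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/backup/rman/full_%d_%T_%U.bkp';
    BACKUP ARCHIVELOG ALL FORMAT '/backup/rman/arch_%d_%T_%U.bkp';
    # daily_incr.rcv -- hypothetical daily level 1 incremental plus archived logs
    BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/backup/rman/incr_%d_%T_%U.bkp';
    BACKUP ARCHIVELOG ALL FORMAT '/backup/rman/arch_%d_%T_%U.bkp';
    # scheduled from cron, e.g.
    # 0 1 * * 0   rman target / cmdfile=/backup/scripts/weekly_full.rcv log=/backup/logs/weekly_full.log
    # 0 1 * * 1-6 rman target / cmdfile=/backup/scripts/daily_incr.rcv  log=/backup/logs/daily_incr.log
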
    All db files are on an HP SAN unit. SAN replication software is used to mirror all i/o to a similar unit located at a sister site. Likewise, they replicate their SAN to ours. Each site has a pair of backup servers to be used by the other site if needed. Sister site has already tested loading our backup servers, mounting the mirror of their SAN, and starting the db.
    Since sister site is a couple of hours away and crosses major network boundaries, management would like to also put up a DataGuard standby at a facility only 30 minutes away and within our own network. Plan is to host that on duplicate HP servers, but disk would be on a NAS instead of a SAN.
    I have managed a DataGuard setup, but did not create/configure it, so am now digging into the docs, starting with the DataGuard Concepts. I’ve never worked with NAS and am not clear on how it differs from SAN, and what the implications are.
    I'm looking for any comment, especially any red flags, on what I’ve described.
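
    (If the Data Guard standby goes ahead, the primary-side redo shipping setup on 10.2 looks roughly like the following sketch; the DB_UNIQUE_NAMEs and TNS service name are placeholders, and this is only an outline, not a full standby build.)
    -- on the primary, as SYSDBA
    ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prim,stby)' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_2='SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby' SCOPE=BOTH;
    ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=BOTH;
    -- on the standby, once instantiated, start redo apply
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;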

    Satish Kandi wrote:
    Not an expert comment on your setup but just the thoughts that came to my mind while reading this...
    1. What is the setup for backup preservation? Is it in one of the servers only? Is the backup also replicated?
    Belt and suspenders. The backup location on disk is part of what gets mirrored to the sister SAN. Plus, the nightly system backup to tape picks it up as well.
    2. What about OAS/Forms 9i backup? You have not mentioned anything about that.
    That is also mirrored to the sister SAN.
    3. What is the archive log generation frequency? Or how much data loss can you afford?
    A typical day will see one or two dozen archivelog files.
    > In addition, a daily full-system backup (to tape) is done with the db shut down, providing a daily cold backup as well as backup of the online backups
    Lucky you :-) but I don't see a need for this with the levels of backup that you have (especially with RMAN weekly backup and daily archive log backup).
    No, no need, but it does give us another layer at no real cost. This cold backup is really just a byproduct of a daily full system shutdown and backup. We have a high tolerance for system down time, but low tolerance for data loss.

  • Disaster recovery question

    Your forum was recommended for my question/issue
    We conduct two disaster recovery tests a year. The pre-test process consists of restoring the disaster recovery client from the production client and allowing one set of users to perform their process procedures on the DRS server, while the other users continue normal business operations on the PRD server.
    Earlier this year we upgraded from SAP 4.0b --> 4.7c.
    During our first disaster recovery test this year, users received the message 'change and transport system not configured'. They were able to continue master data updates, even though they received the message.
    We would like to have a real disaster recovery test where we switch from the production system to the disaster recovery system (DRS) and continue our business operations from that server (DRS)(located in another state).
    <u><b>What must be done to perform this switchover/switchback process between the two clients once the test is completed</b></u>?

    If I interpreted you correctly: you want to bring up a DR-copy of the PRD system on another server. Then let a limited number of users test this one while all other users continue normal operation on the PRD server.
    Be aware that it is pretty dangerous to have both servers running at the same time.
    Your main concern will be to isolate the DR system from the PRD system and the interfacing systems!!!
    The result will not be 100% correct:
    - it will not be possible to test all interfaces - and thus not all business functionality...
    the main obstacle with failover systems is interfaces.
    - you will not be able to see if the DR system can handle the load;
    you will not see if the server and/or infrastructure is sufficient.
    P.S. Remember that the interfaces go both ways.
    - You want someone to reach the system - as such you will need to open access into the DR system...
    but you will want "only a limited number of users" to do it... as such you must play around with SAP logon and/or DNS/IP addresses.
    - You do NOT want the DR system to update the PRD system (or send out info to any other partners);
    as such, you must restrict outbound traffic from the DR system.
    If you do not know what/where to isolate... then rip out the network cable and place the users next to the server.
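
    As one concrete way to enforce that outbound restriction on the DR host (assuming a Linux host and a made-up production subnet; the thread specifies neither), a single host-firewall rule of this shape does the job:
    # block everything the DR host tries to send toward the production / interface subnet
    iptables -A OUTPUT -d 10.1.0.0/16 -j REJECT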

  • ADFS Disaster Recovery

    Hi,
    I'm looking at designing an ADFS solution to accommodate 4,500 users and provide SSO with a web application.
    I'm thinking of using two 2012 R2 ADFS servers on my internal network and two 2012 R2 Web Application Proxies in my DMZ, then load balancing the connections using an F5.
    What I'm not sure about is how to achieve DR in another data centre. For example, is active-active in two different data centres supported? I was thinking of replicating my ADFS servers across the data centres, then simply performing a failover in the event of a disaster, but I'm not sure how well that would work in reality.
    It would be great to hear from someone who's already done this.
    Thanks
    IT Support/Everything

    Hi,
    To provide Office 365 Single Sign-On with integration of on-premises AD and Windows Azure, it is recommended to use the on-premises environment for active use and Azure for business continuity. In case of a disaster, the failover between the on-premises infrastructure and the hosted infrastructure is a manual operation.
    Setting up a cross-premises, high-availability (active/active) configuration is not recommended.
    For more detailed information please refer to these articles below:
    White paper: Office 365 Adapter - Deploying Office 365 single sign-on using Azure Virtual Machines
    http://technet.microsoft.com/library/dn509539.aspx
    Deployment scenario: Directory integration components in Azure for disaster recovery
    http://technet.microsoft.com/en-US/library/dn509536.aspx
    To get more efficient assistance, I suggest you refer to Azure and ADFS forums below:
    Azure Active Directory Forum
    https://social.msdn.microsoft.com/forums/azure/en-US/home?forum=WindowsAzureAD
    Claims based access platform (CBA), code-named Geneva Forum
    http://social.msdn.microsoft.com/Forums/vstudio/en-US/home?forum=Geneva
    Best Regards,
    Amy

  • Disaster Recovery and GSPS

    Hi all.
    I have recently been investigating the GSPS model.
    I would like to know how the GSPS model provides for Disaster Recovery (Incident Response Planning, Disaster Recovery Planning and Business Continuity Planning).
    Any links or documents related to that topic would be greatly appreciated.
    Regards
    Ibrahim

  • Slides explaining EM High Availability & Disaster Recovery

    Slides explaining EM High Availability & Disaster Recovery
    http://www.oracle.com/technology/deploy/availability/pdf/oracle-openworld-2008/298477.pdf

    Check the following note, it should be helpful.
    Note: 216212.1 - Business Continuity for Oracle Applications Release 11i, Database Releases 9i and 10g
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216212.1

  • EBS 11i Disaster Recovery using tape backup on alternate Unix Server.

    Hi,
    I would really appreciate it if you could please share with us the process to recover an EBS 11i environment to an alternate DR server using the latest tape backup, in case any disaster happens to the existing production server.
    I need to prepare and test disaster recovery plan document using the latest tape backups.
    We have single node EBS 11i environment with Apps and DB installed on the same AIX Unix Server.
    We also have full database RMAN backup every night.
    The new alternate DR Unix server will have different hostname and IP address.
    Thanks in advance.
    Regards.

    Please refer to the following docs.
    Business Continuity for Oracle Applications Release 11i, Database Releases 9i and 10g [ID 216212.1]
    Business Continuity for Oracle E-Business Release 11i Using Oracle 11g Physical Standby Database - Single Instance and Oracle RAC [ID 1068913.1]
    Thanks,
    Hussein

  • Webinar: Unified Business Process Management

    SAP NetWeaver Know-How Network Webinar:
    Unified Business Process Management
    Wednesday 28 July 2004
    11 a.m. EDT
    On Wednesday 28 July, George Yu hosts the webinar titled Unified Business Process Management as part of the ongoing SAP NetWeaver Know-How Network Webinar Series.
    Here’s how George describes his webinar presentation:
    “In the famous SAP NetWeaver refrigerator, we have seen two Business Process Management (BPM) execution products: the Ad-hoc Workflow for People Integration and Cross-Component BPM for Process Integration. In addition, we have business process monitoring in SAP Solution Manager and business process modeling in ARIS for SAP NetWeaver (just released). SAP has a plan to form a BPM lifecycle including modeling, model-based configuration, execution and monitoring, with a shared repository and a consistent appearance.  This is our Unified BPM. We will give a detailed discussion on this subject in the coming Webinar on July 28, 2004.”
    SDN invites you to post your questions to the presenter prior to the webinar and continue the online discussion afterward.
    How to Participate
    (Please go to the SDN Events page to see the article and download the PDF presentation)
    Dial-in Information:
    Date: Wednesday 28 July 2004
    Time: 11 a.m. EDT
    Within the U.S., call: +1.888.428.4473
    Outside the U.S., call: +1.651.291.0618
    Password: NetWeaver04
    WebEx Information:
    Topic: SAP NetWeaver Know-How Network
    Date: Wednesday 28 July 2004
    Time: 11 a.m. EDT
    Meeting Number: 742391500
    Meeting Password: netweaver04 (lowercase)
    WebEx Link: sap.webex.com
    Replay Information:
    A recorded replay of this call will be available for approximately three months after the webinar. Access this recording by dialing the appropriate number and using the replay access code 720151.
    Toll-free: +1.800.475.6701
    International: +1.320.365.3844
    About the SAP NetWeaver Know-How Webinar Series
    The SAP NetWeaver Know-How Webinar Series is driven by the SAP NetWeaver Regional Implementation Group (RIG), part of the SAP Development organization. The mission of the SAP NetWeaver RIG is to enable customers, employees, and partners to successfully implement the SAP NetWeaver solution. This SAP RIG has expertise in BI, EP, XI, and WebAS. They contribute their implementation expertise to the SDN implementation forums as well as to the SAP NetWeaver Know-How Webinar Series.
    Disclaimer
    SDN is not responsible for any changes to the webinar schedule. The webinar schedule may be changed or cancelled without prior notice.

  • Seeking best practice for disaster recovery for osx server

    I am seeking a solid disaster recovery solution. I would be happy if it complements Time Machine, but I don't require that.
    I use DAR2 for the Linux systems, but with the aid of a nice UI provided by the distro vendor. It provides all the files necessary to follow up a clean base OS install with a file restore, providing a full recovery from a disaster.
    I just placed a new Mac Mini Server with the Promise Tech DS4600 in service, so I am eager to put a plan in place.
    What do other Mac OS Server users do?
    I would be most grateful for links to articles, products and suggestions.

    For anyone who comes across this post, here is what I settled on:
    The server is a Mac mini server and is configured with RAID 1 over the two hard disks. I use DAR to back up the Linux boxes to the OS X Server. Carbon Copy Cloner is used to back up the OS X Server to a 2-terabyte storage array.
    Hopefully this will suffice for the time being. I continue to look for better approaches.
