High Availability Production Server using online replication

Hi
On one box, we stopped online replication from the first box to a second box located at a remote offsite location, converted the code page from EBCDIC to ASCII, and then upgraded SAP 4.6C to SAP ECC 5.0. Everything ran smoothly.
Secondly, the customer wanted to do online replication. For this purpose there are two options.
The first option is to install SAP ECC 5.0 on the second box.
The second option is a homogeneous system copy from the first box to the second box.
I am not sure which option is better.
After that, we have to start online replication to transfer journal receivers from the first box to the second box and apply them to the database on the second box.
I am not familiar with the OS/400 iSeries V5R3M0 commands for performing online replication.
Could you please send us the procedure for performing online replication?
Thanks and Regards
A Prasad Rao

Hi TCS-Support,
I would say it is always a bit of a pity to do consulting without knowing the specifics of a platform :-((
You can use both ways - installation or homogeneous copy. I would use the homogeneous copy, but that is my feeling - and in the end, in order to refresh the system, you need a new DB copy anyway.
Then you need either iASP, manual apply of journal receivers, or a product like MIMIX, DataMirror, etc. from the HA suites.
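For the manual journal-receiver route, a rough CL sketch follows (the journal, receiver, and library names below are hypothetical; verify each command with F4 prompting on your V5R3 system before use):

```
/* Source box: save a detached journal receiver into a save file        */
SAVOBJ OBJ(RCV0001) LIB(JRNLIB) DEV(*SAVF) SAVF(QGPL/RCVSAVF) OBJTYPE(*JRNRCV)
/* Transfer the save file to the target box (e.g. via FTP), then:       */
RSTOBJ OBJ(RCV0001) SAVLIB(JRNLIB) DEV(*SAVF) SAVF(QGPL/RCVSAVF) OBJTYPE(*JRNRCV)
/* Target box: apply the journaled changes to the database files        */
APYJRNCHG JRN(JRNLIB/QSQJRN) FILE((R3PRDDATA/*ALL)) RCVRNG(JRNLIB/RCV0001 JRNLIB/RCV0001)
```

The HA products mentioned above automate exactly this save/transfer/apply loop (plus monitoring and conflict handling), which is why most SAP-on-iSeries shops buy one instead of scripting it.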
Regards
Volker Gueldenpfennig, consolut.gmbh
http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de

Similar Messages

  • Taking snapshot of oracle tables to sql server using transactional replication is taking a long time

    Hi All,
    I am trying to replicate around 200 Oracle tables to SQL Server using transactional replication, and it is taking a long time, i.e. the initial snapshot is taking more than 24 hours and is still going.
    Is there any way to replicate these tables faster?
    Kindly help me out.
    Thanks

    Hi,
    According to the description, the replication is working, but it is very slow.
    1. Check the CPU usage on the Oracle publisher and on SQL Server. This issue may be due to slow client processing (Oracle performance) or to network performance issues.
    2. See 'Performance Tuning for Oracle Publishers' in SQL Server 2008 Books Online (http://msdn.microsoft.com/en-us/library/ms151179(SQL.100).aspx). You can enable the transaction job set and follow the instructions at http://msdn.microsoft.com/en-us/library/ms147884(v=sql.100).aspx.
    3. You can enable replication agent logging to check the replication behavior. To enable Distribution Agent verbose logging, please follow these steps:
    a. Open SQL Server Agent on the distribution server.
    b. Under the Jobs folder, find the Distribution Agent job.
    c. Right-click the job and choose Properties.
    d. Select the Steps tab.
    e. Select the Run agent step, click the Edit button, and append the following to the end of the script in the command box:
            -Output C:\Temp\OUTPUTFILE.txt -OutputVerboseLevel 2
    f. Exit the dialogs.
     For more information about the steps, please refer to:
    http://support.microsoft.com/kb/312292
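    The same switches can also be supplied when the Distribution Agent is run directly from a command prompt rather than from a job step. A rough sketch (the server and database names below are placeholders, not values from this thread):

    ```
    distrib.exe -Publisher MYORAPUB -PublisherDB MYORADB -Distributor MYDIST ^
        -Subscriber MYSUB -SubscriberDB MYSUBDB ^
        -Output C:\Temp\OUTPUTFILE.txt -OutputVerboseLevel 2
    ```

    -OutputVerboseLevel 2 produces the most detailed trace; level 0 disables verbose output.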
    Hope the information helps.
    Tracy Cai
    TechNet Community Support

  • System Copy of our Production Server using orabrcopy method.

    Hi
    We are planning to do a System Copy of our Production Server using the orabrcopy method.
    But as the Production Server's Java version is below 1.4.1, ORABRCOPY could not be executed.
    During the planned downtime of the Production Server, we copied the entire database offline (say, on the 9th of August),
    but we could not create the control file, trace, and init<sid>.ora files using ORABRCOPY during the offline database copy.
    Now the system is up and running. We haven't created the files yet.
    I have 2 questions:
    1. Is there any other method we can use to create the control file, trace, and init<sid>.ora files?
    2. If we create the control file trace now (13th of August), can we use the offline backup that we took on the 9th of August to perform a System Copy?
    Need your advice.
    Thanks and Regards
    Paguras

    Basically, orabrcopy does (aside from other things) an
    alter database backup controlfile to trace;
    You can run that statement manually (anytime) and use the resulting .trc file in the saptrace directory as a base for CONTROL.SQL; however, in this case you need to "know" what you're doing. ORABRCOPY is a "nice frontend" for this because it uses the same statement to create CONTROL.SQL - it just edits the output appropriately, which you would otherwise need to do manually.
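    As a minimal sketch of the manual route (run in SQL*Plus with SYSDBA privileges; the exact trace directory depends on your installation):

    ```sql
    -- Writes a CREATE CONTROLFILE script into a .trc file in the user
    -- dump destination (the saptrace directory on SAP installations)
    ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
    ```

    You then edit the resulting trace file into CONTROL.SQL by hand, e.g. adjusting the database name and the REUSE/RESETLOGS clauses for the copy - the part ORABRCOPY would otherwise do for you.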
    Markus

  • Building Temp SAP Production Server using DB Restore Method

    Dear Gurus,
    we are underway with a hardware migration to HP-UX / Oracle 10.2 / 4.7R3.
    This is my first project of this kind. We are currently moving a client's production server to our site. We are using the DB restore method to build the SAP system. We have already received the online backup tapes from the client along with the corresponding logs.
    I am in charge of restoring the database once the Unix team completes the file system restore. I would really appreciate it if the more experienced people among us could give me a quick rundown of how this process should run and what kind of steps to start with,
    or important SAP links for this procedure.
    Thanks and Regards to all

    Hi,
    The best way to start is by reading the system copy guide, which will give you a step-by-step process.
    Thanks
    Sunny

  • 2012 R2 - Clustering High Availability File Server

    Hi All
    What's the difference between creating a High Availability Virtual Machine to use as a File Server and creating a 'File Server for general use' in the High Availability Wizard?
    Thanks in advance.

    What's your goal? If you want a file server with no service interruption, then you need a generic SMB file server built on top of a guest VM cluster. An HA VM is not going to work for you (service interruption), and SoFS is not going to work for you (workloads other than SQL Server and/or Hyper-V are not supported). So... tell us what you want to do, not what you're doing now :)
    For fault-tolerant file server scenarios, see the following links (make sure you replace StarWind with your shared-storage specifics; I'd suggest using a shared VHDX):
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    http://technet.microsoft.com/en-us/library/cc753969.aspx
    http://social.technet.microsoft.com/Forums/en-US/bc4a1d88-116c-4f2b-9fda-9470abe873fa/fail-over-clustering-file-servers
    http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
    http://www.starwindsoftware.com/configuring-ha-file-server-for-smb-nas
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
    The goal is a file server that's as resilient as possible. I still need to use a DFS namespace though. It's on a Dell VRTX server with built-in shared storage.
    I'm having issues getting the High Availability wizard for a General Purpose File Server to see local drives :(
    So I'm just trying to understand the key differences between creating one of these and a Hyper-V server with file services installed.

  • TFS work item store is not connecting in production server using server side event handler code

    Server-side plugin code to connect to the work item store (requires the Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client assemblies):
    tfs = new TfsTeamProjectCollection(new Uri(tfsUri));
    store = (WorkItemStore)tfs.GetService(typeof(WorkItemStore));
    I used the above code and accessed the work item store on my TFS DEV server without a username and password. But the same plugin code is not working on the TFS Production server.
    Exception:
    Account name: domain\TFSService
    Detailed Message: TF30063: You are not authorized to access http://localhost:8080/tfs/xx.
    Exception Message: TF30063: You are not authorized to access http://localhost:8080/tfs/xx
    Please guide

    Hi Divya,
    From the error message, you might not have permission to get the work item store on the TFS production server. You can check the permissions on your production server by using the tfssecurity command in the VS command line.
    Please execute tfssecurity /imx "domain\username" /collection:url; for more information about tfssecurity /imx, please refer to:
    http://msdn.microsoft.com/en-us/library/ms400806.aspx
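    For example, using the account and collection URL from the error message above (the "n:" prefix is the account-name identity specifier; adjust the URL to your real collection):

    ```
    tfssecurity /imx "n:domain\TFSService" /collection:http://localhost:8080/tfs/xx
    ```

    The output lists the groups the account belongs to in that collection, so you can see whether it is missing a required group membership.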
    If you have the permission, then you can also clear the Team Foundation cache and try again.
    Best regards,

  • High availability xMII server configuration copy

    We are setting up two xMII servers in a High Availability configuration. What files/directories do I need to ensure are copied between the main and backup servers so that the configuration is completely duplicated between them? I know I can copy the entire Lighthammer directory, but that is kind of a brute-force method. What is the more elegant approach?

    Please refer to the <a href="https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/uuid/30f43a39-be98-2910-9d9c-a59785f44e41">xMII Best Practices Guide</a>.
    Please also note that the file SystemConfig.xml, found in Lighthammer\Illuminator\Conf,
    contains the Security Server URL of the machine.
    Regards,
    Jamie

  • High availability SQL Server requirements for Remote Desktop Services in Windows Server 2012

    Good night,
    Thanks for reading this question; I do not write much English.
    I am implementing Remote Desktop Services in Windows Server 2012. I need to know the size of the database to create and the characteristics of the .mdf and .ldf files. I have searched different Microsoft links but have not found an answer.
    I kindly appreciate your cooperation and attention.

    Hi Alejandro,
    I am implementing Remote Desktop Services in Windows Server 2012, I need to know the size of the database to create and feature on the .mdf and .ldf
    If you want to know the size requirements of the .mdf and .ldf files, since they are part of a SQL database, I suggest you refer to the SQL forums below to get more professional support:
    https://social.technet.microsoft.com/Forums/sqlserver/en-US/home?forum=sqlgetstarted
    https://social.technet.microsoft.com/Forums/sqlserver/en-US/home
    In addition, here are some articles regarding RDS deployment for you:
    Remote Desktop Services Deployment Guide
    https://technet.microsoft.com/en-us/library/ff710446(v=ws.10).aspx
    Remote Desktop Services (RDS) Quick Start Deployment for RemoteApp, Windows Server 2012 Style
    http://blogs.technet.com/b/yungchou/archive/2013/02/07/remote-desktop-services-rds-quick-start-deployment-for-remoteapp-windows-server-2012-style.aspx
    Best Regards,
    Amy
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Automatic Reprocess of Trfc messages in PI production server using RSARFCEX

    Hello All,
    I am getting frequent SYSFAIL errors in the tRFC queue in PI. I am trying to schedule the RSARFCEX report as a background job. Please advise me whether this is the correct solution, and also please tell me whether this will affect any messages other than reprocessing the failed tRFC messages.

    Hi
    RSARFCEX is the report for restarting messages with tRFC errors. But if the messages failed because of wrong entries in the ALE configuration (in the case of IDoc scenarios), then you need to resolve them manually.

  • High availability when using Oracle Service Bus as Middleware?

    Hi there,
    We are designing a new solution here which will be composed of SAP WM, Middleware and a Java application.
    We have been using OSB when sending data from Java to SAP, and we can get many "channels" working, so we get higher throughput and also HA.
    Now, in this new project, we will need to send data from SAP to a Java application, but some people have told me we can't have more than one channel when sending data from SAP to OSB. I mean, we won't be able to have High Availability unless we use SAP PI. The only option we would have is to set the gateway to broadcast the message to all available channels, which would duplicate the messages received by the Java application.
    Is that true? Is there any alternative for using OSB with SAP and still getting HA and high throughput (SAP -> OSB)?
    What do you guys think?
    Regards

    anyone?
    Please

  • High Availability (HA) questions

    Hi Gurus,
    First of all, I am not sure whether this is the right forum to post this thread; if anybody has an idea of a better one, then I will post there again.
    Here are my scenarios and questions. Currently I am doing a High Availability (HA) implementation using PowerHA and MSCS for SAP and non-SAP environments, with Oracle 10g (possibly Data Guard; tests of how feasible this is are still ongoing), AIX 6.1, and Windows Server. As per the SAP docs, there are 3 categories for HA testing:
    1. Before Failover - before failing over to the passive node
    2. After Failover - after failing over to the passive node
    3. After Failback - after failing back to the active node
    So I prepared my test plan for the above categories for the systems ECC, SRM, SCM, SCM-TM, SCM Optimizer, BI, PI, EP, MDM, etc.
    My questions are:
    1. When switching over (or failing over) from active to passive, or switching back (or failing back) from passive to active, what transaction or command should I run to check that all processes and the system are good? Initially I thought of running SGEN for that, but I am not sure SGEN is the right one; if anybody has an idea, that would be great. I am already checking system consistency, ERS for locks, the message server, the gateway, etc. But suppose something is running internally during the switchover that I do not know about - how can I make sure nothing happened?
    2. What are the high availability functionality tests for the F5 BIG-IP load balancer?
    3. What is the high availability functionality test plan for the end-to-end process?
    4. What is the high availability test plan for VMware and Windows environments?
    There are lots of questions, but the ones above are the initial ones to start and move on with HA.
    Thanks for your effort and time.
    Krish

    Hi Krish,
    Answers to some questions.
    1. You can use the JCMON tool to check that the Java services are up after the switchover to either node (passive or active).
          For ABAP, you can simply try to log in and check that the services are up.
    2. The F5 BIG-IP load balancer is used to balance the load if you have multiple application servers, i.e. additional dialog instances. You need to refer to the F5 load balancer documents for this.
    3. The end-to-end process test plan may change from client to client, so it should be drafted in conjunction with the client and the developers.
    4. VMware and Windows - I have not worked extensively on these platforms, so no comments on this.
    Let me know if you need more information.
    Cheers...,
    Raghu

  • Getting font problem for printing form in Production server.

    Hi experts,
                   We are getting a problem while printing a form on the production server. There is no problem on the Quality server when printing the same form; the print is fine. We applied support packages from level 26 to 28 for version 4.7. When we moved these packages to the Production server, we got a font problem while printing the form for some of the scripts. But the surprise is that on the Quality server the print is fine, no problem at all.
                  Can anyone suggest something on this issue?
    Regards,
    Sagun Desai.

    Hi Sagun,
    Please check the spool request generated on the production server using transaction SP01.
    Go to the display spool request screen and check the spool. If the spool displays your script correctly, then there is a problem with the printer connected to the production server.
    The printer connected to the test server might be correctly configured to print the output. Please check the settings of the printer connected to the production server.
    Hope this helps.
    Regards,
    Subodh

  • NCS Appliance Ver 1.1.1.24 VM High availability disk options

       Hello,
    I am currently running an NCS appliance on version 1.1.1.24. I am looking to set up a VM server as a redundant High Availability backup server. Per the config guides, the VM server needs to be set up with physical specifications identical to the appliance. That means I will need to set up a VM with the following specs:
    • CPU: 2 x Intel Xeon Processor (2.4-GHz, 12-MB cache)
    • Memory: 16-GB (1×2-GB, 2Rx8) PC3-10600 CL9 ECC DDR3
    • Network Interface Cards: 2 x 10/100/1000 Gigabit
    The config guide for the VM recommends that I start with a minimum of 400 GB of disk space. Does anyone have experience deploying a VM as a backup for an appliance? Does this sound correct?
    Thanks,


  • Do NOT use Jdeveloper 10.1.3 in PRODUCTION environment = use 10.1.3.1

    Hi,
    We experienced a FATAL StackOverflowError on our production server
    using JDeveloper 10.1.3 BC4J.
    After a certain amount of activity on the server, BC4J provokes a deadly StackOverflowError;
    see the bug report: Stackoverflow in "MetaObjectsManager.getDynamicObjectsPackage"
    2006-11-02 12:29:42,741 INFO [STDOUT] java.lang.StackOverflowError
    2006-11-02 12:29:42,746 INFO [STDOUT] at com.sun.java.util.collections.Collections$SynchronizedMap.put(Collections.java:1354)
    2006-11-02 12:29:42,746 INFO [STDOUT] at com.sun.java.util.collections.Collections$SynchronizedMap.put(Collections.java:1354)
    due to MetaObjectsManager.getDynamicObjectsPackage.
    We had to reboot the server 3 times a day because of this bug...
    It is said to be solved in 10.1.3.1, so be careful when going to production (I cannot confirm yet that it works).

    Hello Eric,
    I wonder if Metalink Note 370759.1,
    titled "JDeveloper Page Errors With 'Cannot Find Class Java\Lang\StackOverflowError'",
    would be of any help to you.
    It may be worth having a look at it.
    Regards,
    Steff

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
    Need your expertise regarding Hyper-V high availability. We set up two Hyper-V 2012 hosts in our infrastructure for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster of the Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP Proliant L380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager, and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager, and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
    I'm able to configure the new Shared Nothing Live Migration feature - I can move VMs back and forth between my hosts without shared storage. But this is a planned, proactive approach. My concern is having my Hyper-V hosts become highly available in the event of a system failure. If my Host1 dies, the VMs should move to Host2 and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but upon validation it says "No disks were found on which to perform cluster validation tests." Is it possible to cluster them using just a regular Windows file server? I've read about SMB 3.0 and have configured it as well; I'm able to save VMs on my file server, but I don't think my Hyper-V hosts are highly available yet.
    Any feedback, suggestions, or recommendations are highly appreciated. Thanks in advance!

    Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration... The setup is also both slow (all I/O travels down the wire to the storage server; running VMs from DAS is far faster) and expensive (a third server plus an extra Windows license). I would think twice about what you're doing: either deploy built-in VM replication technology (Hyper-V Replica) and applications' built-in clustering features that do not require shared storage (SQL Server with Database Mirroring, for example - BTW, what workload do you run?), or use third-party software that creates fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA model, of course).
    Hi VR38DETT,
    Thanks for responding. The hosts will cater for a domain controller (one on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS, and an audit server at the moment. Does Hyper-V Replica give "high availability" to the VMs or to the Hyper-V hosts? Also, is a cluster required in order to implement it? I haven't tried it, but it's worth a try.
