OracleAS cold failover cluster

I am installing OracleAS cold failover cluster, but during installation I am getting an error in Database Configuration
Assistant. Can anybody help me?

This is as simple as "two heads think better than one": you have two (or more) nodes, and if the active node fails, another one is there to take over the workload. That is essentially all a cold failover cluster does. Administration is also centralized: with a proper configuration you only need to configure things once and all nodes will pick it up. Finally, keep in mind that with a cold failover cluster only 50% of your potential capacity is in use; the passive server is like a portrait that is only good to look at, it does nothing until a failover occurs. You don't need benchmarks or notes; it really is as simple as it sounds.
Greetings.
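The idea above can be sketched in a few lines of shell (the node names and the up/down flag are invented for illustration; this shows only the decision logic, not an Oracle-specific script):

```shell
#!/bin/sh
# Hypothetical sketch of cold failover: node1 is active, node2 is a
# passive standby that does nothing until node1 fails.
active="node1"
standby="node2"

node1_up=0   # simulate a failure of the active node (1 = up, 0 = down)

if [ "$node1_up" -eq 1 ]; then
  serving="$active"
else
  # failover: the standby mounts the shared storage and starts the instance
  serving="$standby"
fi
echo "service running on: $serving"
```

With `node1_up=0` the standby takes over; the point is that only one node ever serves at a time.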

Similar Messages

  • Oracle 10gR2 Installation on Cold Failover Cluster

    Installation document required for Oracle 10g R2 on Cold Failover Cluster on the Red Hat Linux 4 platform.

    user12974178 wrote:
    checking network configuration requirements
    check complete: The overall result of this check is : Not Executed
    Recommendation: Oracle supports installation on systems with DHCP - assigned public IP addresses.
    However , the primary network Interface on the system should be configured with a static IP address
    in order for the Oracle Software to function properly. See the Installation Guide for more details on installing the
    software on systems configured with DHCP
    Could anyone help me in this regard
    Thanks in Advance
    Your problem is:
    "However , the primary network Interface on the system should be configured with a static IP address in order for the Oracle Software to function properly"
    Your solution is
    "See the Installation Guide for more details on installing the software on systems configured with DHCP"
    That book is at http://tahiti.oracle.com and the steps you need are in Chapter 2 of the Installation Guide for Linux.
    Now, I could assume this is Oracle 11gR2 and you are using Oracle Enterprise Linux 5 Update 4, but I won't. Assuming things gets me into trouble.
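    In short, the check wants the primary interface pinned to a static address. A sketch of what that typically looks like on a RHEL-style box (the device name and addresses here are examples only, not taken from your system):

    ```
    # /etc/sysconfig/network-scripts/ifcfg-eth0  (example values)
    DEVICE=eth0
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    ```

    Also make sure the hostname resolves to that static address in /etc/hosts rather than to a DHCP lease.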

  • Oracle Infrastructure in Cold Failover Cluster

    Hello,
    I have browsed through the Oracle docs for High Availability, but I cannot find any information about achieving CFC HA for the OAS Infrastructure using RedHat EL AS and HP Serviceguard...
    Can anyone help me?
    thanks in advance, and best regards


  • How to deploy oracle 11g under cold failover cluster

    Hi,
    I am trying to install Oracle 11g in a failover cluster environment. I am trying to find some Oracle documentation about it, but I am not able to find any. Can anyone help me? I need it badly.
    We have two Sun Solaris 10 servers on which a failover cluster is installed; one is active while the other is passive. How can I install Oracle 11g on this failover cluster? Do I have to install Oracle on the shared SAN? What extra configuration do I have to do? Please help.
    Also, is it wise to install an 11g warehouse with Fail Safe, or to use RAC?
    regards
    Nick
    Edited by: Nick Naughty on Feb 1, 2009 4:02 AM

    Thanks a lot for the reply.
    I am of the same point of view, but the machine I am working on is Sun Solaris 10 with two partitions, finance and accounts, which are protected by a failover machine having the same two partitions, finance and accounts; I think you are right that these are the same shared storage folders, mounted on the active machine as well as on the passive machine.
    Now my question is: if I install Oracle 11g on one machine without Oracle clusterware, will Sun Cluster be able to mount the database in case the primary machine fails?
    I have tried hard to get some documentation on this topic but have found nothing so far. Please help.
    regards
    Nick

  • OC4J Instances in Cold Failover Cluster

    I'm running OAS 10.1.2.2.0 on a windows 2003 server under a cold failover clustered environment and was wondering... Is it recommended to have one web application deployed in it's own separate instance? For example, webapp1 deployed to instance1 and webapp2 deployed to instance2? Or would it be better to have multiple web applications deployed to one instance?
    Thanks for any thoughts!

    user7575753 wrote:
    I'm running OAS 10.1.2.2.0 on a windows 2003 server under a cold failover clustered environment and was wondering... Is it recommended to have one web application deployed in it's own separate instance? For example, webapp1 deployed to instance1 and webapp2 deployed to instance2? Or would it be better to have multiple web applications deployed to one instance?
    Thanks for any thoughts!
    I can say your configuration is OK for single-instance failover. Once you want to add clustering and load balancing, OAS has managed and non-managed clusters.
    For a managed cluster, you must set up either Oracle WebCache or an F5 BIG-IP. A non-managed cluster, by contrast, requires nothing to be shared.

  • Cold failover cluster for repository ?

    Hi,
    We have Box A (infra and mid) and B(mid),
    we want to use Box B for a cold failover cluster.
    Any suggestion on how to do this?
    TIA

    Similar question was asked recently in our forum:
    http://social.msdn.microsoft.com/Forums/en-US/18239da7-74f2-45a7-b984-15f1b3f27535/biztalk-clustering?forum=biztalkgeneral#4074a082-8459-420f-8e99-8bab19c8fba2
    A white paper from Microsoft on this topic will give you step-by-step guidance on failover clustering for BizTalk 2010:
    http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=2290
    Also refer to this blog, where the author, in a series of posts, guides you through the required steps:
    Part 1: BizTalk High Availability Server Environment – Preparations
    Part 2: BizTalk High Availability Server Environment–Domain Controller Installation
    Part 3: BizTalk High Availability Server Environment – SQL & BizTalk Active Directory Accounts
    Part 4: BizTalk High Availability Server Environment – Prepping our SQL & BizTalk Failover Clusters
    Part 5: BizTalk High Availability Server Environment – SQL Server 2008r2 Failover Cluster
    Part 6: BizTalk High Availability Server Environment–BizTalk 2010 Failover Cluster Creation
    Part 7 – BizTalk High Availability Server Environment –BizTalk 2010 Installation, Configuration
    and Clustering
    Part 8 - Adding Network Load Balancing to our High Availability Environment

  • 10g in a Cold Failover Cluster

    Hallo,
    I have some doubts about configuration of a single instance in a two node cluster for a cold failover solution.
    I mean,
    1 - In listener.ora, should I put the floating-IP hostname?
    2 - To create a dbconsole, do I have to use the ORACLE_HOSTNAME variable?
    thanks in advance

    Even if, per Metalink note 362524.1,
    a "failing over" DB Console needs a shared home,
    I succeeded in creating a hostname-independent one in a few steps:
    1 - Setting ORACLE_HOSTNAME=clustername (and resolving the clustername to the floating IP in /etc/hosts on both server and client machines)
    2 - Creating the DB Console with a repository on the active node and without a repository on the passive one.
    That's all,
    Bye
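    On the first question: yes, the listener should bind to the floating hostname so it follows a failover. A minimal sketch (the name `clustername` and the port are placeholders, matching the /etc/hosts trick above):

    ```
    # listener.ora on both nodes; clustername resolves to the floating IP
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = clustername)(PORT = 1521))
        )
      )
    ```

    Since clients also resolve clustername to the floating IP, their connect strings keep working no matter which node is active.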

  • SCAN & Cold Failover

    Hi Folks
    OEL 5.4 64bit, 11gR2
    SCAN
    I'm planning to install a two-node cold failover cluster. I'm not sure whether it makes sense to use a SCAN listener or to do it the old-fashioned way (for each DB a VIP, which relocates with the DB in case the node goes down).
    what's your opinion about this?
    ASM for voting & OCR
    I'm also planning to create one ASM diskgroup for the 3x voting files and the OCR, with normal redundancy and 3 failgroups. Do I have to create the diskgroup with the QUORUM keyword for each failgroup or not, as described here: http://download.oracle.com/docs/cd/E11882_01/server.112/e10500/asmdiskgrps.htm#CHDBGAFB? I assume not, as quorum failgroups cannot store OCR files (http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_5008.htm#SQLRF01114).
    What is your way of storing the OCR and voting files?
    Is there an 11gR2 version of this paper available (http://www.oracle.com/technology/products/database/clusterware/pdf/SI_DB_Failover_11g.pdf)?
    thanks
    regards oviwan

    SCAN
    I'm planning to install a two-node cold failover cluster. I'm not sure whether it makes sense to use a SCAN listener or to do it the old-fashioned way (for each DB a VIP, which relocates with the DB in case the node goes down).
    1 - You cannot install the Grid Infrastructure without the SCAN, even if you will not use it.
    2 - The old-fashioned way you described is still in use, independently of SCAN. You still have a VIP which relocates in case the node goes down.
    Where you use the SCAN is when configuring the REMOTE_LISTENER and the aliases for clients. For those purposes you can still use the old-fashioned way if you don't want to use the SCAN, but I believe you should use it because it makes things easier...
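    The client-side difference can be sketched with two tnsnames.ora entries (the hostnames and service name below are placeholders): the SCAN entry stays the same no matter how many nodes or databases you add, while the old-fashioned entry names the per-database VIP directly:

    ```
    # tnsnames.ora sketch -- placeholder names
    ORCL_SCAN =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster-scan.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl)))

    ORCL_VIP =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = orcl-vip.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl)))
    ```

    Both resolve to a working address after a failover; the SCAN version just saves you from touching client configs as the cluster grows.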

  • Cold-Failover with Oracle Weblogic Server 10 R3

    Hi ALL,
    Is it possible to configure cold failover using Oracle WebLogic Server?
    - 2 instances with the same IP; if one fails, the second will run
    - No communication between nodes as in a normal cluster
    Thanks

    Hello All,
    Has anybody configured automatic server migration for cold failover?
    I have a few questions regarding this setup, any help is much appreciated.
    1. For the floating IPs to work, do I need 2 NICs and two cables on each server?
    2. Do I have to configure 2 different IPs on one interface, which will be used as the Node Manager listen addresses for each of the 2 machines?
    3. What will be the listen address of each of these servers (primary server & failover server)? Is it the floating IP address, or empty (all local addresses)?
    4. Can we start both the primary server & failover server, so that when the primary goes down, the failover will be quick?
    Thanks,
    Chandra

  • Installation guide Oracle cloud Control Agents on a Windows failover cluster

    Hello Experts,
    I have to install the Oracle Cloud Control agent on a Windows failover cluster. I could not find any document on the Oracle support site or the internet.
    Here are the requirements:
    I have two hosts (host1 and host2) which are in a Windows cluster environment (active/passive), meaning one node holds the cluster service/resource at a time.
    If the primary node (host1) is down, Microsoft clusterware will fail over the cluster resource to host2 automatically.
    Has anyone deployed 12c agents in a Windows failover clustered environment? If yes, please provide hints / the procedure.
    Thanks
    pavan

    Hi Pavan,
    Please find the following document:
    How to Configure Grid Control Agents in Windows HA - Failover Cluster Environments (Doc ID 464191.1)
    For 12C Agent install, you can proceed with the silent install passing a virtual hostname in silent method also using the following document:
    How to Install EM 12c Agent using Silent Install Method with Response File (Doc ID 1360083.1)
    Best Regards,
    Venkat

  • TO setup Cold Failover in ESP Cluster

    Hi,
    I created two nodes (node1 and node2) and started them on different machines, node1 on vewin764SA016 (1st machine) and node2 on vewin764SA046 (2nd machine) (I have attached both files).
    I created an ESP project which has an input window attached to an XML Input Adapter.
    I set up the cluster configuration in the CCR file as follows (I have attached the CCR file also; I was not able to attach .ccr, so I changed the extension to .txt):
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration xmlns="http://www.sybase.com/esp/project_config/2010/08/">
      <Runtime>
      <Clusters>
          <Cluster name="esp://vewin764SA016:19011" type="remote">
            <Auth>user</Auth>
            <Username encrypted="false">sybase</Username>
            <Password encrypted="false">sybase</Password>
            <Rsakeyfile></Rsakeyfile>
            <Managers>
              <Manager>vewin764SA016:19001</Manager>
              <Manager>vewin764SA046:19002</Manager>
            </Managers>
          </Cluster>
        </Clusters>
        <Bindings>
          <Binding name="Battery_Life_Input_Adapter_window">
            <Cluster>esp://vewin764SA016:19011</Cluster>
            <Workspace>test</Workspace>
            <Project>test</Project>
            <BindingName>b1</BindingName>
            <RemoteStream>remote1</RemoteStream>
            <Output>false</Output>
          </Binding>
        </Bindings>
        <AdaptersPropertySet>
          <PropertySet name="Atom Feed Input">
            <Property name="URL"></Property>
          </PropertySet>
        </AdaptersPropertySet>
      </Runtime>
      <Deployment>
        <Project ha="false">
          <Options>
            <Option name="time-granularity" value="5"/>
            <Option name="debug-level" value="4"/>
          </Options>
          <Instances>
            <Instance>
              <Failover enable="true">
                <FailureInterval>120</FailureInterval>
                <FailuresPerInterval>5</FailuresPerInterval>
              </Failover>
              <Affinities>
                <Affinity charge="positive" strength="strong" type="controller" value="node1"/>
                <Affinity charge="positive" strength="weak" type="controller" value="node2"/>
              </Affinities>
              <Affinities/>
            </Instance>
          </Instances>
        </Project>
      </Deployment>
    </Configuration>
    I added the project to node1 using the following command and started it:
    esp_cluster_admin --uri=esp://vewin764SA016:19011 --username=sybase --password=sybase --add_project --project-name=test --workspace-name=test --ccx=test.ccx --ccr=test.ccr
    The project started on both machines and both are getting data.
    As per the ESP document: "If cold failover is enabled, a failover occurs when a failed project switches to another server to continue processing."
    But in my case the project is running on both nodes and both are getting data.
    I tried all possible combinations of the Affinity attributes charge and strength (positive/negative, strong/weak), but it did not work out.
    How can I set up a cluster such that when I add a project, it starts running on one node, and when that node goes down, the second node is brought up and the project runs there?
    Thanks
    Shashi

    Hi Shashi,
    Reviewing the configuration of your ccr file, you've got
              <Affinities>
                <Affinity charge="positive" strength="strong" type="controller" value="node1"/>
                <Affinity charge="positive" strength="weak" type="controller" value="node2"/>
              </Affinities>
              <Affinities/>
    Just note that in your set-up, you've set strong and positive for node1. This means that it will only run on node1. So if your node1 is down, the project will not start up on node2. Is that what you are trying to accomplish?
    "<Affinities/>" - that fifth line there is extra. I don't think it is harming anything, though.
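    So if the goal is "prefer node1 but still fail over to node2 when node1 dies", a weak affinity alone should be enough. A sketch of the corrected block from your ccr (same elements as your file, with the stray tag dropped):

    ```xml
    <Affinities>
      <!-- weak = preference only: the project starts on node1 when it is
           available, but is still allowed to fail over to node2 -->
      <Affinity charge="positive" strength="weak" type="controller" value="node1"/>
    </Affinities>
    ```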
    I am assuming that you are using Studio to see that the "Project started in both machines and both getting data".
    There is an outstanding New Feature Request CR for Studio to support HA and failover. Studio is really not compatible with HA.  Studio makes it look like there are multiple instances of the project running when in fact there is only one instance running. Studio will start getting errors if the project crashes and is failed over to the secondary node.  I can’t really recommend using Studio at all when dealing with HA:
    CR 732974 - Request for Studio support of HA and failover.
    To confirm that there is only one instance of the project running, I look at esp_cluster_admin:
    $ esp_cluster_admin --uri esp://bryantpark:19011 --username=sybase
    Password:
    > get managers
    Manager[0]:     node1@http://bryantpark.sybase.com:19011
    Manager[1]:     node2@http://bryantpark.sybase.com:19012
    > get controllers
    Controller[0]:  node1@http://bryantpark.sybase.com:19011
    Controller[1]:  node2@http://bryantpark.sybase.com:19012
    > get projects
    =============================
    Workspace:                    test
    Project:                      test
    Instance Count:               1
        ----------- Instance Details -----------
        Instance Name:                default
        Controller Name:              node1
        Current Status:               started-running
        Requested Status:             started-running
        Failure Interval:             120
        Failures Per Interval:        5
            -------------- Affinities --------------
            Affinity Type:                CONTROLLER
            Affinity Charge:              POSITIVE
            Affinity Strength:            STRONG
            Affinity Subject:             node1
            Affinity Type:                CONTROLLER
            Affinity Charge:              POSITIVE
            Affinity Strength:            WEAK
            Affinity Subject:             node2
    >
    The output from "get projects" shows that there are two active managers and controllers, and that the single instance is running from the controller named node1.
    Next, I run the "get project" command, and it shows me the Pid:
    > get project test/test
    Workspace:                    test
    Project:                      test
        ------------------ Instance : 0 ------------------
        Instance Id:                  Id_2000003_1393600620713
        Command Host:                 bryantpark.sybase.com
        Command Port:                 44905
        Gateway Host:                 bryantpark.sybase.com
        Gateway Port:                 41162
        Sql Port:                     47244
        SSL Enabled:                  false
        Big Endian:                   false
        Address Size:                 8
        Date Size:                    8
        Money Precision:              4
        Pid:                          20637
        Topology Ignored:             false
        Timer Interval:               5
        Active-Active:                false
    I run these Linux commands that also confirm to me that I have only one instance running:
    $ ps -ef | grep esp_server
    alices   19963 19961  1 09:45 pts/0    00:00:27 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
    alices   20243 20241  1 09:59 pts/1    00:00:15 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
    alices   20637 19963  0 10:16 ?        00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_2000003_1393600620713 --log-to 4
    alices   20729 18201  0 10:18 pts/2    00:00:00 grep esp_server
    From here, I see that pid 20637's parent pid is 19963, which is the pid for node1 server.
    Next, I kill the active manager and the project to simulate a catastrophic failure:
    $ kill -9  20637 19963
    Due to the affinities, no failover happens to node2:
    $ ps -ef | grep esp_server
    alices   20243 20241  1 09:59 pts/1    00:00:17 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
    alices   20764 18201  0 10:21 pts/2    00:00:00 grep esp_server
    Next, I use a different ccr file that does not have affinities configured.
    From esp_cluster_admin, I stop and remove the test/test project and then re-run using the modified ccr that has no affinities.
    $ esp_cluster_admin --uri esp://bryantpark:19012\;bryantpark:19011 --username=sybase
    Password:
    > remove project test/test
    [done]
    > add project test/test bin/test.ccx test-without-affins.ccr
    [done]
    > start project test/test
    [done]
    > get projects
    =============================
    Workspace:                    test
    Project:                      test
    Instance Count:               1
        ----------- Instance Details -----------
        Instance Name:                default
        Controller Name:              node2
        Current Status:               started-running
        Requested Status:             started-running
        Failure Interval:             120
        Failures Per Interval:        5
    This time, the project started on node2.
    > get project test/test
    Workspace:                    test
    Project:                      test
        ------------------ Instance : 0 ------------------
        Instance Id:                  Id_1000003_1393601774031
        Command Host:                 bryantpark.sybase.com
        Command Port:                 46151
        Gateway Host:                 bryantpark.sybase.com
        Gateway Port:                 36432
        Sql Port:                     50482
        SSL Enabled:                  false
        Big Endian:                   false
        Address Size:                 8
        Date Size:                    8
        Money Precision:              4
        Pid:                          21422
        Topology Ignored:             false
        Timer Interval:               5
        Active-Active:                false
    >
    The pid is 21422.
    $ ps -ef | grep esp_server
    alices   21213 21211  4 10:33 pts/0    00:00:07 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
    alices   21282 21280  4 10:33 pts/1    00:00:07 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
    alices   21422 21282  1 10:36 ?        00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_1000003_1393601774031 --log-to 4
    alices   21500 18201  0 10:36 pts/2    00:00:00 grep esp_server
    Pid 21422's parent pid is 21282, which is the pid for node2 server.
    Kill the project and node2:
    $ kill -9  21422 21282
    $ ps -ef | grep esp_server
    alices   21213 21211  4 10:33 pts/0    00:00:08 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
    alices   21505 21213 12 10:36 ?        00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_2_1393601813749 --log-to 4 --clus
    alices   21578 18201  0 10:36 pts/2    00:00:00 grep esp_server
    The project automatically fails over now to node1.
    > get managers
    Manager[0]:     node1@http://bryantpark.sybase.com:19011
    > get controllers
    Controller[0]:  node1@http://bryantpark.sybase.com:19011
    > get projects
    =============================
    Workspace:                    test
    Project:                      test
    Instance Count:               1
        ----------- Instance Details -----------
        Instance Name:                default
        Controller Name:              node1
        Current Status:               started-running
        Requested Status:             started-running
        Failure Interval:             120
        Failures Per Interval:        5
    Thanks,
    Alice

  • Active Failover Cluster of Identity Managment

    Would it be possible to create an AFC of IM using WebCache (WC) as the load balancer?
    Or do we need to acquire an LBR from the tested third-party list?
    http://www.oracle.com/technology/products/ias/hi_av/Tested_LBR_FW_SSLAccel.html
    If the first answer is YES, could someone point me to some notes, please.
    We are talking about a complete IM (SSO, DAS, OCA, OID).
    I guess OID could be a problem... :( since WC can't do LDAP balancing.
    Thanks for your help.
    diego

    When I started this, we had two different vmware servers set up in two isolated datacenters across the factory from one another. The two nodes of our RAC system (mounted on Netapp) follow the same topology. The tech doing the vmware tested a forced failover using a manual method (we didn't need it active at that point), and I didn't even see any downtime. I have no clue how this really works, but it seems fine. The vmware disks are mounted on a SAN which has striping between the two datacenters.
    Since then, they've started creating a vmware cluster which should fail over automatically (apparently), but I haven't seen it tested.
    Your (cold failover) solution sounds fine to me. Your only problem is that the SSOs will stop working while you're doing the failover.
    I once saw a solution somewhere (either at a client or in metalink) that had two OIDs running at the same time behind a hw load balancer with the MR RAC-mounted. I suspect that only one was "active" at a time. I think the solution is difficult because of the way that OID has cached data. I think that as well, some applications keep a persistent connection to OID (Forms runtime?) and these would break on failover.
    Maybe what you could do is have your standby "up" like this and somehow set up an automatic reload of the data on the standby when it becomes primary...

  • Difference between scalable and failover cluster

    What is the difference between a scalable and a failover cluster?

    A scalable cluster is usually associated with HPC clusters but some might argue that Oracle RAC is this type of cluster. Where the workload can be divided up and sent to many compute nodes. Usually used for a vectored workload.
    A failover cluster is where a standby system or systems are available to take the workload when needed. Usually used for scalar workloads.

  • How to install a database on a MS Windows 2008 R2 failover cluster?

    Hello,
    I have to install an Oracle 11g R2 database onto a MS cluster (Windows Server 2008 R2 Failover Cluster with 2 nodes and shared storage). I installed the DBMS on both nodes. On the first node, I created a tablespace; the files are on the shared storage. I can connect to that database, so up to here everything is fine.
    But the problem is: How do I have to configure the DBMS on the second node to use the existing database files in the case when the second node gets the Cluster IP and the shared storage? The switchover is done, all database files (in the filesystem) can be seen on the second node.
    The goal is to connect to the database from an external application, regardless which node is active.
    I know the "Oracle Fail Safe Manager", it is already installed on both nodes.
    A link to the related documentation would be helpful, I can't find anything.
    Best regards,
    C. Jost
    Edited by: user13250186 on 07.06.2010 09:40

    Hello,
    But the problem is: How do I have to configure the DBMS on the second node to use the existing database files in the case when the second node gets the Cluster IP and the shared storage? The switchover is done, all database files (in the filesystem) can be seen on the second node.
    When you switch the database from one node to another, do you get any error?
    Can you connect to the database and check the tablespaces?
    Normally, with OFS you define groups. In one group you'll have the database, the disk, the IP address and the listener. So when you move the group from one node to another, the disk will be recognized on the second node; then the database, listener, etc. will start, and there's no reason to lose a datafile.
    Please, find enclosed, an introduction to Oracle Fail Safe concept:
    http://www.oracle.com/technology/tech/windows/failsafe/pdf/fisc32.pdf
    Hope this helps.
    Best regards,
    Jean-Valentin

  • Data Guard Broker and Cold Failover clusters

    Hi,
    I wanted to use Data Guard Broker to control primary/standby systems on AIX clusters, but found that without Oracle Clusterware it is not supported on failover of a database to another node (unless you drop and recreate the configuration, which in my opinion is no support at all!).
    The 11g documentation states that DG Broker offers support for single-instance databases configured for HA using Oracle Clusterware and "cold failover clusters".
    Does anyone know whether this support for cold failover clusters in 11g means "proper support", i.e. the configuration detects that the hostname has changed and automatically renames it so it continues to work? Or is the support in 11g the same as that in 10g?
    Thanks,
    Andy

    Hi Jan,
    We already use virtual hostnames.
    When I set up the configuration, the hostname for the databases defaults to the server name. The only way I know of changing this is to disable the configuration, then use the "edit database...set property" command, but when I enable the configuration again it reverts back to its previous value.
    regards,
    Andy
