License problem on failover node

Dear Gurus,
I am receiving an error while applying the license to the failover server:
SAPLICENSE (Release 700) ERROR ***
ERROR: Can not set DbSl trace function
DETAILS: DbSlControl(DBSL_CMD_IMP_FUNS_SET) failed with return code
20
RC-INFO: error loading dynamic db-library - check environment for:
dbms_type = <db-type> (e.g. ora)
DIR_LIBRARY = <path to db-dll>
(e.g. /usr/sap/SID/SYS/exe/run)
LD_LIBRARY_PATH = <path to db and sap libs>
(e.g. /oracle/SID/lib)
My production system number is 00000000031XXXXXXX but the license key is for 00000000031XYYYYY.
I am not able to log in to the failover node because of this.
How can I resolve this problem?
Sachin

Sachin,
How are you applying the license, through transaction SLICENSE?
You are unable to log in to the failover node? Have you deleted the temporary license, or has it crossed four weeks?
If you have deleted the temporary license, apply the license through Visual Admin.
If it has crossed four weeks, set the system date backwards and apply the license.
You must have applied for the license in OSS using the other system with the same SID. Copy the system number from the node you applied the license on, request a license under the same system in OSS, and give that system number.
When you get the license, apply it as mentioned above.
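Separately, the RC-INFO section of the error says to check the database library environment before retrying. A minimal shell sketch for the <sid>adm user, using the example paths from the error text itself (substitute your own SID, database type, and paths):

    # illustrative values only, following the examples in the error message
    export dbms_type=ora                                 # database type (e.g. ora)
    export DIR_LIBRARY=/usr/sap/SID/SYS/exe/run          # path to the db shared library
    export LD_LIBRARY_PATH=$DIR_LIBRARY:/oracle/SID/lib  # db and SAP libraries
    ldd $DIR_LIBRARY/dboraslib.so                        # verify the Oracle DBSL library resolves
    saplicense -get                                      # then retry, e.g. to read the hardware key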

Similar Messages

  • SQL not caching on failed-over node

    Hi Friends,
    We are running a SQL Server 2008 SP4 cluster, but when I'm on the second node the queries are extremely slow, and I noticed that SQL Server is not caching the data properly after the first warm-up run. Any thoughts and suggestions? Have you had this problem before?
    Thanks,
    Patrick
    Patrick Alexander

    Hi Patrick,
    According to your description, you face slow-running queries. Based on my research, the issue could be due to the lack of useful statistics, or the lack of useful indexes, according to this article.
    To troubleshoot the issue, you could follow the steps below:
    1. Use SQL Server Profiler to find the slow queries. SQL Server Profiler provides a graphical user interface to create, manage, analyze, and replay SQL traces, and it provides several built-in templates where the tracked events and columns for each event are defined. You could select TSQL_Duration from the templates, and select the save-to-file option to save the captured event information into a trace file (.trc) that you can analyze later, or replay in SQL Server Profiler. For how to monitor a query using SQL Server Profiler, please refer to the article:
    http://solutioncenter.apexsql.com/monitor-sql-server-queries-find-poor-performers-sql-server-profiler/
    2. Analyze the performance of a slow-running query according to the following article: Displaying Graphical Execution Plans (SQL Server Management Studio). The information gathered allows you to determine how a query is executed by the SQL Server query optimizer and which indexes are being used. Using this information, you can determine whether performance improvements can be made by changing the indexes on the tables. For more information about how to use indexes to improve query performance, see General Index Design Guidelines.
    3. You could create additional statistics, or update statistics; a command-line sketch follows below. For more information about how to use statistics to improve query performance, please refer to the article:
    https://technet.microsoft.com/en-us/library/ms190397(v=sql.105).aspx#CreateStatistics.
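    For step 3, a minimal sketch using sqlcmd (the server, database, and table names are hypothetical):

      REM run from a client with the SQL Server tools installed
      sqlcmd -S LISTENER01 -d SalesDB -Q "UPDATE STATISTICS dbo.Orders WITH FULLSCAN;"
      REM or refresh every table's statistics in one pass
      sqlcmd -S LISTENER01 -d SalesDB -Q "EXEC sp_updatestats;"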
    Regards,
    Michelle Li

  • Failover cluster server - File Server role is clustered - Shadow copies do not seem to travel to other node when failing over

    Hi,
    New to 2012 and implementing a clustered environment for our File Services role. I have got to the point where I have successfully configured the shadow copy settings.
    We have a large (15 TB) disk, S:
    We have a VSS (volume shadow copy) drive, V:
    I have successfully configured the shadow copy settings through Windows Explorer.
    I created dependencies in the Failover Cluster Manager console whereby S: depends on V:
    However, when I fail over the resource and browse the Client Access Point share, there are no entries under the "Previous Versions" tab.
    When I visit the S: drive in Windows Explorer and open the Shadow Copies dialogue box, there are entries showing the times and dates of the shadow copies that ran on the original node. So the disk knows about the shadow copies that ran on the original node, but the "Previous Versions" tab has no entries to display.
    This is on a 2012 server (NOT the R2 version).
    Can anyone explain what might be the reason? Do I have an "issue" or is this by design?
    All help appreciated!
    Kathy
    Kathleen Hayhurst Senior IT Support Analyst

    Hi,
    Please first check the requirements in the following article:
    Using Shadow Copies of Shared Folders in a server cluster
    http://technet.microsoft.com/en-us/library/cc779378(v=ws.10).aspx
    Cluster-managed shadow copies can only be created in a single quorum device cluster on a disk with a Physical Disk resource. In a single node cluster or a majority node set cluster without a shared cluster disk, shadow copies can only be created and managed locally.
    You cannot enable Shadow Copies of Shared Folders for the quorum resource, although you can enable it for a File Share resource.
    The recurring scheduled task that generates volume shadow copies must run on the same node that currently owns the storage volume.
    The cluster resource that manages the scheduled task must be able to fail over with the Physical Disk resource that manages the storage volume.
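    As a quick check, you can confirm from an elevated prompt which shadow copies the owning node actually sees (drive letters per the post above):

      REM run on the node that currently owns the S: volume
      vssadmin list shadows /for=S:
      REM and confirm where the shadow copy storage lives (the V: drive in this setup)
      vssadmin list shadowstorage /for=S: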

  • Exception while failing over to 2nd RAC Node

    We are using WebLogic 10.3.4. Our setup is a web application (a Tapestry front-end UI) and an EJB 2.1 back end talking to the Oracle database. The EJBs are CMP. Our product was always stand-alone, and it wasn't until this release that we needed to make it work with RAC. To get this to work we followed the model of having a multi data source with data sources pointing to our RAC nodes. We have two types of data sources that we use, persistent and non-persistent, and we are using the Oracle thin driver (non-XA for RAC service instances, supporting global transactions).
    When we fail over to the 2nd node we get a nasty exception in our GUI, but after logging out and logging back in we are fine.
    My question is: I assumed I shouldn't have to restart our web application and it should have stayed up? Or is there something wrong with our setup?
    Thanks,
    Ian

    Showing us the exception and/or the error messages at the server might help...
    Note that failing over does not save any ongoing connection or transaction that
    had been to the dead RAC node... Does your web-app get-use-close JDBC
    connections on a per-user-invoke basis, or does it hold onto connections?
    Joe

  • ISE admin , PSN and monitoring node fail-over and fall back scenario

    Hi Experts,
    I have a question about ISE failover.
    I have two ISE appliances in two different locations. I am trying to understand the fail-over and fall-back scenarios.
    I have gone through the documentation, however it is still not clear to me.
    My primary ISE server would have the primary admin role and the primary monitoring node, and the secondary ISE would have the secondary admin and secondary monitoring roles.
    In case of a primary ISE appliance failure, I will have to log in to the secondary ISE node and make its admin role primary, but what happens if the primary ISE comes back? What would the scenario be?
    During the primary failure, will there be any impact on user authentication? As long as a PSN is available from the secondary, it should work, right?
    And what is the actual method to promote the secondary ISE admin node to primary? Do I even have to make the monitoring node role changes manually?
    Will I have to reboot the secondary ISE after promoting its admin role to primary?

    We have the same setup across an OTV link and have tested this scenario out multiple times. You don't have to do anything if communication is broken between the primary and secondary nodes. The secondary will automatically start authenticating devices that it is in contact with. If you promote the secondary to primary after the link is broken, it will assume the primary role when the link is restored and force the former primary node to secondary.

  • 2 node Sun Cluster 3.2, resource groups not failing over.

    Hello,
    I am currently running two V490s connected to a Sun StorageTek 6540 array. After attempting to install the latest OS patches, the cluster seems nearly destroyed. I backed out the patches, and right now only one node can process the resource groups properly. The other node will appear to take over the Veritas disk groups but will not mount them automatically. I have been working on this for over a month and have learned a lot and fixed a lot of other issues that came up, but the cluster is just not working properly. Here is some output.
    bash-3.00# clresourcegroup switch -n coins01 DataWatch-rg
    clresourcegroup: (C776397) Request failed because node coins01 is not a potential primary for resource group DataWatch-rg. Ensure that when a zone is intended, it is explicitly specified by using the node:zonename format.
    bash-3.00# clresourcegroup switch -z zcoins01 -n coins01 DataWatch-rg
    clresourcegroup: (C298182) Cannot use node coins01:zcoins01 because it is not currently in the cluster membership.
    clresourcegroup: (C916474) Request failed because none of the specified nodes are usable.
    bash-3.00# clresource status
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    ftp-rs coins01:zftp01 Offline Offline
    coins02:zftp01 Offline Offline - LogicalHostname offline.
    xprcoins coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline - LogicalHostname offline.
    xprcoins-rs coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline - LogicalHostname offline.
    DataWatch-hasp-rs coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline
    BDSarchive-res coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline
    I am really at a loss here. Any help appreciated.
    Thanks

    My advice is to open a service call, provided you have a service contract with Oracle. There is much more information required to understand that specific configuration and to analyse the various log files. This is beyond what can be done in this forum.
    From your description I can guess that you want to fail over a resource group between non-global zones. And it looks like the zone coins01:zcoins01 is reported as not being in the cluster membership.
    Obviously node coins01 needs to be a cluster member. If it is reported as online and has joined the cluster, then you need to verify whether the zone zcoins01 is really properly up and running.
    Specifically you need to verify that it reached the multi-user milestone and that all cluster-related SMF services are running correctly (i.e. verify "svcs -x" in the non-global zone); a sketch of those checks follows below.
    You mention Veritas disk groups. Note that VxVM disk groups are handled at the global cluster level (i.e. in the global zone). The VxVM disk group is not imported for a non-global zone. However, with SUNW.HAStoragePlus you can ensure that file systems on top of VxVM disk groups can be mounted into a non-global zone. But again, more information would be required to see how you configured things and why they don't work as you expect.
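    A minimal check sketch, run from the global zone on coins01 (node and zone names taken from the thread):

      # confirm the node has joined the cluster
      clnode status
      # confirm the non-global zone is actually running
      zoneadm list -cv
      # look for failed or impaired SMF services inside the zone
      zlogin zcoins01 svcs -x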
    Regards
    Thorsten

  • How to add a cloud machine as a node to existing windows fail over cluster having on-premise node in Windows server 2008 R2

    Hi All,
    We have a Windows failover cluster with one Windows machine on the local network as one of its nodes.
    I want to add a virtual cloud machine available on Microsoft Azure as another node to this existing cluster.
    Please suggest how to do this.
    Thanking all in advance,
    Raghvendra

    Before you even start working on the SQL side, you will need to create a Windows Server 2008 R2 cluster with no shared storage. You can actually test that in-house. Create a VM running 2008 R2 and cluster it with your physical (from your description, I am assuming physical) 2008 R2 machine. Create it with a file share witness for quorum. Then configure your environment to see that it works as expected.
    Once you know how to configure the cluster between physical and VM with a file share witness, build it out to Azure. The location of the FSW gets to be an interesting choice. To have a FSW in Azure means that you will need another VM in Azure to host the file share, meaning you have two quorum votes in Azure and one in-house. Or, you could create a file share witness on an in-house system, giving you two quorum votes in-house and one in Azure.
    In the FSW-in-Azure scenario, if you lose the in-house server, automatic failover occurs because two quorum votes exist in Azure. With the FSW in-house, depending on the loss you have in-house, you might have to force quorum to get the Azure single-node cluster to run. Loss of access to Azure reverses those scenarios. Neither one is optimal, but it does provide some level of recoverability.
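    A rough command-line sketch of that test cluster on 2008 R2 (cluster name, node names, IP address, and share are all hypothetical):

      REM run from an elevated prompt on one of the prospective nodes
      powershell -Command "Import-Module FailoverClusters; New-Cluster -Name TESTCL -Node PHYS01,VM01 -StaticAddress 192.168.1.60"
      REM point the quorum at a file share witness
      powershell -Command "Set-ClusterQuorum -NodeAndFileShareMajority \\FS01\fswshare"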
    . : | : . : | : . tim

  • VIP is not failed over to surviving nodes in Oracle 11.2.0.2 grid infra

    Hi,
    It is an 8-node 11.2.0.2 grid infrastructure.
    While pulling both cables from the public NIC, the VIP does not fail over to a surviving node on 2 of the nodes, but on the remaining nodes the VIP does fail over to a surviving node in the same cluster. Please help me with this.
    If we remove the power from these servers, the VIP is failed over to surviving nodes.
    The public NICs are in bonding.
    grdoradr105:/apps/grid/grdhome/sh:+ASM5> ./crsstat.sh |grep -i vip |grep -i 101
    ora.grdoradr101.vip ONLINE OFFLINE
    grdoradr101:/apps/grid/grdhome:+ASM1> cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: eth0
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0
    Slave Interface: eth0
    MII Status: up
    Speed: 100 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 84:2b:2b:51:3f:1e
    Slave Interface: eth1
    MII Status: up
    Speed: 100 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 84:2b:2b:51:3f:20
    Thanks
    Bala

    Please check the MOS note below for this issue:
    1276737.1
    HTH
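    While reviewing the note, these commands can confirm the state of the stranded VIP and restart it once the public network is back (node name taken from the post; run from the Grid home as the grid owner or root):

      # where is the VIP now, and why is it offline
      srvctl status vip -n grdoradr101
      crsctl stat res ora.grdoradr101.vip -t
      # after the public network is restored, bring the VIP back
      srvctl start vip -n grdoradr101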

  • 3 node cluster with 1 vInstance. vInstance can not to fail-over to one specific node.

    I have a 3-node cluster, all running Windows Server 2008 R2. Roughly once a month I see my vInstance become degraded and attempt to fail over. Everything is good as long as it fails over to SQL01 or SQL02. However, if it attempts to fail over to SQL03, it does not come online.
    The quick resolution is to move it manually to SQL01 or SQL02. What could be causing it to fail every time on SQL03?
    A couple of points:
    I did not build the environment.
    I am not a DBA.
    I only have general knowledge of SQL clustering.
    I always get two event IDs. Event ID 1069:
    Cluster resource 'SQL Server (VSQL04)' in clustered service or application 'SQL Server (VSQL04)' failed.
    and then Event ID 1205:
    The Cluster service failed to bring clustered service or application 'SQL Server (VSQL04)' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service or application.
    Where should I begin to look for issues?

    Here is the cluster event prior to the offline state. I will have to go check the cluster log.
    The Cluster service failed to bring clustered service or application 'SQL Server (VSQL04)' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service or application.
    I do not think this helps much; it just says a resource is in a failed state. You need to dig deeper and see which resource it is and why it did not come back online; it should be mentioned in the cluster log and/or Event Viewer.
    Hope it helps!
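    To generate and collect the cluster log on 2008 R2, a quick sketch from an elevated prompt on any node:

      REM write cluster.log for every node into %windir%\Cluster\Reports
      cluster log /g
      REM list each resource with its state and current owner to spot the failing one
      cluster res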

  • Problems with Oracle FailSafe - Primary node not failing over the DB to the

    I am using 11.1.0.7 on Windows 64 bit OS, two nodes clustered at OS level. The Cluster is working fine at Windows level and the shared drive fails over. However, the database does not failover when the primary node is shutdown or restarted.
    The Oracle software is on local drive on each box. The Oracle DB files and Logs are on shared drive.

    Is the database listed in your cluster group that you are failing over?

  • SQL Server 2014 Always on HA takes 8-14 seconds to fail over. Application side timeouts occur

    Hi All,
    I have a very similar post in the SQL Server 2014 forums too (https://social.technet.microsoft.com/Forums/sqlserver/en-US/adb5e338-907e-4405-aa62-d3ea93c7a98a/sql-server-2014-always-on-ha-takes-814-seconds-to-fail-over-application-side-timeouts-occur?forum=sqldisasterrecovery) -
    advice in the end was to post a question here.
    SQL Server Nodes, 2014 (12.0.2480.0)
    1 Share witness (on separate subnet)
    1 Cluster
    1 Listener
    I have been testing the response time of failovers, both manual (right-click, fail over in SSMS) and automatic (shut down the primary host). The way I am testing the response is to have an SSMS query running on my desktop, connected to the listener, querying a small table, and hit execute.
    The query response time, from execute to receiving the result, has been between 8 and 14 seconds in my testing. My previous experience (in a separate environment) showed around 2-second failover times in a very similar configuration.
    The availability DB is 200 MB and is not actively used. The nodes are synchronised.
    SQL Server Hosts: Windows 2012, 2 cpu, 8gb RAM.
    Questions:
    1: It's a big question, but what should I expect for a 'normal' failover time? Keep in mind this scenario is about as simple as it gets.
    2: As it stands, an 8-to-14-second 'outage' could cause some applications to time out. Or am I being unreasonable? I am seeing the very simple query in SSMS time out with this:
    Msg 983, Level 14, State 1, Line 2
    Unable to access availability database 'DATABASE' because the database replica is not in the PRIMARY or SECONDARY role. Connections to
    an availability database is permitted only when the database replica is in the PRIMARY or SECONDARY role. Try the operation again later.
    Cluster logs are long; this section accounts for 8 seconds of the 11-second outage I experienced. I can supply the full log if required. Also, this log covers just the 2 cluster nodes; I removed the witness share to make sure it was as simple as possible.
    00001090.00002128::2015/02/25-03:05:08.255 INFO  [GEM] Node 2: Deleting [1:65 , 1:71] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:10.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:11.888 INFO  [GEM] Node 2: Deleting [1:72 , 1:73] (both included) as it has been ack'd by every node
    00001090.00002698::2015/02/25-03:05:11.889 INFO  [GUM] Node 2: Processing RequestLock 2:49
    00001090.00002128::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 67)
    00001090.00002698::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: executing request locally, gumId:68, my action: /dm/update, # of updates: 1
    00001090.00002128::2015/02/25-03:05:12.890 INFO  [GEM] Node 2: Deleting [1:74 , 1:74] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:15.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:16.988 INFO  [GUM] Node 2: Processing RequestLock 1:28
    Thanks in advance.
    Keegan

    Hi Keegan,
    From this cluster log, what I can see is that the "Sending request Netname" step consumed the time.
    Could you please tell us the network configuration of the cluster nodes?
    If I recall correctly, it is recommended to leave only the TCP/IP protocol enabled and to disable NetBIOS over TCP/IP on the private network, and also not to configure DNS/WINS or a default gateway for the private network:
    https://support.microsoft.com/kb/258750?wa=wsignin1.0
    After that, please test again.
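    For the NetBIOS part, a sketch using wmic (the adapter index below is hypothetical; read the real one from the first command's output and pick the private/heartbeat adapter):

      REM show each adapter's index and current NetBIOS setting
      wmic nicconfig get index,description,TcpipNetbiosOptions
      REM 2 = disable NetBIOS over TCP/IP on the chosen adapter
      wmic nicconfig where index=12 call SetTcpipNetbios 2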
    Best Regards,
    Elton JI

  • How Front End pool deals with fail over to keep user state?

    Hello all, I have searched a lot of articles to understand how Lync 2010 keeps user state if a failure happens on a Front End pool node, but I didn't find anything clear.
    I found this MS information on the topic: "The Front End Servers maintain transient information, such as logged-on state and control information for an IM, Web, or audio/video (A/V) conference, only for the duration of a user's session. This configuration is an advantage because in the event of a Front End Server failure, the clients connected to that server can quickly reconnect to another Front End Server that belongs to the same Front End pool."
    As I read it, the client uses DNS to reconnect to another Front End in the pool. When it reconnects to an available server, does he lose what he/she was doing in the Lync client? Can the server that is now hosting his session recover all the user's session data? If so, how?
    Regards, EEOC.

    The presence information and other dynamic user data is stored in the RTCDYN database on the backend SQL database in a 2010 pool:
    http://blog.insidelync.com/2011/04/the-lync-server-databases/  If you fail over to another pool member, this pool member has access to the same data.
    Ongoing conversations and the like are cached at the workstation.
    SWC Unified Communications

  • Is it possible to add hyper-V fail over clustering afterwards?

    Hi,
    We are testing Windows 2012 R2 Hyper-V using only one stand-alone host, without failover clustering, with a few virtual machines. Is it possible to add failover clustering afterwards, add a second Hyper-V node and a shared disk, and move the virtual machines there, or do we have to install both nodes from scratch?
    ~ Jukka ~

    Hi Jukka,
    In addition, before you build a Hyper-V failover cluster, please refer to the requirements in the article below:
    http://technet.microsoft.com/en-us/library/jj863389.aspx
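    Clustering can indeed be added to an existing host afterwards; a rough command-line sketch on 2012 R2 (host names and IP address are hypothetical), after which the existing VMs can be made highly available and moved to the shared storage:

      REM on each host, add the feature
      powershell -Command "Install-WindowsFeature Failover-Clustering -IncludeManagementTools"
      REM validate and create the cluster once the second node and shared disk are in place
      powershell -Command "Test-Cluster -Node HV01,HV02"
      powershell -Command "New-Cluster -Name HVCL -Node HV01,HV02 -StaticAddress 192.168.1.50"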
    Best Regards
    Elton Ji

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you:
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks), where you have yourself created additional CRS resources to handle single-node db instances, their listeners, their disks and so on (which are started only on one node at a time, can fail from that node and restart on another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had) because you might:
    - reach the max number of diskgroups handled by an ASM instance (only 63, above which you get ORA-15068);
    - experience delays (especially in case of multipath), find fake CRS resources, etc. whenever you dismount diskgroups from one node and mount them on another.
    So (if both conditions are true) you might be interested in this story; please keep reading on for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroups will be added by users, for DATA, REDO, FRA etc. of each RAC db, and will be mounted later when a RAC db instance starts on the specific node.
    In the case of a fail-over cluster, where instances are not RAC type and there is only one instance running (on one of the nodes) at any time for each db, it is different.
    The diskgroups of db instances don't need to be mounted in Shared Mode, because they are used by only one instance at a time (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you'll take care that they'll be mounted by one ASM instance at a time.
    At our site, for our three-node cluster, this fact has two consequences.
    One consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc.);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA), so 30 diskgroups per node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, the surviving two should get the resources of the failing node; in the worst case: one node with 60 diskgroups (20 instances), the other one with 30 diskgroups (10 instances);
    - in case two nodes failed, the only surviving node would not be able to mount additional diskgroups (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted and their db instances stopped (they are not Production instances).
    But it didn't work, since ASM has the parameter CLUSTER_DATABASE=true, so you cannot mount 90 diskgroups; you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63, and other diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second consequence is that, every time our handmade CRS scripts dismount diskgroups from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
    We also found in the CRS log that, whenever we mounted diskgroups (on one node only), additional fake resources of type ora*.dg were created on the fly behind the scenes, maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted (once again, instances are single-node here, not RAC type).
    That's all.
    Did anyone run into similar problems?
    We opened an SR with Oracle asking what options we have here, and we are disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practices require that online redo log files are also in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): in case the DATA dg gets corrupted, you can restore a full backup plus archived redo logs plus online redo logs (otherwise you will stop at the latest archived log).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in the case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at will; you might also create additional scripts for any additional resources you need (Oracle Agents, backup agents, file systems, monitoring tools, etc.)
    About our problem, the only solution is to move OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting at all), but we told them that we needed a fail-over solution;
    - then they told us to use RAC One Node, which actually has some better features; in case of a planned fail-over it might be able to migrate client sessions without causing a reconnect (for SELECTs only, not in the case of a running transaction), but we already have a few fail-over clusters, and we cannot change them all.
    So we plan to move OCR and voting disks onto block devices (we think that the other solution, which needs a shared file system, would take longer).
    Thanks Marko for pointing us to the OCFS2 pros / cons.
    We asked Oracle for confirmation that it is supported; they said yes, but that it is discouraged (and also, it doesn't work with OUI or ASMCA).
    Anyway, that's the simplest approach; this is a non-Prod cluster, we'll start here, and if everything is fine, after a while we'll do it on the Prod ones as well.
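    For reference, a sketch of the relevant 11.2 commands for relocating OCR and voting files onto block devices (device paths are hypothetical; run as root from the Grid home, and see Note 428681.1 below for the supported procedure):

      # add an OCR location outside ASM, then drop the ASM one
      ocrconfig -add /dev/mapper/ocr1
      ocrconfig -delete +CRS_DATA
      # move the voting files out of the ASM diskgroup
      crsctl replace votedisk /dev/mapper/vote1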
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar

  • Which role do I need DFS or File server on fail over cluster server 2012 R2?

    What I want to achieve is to share all my user data files in a central location and have them highly available all the time, whether it's a general share or folder redirection data. BUT I'm a bit confused; I have a failover cluster set up on Server 2012, and now I would like to add DFS as a role, but then we have another role called File Server, and virtually it does the same thing as DFS? I mean, it creates a namespace share that can be accessed even if one of the nodes goes down. Now I am thinking that DFS does the replication between two physical locations, but failover clustering works slightly differently, and with File Server it pretty much does the same thing except for replicating data from one drive to another. Now what do you suggest I do, or did I get the concept wrong like a noob?

    DFS and Failover Clustering for file shares provide a similar end result for file access, but they are significantly different implementations.
    Clustering provides high availability to files by presenting shared access to a set of files served from a cluster. With 2012 R2, Microsoft added the ability to create a Scale-Out File Server that even allows all nodes of the cluster to serve access to the files, for a higher level of performance and other great things. The bottom line with failover clusters for files is that there is a single copy of the file presented from the cluster.
    DFS, on the other hand, provides high availability to files by presenting multiple copies of the file: it makes a copy in two or more locations and presents a namespace that allows access to the file through any of the network paths. DFS works very well for files that are primarily read-only. When you get into a situation where there is a lot of updating of the shared files, DFS is not a very good solution. There are ways to implement DFS for read/write files, but it generally requires a good knowledge of how the files are used and how you want to manage them.
    The key to answering your question comes in your first sentence: "I want to share all my user data files in a central location and to be highly available all the time". My initial reaction to this is that central location means a failover cluster; there is only a single copy of the file. However, "all the time" can be compromised by network failures to the central site. Remote sites would not have access if they can't access the central site. DFS provides the ability to have copies remotely, but then if you allow updating at multiple sites, you have to manage the merging of the changes, among other things.
    . : | : . : | : . tim
