Moving Standalone Fileshares to Cluster Shares

Hi,
I have a standalone file server running Windows Server 2003 where the shares are on the E: drive. I need to migrate my shares, with their security settings, to a new file server cluster running Windows Server 2008 R2. The cluster drive there is E: as well.
On Windows 2003 the share definitions are stored under LanmanServer\Shares, whereas
on Windows 2008 R2, because it is a cluster, they are stored under HKEY_LOCAL_MACHINE\Cluster\Resources\d8022efd-a98f-b58de1977c23\Parameters.
Kindly help me with how to export the shares from the standalone server and import them on the cluster.
Regards

Hi,
Copying the files to the cluster is the easier part; you can use Robocopy to copy them together with their NTFS permissions. However, if you mean that you would like to export the share permissions from the registry key and import them on the cluster, that will not work, because clustered shares are stored differently from shares on a standalone file server.
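As a rough sketch of the copy-and-recreate approach (the source path, target path, share name and group below are only placeholders for your own values): copy the data with its NTFS security using Robocopy, then recreate each share on the node that currently owns the clustered disk.
robocopy \\OLDFS\E$\Shares E:\Shares /E /COPYALL /R:1 /W:1 /LOG:C:\sharemigration.log
net share Data=E:\Shares\Data /GRANT:"DOMAIN\FileUsers",CHANGE
On a 2008 R2 cluster you may also need to create or confirm each share through Failover Cluster Manager so that it is scoped to the clustered file server name rather than to the individual node.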

Similar Messages

  • Create Notification for Cluster Share Volume Disk Capacity

    I am new to System Center Operations Manager but am slowly learning. We are running SCOM 2012 R2 and I want to get notifications when there are issues in our 2012 Hyper-V cluster, specifically when the free space on our CSVs drops below 100 GB. I have a notification channel set up and I am receiving notifications for other issues on monitored servers as they occur. I have imported the Cluster, Core OS and Windows Server OS management packs, which let me see the free space for the CSVs in a graph under Monitoring
    > Microsoft Windows Server > Performance > Cluster Share Volume Disk Capacity. What do I need to do to get a notification when the free space on our CSVs drops below 100 GB, or whatever other level I want to set? Thanks.

    Hi WSUAL2,
    Have you created a separate subscription for disk alerts? Please refer to the link below for how to configure notifications.
    http://blogs.technet.com/b/kevinholman/archive/2012/04/28/opsmgr-2012-configure-notifications.aspx
    In your case, you need to select the respective Cluster Disk monitor while creating the new subscription.
    Check the option "Created by specific Rule or Monitor".
    Before that, make sure you are seeing the alert in the console. From the alert details, you can find the correct rule/monitor name.
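    If it helps, one quick way to look up the exact monitor display name from the Operations Manager Shell is a query like the one below (the wildcard text is only a guess at the management pack's wording, so adjust it to whatever you see in the alert details):
    Get-SCOMMonitor | Where-Object { $_.DisplayName -like "*Cluster Shared Volume*" } | Select-Object DisplayName, Target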
    Regards, Suresh

  • Converting Standalone Agents to Cluster Agents

    Greetings,
    I have been reading about converting Standalone Agents to Cluster Agents and again am suffering a bit of confusion. Using the Enterprise Manager 12c Upgrade Console to upgrade from 11g to 12c via the 2-System method, I have deployed and configured some agents on clusters. I do not know how all of the agents on these clusters were originally installed. After deploying, when converting the 12c agents, I did specify the related targets for the clusters.
    My question is whether or not the agents on these clusters are indeed configured correctly for a clustered environment. This is based on the fact that some appear to behave incorrectly (status pending is one issue). Is there a way from within the 12c console (or any other way) to determine if these agents are in fact properly configured for a clustered environment? My initial hope was that they would have been, but given some of the differences between 11g and 12c I would like to verify that.
    I would appreciate knowing if that is possible.
    Thank you.
    Bill Wagman

    Hi Bill,
    In 12c, there are no special settings or steps for deploying agents to a cluster. You simply deploy an agent to every node of the cluster, just like you would on an individual server. Once the agent is deployed, the data collected about these nodes is used to construct the cluster targets in EM. Unfortunately, any issues with data collections, dynamic properties, associations, etc. can cause these clusters or their nodes to show incorrect status, or not be formed as complete targets.
    Best to file an SR and have support look at your environment.
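    As a quick sanity check before or alongside the SR, you could also list the targets each node's agent is actually monitoring and confirm the cluster-related targets appear; with a 12c agent, something like the following should work (the path is just the usual agent home layout):
    $AGENT_HOME/bin/emctl config agent listtargets
    If the cluster targets are missing or incomplete in that output, that supports having support look at the environment.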

  • cDOT cluster share limit (CIFS/NFS) 8.3

    Regardless of node count, what is the cluster-wide share / export rule limit?
    Is it 40,000 for CIFS?
    And 12,000 for NFS?
    Is that right?

    Hi, the maximum number of NFS export rules depends on the size of the cluster: 140,000 for large clusters (24 nodes), and 70,000 for medium (8 nodes) and small (4 nodes) clusters. The maximum number of regular CIFS shares (this does not apply to dynamic shares created using the home directory feature) is 40,000 for large, medium and small clusters. Thanks

  • Wallboard stats broken after moving to a HAoWAN cluster

    Hello, I am looking for some help trying to figure out what changes are made after moving to a HAoWAN cluster in relation to DB reporting for the wallboard app. 
    I had/have the free community wallboard script (thank you, community) working just great before the upgrade to a HA cluster. I am using UCCX 8.0.2SU3 and I have just recently added a second node for redundancy. My stats for the wallboard app seem to have stopped working the moment I started the upgrade. I have done the steps outlined in the "Using Wallboard Software in a High Availability (HA) Deployment" guide but I am still not seeing current data.
    Has anybody else had this problem or have any suggestions on a workaround?
    John P

    Hi
    Well... I'm not sure which wallboard, or which version of it, you are using, so I have no idea whether it checks that API.
    However, if you think you have it pointed at the pub, and the pub is master, then I would check that the actual tables are updating:
    Try:
    run uccx sql db_cra select * from rtcsqssummary
    That should hopefully get you a list of the CSQs and stats; if it's not current/updating regularly (run it a few times on each server and compare) then the stats may not be updating and you need to look at the UCCX server.
    If you see them updating on the pub, then you know you have a problem with the wallboard.
    Aaron

  • File loss moving files to a network share

    While moving some files (silly me to use move vice copy and delete later) to a SMB network share about half way through the move OSX lost connection to the share. Error message popped up telling me it couldn't complete the move and the files yet to be moved were deleted! Fortunately they were just some podcasts but I certainly didn't expect that a move operation would delete unmoved files. Is this common behavior under the Mac OSX?
    Lawrence

    Satoru Murata wrote:
    And moving a file from one directory to another on the same drive is "physically" possible?
    Actually, yes. They are fundamentally different. The actual file itself does not change. There is one copy of the file. There is also (at least) one hard link to the file. The hard link is what you actually see in the file system as a "file". In a true move operation, you create a second link to the file, so there is then one physical file with two pointers to it. Then, after that second link is successfully created, you "unlink" the first one. Correspondingly, to "delete" a file, you just "unlink" the one and only link to it. Time Machine, for example, makes extensive use of links to create snapshots of your files. These links are only possible on a single file system.
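    To make that concrete, you can watch the link behaviour from Terminal (the file names here are just an illustration):
    ln original.txt second-link.txt        # add a second hard link to the same data
    ls -li original.txt second-link.txt    # both names show the same inode number
    rm original.txt                        # unlink the first name; the data lives on under second-link.txt
    That only works within one volume, which is exactly why a cross-volume "move" has to fall back to copy-then-delete.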
    Why shouldn't we expect the Finder to be able to do that properly?
    We should. But this is not a bug that should really affect anyone. I'm quite sure that there are more people who, after hearing about this, have tried to use a move across volumes than have ever tried it before. Yes, it is a bug. Yes, you could lose data. But it is a minor operation that few people knew about until recently.
    And there's no guarantee that when you hit the "Send" button in Mail your e-mail will reach its destination. Perhaps you should snail mail it. Strike that, the USPS is even less reliable. Go deliver the mail in person.
    That is a poor analogy. You still have a copy of the sent mail in your system. A better analogy would be to click "send" and then immediately deleting your copy of the sent message. Surely, that would be foolish. And I prefer DHL. I have had much better luck with them.
    No one friggin' needs to use a computer, but a lot of people WANT to. And I'm saying there are people, if you can get it in your stubborn little head, who WANT to use Move instead of Copy, go back to original file/directory, click icon, drag to trash, empty trash, and click OK in confirmation dialog (which, by the way, is a few more than "one additional key" that you allude to).
    My goodness! No need to get so upset
    You can move files to new volumes all you want to. It is always going to be a risky operation, riskier than copying and then deleting. When Apple releases a fix, the risk will decrease, but I would not advise using it, on any platform.
    First of all, who the heck are you telling me I'm the minority so I should just bite the bullet? Since apparently you're such an aficionado of the Terminal, which only 5% of Mac users use on a daily basis, I guess you would be just fine and dandy if there was a glaring bug in Leopard that prevented you from using it "until the software update is available", huh?
    I'm "random guy on the Internet". I thought you knew that. That is an interesting question. I have always liked UNIX and I'm particularly fond of the Mac giving me the best of both worlds. But, I have to admit, I was a very happy Mac user long before MacOS X, so I think I could survive.
    And second of all, you obviously think I'm some Vista troll or something, but I certainly am not, and it's quite insulting. I'm usually the first one to come to Apple's defense when need be. And, as I mentioned in my first post, I agree that this "problem" is totally overblown on various sites. HOWEVER, you are just an idiot fanboy of the worst kind if you maintain that there's really nothing wrong with this bug and suggest that, and I quote:
    Really, the only reason for even attempting this operation is so you can get on the Internet and post a message about how Apple's horrible bug has caused you to lose data.
    No, I didn't think that much about you at all. I was just trying to explain the difference between moving vs. copying and how those operations changed when performing them across two different file systems. I admit I am a bit sensitive about the Mac. I try not to evangelize anymore. I do try to respond to FUD when I see it. There has been a lot of that recently with Leopard.
    I never said this wasn't a bug and that it shouldn't be fixed. I said it was a bug in a non-default behavior that few people even knew about. I also said that even attempting a move across file systems (even without this bug) is not a wise idea. It has a fairly high risk factor, a fairly low convenience factor, and no savings of space or time. It is my opinion that the only reason people care about it is so they have something to bash Apple about. I wasn't including you, personally, in that category. I was just trying to explain my thinking.
    I know I'm fighting a losing battle here. In the "marketplace of opinions" the matter is already decided. Leopard is just as buggy as Vista, if not more so. I've never used Vista, so I can't say for sure. I know Leopard is perhaps the best, fastest, most bug-free system I have ever used in 25 years. But you wouldn't think that from reading the Internet commentary-du-jour.

  • Event ID 33020 LS Centralized Logging Agent - Error while moving cache files to network share

    I have the "AlwaysOn" CLS logging scenario running in my Lync 2013 Enterprise deployment.  I did not configure the CacheFileNetworkFolder option since i don't care about retaining these logs anywhere other than on the local drives of the Lync
    servers so i just left it blank.  Now every few hours or so I am getting Event ID 33020 in each Lync server and SCOM is firing an alert as well.
    The CsClsLogging configuration is as follows:
    PS C:\> Get-CsClsConfiguration
    Identity                      : Global
    Scenarios                     : {Name=AlwaysOn, Name=MediaConnectivity, Name=ApplicationSharing,
                                    Name=AudioVideoConferencingIssue...}
    SearchTerms                   : {Type=Phone;Inserts=ItemE164,ItemURI,ItemSIP,ItemPII,
                                    Type=URI;Inserts=ItemURI,ItemSIP,ItemPII,
                                    Type=CallId;Inserts=ItemCALLID,ItemURI,ItemSIP,ItemPII,
                                    Type=ConfId;Inserts=ItemCONFID,ItemURI,ItemSIP,ItemPII...}
    SecurityGroups                : {}
    Regions                       : {}
    EtlFileFolder                 : C:\CLSTracing
    EtlFileRolloverSizeMB         : 20
    EtlFileRolloverMinutes        : 60
    TmfFileSearchPath             : C:\Program Files\Common Files\Microsoft Lync Server 2013\Tracing\
    CacheFileLocalFolders         : C:\CLSTracing
    CacheFileNetworkFolder        :
    CacheFileLocalRetentionPeriod : 14
    CacheFileLocalMaxDiskUsage    : 80
    ComponentThrottleLimit        : 5000
    ComponentThrottleSample       : 3
    MinimumClsAgentServiceVersion : 6
    Is there a way to stop the flow of these events without having to configure CLS to transfer the logs to a network share?

    Yes, the CacheFileLocalFolders path of 'c:\CLSTracing' is valid on all the Lync servers.  The AlwaysOn scenario is started, running and producing .hdr & .cache files in this folder on all servers across my front-end, director and edge pools.  
    In addition, I am able to search for and extract valid logging information that I can analyze using Snooper.exe.
    Reconfigure the CentralizedLoggingConfiguration how???
    I tried setting the CacheFileNetworkFolder value to null by running Set-CsClsConfiguration -CacheFileNetworkFolder $null and then restarting the CLS agent. As expected, event 33037 fired, confirming the settings were received from the CMS.
    New config received from CMS
    Following are the changed settings:
    EtlFileRolloverSizeMB: Old - NULL, New - 20
    CacheFileLocalRetentionPeriod: Old - NULL, New - 14
    CacheFileLocalMaxDiskUsage: Old - NULL, New - 80
    ComponentThrottleLimit: Old - NULL, New - 5000
    ComponentThrottleSample: Old - NULL, New - 3
    MinimumClsAgentServiceVersion: Old - NULL, New - 6
    TmfFileSearchPath: Old - NULL, New - C:\Program Files\Common Files\Microsoft Lync Server 2013\Tracing\
    CacheFileLocalFolders: Old - NULL, New - C:\CLSTracing
    CacheFileNetworkFolder: Old - NULL, New -
    SearchTerms: Old - NULL, New - Type=Phone;Inserts=ItemE164,ItemURI,ItemSIP,ItemPII,Type=URI;Inserts=ItemURI,ItemSIP,ItemPII,Type=CallId;Inserts=ItemCALLID,ItemURI,ItemSIP,ItemPII,Type=ConfId;Inserts=ItemCONFID,ItemURI,ItemSIP,ItemPII,Type=IP;Inserts=ItemIP,ItemIPAddr,ItemIPv6Addr,ItemURI,ItemSIP,ItemPII,Type=SIPContents;Inserts=ItemSIP
    Scenarios Added:
      Scenario: Name - AlwaysOn
        Provider List:...................... omitted to save space.

  • Moving photos between iPads that share an iMac (separate accounts)

    I have my own ipad and my mom has her own iPad. We each have our own iTunes account, but I manage them both as separate accounts on my iMac. We recently took a vacation and I have all of the photos in my iphoto (syncs w/ my iPad of course), but would like to also put them on her iPad to share with her friends. I thought I could copy them all, use fast switching and paste to her iphoto (synch to her iPad), but that doesn't work?! How can I get them to her account so they show in her ipad photos? I know I can simply email them to her, but isn't there an easier way to just give her the whole album?

    Double-click on your startup volume, usually called Macintosh HD, on the Desktop.
    At the top level, there is a folder called Users. Double-click it.
    This should show the home folders for all the user accounts on the Mac. There should be one other folder there, called Shared. Things you put in there are accessible from all the user accounts.

  • BO4 - Moving from 2 node cluster to 1 node

    Hi,
    At the moment we have a 2-node cluster in a virtual environment (2 virtual hosts but actually 1 physical host), so there is no added resilience.
    We have been asked by the hardware team to look at consolidating all the bo4 services onto 1 node.
    Plan would be
    1) Create the services which are currently running on node 2 on node 1; they would be created in a stopped and disabled state by default.
    2) Stop the SIA on node 2 and activate the services on node 1 so that node 1 contains all services.
    Is there anything else we would need to do, e.g. deleting node 2 via the CMC?
    Thanks

    Hi Philip,
    I would suggest not recreating all of node 2's services on node 1 exactly as they are on node 2.
    Analyze your system usage and create the services based on that usage and traffic, especially on the processing side and the APS.
    First stop the SIA on node 2 and create a few processing services on node 1.
    Then remove the node:
    In the CCM:
    To remove the node from the cluster you need to delete its SIA from the CCM; you need to have at least one running CMS in the cluster while doing so.
    If that does not work, we can try to delete it from the CMS database and the CMC.

  • Mavericks Server: Fileshare: Setting a Share quota to use

    Hello
    I have a Mavericks Server (version 10.9.1) running here, and all runs well.
    But I'm looking for a feature:
    I want to set a limit, in GB, on the use of one share.
    Example:
    SERVER:  testserver
    SHARE:  Data
    Local User:  luser
    Space on RAID in real:  12 TB
    Connection: SMB and AFP
    Main question:
    What can I do to limit the user "luser" to a maximum of 500 GB in the share "Data", rather than the full available 12 TB?
    There is no OD running, only a few local users to manage. A simple file server...
    Thanks for ideas and help...

    You are correct.  I tend to be lazy and do the following.
    1:  Create a local user account (quotatemplate) and set a quota on that account.
    2:  Use the edquota command to copy quotatemplate's quota to other accounts.
    For example.  Let's say I have a few AD users named alincoln and tjefferson.  I want them to all have a quota of 500 GB.  I would create the quotatemplate user and set a quota of 500 GB.  (yes, I get a home folder created in /Users which is stupid and useless... but more importantly you get the .quota.ops.user and .quota.user files on the root of the drive.)  You can view the settings for quotatemplate with:
    sudo quota -vu quotatemplate
    Then you can apply these settings to other users using:
    sudo edquota -u -p quotatemplate alincoln
    sudo edquota -u -p quotatemplate tjefferson
    etc...
    This should allow you to apply quota to users outside the GUI.
    R-
    Apple Consultants Network
    Apple Professional Services
    Author "Mavericks Server – Foundation Services" :: Exclusively available in Apple's iBooks Store

  • I can not get my imovie clips transferred to my external hard drive. I did it before but I don't remember how I did it. I tried dragging it and I tried moving it under the share tab too. Help!

    I have a Mac 10.8.2 and iMovie 08. I have uploaded movies from my video camera into iMovie, and now I cannot move them to my external hard drive. I was able to do this before, but I don't remember how I did it. I tried to drag them and I tried moving them by using the share tab.
    Any information on how I can go about transferring from iMovie to my external hard drive would be helpful.
                                                                     AND
    Any information on how I can transfer movies and pictures from my camera directly to my external hard drive would be helpful as well.
    Thank You!

    I cannot find this 300GB "Backup" in the Finder, only in the Storage info when I check "About This Mac".
    You are probably using Time Machine to backup your MacBook Pro, right? Then the additional 300 GB could be local Time Machine snapshots.  Time Machine will write the hourly backups to the free space on your hard disk, if the backup drive is temporarily not connected. You do not see these local backups in the Finder, and MacOS will delete them, when you make a regular backup to Time Machine, or when you need the space for other data.
    See Pondini's page for more explanation:   What are Local Snapshots?   http://pondini.org/TM/FAQ.html
    I have restarted my computer, but the information remains the same. How do I reclaim the use of the 300GB? Why is it showing up as "Backups" when it used to indicate "Photos"? Are my photos safe on the external drive?
    You have tested the library on the external drive, so your photos are safe there.
    The local Time Machine snapshot probably now contains a backup of the moved library. Try connecting your Time Machine drive and see whether that reduces the size of your local Time Machine snapshots.

  • How do I use iTunes sharing to share apps within my household

    I heard that sharing apps within a household is OK. So I want to do it, but I am not sure how to do it with iTunes sharing. Help please.

    The apps that you share will be associated with your Apple ID, even if you share them with family members. So if an app needs to be updated, then you shall have to sign into your Apple ID on the computers or iOS devices which have the apps installed in order to update them.
    Probably the easiest way to share the apps is to copy them from your computer to the other computers in your family and then authorize those computers with your Apple ID. You can authorize up to five (5) computers, including your own, to use your iTunes stuff. On a Mac the apps are stored at this location; ~/Music/iTunes/iTunes Media/Mobile Applications/ (I have no idea where they are on a Windows box!) Drag & drop the apps that you wish to share from that location onto the iTunes app on the other Macs or PCs over your LAN or other direct connection. Or copy them from that location onto a USB stick and then drag & drop them onto iTunes on the other computers. After you authorize those computers for your iTunes stuff, the apps can be synced to iOS devices. You authorize the computer to use your iTunes stuff in the computer's iTunes app's Store menu.
    There are other more complicated methods of moving the apps around to share.

  • OPMN fails in cluster environment after the machine reboots.

    Dear All:
    Have you ever met the following problem, or do you have any suggestions for it? Thanks a lot.
    Machines A and B have been set up successfully in a cluster environment, and everything is fine.
    However, after machine B reboots, the Oracle BI Server and Presentation Server on machine B cannot start.
    The error message for Oracle BI Server is:
    [2011-06-02T10:40:34.000+00:00] [OracleBIServerComponent] [NOTIFICATION:1] [] [] [ecid: ] [tid: 1060] Server start up failed: [nQSError: 43079] Oracle BI Server could not start because locating or importing newly published repositories failed.
    The error message for presentation server is:
    [2011-06-02T18:40:24.000+08:00] [OBIPS] [ERROR:1] [] [saw.sawserver.initializesawserver] [ecid: ] [tid: ] Another instance of Oracle Business Intelligence is trying to upgrade/update/create the catalog located at \\?\UNC\10.91.61.158\BIEEShare\CATALOG\SampleAppLite\root. Retry after it finishes.[
    The detail information is:
    1)     Machine A and B both run Windows 2008 server 64bit.
    2)     Machine A runs Admin Server+BI_Server1(managed server). Machine B is configured with “Scale out an existing installation” option and runs BI_Server2(managed server).
    3)     BIEE 11.1.1.3
    4)     In EM, the shared location of the shared repository has been set to something like \\10.91.61.158\shared\RPD. The catalog path has been set to \\10.91.61.158\BIEEShare\CATALOG\SampleAppLite. Both paths can be accessed by machines A and B.
    5)     Before machine B reboots, everything is fine. All the servers could be started.
    6)     After machine B reboots, the managed server BI_server2 could be started successfully. However, “opmnctl startall” would incur the above error message.
    Any suggestion is welcome. Thanks a lot!

    Hello,
    Thanks for answering.
    I am using OBIEE 11.1.1.5.0, enterprise software-only install, with the "Scale out existing BI system" option on HostMachine2. All services and coreapplication systems are running except coreapplication_obis1, coreapplication_obis2, coreapplication_obips1 and coreapplication_obips2 of bi_server2 on HostMachine2.
    Here is my ClusterConfig.xml; as you can see, all changes in this file were made by EM, and the primary and secondary controller settings are already applied. The same file also exists in the same location on HostMachine2.
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <Cluster xmlns="oracle.bi.cluster.services/config/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="oracle.bi.cluster.services/config/v1.1 ClusterConfig.xsd">
    <ClusterProperties>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ClusterEnabled>true</ClusterEnabled>
    <ServerPollSeconds>5</ServerPollSeconds>
    <ControllerPollSeconds>5</ControllerPollSeconds>
    </ClusterProperties>
    <NodeList>
    <Node>
    <NodeType>PrimaryController</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><NodeId>instance1:coreapplication_obiccs1</NodeId>
    <!--HostNameOrIP can be a hostname, IP or virtual hostname-->
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><HostNameOrIP>HostMachine1</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ServicePort>9706</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><MonitorPort>9700</MonitorPort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ListenAddress>HostMachine1.localdomain.com</ListenAddress>
    </Node>
    <Node>
    <NodeType>Server</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><MasterServer>true</MasterServer>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><NodeId>instance1:coreapplication_obis1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><HostNameOrIP>HostMachine1.localdomain.com</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ServicePort>9703</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><MonitorPort>9701</MonitorPort>
    </Node>
    <Node>
    <NodeType>Scheduler</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><NodeId>instance1:coreapplication_obisch1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><HostNameOrIP>HostMachine1.localdomain.com</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ServicePort>9705</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><MonitorPort>9708</MonitorPort>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>Server</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MasterServer>false</MasterServer>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance1:coreapplication_obis2</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine1.localdomain.com</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9702</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9709</MonitorPort>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>Server</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MasterServer>false</MasterServer>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance2:coreapplication_obis1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine2</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9761</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9762</MonitorPort>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>Server</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MasterServer>false</MasterServer>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance2:coreapplication_obis2</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine2</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9763</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9764</MonitorPort>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>SecondaryController</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance2:coreapplication_obiccs1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine2</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9765</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9766</MonitorPort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ListenAddress>HostMachine2</ListenAddress>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>Scheduler</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance2:coreapplication_obisch1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine2</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9770</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9771</MonitorPort>
    </Node>
    </NodeList>
    <SSLProperties>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><SSL>false</SSL>
    <SSLCertificateFile/>
    <SSLPrivateKeyFile/>
    <SSLCACertificateFile/>
    <SSLVerifyPeer>false</SSLVerifyPeer>
    </SSLProperties>
    </Cluster>

  • Oracle 10g R2 without RAC on cluster hardware (Windows Server 2008 cluster)

    Hi,
    Is this possible?
    I ask because my customer wants to pursue this solution.
    I know that Oracle 10gR2 on Windows Server 2008 is not supported, and neither is standalone Oracle on a hardware cluster, but is it possible, or will it be a source of big problems?
    Thanks in advance.
    JRC.

    Hi lain;
    Please check Re: oracle10g on linux cluster
    In my case we were working on a Linux cluster. Our aim was as follows:
    We have 3 servers and they are clustered at the Linux OS level. Let us call those servers A, B and C. We want Oracle 10g to run on server A; when server A goes down, the Oracle 10g instance on server B takes the place of A, and if server B goes down too, the instance on server C takes the place of B. Each server has a different number of databases: on server A we have 3 databases, on server B we have 4 and on server C we have 7.
    What I did for this:
    1. I installed the 10g database software only (did not create a database).
    2. Used netca to create a listener for each database.
    3. Used dbca to create the databases for each server; in the dbca wizard I gave the SAN path for the datafiles.
    4. The Linux admin arranged all the other pieces (i.e. when db1 on server A crashes, server B takes over responsibility for server A and db1 keeps working).
    I hope it helps you.
    Regards
    Helios

  • Why does the non-clustered SQL Server appear in the cluster nodes list

    1. I installed the RS6 instance standalone. Why does it appear in the cluster node list when I query the DMV?
    2. How do I remove RS6 from the cluster node list?
    By running "Set-ClusterOwnerNode -Resource "XXXASQL" -Owners NODE1,NODE2"?
    But how do I find the resource name? I tried the Windows cluster name, the SQL cluster name and the SQL role name, and all of them say it failed to get the cluster object.
    3. How do I set the owners to {}? I tried the below, but it failed.

    IMHO, the sys.dm_os_cluster_nodes DMV is associated with the SQL Server Operating System (SQLOS); it returns one row for each node in the failover cluster configuration.
    As you are running a standalone instance on a cluster node, I am assuming this information is being picked up from the OS and not from the RS6 SQL instance.
    As you have confirmed that Is_cluster is false, and you don't see the RS6 instance in Failover Cluster Manager, I don't think anything is damaged here. Everything looks as expected; don't change the owner node, as it is a standalone instance.
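    On the "how to find the resource name" question, a minimal sketch using the FailoverClusters PowerShell module on a cluster node would be:
    Import-Module FailoverClusters
    Get-ClusterResource | Select-Object Name, ResourceType, OwnerGroup
    The Name column is what Set-ClusterOwnerNode expects for -Resource. Since RS6 is a standalone instance, there is no SQL Server resource for it to appear there, which is consistent with the advice above to leave the owner nodes alone.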
