Cluster and standalone

We built an active/passive cluster on Windows Server 2012 and SQL Server 2012, with shared storage for data, log, and backup.
Can we install standalone instances on the active node, and can they use the shared drives?
Please help.

When the cluster fails over, the shared drives won't be available from that node, and your standalone instance will not start. You can install the standalone SQL instance on local drives, but that can also create unnecessary complications during
patching, since patching entails upgrading the shared components of all SQL instances. You would also have to account for these standalone instances when setting SQL max memory: consider the worst case, when all clustered instances are running on the same
node alongside the standalone instances.
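For illustration, a minimal sketch of capping memory per instance with sp_configure (the values are made up; size them for your own worst-case node):

    -- Run against each instance; 'max server memory' is in MB.
    -- Example: a 64 GB node that could end up hosting two clustered
    -- instances plus a standalone one after failover.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory', 20480; -- ~20 GB for this instance
    RECONFIGURE;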
In short: it is not recommended.
Satish Kartan www.sqlfood.com

Similar Messages

  • Applying a Service Pack on multiple-instance SQL Server 2008 R2 clusters and standalone servers.

    Friends,
    It's my first time applying a service pack to both standalone and clustered SQL 2008 R2 servers. I have multiple instances running in both the standalone and the clustered environment, and I don't know how to apply the pack with multiple instances. Do I need to apply
    the service pack to each instance one by one, or can I just run the pack on each node of the cluster and each standalone box? Will I get a multi-instance selection option during the installation process so that I can select multiple instances at once?

    When installing a service pack for multiple instances on a standalone machine, you just run the installer and it lists all the instances eligible for the update. You can do them one by one or all at the same time; doing them together saves time and
    energy. If you selected multiple instances, you don't have to do anything else.
    In the case of a cluster with multiple instances, you need to be a little careful.
    For example, assume you have a two-node cluster, Node 1 and Node 2, with two instances: Instance 1 active on Node 1 and Instance 2 active on Node 2. What I would do in this case is first fail over one instance to the passive node. Assume I failed over Instance 2 to
    Node 1. Now Node 1 is the active node for both Instance 1 and Instance 2, and Node 2 is passive for both instances. At this point I run the service pack on Node 2. Once patching is done, I reboot the node and then fail over the instances one by one to Node
    2. At this point both instances get upgraded and are active on Node 2. Now Node 1 is passive, so run the update there. Once the service pack is installed, reboot and then fail Instance 1 back to Node 1. Note: I would also fail Instance 2 over to
    Node 1 just to confirm that failover works completely after the service pack update.
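    As a quick sanity check after patching, you can confirm that each instance picked up the service pack; a small query using standard server properties (run it against every instance):

        -- ProductLevel reports e.g. 'SP2'; ProductVersion is the build number.
        SELECT SERVERPROPERTY('ProductVersion') AS product_version,
               SERVERPROPERTY('ProductLevel') AS service_pack_level;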
    Regards, Ashwin Menon. My blog: http://sqllearnings.com

  • Using Datasources.xml in backup of Hyper-V cluster and standalone Hyper-V hosts

    Hi!
    I have a DPM 2012 server which I have used to back up standalone Hyper-V hosts and an Exchange 2010 active DAG node. Yesterday I added Hyper-V cluster backup, and I have now generated Datasources.xml using .\DSConfig.ps1 on one of the
    Hyper-V nodes.
    My question is: do I have to manually add the other protected data sources (Exchange, VMs from standalone hosts) to Datasources.xml? Or is this XML used only for CSV clusters?
    Thanks in advance for your answers!
    Kruno

    Hi,
    Yes, but the key is wrong - this is required as part of the CSV serialization configuration.
    On the DPM server, copy and paste the text below into Notepad, save it as MaxAllowedParallelBackups.reg, then right-click the file and select Merge.
    Windows Registry Editor Version 5.00
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\2.0\Configuration\MaxAllowedParallelBackups]
    "Microsoft Hyper-V"=dword:00000001
    Please review these two sources to assist you.
    http://social.technet.microsoft.com/wiki/contents/articles/17493.protecting-hyper-v-virtual-machines-with-system-center-dpm-2012.aspx
    http://blogs.technet.com/b/dpm/archive/2010/12/09/system-center-data-protection-manager-2010-hyper-v-protection-configuring-cluster-networks-for-csv-redirected-access.aspx
    Regards, Mike J. [MSFT]

  • Some errors in the cluster alert log, but the cluster and database work normally.

    hi.everybody
    I am using RAC with ASM; the RAC has two nodes and ASM has three disk groups (+DATA, +CRS, +FRA). The database is 11gR2, and ASM uses ASMLib, not raw devices.
    When I start the cluster, or it auto-starts after an OS reboot, the cluster alert log shows errors such as the following:
    ERROR: failed to establish dependency between database rac and diskgroup resource ora.DATA.dg
    [ohasd(7964)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    I do not know what causes these errors, but the cluster and database start and run normally.
    I do not know whether these errors will affect the service.
    Thanks, everybody!

    Has anyone seen the same issue?

  • When setting up a converged network in VMM, the cluster and live migration virtual NICs are not working

    Hello Everyone,
    I am having issues setting up a converged network in VMM. I have been working with MS engineers to no avail, and I am very surprised at their level of expertise: they had no idea what a converged network even was. I had far more
    experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone including our consultants says my setup is correct. 
    What I want to do:
    I have servers with 5 NICs and want to use 3 of the NICs for a team, and then configure cluster, live migration, and host management as virtual network adapters. I have created all my logical networks and a port profile with the uplink defined as the team and
    the networks selected. I created a logical switch and associated the port profile. When I deploy the logical switch and create the virtual network adapters, the logical switch works for VMs and my management NIC works as well. The problem is that the cluster and live
    migration virtual NICs do not work. The correct VLANs get pulled in for the corresponding networks, and if I run get-vmnetworkadaptervlan it shows cluster and live migration in VLANs 14 and 15, which is correct. However, the NICs do not work at all.
    I finally decided to do this via the host in PowerShell and everything works fine, which means this is definitely an issue with VMM. I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard
    switch.
    I am quickly losing faith in VMM.
    Hosts are 2012 R2 and VMM is 2012 R2, all fresh builds with the latest drivers.
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
    Are you sure you are teaming the correct physical adapters, where the VLANs are trunked through the connected ports?
    Note: if you create the teaming configuration outside of VMM and then import the hosts into VMM, VMM will not recognize the configuration.
    The details should be all in this whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • After bringing the cluster and ASM up, I'm not able to start the RDBMS instance.

    I am not able to bring the database up.
    The cluster and ASM are up and running.
    ORA-01078: failure in processing system parameters
    LRM-00109: could not open parameter file '/oracle/app/oracle/product/11.1.0/db_1/dbs/initTMISC11G.ora'

    That is where the confusion is: I could not find it under $ORACLE_HOME/dbs.
    However, the log file indicates that it is using an spfile located at:
    spfile = +DATA_TIER/dintw10g/parameterfile/spfile_dintw10g.ora
    Now I guess it checks only the dbs directory.
    Could you please tell me how I can bring it up now?
    Just for information: the ASM instance is up.
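    One common way out, assuming the spfile path shown in your log is correct, is to create a one-line pfile stub under $ORACLE_HOME/dbs that points at the spfile in ASM; note that the SID in the stub's filename must match the instance you are starting:

        # $ORACLE_HOME/dbs/initTMISC11G.ora -- stub pointing at the spfile in ASM
        SPFILE='+DATA_TIER/dintw10g/parameterfile/spfile_dintw10g.ora'

    After that, STARTUP reads the spfile through the stub, provided the ASM disk group is mounted.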

  • Tax calculation in backend and in SRM (classic and standalone scenario)

    Hello,
    I have an issue regarding the calculation of tax. Two scenarios are used side by side: classic and standalone. One node of the organizational structure is using the classic scenario, and another node is using standalone, so I have two accounting systems defined (one for each node).
    Requirement is:
    - For classic scenario, setting the tax calculation in backend
    - For standalone scenario, setting the tax calculation to no tax calculation
    The problem is that I cannot customize the system this way; I do not know of any means to set up two ways of tax calculation. Would you have any idea how to do this?
    Thanks,
    Patrick

    Hi
    Please go through this ->
    http://help.sap.com/saphelp_srm50/helpdata/en/f4/8b1d40bb37e569e10000000a155106/frameset.htm
    Note 848164 - SUS3.0: FAQ: Logic of Tax calculation within SUS
    Note 958273 - FAQ: Taxes in SRM-SUS
    Note 999896 - EBP tax codes not correctly mapped to back-end tax codes
    Note 908659 - Follow-up note for Note 509594
    Note 931198 - Missing Tax Code in Limit Shopping Cart
    Note 430202 - Delivery address in shopping cart in BAPIEKPOC
    Note 881086 - Country-specific tax calculation
    Note 703292 - TAX. Input/Output tax in SAP Supplier Self-Services with TTE
    Note 513250 - ERS. Tax calculation with the new 3.0 function
    Note 390861 - Transferring tax table for ERS from backend system
    Note 657146 - Enhancement for US tax calculation: Functions
    Note 510142 - EDI-Customizing: Tax code tax-exempted transactions
    Note 317040 - MR08: Message M8267
    Note 49436 - MR44: No message if the tax amount is different
    Note 84127 - Tax Jurisdiction Code for EDI/ALE and Intercompany
    Also try the BAdI BBP_DET_TAX_BADI.
    Regards
    - Atul

  • Using ASM in a cluster, and new snapshot feature

    Hi,
    I'm currently studying ASM and trying to find some answers.
    From what I understand and have experienced so far, ASM can only be used as a local storage solution; in other words, it cannot be accessed over the network. Is this correct?
    How does the RDBMS database connect to the ASM instance? Which process or what type of connection does it use? It's apparently not using a listener, although the instance name is part of the database file path. How does this work, please?
    How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    As of 11g Release 2, ASM provides a snapshot feature. I assume this can be used for the purpose of backup, but then each database should use its own disk group, and I will still need to use ALTER DATABASE BEGIN BACKUP, correct?
    Thanks!

    Markus Waldorf wrote:
    Hi,
    I'm currently studying ASM and trying to find some answers.
    From what I understand and experienced so far, ASM can only be used as a local storage solution. In other words it cannot be used by network access. Is this correct?
    Well, you are missing one point: it depends entirely on the architecture you are going to use. If you use ASM for a single node, it is available right there. If it is installed for a RAC system, an ASM instance runs on each node of the cluster and manages the storage lying on the shared storage. The ASMB process is responsible for exchanging messages, taking responses, and pushing the information back to the RDBMS instance.
    How is the RDBMS database connecting to the ASM instance? Which process or what type of connection is it using? It's apparently not using a listener, although the instance name is part of the database file path. How does this work?
    A listener is not needed, Markus, as its job is to create server processes, which is NOT the job of the ASM instance. The ASM instance connects the client database to itself immediately when the first request comes from that database to perform any operation over the disk group. As I mentioned above, ASMB then carries the request and response traffic back and forth.
    How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    Each node has its own ASM instance running locally. In the case of RAC, the ASM SID is +ASMn, where n is 1, 2, ..., up to the number of nodes that are part of the cluster.
    As of 11g Release 2, ASM provides a snapshot feature. I assume this can be used for the purpose of backup, but then each database should use its own disk group, and I will still need to use ALTER DATABASE BEGIN BACKUP, correct?
    You are probably talking about the ACFS snapshot feature of 11.2 ASM. This is not for taking a backup of a disk group; it is more like an OS-level backup of a mount point created on top of ASM's ACFS. Oracle provides this feature so that you can, for example, back up an Oracle Home running on an ACFS mount point; in the case of an OS-level failure, such as someone deleting a folder from that mount point, you can get it back through the ACFS snapshot. For the disk group, the only backup available is a metadata backup (and restore), but that does NOT bring the database's data back. For a database-level backup, you would still need to use RMAN.
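    To see this relationship from the RDBMS side, you can query the ASM views exposed inside the database instance; a small illustration (standard v$ views, no listener involved):

        -- Run from the RDBMS instance, not the ASM instance.
        -- Lists the disk groups this database can reach via ASMB.
        SELECT group_number, name, state, type, total_mb, free_mb
        FROM v$asm_diskgroup;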
    HTH
    Aman....

  • Question about the Cluster and Column width in Graphs in Illustrator CS2

    How do they relate to each other? How do the percentages work together? Is there any information about how you can mathematically figure out how much area each cluster width takes? If you make a graph and make both the cluster and column widths 100%, the entire area is filled. If you make the column width 50%, the bar will sit exactly in the center of the cluster width, but if you make the cluster width 75%, the bars move closer together.

    Gregg,
    The set of bars that represent each row of data in the graph spreadsheet is called a "cluster".
    Let's let
       W = the width of the whole area inside the graph axes
       R = the number of rows of data
       C = the number of columns of data
    Then the maximum potential space to use for each row of data is W/R. The cluster width controls how much of this potential space is assigned to each row. This cluster space is then divided by C, giving the maximum potential width of each bar. The maximum potential width for each bar is then multiplied by the column width percentage to get the actual width of each bar.
    So the actual width of each bar is (((W/R) * Cluster width) / C) * Column width.
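    For example, a worked case with illustrative numbers: with W = 600 pt, R = 3 rows, C = 2 columns, a cluster width of 75%, and a column width of 80%, each bar is ((600/3) * 0.75 / 2) * 0.8 = 60 pt wide.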
    The graphs illustrated below have three rows of data, with two columns each. The solid colored areas in the background have the same cluster width as the main graph, but they all have a column width of 100%. This lets you see more easily how the gradient bars that have a column width of 80% are using up 80% of their potential width.
    Notice that as the Cluster percentage gets lower, the group of bars that represent a row of data get farther apart, so that you can more easily see the "clumping" of rows. When the Cluster percentage is 100%, the columns within each row are no closer to each other than they are to the columns in other rows.
    As the Column percentage gets lower, the bars for each data value occupy less of the space within their row's cluster, so that spaces appear between every bar, even in the same row.

  • ECC 6.0 installation on an HP-UX cluster with an Oracle database

    Hi,
    I need to install ECC 6.0 on an HP-UX cluster with an Oracle database. Can anybody please suggest something or send me any documentation?
    Thanks,
    venkat.

    Hi Venkat,
    Please download installation guide from below link:-
    https://websmp105.sap-ag.de/instguides
    Regards,
    Anil

  • Solaris 8 patches added to Solaris 9 patch cluster and vice versa

    Has anyone noticed this? The Solaris 8 and 9 patch cluster readmes show that Solaris 9 patches have been added to the Solaris 8 cluster, and Solaris 8 patches have been added to the Solaris 9 cluster. What's the deal? I haven't found any information about whether or not this is a mistake.

    Desiree,
    Solaris 9's kernel patch 112233-12 was the last revision for that particular patch number. The individual zipfile became so large that it was subsequently supplanted by 117191-xx and that has also been supplanted when its zipfile became very large, by 118558-xx.
    Consequently you will never see any newer version than 112233-12 on that particular patch.
    What does uname -a show you for that system?
    Solaris 8 SPARC was similarly affected, for the 108528, 117000 and 117350 kernel patches.
    If you have login privileges to Sunsolve, find Infodoc 76028.

  • Cluster and Backup server

    Hello guys,
    I want to use a clustered Oracle server, and another server in another location as a backup server that copies the database online from the cluster server.
    I also want to be able to make the backup server the primary server if something goes wrong with the cluster server.
    What steps should I take? If there is any document to read, I'll be thankful.
    Thanks in advance

    Hi,
    What do you mean by cluster? If you want HA (High Availability) with an active/passive cluster, you can go with a 3rd-party cluster vendor, like HACMP from IBM, ServiceGuard from HP, and so on. If you want to build this with Oracle technologies, you are talking about Oracle RAC One Node. Of course, you would go with Oracle RAC (Real Application Clusters) if you want an active/active cluster. That covers clustering and High Availability.
    On another point, if you want to create a DR (Disaster Recovery) copy of your primary database, you would consider implementing Oracle Data Guard.
    Oracle has rich documentation of blueprints and best practices; my sincere advice is to get familiar with it:
    http://www.oracle.com/technetwork/database/features/availability/maa-090890.html
    Regards,
    Sve

  • Cluster and space consumption

    I'm confused about something in regards to clusters. I have about 1 GB worth of data when it is not in a cluster, just in a normal table. If I create a cluster and insert the data into this cluster, it consumes more space, and that was fully expected. When I tried the first few times it took 13 GB of space; the tablespace was 30 GB large and contained a few other tables. Then I cleaned out the tablespace so only 2% of the space was used, added another 30 GB to the tablespace, and ran the test once more. Now it is consuming more than 40 GB and still has not completed. So my question is: does the size the cluster consumes depend on the size and available space of the tablespace it is in? Is there a way to limit the size it is allowed to use, other than putting it in a separate tablespace?

    Hi Marius,
    First, by "cluster" you mean sorted cluster tables, right?
    http://www.dba-oracle.com/t_sorted_hash_clusters.htm
    So my question is: does the size the cluster consumes depend on the size and available space of the tablespace it is in?
    These are "hash" clusters, and you govern the range of hash cluster keys, and hence the range where Oracle will store the rows.
    http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10739/hash.htm
    Oracle Database uses a hash function to generate a distribution of numeric values, called hash values, that are based on specific cluster key values. The key of a hash cluster, like the key of an index cluster, can be a single column or composite key (multiple column key). To find or store a row in a hash cluster, the database applies the hash function to the cluster key value of the row. The resulting hash value corresponds to a data block in the cluster, which the database then reads or writes on behalf of the issued statement.
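    One point worth adding: a hash cluster pre-allocates space at creation time based on its SIZE and HASHKEYS parameters, not on the size of the tablespace; a minimal sketch (names and numbers are illustrative):

        -- Pre-allocates roughly HASHKEYS * SIZE bytes up front,
        -- regardless of how many rows are actually inserted.
        CREATE CLUSTER order_cluster (order_id NUMBER)
          SIZE 512 HASHKEYS 1000000
          TABLESPACE users;

        CREATE TABLE orders (
          order_id NUMBER,
          status VARCHAR2(20)
        ) CLUSTER order_cluster (order_id);

    Setting SIZE and HASHKEYS deliberately, rather than oversizing them, is the usual way to keep a hash cluster's footprint bounded.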

  • Cluster and Read Write XML

    In my applications I allow users to save their settings. I used to do this in an INI file, so I wrote a VI that could write any cluster, and it worked OK. When I discovered that newer versions of LabVIEW can read/write from/to XML, I switched immediately because it has some advantages for me, but I am having some trouble.
    Every time I want to save a cluster I have to use
    Flatten To XML -> Escape XML -> Write XML
    and to load
    Load From XML -> Unescape XML -> Unflatten from XML.
    I also do other important things each time I save or load settings, so it seems reasonable to put all this into just two subVIs (one for loading, one for saving). The problem is that I want to use them with any cluster.
    What I want can be summarized as follows:
    - SaveSettings.vi
    --Inputs:
    ---Filename
    ---AnyCluster
    ---Error In
    --Outputs
    ---Error Out
    -LoadSettings.vi
    --Inputs:
    ---Filename
    ---AnyCluster
    ---Error In
    --Outputs
    ---DataFromXML
    ---Error Out
    I have tried using variants and references, but I was not able to make a generic subVI (a subVI that doesn't know what is in the cluster). I don't want to use a polymorphic VI because that would demand one load/save pair per type of settings.
    Thanks,
    Hernan

    If I am correct, you still need to wire the data type to Variant To Data. How can you put into a subVI ALL that is needed to handle the read/write of the cluster? I don't want to put any Flatten To XML or Unflatten From XML outside.
    The solution I came up with for INI files was passing a reference to a cluster control, but it is really uncomfortable because I have to iterate through all the items.
    When a control has an "Anything" input, is there any way to wire that input to a control so that it remains "Anything"?
    Thanks
    Hernan

  • Cluster and transparent table

    Hi
    Can someone explain to me what is meant by cluster and transparent tables?
    Points will be rewarded.
    Regards
    Raghu Ram.

    Hi Raghu,
    just search for this in the ABAP forum; there are tons of threads about the topic. Here is just the most recent example:
    transparent vs pooled vs cluster tables
    hope this helps
    ec
