Geographic Cluster and zones

Hello,
I am attempting to set up a geographic cluster to fail over an Informix application to our disaster recovery (BCP) site. I have a Sun Fire V440 in each location running Solaris 10 08/07. The application currently runs on a Solaris 8 02/04 server and must continue to do so. The catch is that the server at the BCP site is also used as a QA server. My thought was to create two Solaris 8 containers on the Solaris 10 server there: one for failover from the home office and the other for QA. At the home office site, the server would run one Solaris 8 container. We are using EMC SRDF for replication and storage of the Informix database. The container on the home office server would fail over to the BCP container on the server at the BCP site. My questions are: 1) Is this scenario possible, and 2) How would I configure the clustering on the servers? Should I be using the data services for Containers and for Informix? I have so far created one-node clusters at each site and was in the process of configuring the resource groups, but was unsure how to proceed. Thanks for any help anyone can give.

I work for the Sun Cluster Geographic Edition (SCGE) team so I hope I can give some definitive answers...
First, I'm slightly confused as to whether this is a single cluster with geographically split nodes or two single-node clusters joined together with SCGE. From re-reading your posting it looks like it is the latter, which, although far from optimal, is possible. The point to make is that any failure on the primary site is probably going to have to be treated as a disaster. You will need to decide whether the primary-site node will be back up any time soon and, if not, take over the service on the remote (DR) site. Once you've taken over the service, the reverse re-sync is going to be quite expensive. If you'd had a local cluster at the primary site, then only site failures would have forced this decision.
Back to the configuration. You'll need to install single-node Solaris Clusters (3.2 01/09) at each site. You would then create an HA-container agent resource group/resource to hold your Solaris 8 branded zone. You'd then put your Informix database in this container. You'd do the same at the remote site. Your options for storage are raw DID devices or a file system. You can't use SVM with SRDF yet, and I don't think there is a supported way to map VxVM volumes into the HA container (though I may be wrong). Personally, I'd use UFS on a raw DID (or VxVM) device in the global zone, mounted with forcedirectio, and map that into the HA container. (http://docs.sun.com/app/docs/doc/820-5025/ciagbcbg?l=en&a=view)
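To make that concrete, here is a rough command sketch for one of the single-node clusters. The resource names, zone name and mount point are made up, and the sczbt (HA container) agent workflow is the standard one from the 3.2 data service docs; treat this as an outline, not a tested procedure:

```shell
# Register the resource types the HA-container setup relies on
clresourcetype register SUNW.HAStoragePlus SUNW.gds

# Resource group to hold the storage and the Solaris 8 branded zone
clresourcegroup create s8zone-rg

# UFS on a DID device, mounted forcedirectio in the global zone
# (/zones/s8zone and the resource name are placeholders)
clresource create -g s8zone-rg -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/zones/s8zone s8zone-hasp-rs

# The sczbt agent is driven by a config file; edit it to point at
# your zone and storage resource, then register it
vi /opt/SUNWsczone/sczbt/util/sczbt_config
/opt/SUNWsczone/sczbt/util/sczbt_register \
    -f /opt/SUNWsczone/sczbt/util/sczbt_config

# Bring the group online and under management
clresourcegroup online -M s8zone-rg
```

You would repeat this at the DR site, then join the two clusters into an SCGE partnership and put s8zone-rg in a protection group with the SRDF replication resource.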
I don't know off-hand whether the Informix agent will run together with an HA-container agent and a Solaris 8 branded container. I'll ask a colleague.
If you need any more information, it might be more helpful to contact me directly at Sun. (First.Last)
Regards,
Tim
---

Similar Messages

  • Failover Zones / Containers with Sun Cluster Geographic Edition and AVS

    Hi everyone,
    Is the following solution supported/certified by Oracle/Sun? I did find some docs saying it is but cannot find concrete technical information yet...
    * Two sites with a 2-node cluster in each site
    * 2x Failover containers/zones that are part of the two protection groups (1x group for SAP, other group for 3rd party application)
    * Sun Cluster 3.2 and Geographic Edition 3.2 with Availability Suite for SYNC/ASYNC replication over TCP/IP between the two sites
    The Zones and their application need to be able to failover between the two sites.
    Thanks!
    Wim Olivier

    Fritz,
    Obviously, my colleagues and I, in the Geo Cluster group build and test Geo clusters all the time :-)
    We have certainly built and tested Oracle (non-RAC) configurations on AVS. One issue you do have, unfortunately, is that of zones plus AVS (see my Blueprint for more details: http://wikis.sun.com/display/BluePrints/Using+Solaris+Cluster+and+Sun+Cluster+Geographic+Edition). Consequently, you can't build the configuration you described. The alternative is to sacrifice zones for now and wait for the fixes to RG affinities (no idea on the schedule for this feature), or find another way to do this, probably hand-crafted.
    If you follow the OHAC pages (http://www.opensolaris.org/os/community/ha-clusters/) and look at the endorsed projects you'll see that there is a Script Based Plug-in on the way (for OHACGE) that I'm writing. So, if you are interested in playing with OHACGE source or the SCXGE binaries, you might see that appear at some point. Of course, these aren't supported solutions though.
    Regards,
    Tim
    ---

  • Renaming default Datacenter, Cluster, and Datastore object names in EVO:RAIL

    Hello,
    The Datacenter, Cluster, and vSAN Datastore objects in my newly-installed EVO:RAIL appliance all default to a label of "MARVIN-...". Is it safe to rename these labels appropriate to my environment, such as a geographic location for my Datacenter object? I suspect that it is okay because the underlying UID will not change, but am looking for confirmation from someone who has successfully done this in a lab or production environment and who has also been successful in applying EVO:RAIL patches afterward.
    Thanks in advance.

    Just to close the loop, I successfully renamed the datacenter, cluster, and vSAN objects in vCenter Server.
    I kept a close watch on Log Insight at each step, and did log one error that I found suspicious following the object rename of the vSAN datastore:
    2015-01-29T19:03:27.158Z <My 4th Host's FQDN> Hostd: [783C2B70 info 'DiskLib'] DISKLIB-LINK :DiskLinkGetSpaceUsedInfo: Failed to get the file system unique id for file vsan://118eb054-f123-469a-d542-XXXXXXXXXXXX, using id as "remote"
    Where in my case the 4th host happens to currently be the HA master. I'm guessing this was just a transient error as it hasn't reappeared in the last 30 minutes, but I'll keep an eye out for it. There was no corresponding error viewed in vCenter Server, and I seem to be able to refresh the vSAN storage counter without issue.
    Thanks again.

  • Some error in cluster alertlog but the cluster and database is normal used.

    hi.everybody
    I am using RAC + ASM; RAC has two nodes and ASM has three disk groups (+DATA, +CRS, +FRA). The database is 11gR2, and ASM uses ASMLib, not raw devices.
    When I start the cluster, or it auto-starts after an OS reboot, the cluster alert log shows errors such as:
    ERROR: failed to establish dependency between database rac and diskgroup resource ora.DATA.dg
    [ohasd(7964)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    I do not know what causes these errors, but the cluster and database start and run normally.
    I do not know whether these errors will affect the service.
    thanks everybody!

    Anyone have the same question?

  • When setting up converged network in VMM cluster and live migration virtual nics not working

    Hello Everyone,
    I am having issues setting up converged networking in VMM. I have been working with MS engineers to no avail. I am very surprised at the expertise of the MS engineers; they had no idea what a converged network even was. I had way more experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone, including our consultants, says my setup is correct.
    What I want to do:
    I have servers with 5 NICs and want to use 3 of the NICs for a team, and then configure cluster, live migration and host management as virtual network adapters. I have created all my logical networks, and a port profile with the uplink defined as a team and networks selected. I created a logical switch and associated the port profile. When I deploy the logical switch and create virtual network adapters, the logical switch works for VMs and my management NIC works as well. The problem is that the cluster and live migration virtual NICs do not work. The correct VLANs get pulled in for the corresponding networks, and if I run get-vmnetworkadaptervlan it shows cluster and live migration in VLANs 14 and 15, which is correct. However, the NICs do not work at all.
    I finally decided to do this via the host in PowerShell and everything works fine, which means this is definitely an issue with VMM. I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard switch.
    I am really losing faith in VMM fast. 
    Hosts are 2012 R2 and VMM is 2012 R2 all fresh builds with latest drivers
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
    Are you sure you are teaming the correct physical adapters, where the VLANs are trunked through the connected ports?
    Note; if you create the teaming configuration outside of VMM, and then import the hosts to VMM, then VMM will not recognize the configuration. 
    The details should be all in this whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • After bring the Cluster and ASM is up, I'm not able to start the rdbms ins.

    I am not able to bring the database up.
    Cluster and ASM is up and running.
    ORA-01078: failure in processing system parameters
    LRM-00109: could not open parameter file '/oracle/app/oracle/product/11.1.0/db_1/dbs/initTMISC11G.ora'

    That is where the confusion is: I could not find it under O_H/dbs.
    However, the log file indicates that it is using an spfile, which is located at:
    spfile = +DATA_TIER/dintw10g/parameterfile/spfile_dintw10g.ora
    Now it checks only the /dbs directory, I guess.
    Could you please tell me how I can bring it up now?
    Just for information: the ASM instance is up.

  • Using ASM in a cluster, and new snapshot feature

    Hi,
    I'm currently studying ASM and trying to find some answers.
    From what I understand and experienced so far, ASM can only be used as a local storage solution. In other words it cannot be used by network access. Is this correct?
    How is the RDBMS database connecting to the ASM instance? Which process or what type of connection is it using? It's apparently not using listener, although the instance name is part of the database file path. How does this work please?
    How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    As of 11g Release 2, ASM provides a snapshot feature. I assume this can be used for the purpose of backup, but then each database should use its own diskgroup, and I will still need to use alter database begin backup, correct?
    Thanks!

    Markus Waldorf wrote:
    Hi,
    I'm currently studying ASM and trying to find some answers.
    From what I understand and experienced so far, ASM can only be used as a local storage solution. In other words it cannot be used by network access. Is this correct?
    Well, you are missing the point that this depends entirely on the architecture you are going to use. If you use ASM on a single node, it is available right there. If it is installed for a RAC system, an ASM instance runs on each node of the cluster and manages the storage, which lies on the shared disks. The ASMB process is responsible for exchanging messages, taking responses and pushing the information back to the RDBMS instance.
    How is the RDBMS database connecting to the ASM instance? Which process or what type of connection is it using? It's apparently not using a listener, although the instance name is part of the database file path. How does this work please?
    A listener is not needed, Markus, as its job is to create server processes, which is NOT the job of the ASM instance. The ASM instance connects the client database to itself immediately when the first request comes from that database to perform any operation on a disk group. As I mentioned above, ASMB carries the request/response traffic forward from there.
    How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    Each node has its own ASM instance running locally. In the case of RAC, the ASM SID is +ASMn, where n is 1, 2, ..., up to the number of nodes in the cluster.
    As of 11g Release 2, ASM provides a snapshot feature. I assume this can be used for the purpose of backup, but then each database should use its own diskgroup, and I will still need to use alter database begin backup, correct?
    You are probably talking about the ACFS snapshot feature of 11.2 ASM. This is not for taking a backup of a disk group; it is more like an OS-level backup of the file system created on ASM's ACFS mount point. Oracle provides this feature so that you can take a backup of, say, an Oracle home running on an ACFS mount point, and in the case of an OS-level failure, such as someone deleting a folder from that mount point, you can get it back from the ACFS snapshot. For the disk group itself, the only backup available is the metadata backup (and restore), but that does NOT get the database's data back for you. For a database-level backup, you would still need to use RMAN.
    HTH
    Aman....
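    As a quick sanity check of the per-node instance naming described above, on a RAC node you might run something like the following (the SID and process names shown in the comments are illustrative, not taken from the thread):

    ```shell
    # Cluster-wide ASM status via Grid Infrastructure's srvctl
    srvctl status asm

    # The local ASM instance's background processes,
    # e.g. asm_pmon_+ASM1 on node 1
    ps -ef | grep '[a]sm_pmon'

    # The ASMB bridge process runs on the RDBMS side when the
    # database instance talks to ASM, e.g. ora_asmb_<dbsid>
    ps -ef | grep '[o]ra_asmb'
    ```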

  • Question about the Cluster and Column width in Graphs in Illustrator CS2

    How do they relate to each other? How do the percentages work together? Is there any information about how you can mathematically figure how much area each cluster width takes? If you make a graph and make both the cluster and column widths 100%, the entire area is filled. If you make the column width 50%, the bar will sit exactly in the center of the cluster width, but if you make the cluster width 75%, the bars move closer together.

    Gregg,
    The set of bars that represent each row of data in the graph spreadsheet is called a "cluster".
    Let's let
       W = the width of the whole area inside the graph axes
       R = the number of rows of data
       C = the number of columns of data
    Then the maximum potential space to use for each row of data is W/R. The cluster width controls how much of this potential space is assigned to each row. This cluster space is then divided by C, giving the maximum potential width of each bar. The maximum potential width for each bar is then multiplied by the column width percentage to get the actual width of each bar.
    So the actual width of each bar is (((W/R) * Cluster width) / C) * Column width.
    The graphs illustrated below have three rows of data, with two columns each. The solid colored areas in the background have the same cluster width as the main graph, but they all have a column width of 100%. This lets you see more easily how the gradient bars that have a column width of 80% are using up 80% of their potential width.
    Notice that as the Cluster percentage gets lower, the group of bars that represent a row of data get farther apart, so that you can more easily see the "clumping" of rows. When the Cluster percentage is 100%, the columns within each row are no closer to each other than they are to the columns in other rows.
    As the Column percentage gets lower, the bars for each data value occupy less of the space within their row's cluster, so that spaces appear between every bar, even in the same row.
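    As a numeric check of the formula above (the plot-area width and percentages here are made-up values, not from the graphs discussed):

    ```shell
    # W = 300 pt plot area, R = 3 rows, C = 2 columns,
    # 75% cluster width, 80% column width:
    # ((300/3) * 0.75 / 2) * 0.80 = 30 pt per bar
    awk 'BEGIN {
        W = 300; R = 3; C = 2
        cluster = 0.75; column = 0.80
        bar = ((W / R) * cluster / C) * column
        printf "bar width = %.1f pt\n", bar   # prints: bar width = 30.0 pt
    }'
    ```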

  • ECC 6.0 installation on HPUX Cluster and oracle database

    Hi,
    I need to install ECC 6.0 on an HPUX cluster with an Oracle database. Can anybody please advise me or send me a document?
    Thanks,
    venkat.

    Hi Venkat,
    Please download installation guide from below link:-
    https://websmp105.sap-ag.de/instguides
    Regards,
    Anil

  • Solaris 8 patches added to Solaris 9 patch cluster and vice versa

    Has anyone noticed this? The Solaris 8 and 9 patch cluster READMEs show that Sol 9 patches have been added to the Sol 8 cluster and Sol 8 patches have been added to the Sol 9 cluster. What's the deal? I haven't found any information about whether or not this is a mistake.

    Desiree,
    Solaris 9's kernel patch 112233-12 was the last revision for that particular patch number. The individual zipfile became so large that the patch was supplanted by 117191-xx, which in turn was supplanted by 118558-xx when its zipfile also became very large.
    Consequently you will never see any newer version than 112233-12 on that particular patch.
    What does <b>uname -a</b> show you for that system?
    Solaris 8 SPARC was similarly affected, for the 108528, 117000 and 117350 kernel patches.
    If you have login privileges to Sunsolve, find <font color="teal">Infodoc 76028</font>.
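    A quick way to see which of those kernel patch lines a system is on (patch numbers as given above; showrev is the stock Solaris 8/9 tool, and the output shown is only an example of the form):

    ```shell
    # The kernel identifier usually embeds the active kernel patch,
    # e.g. "Generic_118558-xx" on a patched Solaris 9 box
    uname -a

    # List installed revisions of the kernel patch lines mentioned above
    showrev -p | egrep '112233|117191|118558'
    ```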

  • Patchadd and zones??

    I feel dumb asking this question, especially since I think it's due to my lack of knowledge of zones. I am using Solaris 10 with only a global zone, at least that's what I think. I ran "zoneadm list" and it listed only "global".
    I have downloaded the uwc patch 118540-42 [yes, I posted something similar on the JMS forum] and have tried to add it with patchadd. I get an error saying "Package SUNWuwc from directory SUNWuwc in patch 118540-42 is not installed on the system" and the patch does not install [I cannot see it with showrev -p nor in /var/sadm/patch]. I also tried the "-G" option, but no difference.
    Then I tried using Solaris Management Console 2.1 using its patch tool. Action -> add patch and I get an error saying "Some or all of the software packages that patch 118540-42 patches are not installed on the target host.".
    So I decided to look for SUNWuwc with pkginfo and there is no listing; in fact, none of the Java comm suite is found [nor most of the stuff in /opt and /usr/local]. Yet I know JES is all in /opt and is working fine. I have never created a zone on this system and thought I had installed everything into just the global zone. I have had this system running for over a year and some updates have happened [just not sure what changed].
    So I am wondering: is there another way to add patches with Sol 10? Another flag or utility? Is there a way to know which zone one is working in, or has installed stuff to? Is there a way at this point to stop using zones altogether?
    Maybe I am missing a Sol 10 patch?
    I had also run into the file /var/sadm/install/gz-only-packages, and sure enough all my installed packages were listed there [all the JES stuff, web server, etc.]. I thought this meant "global zone only", and I only have a global zone, so why can I not patch?
    Apologies for the confusion [and lack of more details]; although I have worked with Solaris for years, this new version 10 and zones are a bit confusing for me. Plus, the problem might not be zones-related at all :^)
    Thanks in Advance
    -James

    Try zoneadm list -vc;
    this shows all zones on the system, whether they are configured, running or not running.
    Next, try to get the Solaris 10 Recommended patch cluster from sunsolve.sun.com and add it, but do it in single-user mode.
    /tony
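    A sketch of the checks implied above, before retrying the patch (package and patch IDs are the ones from the thread; the paths are the standard Solaris 10 ones):

    ```shell
    # All zones, whatever their state
    zoneadm list -vc

    # Is the package the patch targets actually installed?
    pkginfo -q SUNWuwc && echo "SUNWuwc installed" || echo "SUNWuwc NOT installed"

    # Was it recorded as installed in the global zone only?
    grep SUNWuwc /var/sadm/install/gz-only-packages

    # Apply to the current (global) zone only, then confirm it took
    patchadd -G 118540-42
    showrev -p | grep 118540
    ```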

  • Cluster and Backup server

    Hello guys;
    I want to use a clustered Oracle server, and another server in another location as a backup server, copying the database online from the cluster server.
    If something goes wrong with the cluster server, I want to make the backup server the primary server.
    What steps should I take? If there is any document to read, I'll be thankful.
    thanks in advance

    Hi,
    What do you mean by "cluster"? If you want HA (High Availability) with an Active/Passive cluster, then you can go with a 3rd-party cluster vendor, like HACMP from IBM, ServiceGuard from HP and so on. If you want to build this with Oracle technologies, then you are talking about Oracle RAC One Node. And you would go with Oracle RAC (Real Application Clusters) if you want an Active/Active cluster. These are the options in terms of clustering and High Availability.
    On another point, if you want to create a DR (Disaster Recovery) copy of your primary database, then you should consider implementing Oracle Data Guard.
    Oracle has rich documentation of blueprints and best practices; my sincere advice is to get familiar with it:
    http://www.oracle.com/technetwork/database/features/availability/maa-090890.html
    Regards,
    Sve

  • Cluster and space consumption

    I'm confused about something in regard to clusters. I have about 1GB worth of data when it is not in a cluster, just in a normal table. If I create a cluster and insert the data into this cluster, it consumes more space, and that was fully expected. When I tried the first few times, it took 13GB of space. The tablespace was 30GB large and contained a few other tables. Then I cleaned out the tablespace so only 2% of the space was used, added another 30GB to the tablespace, and ran the test once more. Now it is consuming more than 40GB and is still not complete. So my question is: does the size the cluster consumes depend on the size and available space of the tablespace it is in? Is there a way to limit the size it is allowed to use, other than putting it in a separate tablespace?

    Hi Marius,
    First, by "cluster" you mean sorted hash cluster tables, right?
    http://www.dba-oracle.com/t_sorted_hash_clusters.htm
    So my question is: does the size the cluster consumes depend on the size and available space of the tablespace it is in?
    These are "hash" clusters, and you govern the range of hash cluster keys, and hence the range where Oracle will store the rows.
    http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10739/hash.htm
    Oracle Database uses a hash function to generate a distribution of numeric values, called hash values, that are based on specific cluster key values. The key of a hash cluster, like the key of an index cluster, can be a single column or composite key (multiple column key). To find or store a row in a hash cluster, the database applies the hash function to the cluster key value of the row. The resulting hash value corresponds to a data block in the cluster, which the database then reads or writes on behalf of the issued statement.
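    That pre-allocation is the likely cause of the up-front space use: a hash cluster reserves room for roughly HASHKEYS * SIZE at creation time, before any rows are inserted. A minimal sketch of creating one (the credentials, names and numbers are hypothetical):

    ```shell
    sqlplus -s system/manager <<'SQL'
    -- Space for 100000 hash keys x 512 bytes per key is allocated
    -- immediately, independent of how much data is later inserted
    CREATE CLUSTER orders_hash (order_id NUMBER)
        HASHKEYS 100000 SIZE 512
        TABLESPACE users;

    CREATE TABLE orders (
        order_id  NUMBER,
        customer  VARCHAR2(40)
    ) CLUSTER orders_hash (order_id);
    SQL
    ```

    Sizing HASHKEYS and SIZE to the expected key count and row size, rather than accepting large defaults, is the usual way to keep the footprint bounded without moving the cluster to its own tablespace.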

  • Cluster and Read Write XML

    In my applications I allow users to save their settings. I used to do this in an INI file, so I wrote a VI that could write any cluster, and it worked OK. When I discovered that in the newer versions of LabVIEW you can read/write from/to XML, I changed immediately because it has some advantages for me, but I am having some trouble.
    Every time I want to save a cluster I have to use
    Flatten To XML -> Escape XML -> Write XML
    and to load
    Load From XML -> Unescape XML -> Unflatten from XML.
    I also do other important things each time I save or load settings, so it seems reasonable to put all this in just two subVIs (one for loading, one for saving). The problem is that I want to use them with any cluster.
    What I want with can be summarized as following:
    - SaveSettings.vi
    --Inputs:
    ---Filename
    ---AnyCluster
    ---Error In
    --Outputs
    ---Error Out
    -LoadSettings.vi
    Inputs:
    ---Filename
    ---AnyCluster
    ---Error In
    Outputs
    ---DataFromXML
    ---Error Out
    I have tried using variants and references, but I was not able to make a generic subVI (a subVI that doesn't know what is in the cluster). I don't want to use a polymorphic VI because it would demand making one load/save pair per type of settings.
    Thanks,
    Hernan

    If I am correct, you still need to wire the data type to Variant To Data. How can you put into a subVI ALL that is needed to handle the read/write of the cluster? I don't want to put any Flatten To XML or Unflatten From XML outside.
    The solution I came up with for INI files was passing a reference to a cluster control, but it is really uncomfortable because I have to iterate through all the items.
    When a control has an "Anything" input, is there any way to wire that input to a control so that it remains "Anything"?
    Thanks
    Hernan

  • Cluster and transparent table

    Hi
    Can someone explain to me what is meant by cluster and transparent tables?
    Points will be rewarded.
    Regards
    Raghu Ram.

    hi Raghu,
    just do a search on this in the ABAP Forum; there are tons of threads on the topic. Here is just the most recent example:
    transparent vs pooled vs cluster tables
    hope this helps
    ec
