Solaris Cluster - two machines - logical host

Good morning!
I am a complete Solaris Cluster dummy, but... I need to create a cluster and install an application:
I have two V440, with Solaris 10;
I need to put the two machines in the cluster;
I have CE0 of each of the machines plugged into the network;
I have CE1 and CE2 of each machine connected together via a crossover cable;
According to the documentation "Oracle Solaris Cluster HA for Alliance Access" (http://docs.oracle.com/cd/E19680-01/html/821-1547/ciajejfa.html), there are prerequisites such as creating an HAStoragePlus resource and a logical host resource group.
Could anyone give me some tips on how to create this cluster and the prerequisites?
Thanks!

Hi,
a good source of information for the beginner is: http://www.oracle.com/technetwork/articles/servers-storage-admin/how-to-install-two-node-cluster-166763.pdf
To create a highly available logical IP address, just do:
clrg create <name-for-resource-group>
clrslh create -g <name-for-resource-group> <name-of-ip-address> # This IP address should be available in /etc/hosts on both cluster nodes.
clrg online -M <name-for-resource-group>
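For example, with hypothetical names (myapp-rg, myapp-lh are illustrative; myapp-lh must be present in /etc/hosts on both nodes):
clrg create myapp-rg
clrslh create -g myapp-rg myapp-lh
clrg online -M myapp-rg
clrg status myapp-rg # verify: the group should report Online on one node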
Regards
Hartmut

Similar Messages

  • Zone Cluster: Creating a logical host resource fails

    Hi All,
I am trying to create a logical host resource with two logical addresses as part of the resource; however, the command is failing. Here is what I run to create the resource:
    clrslh create -Z pgsql-production -g pgsql-rg -h pgprddb-prod,pgprddb-voip pgsql-halh-rs
    And I am presented with this failure:
    clrslh: specified hostname(s) cannot be hosted by any adapter on bfieprddb01
    clrslh: Hostname(s): pgprddb-prod pgprddb-voip
    I have pgprddb-prod and pgprddb-voip defined in the /etc/hosts files on the 2 global cluster nodes and also within the two zones in the zone cluster.
    I have also modified the zone cluster configuration as described in the following thread:
    http://forums.sun.com/thread.jspa?threadID=5420128
    This is what I have done to the zone cluster:
    clzc configure pgsql-production
    clzc:pgsql-production> add net
    clzc:pgsql-production:net> set address=pgprddb-prod
    clzc:pgsql-production:net> end
    clzc:pgsql-production> add net
    clzc:pgsql-production:net> set address=pgprddb-voip
    clzc:pgsql-production:net> end
    clzc:pgsql-production> verify
    clzc:pgsql-production> commit
    clzc:pgsql-production> quit
    Am I missing something here, help please :)
    I did read a blog post mentioning that the logical host resource is not supported with exclusive-ip zones at the moment, but I have checked my configuration and I am running with ip-type=shared.
    Any suggestions would be greatly appreciated.
    Thanks

    I managed to fix the issue, I got the hint from the following thread:
    http://72.5.124.102/thread.jspa?threadID=5432115&tstart=15
It turns out that you can only define more than one hostname in a single logical host resource if they all reside on the same subnet. I therefore had to create a separate logical host resource for each subnet, by doing the following in the global zone:
    clrslh create -g pgsql-rg -Z pgsql-production -h pgprddb-prod pgsql-halh-prod-rs
    clrslh create -g pgsql-rg -Z pgsql-production -h pgprddb-voip pgsql-halh-voip-rs
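To confirm, from the global zone (same names as above):
clresource status -Z pgsql-production
Both resources should then report Online on the active node.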
    Thanks for reading :)

  • Cluster changing hostname & Logical host

Hello, I have Sun Cluster 3.1 update 4.
The cluster is 2x V440 + 1x V240, installed on Solaris 8.
Has anyone actually succeeded in changing the hostnames of all the nodes & the logical host on a Sun Cluster system like the above?
I saw a procedure that is not really supported:
    1. Reboot cluster nodes into non-cluster node (reboot -- -x)
    2. Change the hostname of the system (nodenames, hosts etc)
3. Change the hostname on all nodes within the files under /etc/cluster/ccr
4. Regenerate the checksums for each changed file using ccradm -i /etc/cluster/ccr/FILENAME -o
5. Reboot every cluster node into the cluster
Does it work?
    Thanks & Regards

    So if I understand you correctly, you have two metasets already created and mounted. If so, this is a fairly tricky process (outlined from memory):
    1. Backup your data
    2. Shut down the RGs using scswitch -F -g <RGnames>, make the RGs unmanaged
    3. Unmount the file systems
    4. Deconstruct the metasets and mediators
    5. Shutdown the cluster and boot -x
    6. Change the hostnames in /etc/inet/hosts, etc
    7. Change the CCR and re-checksum it
    8. Reboot the cluster into cluster mode
    9. Re-construct metasets and mediators with new host names
    10. scswitch -Z -g <RGs>
    If you recreate your metasets in the same way as they were originally created and the state replicas haven't changed in size, then the data should be intact.
    Note - I have not tried this process in a long time. Remember also that changing the CCR as described previously is officially unsupported (partly because of the risks involved).
    Regards,
    Tim
    ---

  • Can a SQL Server Failover Cluster Instance (FCI) be Implemented Between Two Hyper-V Hosted Virtual Machines?

    I haven't had the opportunity to implement a SQL Server Failover Cluster Instance (FCI) for over 10 years and that was done with two physical, identical database servers way back in the day of Windows Server 2003 and SQL Server 2000 (old school).
Can a SQL Server 2008 R2 Failover Cluster Instance (FCI) be implemented between two Hyper-V hosted virtual machines? The environment in question already has Windows Server 2012 R2 Hyper-V hosts in place, so I'm just looking to see if this is even possible and/or supported when utilizing virtual machines.
The client in question is currently using SQL Server 2008 R2 instances running on Win2008R2, Win2012, and Win2012R2, but I'd also be interested in how this can be done or not with SQL Server 2012 or 2014 as well. Thanks in advance.
    Bill Thacker

Yes, it can be done with Hyper-V guests. In fact, with Windows Server 2012 R2 Hyper-V, guests can use the Shared VHDX feature for the shared storage used by Windows clusters. The guests can run Windows Server 2008 and higher, provided that the Hyper-V Integration Services are installed to support Shared VHDX. The only challenge here is making the Hyper-V hosts highly available as well, running them on WSFC.
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master

  • Installing cluster in solaris 10 virtual machine

    I want to configure two node cluster in solaris 10.
I created a partition of 2 GB but forgot to name it /globaldevices while installing.
Now I am unable to modify my partition table; the error is "can't modify while it has mounted partitions".
Kindly tell me how to have a /globaldevices partition without reinstalling the Solaris 10 virtual machine.
Also, I want to free the 2 GB of space and give it to root.
    Please help

    Moderator Action:
    Moved from the General Solaris 10 forum
to the Clustering forum, hopefully more on-topic.
    @O.P.
    Please try to pay attention to where you actually place your posts.
    If you pick an appropriate forum, you'll get better responses.
    These are technical forums, not a bunch of chat rooms.
    For example, there are forum members that ONLY post to the clustering forum and those same people NEVER contribute to the Solaris 10 forum.

  • Creating logical host on zone cluster causing SEG fault

    As noted in previous questions, I've got a two node cluster. I am now creating zone clusters on these nodes. I've got two problems that seem to be showing up.
    I have one working zone cluster with the application up and running with the required resources including a logical host and a shared address.
    I am now trying to configure the resource groups and resources on additional zone clusters.
    In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running.
    I log onto the zone and I create a failover resource group, no problem. I then try to create a logical host and I get:
    "Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11"
This error appears to be happening on the other node, i.e. not the one that I'm building from.
Has anyone seen anything like this, or have any thoughts on where I should go with it?
    Thanks.

    Hi,
"In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running."
Look at the stack from your core dump and see whether it matches this bug:
    6763940 clzc dumped core after zones were installed
    As far as I know, the above bug is harmless and no functionality should be impacted. This bug is already fixed in the later release.
    "Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11" The above message is not enough to figure out what's wrong. Please look at the below:
    1) Check the /var/adm/messages on the nodes and observe the messages that got printed around the same time that the above
    message got printed and see whether that gives more clues.
    2) Also see whether there is a core dump associated with the above message and that might also provide more information.
    If you need more help, please provide the output for the above.
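For example, to pull the stack from such a core (the core file path is illustrative):
# coreadm
# pstack /var/cores/core.hafoip_validate.1234
coreadm shows where core files are written on this node; pstack prints the stack so you can compare it against bug 6763940.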
    Thanks,
    Prasanna Kunisetty

  • Two logical hosts

We currently have a Sun Solaris box with two logical hosts, dev1 and dev2. The Oracle version is 8.1.7.2. In our listener and tnsnames files we have host=dev1. When I connect to the database through dev1 from my own PC I have no problems, but when I try to connect via dev2 it just returns the message that there is no listener. I'm thinking this could be down to the host definition in the listener and tnsnames files, but does anyone know what it should be so I can connect through either logical host? Thank you in advance.

    Please refer to these docs.
    Implementing Virtual Host, Concurrent Managers and EM DBconsole on Oracle Applications R12 [ID 603883.1]
    Conc-System Node Name Not Registered After Fresh Install Using Virtual Name [ID 948644.1]
    Thanks,
    Hussein
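For reference, one common approach is a single listener that accepts connections on both logical host addresses. A minimal listener.ora sketch (the port and exact layout are illustrative, not from the original post):
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dev1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dev2)(PORT = 1521))))
The tnsnames.ora entry used for dev2 connections would then point its HOST at dev2 instead of dev1.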

  • A two machine cluster

    I found this tutorial and followed it. http://vimeo.com/1799266
    Only problem is it doesn't work for me.
My two machines are connected via Ethernet. I've set up the Qmaster System Preferences as per the video. Qmaster Admin shows twice as many machines as actually appear in the bottom panel. Also, I can't select a controller.
Both machines are on the same OS. I have run the Compressor repair app, minus the preference delete.
    In previous attempts I was able to set up a cluster and even see it in compressor but it either failed or would only run the local machine and the other sit idle.
    Help please someone?

    Thanks so much for this response. I exported a reference QT movie retaining my markers and indeed it appeared to work. The messages told me that everything was successful but upon playing the transcoded videos it was obvious it hadn't worked. Both HD and SD versions had long segments of either total white or total green bars. As it seems that compressor splits the video into segments and then reassembles it(?) I'm guessing that means that one of the nodes simply didn't do its job. It didn't do its job twice so again I'm left scratching my head. Would love it if anyone has any suggestions.

  • Cluster with only two machines.

              Hi all,
              Can I make in WebLogic 6.0 a cluster with two machines, one Administration and the
              other Managed?
              regards, Juan.
              

Isn't there a restriction on running all WLS cluster machines on the same
              port?
              Or is it just for the Managed Servers...?
              "Cameron Purdy" <[email protected]> wrote in message
              news:3d1e1d61$[email protected]..
              > Not exactly. You'll run a managed server on each of those on the same port
              > number, and on one of them you'll run the admin server on a different
              port.
              >
              > Peace,
              >
              > Cameron Purdy
              > Tangosol, Inc.
              > http://www.tangosol.com/
              >
              > "Juan Alonso" <[email protected]> wrote in message
              > news:3d1ca2d8$[email protected]..
              > >
              > > Hi all,
              > >
              > > Can I make in weblogic6.0 a cluster with two machines. One Aministration
              > and the
              > > other managed ?????
              > >
              > > regards, Juan.
              >
              >
              

  • 2 Logical Host, 2 Web Servers, Big Problem?

    I am setting up a sun HA cluster using 2 E4500 servers. I have created 2 logical hosts, each one needs to host a Netscape iPlanet 4.0 web server sitting at port 80. Each logical host is serving up web applications for different clients.
    If I need to fail over one of the logical hosts so that they are both running on the same system, the newly imported instance of the web server will fail because port 80 is already in use by the logical host that is on that physical host.
    At first this seemed totally wrong. Each logical host should be able to run applications on any port they need to. Then someone who has a lot more time on Solaris told me that this was not the case, and each logical host had to steer clear of using the same ports as other logical hosts in the same cluster.
    Can someone clue me into what is reality?
    Any good documentation that tells how to set this stuff up?
    Thanks!
    Bruce

    Hi,
    The best way to resolve this would be to try implementing the same.
    Instead of going through an entire cluster install/configuration
    process, you may want to try setting up two different web servers
    on a single node. You may want to set up a virtual interface (like
le0:1, hme0:1, etc.) for this. You could then try connecting to
    individual web servers on port 80. If this works, then the two
    webserver/two node cluster implementation should also work.
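To sketch that test (interface names and addresses are illustrative):
# ifconfig hme0:1 plumb
# ifconfig hme0:1 192.168.1.10 netmask 255.255.255.0 up
Then bind each web server instance to its own address rather than to all addresses; two instances can then share port 80 on the same node.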
    Hope this helps.
    Thanks,
    Gopinath.

  • Solaris cluster 3.2 Sparc

    Hi folks
    First things first. I may not have great knowledge about Solaris clusters, so please be merciful :)
    Here it is what I have:
    - 2 x Netra T1 AC200 each with 1GB Ram, 2x18GB disks, 500 MHZ Sparc Cpu, 4 port ethernet card
    - 1 array netra d130 3x36 GB
-- cables et al., switches, you name it
    So, I set up the OS, all ok. I set up the cluster, all SEEMS to be ok.
But when I define my resources and stuff like that, all goes fine, except when I try to bring the resource group online.
On another configuration I tested the shared logical hostname and it works fine.
    Group Name Resources
    Resources: ingresc nodec ingresr
    -- Resource Groups --
    Group Name Node Name State Suspended
    Group: ingresc node2 Unmanaged No
    Group: ingresc node1 Unmanaged No
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: nodec node2 Offline Offline
    Resource: nodec node1 Offline Offline
    Resource: ingresr node2 Offline Offline
    Resource: ingresr node1 Offline Offline
    scswitch: (C969069) Request failed because resource group ingresc is in ERROR_STOP_FAILED state and requires operator attention
Now, in /var/adm/messages I spotted this:
    Mar 6 17:09:03 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_stop> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar 6 17:09:03 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_stop>:tag=<IngresNCG.nodec.1>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    A little bit of research points in the direction of a bug (see CR 6565601)
    Here it is what I see as my options:
1 - Reinstall the Solaris OS, but not Solaris Cluster 3.2, instead using Solaris Express 10/07 or 2/08. But will this combination work? Or will it work only in the combination Solaris Cluster Express and Solaris Express Developer Edition? If the latter, which versions will work together?
2 - Beg for a Solaris Cluster 3.2 patch, although in my humble opinion this should be free, since it looks to me that once you write your own stuff you run into the bug, and after all it is education.
    Any ideas, help, greatly appreciated
    Many thanks
    Armand

Although the names are different, since I used two setups, this is the relevant part of /var/adm/messages.
It looks to me as if the Ingres resource is failing:
    Mar  6 17:08:03 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_prenet_start> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:03 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_prenet_start>:tag=<IngresNCG.nodec.10>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:05 node2 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/scsymon-srv:default: Method "/usr/cluster/lib/svc/method/svc_scsymon_srv start" failed with exit status 96.
    Mar  6 17:08:05 node2 svc.startd[8]: [ID 748625 daemon.error] system/cluster/scsymon-srv:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_prenet_start> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 1% of timeout <300 seconds>
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_PRENET_STARTED
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_STARTING
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_start> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <500> seconds
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_start>:tag=<IngresNCG.nodec.0>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nodec status on node node2 change to R_FM_ONLINE
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nodec status msg on node node2 change to <LogicalHostname online.>
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_start> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 0% of timeout <500 seconds>
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_JUST_STARTED
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_ONLINE_UNMON
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource IngresNCR state on node node2 change to R_STARTING
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_MON_STARTING
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource IngresNCR status on node node2 change to R_FM_UNKNOWN
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Starting>
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <bin/ingres_server_start> for resource <IngresNCR>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_monitor_start> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</global/disk2s0/ing_nc_1/ingresclu/bin/ingres_server_start>:tag=<IngresNCG.IngresNCR.0>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 268902 daemon.notice] 45 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_monitor_start>:tag=<IngresNCG.nodec.7>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:12 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_monitor_start> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 0% of timeout <300 seconds>
    Mar  6 17:08:12 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_ONLINE
    Mar  6 17:08:13 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Bringing Ingres DBMS server online.>
    Mar  6 17:08:30 node2 sendmail[534]: [ID 702911 mail.alert] unable to qualify my own domain name (node2) -- using short name
    Mar  6 17:08:30 node2 sendmail[535]: [ID 702911 mail.alert] unable to qualify my own domain name (node2) -- using short name
    Mar  6 17:08:31 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Bringing Ingres DBMS server offline.>
    Mar  6 17:08:45 node2 SC[Ingres.ingres_server,IngresNCG,IngresNCR,stop]: [ID 147958 daemon.error] ERROR : HA-Ingres failed to stop.
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource IngresNCR status on node node2 change to R_FM_FAULTED
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Ingres DBMS server faulted.>
    Mar  6 17:08:46 node2 SC[Ingres.ingres_server,IngresNCG,IngresNCR,start]: [ID 335575 daemon.error] ERROR : Stop method failed for the HA-Ingres data service.
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 938318 daemon.error] Method <bin/ingres_server_start> failed on resource <IngresNCR> in resource group <IngresNCG> [exit code <1>, time used: 11% of timeout <300 seconds>]
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource IngresNCR state on node node2 change to R_START_FAILED
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group IngresNCG state on node node2 change to RG_PENDING_OFF_START_FAILED
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource IngresNCR state on node node2 change to R_STOPPING
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_MON_STOPPING
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource IngresNCR status on node node2 change to R_FM_UNKNOWN
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Stopping>
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <bin/ingres_server_stop> for resource <IngresNCR>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_monitor_stop> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</global/disk2s0/ing_nc_1/ingresclu/bin/ingres_server_stop>:tag=<IngresNCG.IngresNCR.1>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 268902 daemon.notice] 45 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_monitor_stop>:tag=<IngresNCG.nodec.8>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:47 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Bringing Ingres DBMS server offline.>
    Mar  6 17:08:48 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_monitor_stop> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 0% of timeout <300 seconds>
    Mar  6 17:08:48 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_ONLINE_UNMON
    Mar  6 17:09:00 node2 SC[Ingres.ingres_server,IngresNCG,IngresNCR,stop]: [ID 147958 daemon.error] ERROR : HA-Ingres failed to stop.
    Mar  6 17:09:02 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource IngresNCR status on node node2 change to R_FM_FAULTED
    Mar  6 17:09:02 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Ingres DBMS server faulted.>
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 938318 daemon.error] Method <bin/ingres_server_stop> failed on resource <IngresNCR> in resource group <IngresNCG> [exit code <2>, time used: 5% of timeout <300 seconds>]
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource IngresNCR state on node node2 change to R_STOP_FAILED
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group IngresNCG state on node node2 change to RG_PENDING_OFF_STOP_FAILED
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 424774 daemon.error] Resource group <IngresNCG> requires operator attention due to STOP failure
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_STOPPING
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nodec status on node node2 change to R_FM_UNKNOWN
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nodec status msg on node node2 change to <Stopping>
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_stop> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_stop>:tag=<IngresNCG.nodec.1>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:09:04 node2 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 192.168.005.085:0, remote = 000.000.000.000:0, start = -2, end = 6
    Mar  6 17:09:04 node2 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nodec status on node node2 change to R_FM_OFFLINE
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nodec status msg on node node2 change to <LogicalHostname offline.>
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_stop> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 0% of timeout <300 seconds>
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_OFFLINE
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group IngresNCG state on node node2 change to RG_ERROR_STOP_FAILED
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 424774 daemon.error] Resource group <IngresNCG> requires operator attention due to STOP failure
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 663692 daemon.error] failback attempt failed on resource group <IngresNCG> with error <resource group in ERROR_STOP_FAILED state requires operator attention>
Mar  6 17:09:10 node2 java[1652]: [ID 807473 user.error] pkcs11_softtoken: Keystore version failure.
Thank you
    Armand
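For what it's worth: a resource group stuck in ERROR_STOP_FAILED must be cleared manually before it can be switched again. With SC 3.2 command syntax that is roughly (names taken from the log above):
# clresource clear -f STOP_FAILED -n node2 IngresNCR
# clresourcegroup offline IngresNCG
After that, fix whatever makes ingres_server_stop fail and bring the group online again.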

  • Solaris Cluster Private Link Failure

    Hi,
I have configured Solaris Cluster 3.3 and added two back-to-back interconnect cables.
Sun Cluster is working fine, but the private link has failed and I cannot ping clusternode2-priv and clusternode1-priv from each other. Some commands fail:
    ~ # ping clusternode2-priv
    no answer from clusternode2-priv
    ~ # metaset -s nfsds -a -h t1u331 t1u332
    metaset: 172.16.4.1: metad client create: RPC: Rpcbind failure
    ~ # scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: n1u332 Online
    Cluster node: n1u331 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path:   n1u332:nxge2           n1u331:nxge2           Path online
    Transport path:   n1u332:nxge1           n1u331:nxge1           Path online
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: n1u332 1 1 Online
    Node votes: n1u331 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d4s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    -- Device Group Status --
    Device Group Status
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- Resource Groups and Resources --
    Group Name Resources
    -- Resource Groups --
    Group Name Node Name State Suspended
    -- Resources --
    Resource Name Node Name State Status Message
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    [root @ n1u332]
    ~ # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    e1000g0: flags=1000802<BROADCAST,MULTICAST,IPv4> mtu 1500 index 2
    inet 0.0.0.0 netmask 0
    ether 0:15:17:e3:a4:e8
    vsw0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
    inet 10.131.58.76 netmask ffffff00 broadcast 10.131.58.255
    groupname ipmp-grp
    ether 0:14:4f:f9:1:bd
    vsw0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
    inet 10.131.58.75 netmask ffffff00 broadcast 10.131.58.255
    vsw1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
    inet 10.131.58.77 netmask ffffff00 broadcast 10.131.58.255
    groupname ipmp-grp
    ether 0:14:4f:fb:44:4
    nxge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 7
    inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
    ether 0:14:4f:a0:81:d9
    nxge2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 6
    inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
    ether 0:14:4f:a0:81:da
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 8
    inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255
    ether 0:0:0:0:0:1
    [root @ n1u332]
    ~ # dladm show-dev
    vsw0 link: up speed: 1000 Mbps duplex: full
    vsw1 link: up speed: 1000 Mbps duplex: full
    e1000g0 link: down speed: 0 Mbps duplex: half
    e1000g1 link: up speed: 1000 Mbps duplex: full
    e1000g2 link: unknown speed: 0 Mbps duplex: half
    e1000g3 link: unknown speed: 0 Mbps duplex: half
    nxge0 link: up speed: 100 Mbps duplex: full
    nxge1 link: up speed: 1000 Mbps duplex: full
    nxge2 link: up speed: 1000 Mbps duplex: full
    nxge3 link: up speed: 100 Mbps duplex: full
    e1000g4 link: unknown speed: 0 Mbps duplex: half
    e1000g5 link: up speed: 1000 Mbps duplex: full
    clprivnet0              link: unknown   speed: 0     Mbps       duplex: unknown

If your private interconnect had really failed, then one or other of the cluster nodes would have panicked. I think it is more likely that you have changed the nsswitch.conf entry for hosts such that it does not include 'cluster' first, although I would have expected that to result in an unresolved host name. The other option is that you have hardened your machine in some way with ipfilter or security settings.
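As a quick check of the nsswitch possibility (standard Solaris 10 location; the expected line is shown as an assumption):
# grep '^hosts' /etc/nsswitch.conf
hosts: cluster files dns
The 'cluster' keyword must come first for the clusternodeN-priv names to resolve.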
    Has it ever worked?
    Tim
    ---

  • HOWTO: Create 2-node Solaris Cluster 4.1/Solaris 11.1(x64) using VirtualBox

I did this on VirtualBox 4.1 on Windows 7 and VirtualBox 4.2 on Linux x64. Basic prerequisites are: 40GB disk space, 8GB RAM, and a 64-bit-guest-capable VirtualBox.
    Please read all the descriptive messages/prompts shown by 'scinstall' and 'clsetup' before answering.
    0) Download from OTN
    - Solaris 11.1 Live Media for x86(~966 MB)
    - Complete Solaris 11.1 IPS Repository Image (total 7GB)
    - Oracle Solaris Cluster 4.1 IPS Repository image (~73MB)
    1) Run VirtualBox Console, create VM1 : 3GB RAM, 30GB HDD
    2) The new VM1 has 1 NIC, add 2 more NICs (total 3). Setting the NIC to any type should be okay, 'VirtualBox Host Only Adapter' worked fine for me.
    3) Start VM1, point the "Select start-up disk" to the Solaris 11.1 Live Media ISO.
    4) Select "Oracle Solaris 11.1" in the GRUB menu. Select Keyboard layout and Language.
    VM1 will boot and the Solaris 11.1 Live Desktop screen will appear.
    5) Click <Install Oracle Solaris> from the desktop, supply necessary inputs.
    Default Disk Discovery (iSCSI not needed) and Disk Selection are fine.
    Disable the "Support Registration" connection info
    6) The alternate user created during the install has root privileges (sudo). Set appropriate VM1 name
    7) When the VM has to be rebooted after the installation is complete, make sure the Solaris 11.1 Live ISO is ejected or else the VM will again boot from the Live CD.
    8) Repeat steps 1-6, create VM2 and install Solaris.
9) FTP (secure) the Solaris 11.1 IPS repository image and the Solaris Cluster 4.1 IPS repository image onto both VMs, e.g. under /home/user1/
10) We need to set up both packages: the Solaris 11.1 repository and Solaris Cluster 4.1
    11) All commands now to be run as root
    12) By default the 'solaris' repository is of type online (pkg.oracle.com), that needs to be updated to the local ISO we downloaded :-
$ sudo sh
# lofiadm -a /home/user1/sol-11_1-repo-full.iso
//output : /dev/lofi/N
# mount -F hsfs /dev/lofi/N /mnt
# pkg set-publisher -G '*' -M '*' -g /mnt/repo solaris
13) Setup the ha-cluster package :-
# lofiadm -a /home/user1/osc-4_1-ga-repo-full.iso
//output : /dev/lofi/N
# mkdir /mnt2
# mount -F hsfs /dev/lofi/N /mnt2
# pkg set-publisher -g file:///mnt2/repo ha-cluster
14) Verify both packages are fine :-
# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F file:///mnt/repo/
ha-cluster                  origin   online F file:///mnt2/repo/
15) Install the complete SC4.1 package by installing 'ha-cluster-full' :-
# pkg install ha-cluster-full
Repeat steps 12-15 on VM2. Now both VMs have the OS and SC4.1 installed.
16) By default the 3 NICs are in the "Automatic" profile and have DHCP configured. We need to activate the Fixed profile and put the 3 NICs into it. Only 1 interface, the public interface, needs to be configured. The other 2 are for the cluster interconnect and will be automatically configured by scinstall. Execute the following commands :-
# netadm enable -p ncp defaultfixed
//verify
# netadm list -p ncp defaultfixed
//Configure the public interface
//Verify none of the interfaces are listed, then add all 3
# ipadm show-if
//run dladm show-phys or dladm show-link to check interface names: must be net0/net1/net2
# ipadm create-ip net0
# ipadm create-ip net1
# ipadm create-ip net2
# ipadm show-if
//select a proper IP and configure the public interface. I have used 192.168.56.171 & .172
# ipadm create-addr -T static -a 192.168.56.171/24 net0/publicip
//IP plumbed, restart
# ipadm down-addr -t net0/publicip
# ipadm up-addr -t net0/publicip
//Verify publicip is fine by pinging the host
# ping 192.168.56.1
//Verify: net0 should be up, net1/net2 should be down
# ipadm
    17) Repeat step 16 on VM2
    18) Verify both VMs can ping each other using the public IP. Add entries to each other's /etc/hosts
    Now we are ready to run scinstall and create/configure the 2-node cluster
    19)
# cd /usr/cluster/bin
# ./scinstall
    select 1) Create a new cluster ...
    select 1) Create a new cluster
    select 2) Custom in "Typical or Custom Mode"
    Enter cluster name : mycluster1 (e.g)
    Add the 2 nodes : solvm1 & solvm2 and press <ctrl-d>
    Accept default "No" for <Do you need to use DES authentication>"
    Accept default "Yes" for <Should this cluster use at least two private networks>
    Enter "No" for <Does this two-node cluster use switches>
    Select "1)net1" for "Select the first cluster transport adapter"
If there is a warning of unexpected traffic on "net1", ignore it
    Enter "net1" when it asks corresponding adapter on "solvm2"
    Select "2)net2" for "Select the second cluster transport adapter"
    Enter "net2" when it asks corresponding adapter on "solvm2"
    Select "Yes" for "Is it okay to accept the default network address"
    Select "Yes" for "Is it okay to accept the default network netmask"Now the IP addresses 172.16.0.0 will be plumbed in the 2 private interfaces
    Select "yes" for "Do you want to turn off global fencing"
    (These are SATA serial disks, so no fencing)
    Enter "Yes" for "Do you want to disable automatic quorum device selection"
    (we will add quorum disks later)
    Enter "Yes" for "Proceed with cluster creation"
    Select "No" for "Interrupt cluster creation for cluster check errors"
    The second node will be configured and 2nd node rebooted
The first node will be configured and rebooted
After both nodes have rebooted, verify the cluster has been created and both nodes have joined.
    On both nodes :-
# cd /usr/cluster/bin
# ./clnode status
//should show both nodes Online
At this point there are no quorum disks, so one of the nodes will be designated the quorum vote. That node's VM has to be up for the other node to come up and the cluster to form.
    To check the current quorum status, run :-
# ./clquorum show
//one of the nodes will have 1 vote and the other 0 (zero)
    20)
    Now the cluster is in 'Installation Mode' and we need to add a quorum disk.
Shut down both nodes, as we will be adding shared disks to both of them
    21)
    Create 2 VirtualBox HDDs (VDI Files) on the host, 1 for quorum and 1 for shared filesystem. I have used a size of 1 GB for each :-
$ vboxmanage createhd --filename /scratch/myimages/sc41cluster/sdisk1.vdi --size 1024 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 899147b9-d21f-4495-ad55-f9cf1ae46cc3
$ vboxmanage createhd --filename /scratch/myimages/sc41cluster/sdisk2.vdi --size 1024 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 899147b9-d22f-4495-ad55-f9cf15346caf
    22)
    Attach these disks to both the VMs as shared type
$ vboxmanage storageattach solvm1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk1.vdi --mtype shareable
$ vboxmanage storageattach solvm1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk2.vdi --mtype shareable
$ vboxmanage storageattach solvm2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk1.vdi --mtype shareable
$ vboxmanage storageattach solvm2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk2.vdi --mtype shareable
    The disks are attached to SATA ports 1 & 2 of each VM. On my VirtualBox on Linux, the controller type is "SATA", whereas on Windows it is "SATA Controller".
    The "--mtype shareable' parameter is important
    23)
    Mark both disks as shared :-
$ vboxmanage modifyhd /scratch/myimages/sc41cluster/sdisk1.vdi --type shareable
$ vboxmanage modifyhd /scratch/myimages/sc41cluster/sdisk2.vdi --type shareable
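To double-check, query the disk (same path as above):
$ vboxmanage showhdinfo /scratch/myimages/sc41cluster/sdisk1.vdi
The Type field in the output should read 'shareable'.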
    24) Start both VMs. We need to format the 2 shared disks
    25) From VM1, run format. In my case, the 2 new shared disks show up as 'c7t1d0' and 'c7t2d0'.
# format
select disk 1 (c7t1d0)
[disk formatted]
FORMAT MENU
fdisk
Type 'y' to accept default partition
partition
0
<enter>
<enter>
1
995mb
print
label
<yes>
quit
quit
26) Repeat step 25 for the 2nd disk (c7t2d0)
    27) Make sure the shared disks can be used for quorum :-
    On VM1
# ./cldevice refresh
# ./cldevice show
On VM2
# ./cldevice refresh
# ./cldevice show
    The shared disks should have the same DID (d2,d3,d4 etc). Note down the DID that you are going to use for quorum (e.g d2)
    By default, global fencing is enabled for these disks. We need to turn it off for all disks as these are SATA disks :-
# cldevice set -p default_fencing=nofencing-noscrub d1
# cldevice set -p default_fencing=nofencing-noscrub d2
# cldevice set -p default_fencing=nofencing-noscrub d3
# cldevice set -p default_fencing=nofencing-noscrub d4
28) It is better to do one more reboot of both VMs; otherwise you may get an error when adding the quorum disk (I did)
    29) Run clsetup to add quorum disk and to complete cluster configuration :-
# ./clsetup
    === Initial Cluster Setup ===
    Enter 'Yes' for "Do you want to continue"
    Enter 'Yes' for "Do you want add any quorum devices"
    Select '1) Directly Attached Shared Disk' for the type of device
    Enter 'Yes' for "Is it okay to continue"
    Enter 'd2' (or 'd3') for 'Which global device do you want to use'
    Enter 'Yes' for "Is it okay to proceed with the update"
    The command 'clquorum add d2' is run
    Enter 'No' for "Do you want to add another quorum device"
    Enter 'Yes' for "Is it okay to reset "installmode"?"Cluster initialization is complete.!!!
    30) Run 'clquorum status' to confirm both nodes and the quorum disk have 1 vote each
    31) Run other cluster commands to explore!
    I will cover Data services and shared file system in another post. Basically the other shared disk
    can be used to create a UFS filesystem and mount it on all nodes.
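A minimal sketch of that, assuming the second shared disk received DID d3 (check 'cldevice show'):
# newfs /dev/global/rdsk/d3s0
# mkdir -p /global/fs1 (on both nodes)
# mount -o global,logging /dev/global/dsk/d3s0 /global/fs1
For a permanent mount, add a vfstab entry with the 'global' option on both nodes.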

The Solaris Cluster 4.1 Installation and Concepts Guides are available at :-
    http://docs.oracle.com/cd/E29086_01/index.html
    Thanks.

  • Logical host resource in SC3.2 missing IPMP

I was under the impression that when creating a logical host resource in Solaris Cluster 3.2, it would read the hosts and hostname.* files to get the IPMP info and do this automatically for you. I am not seeing this behavior and am hoping someone can shed some light on this for me. Here is the relevant data; any help is greatly appreciated. If it matters, this is Solaris Cluster 3.2 on Solaris 10, running in a 25k domain.
    # grep cluster1vip hosts
    10.230.108.28 cluster1vip cluster1vip.example.com
    10.230.108.29 cluster1vip-ce1 #ipmp_test_cluster1vip1
    10.230.108.30 cluster1vip-ce5 #ipmp_test_cluster1vip2
    # cat hostname.ce1
    cluster1vip netmask + broadcast + group ipmp_sc1 up \
    addif cluster1vip-ce1 deprecated -failover netmask + up
    # cat hostname.ce5
    cluster1vip-ce5 deprecated -failover netmask + broadcast + group ipmp_sc1 up
    When I bring the resource online, these are the settings that get applied:
    ce6:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 3
    inet 10.230.108.28 netmask ffffff80 broadcast 10.230.108.127
    Again, any help or guidance is greatly appreciated.

    For probe based IPMP the following has always worked for me:
    - IPMP interfaces of same media type
    - SPARC needs local-mac-address set to true in OBP
- Supported interface (if using probe method - http://docs.sun.com/app/docs/doc/816-4554/mpoverview?a=view )
    /etc/hosts
    192.168.1.1 hawkeye
    192.168.1.2 ipmp-test1
    192.168.1.3 ipmp-test2
    For the cluster node 'hawkeye'
    /etc/hostname.ce0 before the IPMP change:
    hawkeye group sc_ipmp0 -failover
    /etc/hostname.ce0 after the IPMP change:
hawkeye group sc_ipmp0 failover up addif ipmp-test1 -failover deprecated up
/etc/hostname.ce1
    group sc_ipmp0 failover up addif ipmp-test2 -failover deprecated up
    Reboot cluster node and you should be good to go.
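After the reboot, a quick way to verify (Solaris 10 tools, names from the example above):
# ifconfig ce0 | grep groupname
# if_mpadm -d ce0
# if_mpadm -r ce0
Both interfaces should show group sc_ipmp0; if_mpadm -d forces a failover for testing, and -r restores the interface.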

  • Internet load balancing with two machine (NLB)

I have installed two TMG 2010 servers, one manager and one managed, but it seems that once the manager is down, the other server does not pass traffic on the VIP. My question is: is this supported, like NLB with two machines?

    Hi,
Could you confirm that you could successfully ping both machines before?
    To check if the NLB status is green:
    In the Forefront TMG Management console, in the tree, click the Monitoring node.
    In the Alerts tab, check that NLB Started for each array member.
In the Network Load Balancing Manager (nlbmgr), check the status of the current node for all clusters defined. Repeat this for each member.
Run nlb display to view the current state of the NLB cluster and hosts. Repeat this for each member.
    http://technet.microsoft.com/es-es/library/ff849728.aspx#s3
    Best Regards
    Quan Gu
