Clustering with wl 6.1 - Questions Part II

Again, here's the setup:
          Admin: 192.168.1.135:7001 (XP). Cluster address: 192.168.1.239, 192.168.1.71
          managed server 1: 192.168.1.239:7001 (SunOS)
          managed server 2: 192.168.1.71:7001 ('98)
          I have a single SLSB deployed on the cluster. The deployment descriptor
          has been configured for stateless clustering. The bean has a single method
          (returnMessage(String msg)) that accepts a string and returns it.
          A standalone Java client accesses the cluster by specifying the JNDI URL
          "t3://192.168.1.71, 192.168.1.239:7001" and works properly. The Java client
          does a JNDI lookup once, caches the home reference, and calls home.create in
          an infinite loop.
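For reference, a minimal sketch of such a client. The JNDI name "MessageBeanHome" is hypothetical, and without weblogic.jar on the classpath the context factory cannot be instantiated, so this sketch just reports the failure instead of calling the bean:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ClusterClient {
    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<>();
        // WebLogic's T3 context factory; both cluster addresses appear in one
        // URL so the initial lookup can fail over between the servers.
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://192.168.1.71:7001,192.168.1.239:7001");
        try {
            Context ctx = new InitialContext(env);
            // Look up once, cache the home, then create/call in a loop.
            Object home = ctx.lookup("MessageBeanHome"); // hypothetical JNDI name
            System.out.println("Looked up home: " + home);
        } catch (NamingException e) {
            // Without the WebLogic client classes the lookup fails here.
            System.out.println("Lookup failed: " + e.getClass().getSimpleName());
        }
    }
}
```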
          1. After reading one of the posts here, I physically restarted one of the
          machines in the cluster (192.168.1.71). The Java client was stuck for about
          4 * 30 secs. The managed server whose network cable I yanked out kept showing
          the following message every 30 secs (as the docs say):
          <25/08/2002 13:52:21> <Error> <Cluster> <Multicast socket receive error:
          java.io.InterruptedIOException: Receive timed out
          java.io.InterruptedIOException: Receive timed out
          at java.net.PlainDatagramSocketImpl.receive(Native Method)
          at java.net.DatagramSocket.receive(DatagramSocket.java:392)
          at weblogic.cluster.FragmentSocket.receive(FragmentSocket.java:145)
          at
          weblogic.cluster.MulticastManager.execute(MulticastManager.java:293)
          at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:137)
          at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
          just the way it should be (hopefully)! Now, the real fun started when I put
          the cord back. The admin server should have detected that the managed
          server was back, right (the console should show the managed server as
          "Running" in the "Servers" tab)? Wrong, it doesn't! Why?
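The repeating "Receive timed out" above is just the multicast receive loop hitting its socket timeout and retrying. A small pure-JDK sketch of the same mechanism (the 200 ms timeout is shortened from the 30 s in the log purely so the demo finishes quickly):

```java
import java.io.InterruptedIOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        // Bind to an ephemeral port; nothing ever sends to it, so receive()
        // must time out. This is a hypothetical stand-in for the cluster
        // multicast socket, not WebLogic's actual code.
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.setSoTimeout(200); // WebLogic's loop uses 30 s
            byte[] buf = new byte[64];
            try {
                sock.receive(new DatagramPacket(buf, buf.length));
            } catch (InterruptedIOException e) {
                // Same exception class as in the log; a real server would
                // log it and go right back to receiving.
                System.out.println("Receive timed out");
            }
        }
    }
}
```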
          2. I restarted the weblogic server on 192.168.1.71 and sure enough, it
          joined the cluster. After it joined the cluster, the last line of
          "weblogic.log" is:
          ####<Aug 25, 2002 2:15:53 PM PDT> <Info> <Management> <DUMMY135> <wlsdomain>
          <ExecuteThread: '8' for queue: 'default'> <system> <> <140009>
          <Configuration changes for domain saved to the repository.>
          And the "wl-domain.log" ("domain" is the name of my domain) has the last
          entry as:
          ####<Aug 25, 2002 1:43:56 AM PDT> <Notice> <WebLogicServer> <dummy71>
          <dummy71> <ExecuteThread: '14' for queue: 'default'> <system> <> <000332>
          <Started WebLogic Managed Server "dummy71" for domain "domain" running in
          Development Mode>
          The last few lines on the console window of the managed server (dummy71)
          show:
          Starting Cluster Service ....
          <25/08/2002 14:13:55> <Notice> <WebLogicServer> <ListenThread listening on
          port 7001, ip address 192.168.1.71>
          <25/08/2002 14:13:56> <Notice> <Cluster> <Listening for multicast messages
          (cluster wlcluster) on port 7001 at address 237.0.0.1>
          <25/08/2002 14:13:56> <Notice> <WebLogicServer> <Started WebLogic Managed
          Server "dummy71" for domain "domain" running in Development Mode>
          2.1 I presume the "Configuration changes for domain saved to repository"
          indicates weblogic changed "config.xml"?
          2.2 Why would it update config.xml?
          2.3 Where is the entry in the logs to indicate that dummy71 joined the
          cluster?
          2.4 dummy71 joined the cluster at 14:13:56 while the admin server has
          the entry at 1:43:56 AM PDT. Why is there a time zone mentioned in
          "wl-domain.log"? Further investigation found that the time zone on the
          admin server was Pacific (US). I changed it to GMT+5:30 (so it's the same
          on both) and restarted weblogic on dummy71. Again, the logs mention PDT on
          the admin server, while there is no mention of the time zone on the managed
          server. Strange! Finally, the system clock on the admin server shows 3:01
          PM, while weblogic is about an hour behind in a different time zone (the
          last entry in the logs now has the timestamp Aug 25, 2002 2:02:14 AM PDT).
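On the time-zone confusion: a JVM renders timestamps in its default time zone, which it picks up from the OS (or a -Duser.timezone override in the start script) once at startup, so changing the OS zone only shows up after the JVM is restarted. The same instant simply prints differently on each server; a small sketch of that rendering difference:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TzDemo {
    public static void main(String[] args) {
        // One fixed instant (the epoch), formatted in two zones.
        Date instant = new Date(0L);
        SimpleDateFormat fmt =
            new SimpleDateFormat("MMM d, yyyy h:mm:ss a zzz", Locale.US);

        fmt.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));
        System.out.println(fmt.format(instant)); // Dec 31, 1969 4:00:00 PM PST

        fmt.setTimeZone(TimeZone.getTimeZone("GMT+05:30"));
        System.out.println(fmt.format(instant)); // Jan 1, 1970 5:30:00 AM GMT+05:30
    }
}
```

Same clock, two renderings: the admin server's "PDT" entries and the managed server's local-format entries can describe the same moment.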
          3. Finally, I modified my config.xml to add the following:
          <ServerDebug DebugCluster="true"
          DebugClusterAnnouncements="true"
          DebugClusterFragments="true" DebugClusterHeartbeats="true"
          Name="dummy239"/>
          I added the lines above for both the servers in the cluster to take a look
          at the chatter between the admin server and the managed servers
          (specifically the "heartbeat" every 30 seconds, and the synchronization of
          clustered services (emphasis on EJB and JNDI)). Are the entries above
          correct and in the right place? If yes, what log files do I check for this?
          Can anyone tell me a sample message to look for (assuming this chatter is
          recorded in weblogic.log or wl-domain.log (the two log files I have)) so I
          can verify the same in my logs (and probably learn something from them :-) ).
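Conceptually, heartbeat-based membership boils down to dropping a peer after several consecutive missed heartbeats. A hedged sketch of that bookkeeping (the 10 s interval and 3-miss threshold are the commonly documented WebLogic defaults, which together give the 30 s figure; this is not WebLogic's actual code):

```java
public class HeartbeatMonitor {
    static final long INTERVAL_MS = 10_000; // heartbeat period (assumed default)
    static final int MAX_MISSED = 3;        // misses before dropping a peer

    // A peer is considered alive while fewer than MAX_MISSED heartbeat
    // periods have elapsed since its last heartbeat was heard.
    static boolean isAlive(long lastHeartbeatMs, long nowMs) {
        return (nowMs - lastHeartbeatMs) < MAX_MISSED * INTERVAL_MS;
    }

    public static void main(String[] args) {
        long now = 100_000;
        System.out.println(isAlive(now - 15_000, now)); // 1 missed beat: true
        System.out.println(isAlive(now - 35_000, now)); // 3+ missed: false
    }
}
```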
          With Warm Regards,
          Manav.
          

BTW - when I said "bad test", I didn't mean you should not do it, but rather
          that it is a "worst case scenario" (and something that BEA should be
          testing).
          Peace,
          Cameron Purdy
          Tangosol, Inc.
          http://www.tangosol.com/coherence.jsp
          Tangosol Coherence: Clustered Replicated Cache for Weblogic
          "Manavendra Gupta" <[email protected]> wrote in message
          news:[email protected]...
          > Thanks for the reply Cameron. I conducted the same test by physically
          > shutting down one of the machines (which I believe simulates a crash) and
          > got the same results.
          >
          > And since I'm rather new to weblogic i've been really keen on getting the
          > answers to my queries :-)
          >
          > --
          > With Warm Regards,
          > Manav.
          >
          > "Cameron Purdy" <[email protected]> wrote in message
          > news:[email protected]...
          > > Plugging and unplugging a network cable is a very bad test. Some of the
          > > JVM implementations do not re-establish multicast traffic if you do
          > > that, or they re-establish it only in one direction (it could be a bug
          > > in the OS or socket libs too). We implemented a solution for this exact
          > > problem, but it took a lot of research and coding.
          > >
          > > A better test would be to have a computer plugged to a switch plugged
          > > to a switch plugged to the other computer, then unplug the connection
          > > between the switches.
          > >
          > > Peace,
          > >
          > > Cameron Purdy
          > > Tangosol, Inc.
          > > http://www.tangosol.com/coherence.jsp
          > > Tangosol Coherence: Clustered Replicated Cache for Weblogic
          > >
          

Similar Messages

  • Clustering with wl 6.1 - Questions

    Hi,
              Sorry for the duplicity, but I realized I need to start a new thread.
              Regards,
              Manav.
              ==================
              Hi,
              My setup is:
              Admin: 192.168.1.135:7001 (XP). Cluster address: 192.168.1.239, 192.168.1.71
              managed server 1: 192.168.1.239:7001 (SunOS)
              managed server 2: 192.168.1.71:7001 ('98)
              I have a sample SLSB deployed on the server. The deployment descriptors
              are configured as:
              <stateless-session-descriptor>
                <stateless-clustering>
                  <stateless-bean-is-clusterable>true</stateless-bean-is-clusterable>
                  <stateless-bean-load-algorithm>random</stateless-bean-load-algorithm>
                  <stateless-bean-methods-are-idempotent>true</stateless-bean-methods-are-idempotent>
                </stateless-clustering>
              </stateless-session-descriptor>
              A test client accesses the cluster by specifying the JNDI URL
              "t3://192.168.1.71, 192.168.1.239:7001" and works properly.
              1. If I kill one of the managed servers, the requests are load balanced
              to the other server. But the behaviour is erratic - after one server
              dies, it actually executes faster!?! And no, it's independent of
              whichever managed server I kill.
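The "random" load algorithm in the descriptor means each call picks a live server at random, so when one server dies the survivor simply receives every call (which can look faster if the dead box happened to be the slower one). A hedged sketch of the idea, not WebLogic's actual implementation:

```java
import java.util.List;
import java.util.Random;

public class RandomBalancer {
    private final List<String> liveServers;
    private final Random rnd;

    RandomBalancer(List<String> liveServers, long seed) {
        this.liveServers = liveServers;
        this.rnd = new Random(seed);
    }

    // Pick a random live server for each call; a dead server is simply
    // dropped from the list, so the survivors absorb all the traffic.
    String pick() {
        return liveServers.get(rnd.nextInt(liveServers.size()));
    }

    public static void main(String[] args) {
        // After one of the two managed servers dies, only one remains:
        RandomBalancer b = new RandomBalancer(List.of("192.168.1.239:7001"), 42);
        System.out.println(b.pick()); // the sole survivor gets every call
    }
}
```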
              2. How do I monitor how many instances were created on each managed
              server? Does the WL console show this somewhere?
              3. I have not done anything special to set up the JNDI tree. I'm a li'l
              hazy on this (am reading documents on the same). Any pointers? I'm still
              trying to grasp the need (I understand the bit about synchronizing the
              JNDI tree in the cluster to keep all servers aware of the EJBs deployed
              on them, and on others in the cluster, but I'm sure there's more to it
              that I don't know about).
              4. I find when the managed servers join the cluster, any errors/info
              messages are not stored in their respective logs. Is this the right
              behaviour? Are these logs stored on the admin server instead? (Is this
              what the log file wl-domain.log is used for?)
              5. How do I monitor the deployed EJBs through the console? Does it tell
              me the number of creations/destructions, method calls, etc.? How do I
              determine if the pool is adequate or needs to be increased?
              6. What is a realm? How is it helpful?
              7. What is a replication group? How does it help me in clustering?
              The more I read, the more I keep getting confused. I don't expect the
              gurus to answer all questions, but any help will be appreciated.
              With Warm Regards,
              Manav.
              


  • Exporting data clusters with type version

    Hi all,
    let's assume we are saving some ABAP data as a cluster to the database using the EXPORT ... TO DATABASE functionality, e.g.
    EXPORT VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    Some days later, the data can be imported:
    IMPORT VBAK TO LS_VBAK VBAP TO LT_VBAP FROM DATABASE INDX(QT) ID 'TEST'.
    Some months or years later, however, the IMPORT may crash: Since it is the most normal thing in the world that ABAP types are extended, some new fields may have been added to the structures VBAP or VBAK in the meantime.
    The data are not lost, however: Using method CL_ABAP_EXPIMP_UTILITIES=>DBUF_IMPORT_CREATE_DATA, they can be recovered from an XSTRING. This will create data objects apt to the content of the buffer. But the component names are lost - they get auto-generated names like COMP00001, COMP00002 etc., replacing the original names MANDT, VBELN, etc.
    So a natural question is how to save the type info ( = metadata) for the extracted data together with the data themselves:
    EXPORT TYPES FROM LT_TYPES VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    The table LT_TYPES should contain the meta type info for all exported data. For structures, this could be a DDFIELDS-like table containing the component information. For tables, additionally the table kind, key uniqueness and key components should be saved.
    Actually, LT_TYPES should contain persistent versions of CL_ABAP_STRUCTDESCR, CL_ABAP_TABLEDESCR, etc. But it seems there is no serialization provided for the RTTI type info classes.
    (In an optimized version, the type info could be stored in a separate cluster, and being referenced by a version number only in the data cluster, for efficiency).
    In the import step, the LT_TYPES could be imported first, and then instances for these historical data types could be created as containers for the real data import (here, I am inventing a class zcl_abap_expimp_utilities):
    IMPORT TYPES TO LT_TYPES FROM DATABASE INDX(QT) ID 'TEST'.
    DATA(LO_TYPES) = ZCL_ABAP_EXPIMP_UTILITIES=>CREATE_TYPE_INFOS( LT_TYPES ).
    assign lo_types->data_object('VBAK')->* to <LS_VBAK>.
    assign lo_types->data_object('VBAP')->* to <LT_VBAP>.
    IMPORT VBAK TO <LS_VBAK> VBAP TO <LT_VBAP> FROM DATABASE INDX(QT) ID 'TEST'.
    Now the data can be recovered with their historical types (i.e. the types they had when the export statement was performed) and processed further.
    For example, structures and table-lines could be mixed into the current versions using MOVE-CORRESPONDING, and so on.
    My question: Is there any support from the standard for this functionality: Exporting data clusters with type version?
    Regards,
    Rüdiger

    The IMPORT statement works fine if target internal table has all fields of source internal table, plus some additional fields at the end, something like append structure of vbak.
    Here is the snippet used.
    TYPES:
    BEGIN OF ty,
      a TYPE i,
    END OF ty,
    BEGIN OF ty2.
            INCLUDE TYPE ty.
    TYPES:
      b TYPE i,
    END OF ty2.
    DATA: lt1 TYPE TABLE OF ty,
          ls TYPE ty,
          lt2 TYPE TABLE OF ty2.
    ls-a = 2. APPEND ls TO lt1.
    ls-a = 4. APPEND ls TO lt1.
    EXPORT table = lt1 TO MEMORY ID 'ZTEST'.
    IMPORT table = lt2 FROM MEMORY ID 'ZTEST'.
    I guess IMPORT statement would behave fine if current VBAK has more fields than older VBAK.

  • CSA 5.1 Agent Installation on Microsoft Clusters with Teamed Broadcom NICs

    I'm searching all over Cisco.com for information on installing CSA 5.1 agent on Microsoft Clusters with Teamed Broadcom NICs, but I can't find any information other than "this is supported" in the installation guide.
    Does anyone know if there is a process or procedure that should be followed to install this? For example, some questions that come to mind are:
    - Do the cluster services need to be stopped?
    - Should the cluster be broken and then rebuilt?
    - Is there any documentation indicating this configuration is approved by Microsoft?
    - Are there case studies or other documentation on previous similar installations and/or lessons learned?
    Thanks in advance,
    Ken

    Ken, you might just end up being the case study! Do you have a non-production cluster to test with?
    If not, and you have already completed pilot testing, you probably have an idea of what you want to do with the agent. Do you have to stop the cluster for other software installations? I guess you might ask MS about breaking the cluster, since it's their cluster.
    The only caveat I've seen with teamed NICs is when the agent tries to contact the MC it may timeout a few times. You could probably increase the polling time if this happens.
    I'd create an agent kit that belongs to a group in test mode with minimal or no policies attached to test first and install it on one of the nodes. If that works ok you could gradually increase the policies and rules until you are comfortable that it is tuned correctly and then switch to protect mode.
    Hope this helps...
    Tom S

  • Had a Windows partition but it was too small; with Boot Camp made 1 new partition and then trying to make a new Windows partition which is bigger, but that is not working

    Hello
    I have a new Apple MacBook and made a 20 GB Windows partition on the Mac with Boot Camp. After 1 week I saw that 20 GB was too small, so with Boot Camp I removed the partition, made 1 Mac partition, and then tried to make the new, bigger Windows partition with Boot Camp; this is not working; the Mac will not recognize the Windows installation CD.

    Try asking your Boot Camp question in the Boot Camp forum rather than in the Windows Compatibility forum. The Boot Camp gurus hang out in the Boot Camp forum: https://discussions.apple.com/community/windows_software/boot_camp

  • Clustering with HttpClusterServlet

    I have created a domain on my machine in which I have configured an admin server and two more instances that are part of a cluster. When I try to connect to any of the cluster instances after I deploy an application on them, it runs fine. However, when I try to access the HttpClusterServlet deployed on the admin server, it says Error 404--Not Found
              From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
              10.4.5 404 Not Found
              

    Sorry I meant to send this to the ejb newsgroup.
              dan
              dan benanav wrote:
              > Do any vendors provide for clustering with automatic failover of entity
              > beans? I know that WLS does not. How about Gemstone? If not is there
              > a reason why it is not possible?
              >
              > It seems to me that EJB servers should be capable of automatic failover
              > of entity beans.
              >
              > dan
              

  • HT3819 what is the best way to set up my childs iphone with apple id and as part of home sharing?

    what is the best way to set up my childs iphone with apple id and as part of home sharing?

    So do I have to hook up an Ethernet cable to both the Uverse AND my computer from the Time Capsule?
    I'm not sure that I understand everything that you want to do and where devices will be located.
    The Time Capsule must connect to the "main" Uverse router using a permanent, wired Ethernet cable connection. An Ethernet cable can be run up to 300+ feet with virtually no loss, so you should be able to locate the Time Capsule wherever you want.....unless you have a very large estate.
    If you want the Time Capsule to strengthen the wireless signal provided by the Uverse router, then the Time Capsule must be located in the area where you need that additional signal strength.
    If I understand your post correctly, you plan to install the Time Capsule in the office? When you do this, you can configure the Time Capsule to create a wireless signal that uses the exact same wireless network name and password as the Uverse wireless network.
    That will provide a much stronger signal for your Uverse wireless network in the office area. Hopefully, the bedroom that you mention is close to the office, so it will pick up the stronger wireless signal from the Time Capsule.
    The iMac in the office can connect to the Time Capsule using another short Ethernet cable connection, or the iMac can connect using wireless.  A wired connection is always preferred, if possible.
    At this point, I guess the first question would be.......
    Do you have a location for the Time Capsule that will be close to the office....and...the bedroom where you want a stronger wireless signal?

  • Best Practices for patching Sun Clusters with HA-Zones using LiveUpgrade?

    We've been running Sun Cluster for about 7 years now, and I for
    one love it. About a year ago, we starting consolidating our
    standalone web servers into a 3 node cluster using multiple
    HA-Zones. For the most part, everything about this configuration
    works great! One problem we're having is with patching. So far,
    the only documentation I've been able to find that talks about
    patching Clusters with HA-Zones is the following:
    http://docs.sun.com/app/docs/doc/819-2971/6n57mi2g0
    Sun Cluster System Administration Guide for Solaris OS
    How to Apply Patches in Single-User Mode with Failover Zones
    This documentation works, but has two major drawbacks:
    1) The nodes/zones have to be patched in Single-User Mode, which
    translates to major downtime to do patching.
    2) If there are any problems during the patching process, or
    after the cluster is up, there is no simple back out process.
    We've been using a small test cluster to test out using
    LiveUpgrade with HA-Zones. We've worked out most of the bugs, but
    we are still in a position of patching our HA-Zoned clusters based
    on home-grown steps, and not anything blessed by Oracle/Sun.
    How are others patching Sun Cluster nodes with HA-Zones? Has
    anyone found/been given Oracle/Sun documentation that lists the
    steps to patch Sun Clusters with HA-Zones using LiveUpgrade?
    Thanks!

    Hi Thomas,
    there is a blueprint that deals with this problem in much more detail. Actually it is based on configurations that are solely based on ZFS, i.e. for root and the zone roots, but it should be applicable also to other environments: "Maintaining Solaris with Live Upgrade and Update On Attach" (http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach)
    Unfortunately, due to some redirection work in the joint Sun and Oracle network, access to the blueprint is currently not available. If you send me an email with your contact data I can send you a copy via email. (You'll find my address on the web)
    Regards
    Hartmut

  • SAP ECC 6.0 installation in windows 2008 clustering with db2 ERROR DB21524E

    Dear Sir.,
    Am installing sap ECC 6.0 on windows 2008 clustering with db2.
    I got one error in the phase of configuring the database for MSCS. The error is DB21524E 'FAILED TO CREATE THE RESOURCE DB2 IP PRD' THE CLUSTER NETWORK WAS NOT FOUND.
    DB2_INSTANCE=DB2PRD
    DB2_LOGON_USERNAME=iil\db2prd
    DB2_LOGON_PASSWORD=XXXX
    CLUSTER_NAME=mscs
    GROUP_NAME=DB2 PRD Group
    DB2_NODE=0
    IP_NAME = DB2 IP PRD
    IP_ADDRESS=192.168.16.27
    IP_SUBNET=255.255.0.0
    IP_NETWORK=public
    NETNAME_NAME=DB2 NetName PRD
    NETNAME_VALUE=dbgrp
    NETNAME_DEPENDENCY=DB2 IP PRD
    DISK_NAME=Disk M::
    TARGET_DRVMAP_DISK=Disk M
    Please help me run the db2mscs utility to create the resources; I am already running late with this installation.
    Best regards,
    Manjunath G
    Edited by: Manjug77 on Oct 29, 2009 2:45 PM

    Hello Manjunath.
    This looks like a configuration problem.
    Please check if IP_NETWORK is set to the name of your network adapter and
    if your IP_ADDRESS and IP_SUBNET are set to the correct values.
    Note:
    - IP_ADDRESS is a new IP address that is not used by any machine in the network.
    - IP_NETWORK is optional
    If you still get the same error, debug your db2mscs.exe call.
    See the answer from Adam Wilson:
    Can you run the following and check the output:
    db2mscs -f <path>\db2mscs.cfg -d <path>\debug.txt
    I suspect you may see the following error in the debug.txt:
    Create_IP_Resource fnc_errcode 5045
    Error 5045 is a Windows error meaning ERROR_CLUSTER_NETWORK_NOT_FOUND. It occurs because Windows could not find the MSCS network named by IP_NETWORK (here "public"). The IP_NETWORK parameter must be set to the name of an MSCS network, so run the Cluster Admin GUI and expand Cluster Configuration -> Networks to view all available MSCS networks and check whether "public" is one of them.
    However, the IP_NETWORK parameter is optional and can be commented out; in that case the first MSCS network detected by the system is used.
    Best regards,
    Hinnerk Gildhoff
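
    To illustrate the suggestion above, a minimal db2mscs.cfg along these lines (values taken from the original post; the network name "Public" is a placeholder that must match what Cluster Administrator actually shows, and the comment lines assume # comments are accepted — otherwise delete the line entirely):

    ```
    DB2_INSTANCE=DB2PRD
    CLUSTER_NAME=mscs
    GROUP_NAME=DB2 PRD Group
    DB2_NODE=0
    IP_NAME=DB2 IP PRD
    IP_ADDRESS=192.168.16.27
    IP_SUBNET=255.255.0.0
    # IP_NETWORK must exactly match an MSCS network name from
    # Cluster Administrator (Cluster Configuration -> Networks),
    # or be left out so the first detected network is used.
    #IP_NETWORK=Public
    NETNAME_NAME=DB2 NetName PRD
    NETNAME_VALUE=dbgrp
    NETNAME_DEPENDENCY=DB2 IP PRD
    DISK_NAME=Disk M:
    ```

    Note there is no space around the = signs; the original post's "IP_NAME = DB2 IP PRD" is worth double-checking for that reason as well.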

  • I need help with Changing my Security Questions, I have forgotten them.

    It's simple: I tried buying a Gym Buddy application and I had to answer my security questions, which I have forgotten. I made this a while ago, so I probably entered something stupid and fast; I really regret it now. When I'm coming to this...

    Hello Adrian,
    The steps in the articles below will guide you in setting up your rescue email address and resetting your security questions:
    Rescue email address and how to reset Apple ID security questions
    http://support.apple.com/kb/HT5312
    Apple ID: All about Apple ID security questions
    http://support.apple.com/kb/HT5665
    If you continue to have issues, please contact our Account Security Team as outlined in this article for assistance with resetting the security questions:
    Apple ID: Contacting Apple for help with Apple ID account security
    http://support.apple.com/kb/HT5699
    Thank you for using Apple Support Communities.
    Best,
    Sheila M.

  • I have a problem with downloading Indesign CS 6 part 1 (Mac) , it always stop downloading at 6.8MB !!!

    As the title states, I have a problem with downloading InDesign CS6 Part 1 (Mac); it always stops downloading at 6.8 MB (out of 1.2 GB). I tried restarting my Mac and using a different browser, but the result is always the same. Any help?

    Try it from this site
    Download Adobe CS6 Trials: Direct Links (no Assistant or Manager) | ProDesignTools

  • Problem with clustering with JBoss server

    Hi,
    This is a humble request to the experienced people here.
    I am new to clustering. My objective is to achieve clustering with load balancing and/or failover in JBoss. I have two JBoss servers running on two different IP addresses, which form my cluster. I could successfully perform farm (all/farm) deployment
    in my cluster.
    I believe that if clustering is enabled and one of the servers (s1) goes down, then the other (s2) will serve the requests coming to s1. Am I correct? Or is that true only in the case of "failover clustering"? If it is correct, what do I have to do to achieve it?
    As I am new to the topic, can anyone explain how a simple application (say, getting a value from a user and storing it in the database, with everything in a WAR file) can be deployed with load balancing and failover support, rather than going into clustering EJBs or anything difficult to understand?
    Kindly help me in this matter, or at least give me some hints and I will learn from there. I could not find a step-by-step procedure explaining which configuration files have to be changed (and how) to achieve this, nor books explaining it beyond the usual theoretical concepts.
    Thanking you in advance,
    with respect,
    abhirami

    Hi,
    In this scenario you can use a load balancer instead of failover clustering.
    I would suggest you set up an Apache proxy to redirect requests to the JBoss instances.
    Regards,
    kathir
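
    As a concrete starting point, the front-end proxy kathir mentions can be sketched with Apache httpd's mod_proxy_balancer (mod_jk over AJP is the more traditional choice for JBoss); the IPs, ports and context path below are placeholders:

    ```
    # httpd.conf fragment (Apache 2.4): load the required proxy modules
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so

    <Proxy "balancer://jbosscluster">
        # The two JBoss instances in the cluster
        BalancerMember "http://192.168.1.10:8080" route=node1
        BalancerMember "http://192.168.1.11:8080" route=node2
        # Sticky sessions keep a user on one node while it is up
        ProxySet stickysession=JSESSIONID
    </Proxy>

    ProxyPass        "/myapp" "balancer://jbosscluster/myapp"
    ProxyPassReverse "/myapp" "balancer://jbosscluster/myapp"
    ```

    If node1 dies, httpd marks it as failed and routes new requests to node2. Session state survives the failover only if the WAR is marked <distributable/> in web.xml and JBoss session replication is enabled.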

  • PDF prints with little boxes and question marks

    I prepared a document in Microsoft Publisher 360 and created a pdf with creative cloud.
    Looks great and prints great for me.
    I emailed it to a few people and it looks great on their screens. However, when they print it,
    it comes out with little boxes and question marks. 
    If I email them the publisher file it is fine.
    How can I get them the PDF file without all the odd markings?

    You probably used a font that is not on their system and did not embed the font in the PDF when you created it. Don't feel bad; we have all done it. Just make sure to embed your fonts in the .joboptions settings before you recreate the PDF.

  • 3 Node hyper-V 2012 R2 Failover Clustering with Storage spaces on one of the Hyper-V hosts

    Hi,
    We have 3x Dell R720s, each with 5x 600 GB 3.5" 15K SAS drives and 128 GB RAM. I was wondering if I could set up a Hyper-V 2012 R2 failover cluster with these three, with the shared storage for the CSV being provided by one of the Hyper-V hosts with Storage Spaces installed
    (is Storage Spaces supported on Hyper-V?). Or I could use a 2-node failover cluster and the third one as a standalone Hyper-V, or Server 2012 R2 with Hyper-V and Storage Spaces.
    Each server comes with a quad-port 1G and a dual-port 10G NIC, so I can dedicate the 10G NICs to iSCSI.
    I don't have a SAN or a 10G switch, so it would be a crossover cable connection between the servers.
    Most of the VMs would be non-HA. Exchange 2010, SharePoint 2010 and SQL Server 2008 R2 would be the only VMs running as HA VMs. The CSV for the Hyper-V failover cluster would be provided by Storage Spaces.

    I thought I was trying to do just that, with 8x 600 GB RAID-10 using the hardware RAID controller (on the 3rd server) and creating CSVs out of that space, so as to provide better storage performance for the HA VMs.
    1. Storage server: 8x 600 GB RAID-10 (for CSVs to house all HA VMs running on the other 2 servers). It may also run some local VMs that have very little disk I/O.
    2. Hyper-V-1: will act as the primary Hyper-V host for the 2x Exchange and database server HA VMs (the VHDXs would be stored on the storage server's CSVs on top of the 8x 600 GB RAID-10). May also run some non-HA VMs using the local 2x 600 GB in RAID-1.
    3. Hyper-V-2: will act as the Hyper-V host when the above HA VMs fail over to it (when Hyper-V-1 is down for any reason). May also run some non-HA VMs using the local 2x 600 GB in RAID-1.
    The single point of failure for the HA VMs (the non-HA VMs are non-HA, so it's OK if they are down for some time) is the storage server. The Exchange servers here are DAG peers to the Exchange servers at the head office, so in case the storage server mainboard
    goes down (disk failure is mitigated using RAID; other components such as RAM or the mainboard may still fail, but their failure rate is relatively low), the local Exchange servers would be down, but Exchange clients would still be able to do their email-related tasks using
    the HO Exchange servers.
    Also, they are under 4-hour mission-critical support, including entire server replacement within the 4-hour period.

    If you're OK with your shared storage being a single point of failure, then sure, you can proceed the way you've listed. However, you'll still route all VM-related I/O over Ethernet, which is obviously slower than running VMs from DAS (with or without a virtual SAN
    LUN-to-LUN replication layer), as DAS has higher bandwidth and lower latency. Also, with your scenario you exclude one host from your hypervisor cluster, so running VMs on a pair of hosts instead of three gives you much worse performance and resilience:
    with 1 of 3 physical hosts lost, the cluster would still be operable, while with 1 of 2 lost, all VMs booted on a single node could give you inadequate performance. So make sure your hosts are generously overprovisioned, as every single node should be able to
    handle ALL the workload in case of disaster. Good luck and happy clustering :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.
