Solaris on Dell/Compaq Multi-CPU servers?

Am interested in hearing from anyone who has used Solaris 8 on either a Compaq or Dell server with multiple CPUs. I have a client interested in such a configuration, but am concerned that, since that isn't a bundled option, I may be helping to steer them down a difficult road.
Thanks,
Ewan

Solaris 8 worked OK for us on a Compaq ProLiant DL360.
Compaq has Solaris 8 driver update diskettes
(http://www5.compaq.com/support/files/server/us/locate/38_1123.html)
that you must use during the install to access their built-in RAID
controllers.

Similar Messages

  • Multi CPU Solaris.

    Hi,
    Is there a recommended number of WebLogic instances when running on a
    multi-CPU machine with more than 8 CPUs, in this particular case 28?
    The server behaves really badly if I only run one instance: there is too
    much time spent in the threads waiting to obtain a lock for accessing a
    synchronized Map (in
    weblogic.kernel.ResettableThreadLocal.currentStorage()) and the CPUs
    are underutilized.
    After a few experiments we found out that 1 instance per 4-6 CPUs is OK,
    but I was wondering whether you could recommend something. Also, since
    there's a limit of about 2GB per JVM, if we want to utilize more memory
    there is no choice.
    Thanks,
    Deyan

    Yes, that's what my problem is.
    Thanks a lot.
    --dejan
    Dimitri Rakitine wrote:
    It is not a hack - it is a trick ;-) (that's how WLS time services work). I suggested
    it because it sounded like you identified synchronization contention caused by using
    non-WebLogic threads in your application.
    Deyan D. Bektchiev <[email protected]> wrote:
    Dimitri,
    Is this supported or is it really a hack?
    Because I could still run multiple JVMs; when we want to utilize more memory we
    are forced to do that anyway, as the limit is just below 2GB/JVM, and even with
    16GB that already automatically means 7-8 JVMs, which usually does the trick.
    Thanks,
    Deyan
    Dimitri Rakitine wrote:
    If using non-WebLogic threads is an immediate problem, you can try to use
    this hack: http://dima.dhs.org/misc/LongRunningTask.jsp (MyThread class)
    to use WebLogic Execute threads instead of creating your own. (it works
    both on 5.1 and 6.0).
    Deyan D. Bektchiev <[email protected]> wrote:
    Thanks Adam,
    Actually, it is true that most of our requests run on non-WebLogic threads.
    We have a thread pool that runs in the WLS JVM; it is created by a startup
    class and afterwards is activated by sending JMS messages to a monitoring
    thread that dispatches the requests. The clients connect to a session EJB,
    that session EJB launches the requests, and afterwards the clients get their
    results via JMS.
    We easily support multiple JVMs on one physical server, but if it weren't for
    the performance hit from accessing that collection we'd prefer to have as few
    JVMs as possible, as this would also drive down the context switches and the
    total number of threads in the system.
    All the Best,
    Deyan
    Adam Messinger wrote:
    Deyan,
    Can you tell us a bit more about your application? That map in
    ResettableThreadLocal shouldn't be hit except by non-WL threads. It is
    unusual that this would be a source of contention.
    That said, I know of many people who have been successful running multiple
    server instances on a single big machine. I think that it is a great
    solution if your application is amenable to it.
    Cheers!
    Adam
    "Deyan D. Bektchiev" <[email protected]> wrote in message
    news:[email protected]...
    Hi,
    Is there a recommended number of WebLogic instances when running on a
    multi-CPU machine with more than 8 CPUs, in this particular case 28?
    The server behaves really badly if I only run one instance: there is too
    much time spent in the threads waiting to obtain a lock for accessing a
    synchronized Map (in
    weblogic.kernel.ResettableThreadLocal.currentStorage()) and the CPUs
    are underutilized.
    After a few experiments we found out that 1 instance per 4-6 CPUs is OK,
    but I was wondering whether you could recommend something. Also, since
    there's a limit of about 2GB per JVM, if we want to utilize more memory
    there is no choice.
    Thanks,
    Deyan
    Dimitri
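    If you do end up running several server instances on one big Solaris box, one
    option worth knowing about (a hedged sketch, not something suggested in this
    thread) is to fence each JVM into its own processor set so the instances do not
    compete for the same CPUs. The CPU IDs and the start script name below are
    placeholders:
    # create a processor set from CPUs 0-3 (prints the new set id, e.g. 1); needs root
    psrset -c 0 1 2 3
    # start one server instance bound to that processor set
    psrset -e 1 ./startWebLogic.sh &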

  • Solaris 10 [11/06]: Second CPU not accessible using psradm / psrinfo / ...

    Hi all,
    Some months ago I built up a small server for some home projects with Solaris 10 x86 11/06. Everything is running fine, but Solaris does not use the second CPU that is installed.
    I have an FSC D1306 server board from an old Primergy P200 server.
    First, I saw that my system only uses one CPU:
    # psrinfo
    0       on-line   since 10/04/2007 20:13:27
    I then checked if the system recognizes the second processor socket at all:
    # prtdiag
    System Configuration: FUJITSU SIEMENS D1306
    BIOS Configuration: FUJITSU SIEMENS // Phoenix Technologies Ltd. 4.06  Rev. 1.05.1306             12/12/2003
    ==== Processor Sockets ====================================
    Version                          Location Tag
    Pentium(R) III                   CPU 0
    Pentium(R) III                   CPU 1
    [ . . . ]
    After that I wanted to see if the second processor had been detected properly by the BIOS:
    # smbios -t SMB_TYPE_PROCESSOR
    ID    SIZE TYPE
    4     61   SMB_TYPE_PROCESSOR (processor)
      Manufacturer: Intel
      Version: Pentium(R) III
      Location Tag: CPU 0
      Family: 17 (Pentium III)
      CPUID: 0x383fbff000006b1
      Type: 3 (central processor)
      Socket Upgrade: 4 (ZIF socket)
      Socket Status: Populated
      Processor Status: 1 (enabled)
      Supported Voltages: 1.7V
      External Clock Speed: Unknown
      Maximum Speed: 1400MHz
      Current Speed: 1400MHz
      L1 Cache: 6
      L2 Cache: 7
      L3 Cache: None
    ID    SIZE TYPE
    5     61   SMB_TYPE_PROCESSOR (processor)
      Manufacturer: Intel
      Version: Pentium(R) III
      Location Tag: CPU 1
      Family: 17 (Pentium III)
      CPUID: 0x383fbff000006b4
      Type: 3 (central processor)
      Socket Upgrade: 4 (ZIF socket)
      Socket Status: Populated
      Processor Status: 1 (enabled)
      Supported Voltages: 1.7V
      External Clock Speed: Unknown
      Maximum Speed: 1400MHz
      Current Speed: 1400MHz
      L1 Cache: 8
      L2 Cache: 9
      L3 Cache: None
    Well, I guess it was detected properly. But after running prtconf and prtpicl I saw that there was only one processor available to Solaris.
    Can anyone help me enable the second CPU? I would like to use it because I have some applications that would run much better on two CPUs rather than one.
    Thanks for all tips,
    C]-[aoZ
    Edited by: CHaoSlayeR on Oct 28, 2007 7:04 AM

    If memory serves, to get multi-CPU working you must enable ACPI in the BIOS. Given that this is a PIII, it is in the era where ACPI was either really buggy or was off by default (so you should also check to make sure the BIOS is the latest...)
    -r
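    For reference, a minimal check-and-enable sequence on Solaris, assuming the
    kernel actually enumerates the processor (if it never shows up in psrinfo,
    only the BIOS/ACPI route above will help):
    # list all processors the kernel knows about, with their status
    psrinfo -v
    # if CPU 1 is listed but off-line, try to bring it on-line (needs root)
    psradm -n 1
    # verify
    psrinfo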

  • SIGBUS with -Xincgc/-Xconcgc for JDK 1.5 on multi-CPU system

    Hi,
    I've been having trouble with random crashes using 1.5.0_03 up to _07 on Solaris and Linux (x86), especially on multi-CPU hosts. This is for a Web server (Tomcat 4.1.x), where CMS has been wonderful in avoiding the sometimes-horrible (multi-minute) GC pauses that I otherwise saw in spite of paranoid care with memory (and other resource use) in my code.
    I have sent in a few crash dumps via a variety of routes but none have so far surfaced in the public bug reports.
    I suspect something like a missing memory barrier or 3 in the CMS code, as I have commented against one of the extant bug reports.
    I have had to stop using -Xincgc/-Xconcgc on a 2-CPU machine, but as I have a T1000 due for delivery within the next week, I really do not want to end up using a stop-the-world GC to avoid the JVM crashing!
    Are any SIGBUS-type problems fixed in _08 or _09?
    Rgds
    Damon

    We have found, and are in the process of fixing, at least two
    long-standing bugs in the concurrent collector that may have
    affected you. (But the latter is conjecture.)
    Those bugs are still present in the public beta version of Mustang.
    So, if you are able to reproduce the crashes with Mustang,
    then please contact us at hotspotgc dash feedback at sun dot com
    so we can have you test the fixes we have made, as well
    as, if possible, get your test case so we can use it to test
    the parallel/concurrent collector more thoroughly.
    Refer to CR 6429181 and CR 6431128, and include a pointer
    to this thread. At least two of the fixes we have in mind are, however,
    orthogonal to the use of ParNew, and the bugs should show up even
    if you turn off UseParNewGC.
    By the way, a full complement of support options is available
    at: developer.sun.com/services
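    For anyone trying to reproduce this, a hedged illustration of the kind of launch
    lines involved; the heap size and application jar are placeholders, not settings
    taken from this thread:
    # CMS, the collector the poster enables via -Xincgc/-Xconcgc
    java -server -Xmx512m -XX:+UseConcMarkSweepGC -jar app.jar
    # the same, but with the parallel young-generation collector disabled,
    # to check whether the crash is independent of ParNew
    java -server -Xmx512m -XX:+UseConcMarkSweepGC -XX:-UseParNewGC -jar app.jar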

  • Clustering on multi cpu box without multicast

    Just wondering whether a WebLogic cluster can be configured to not use IP multicast
    if the clustered servers are running on the same multi-CPU box, as I would have
    thought that in-memory communication would be a faster option in this case.
    This is again because we were told that WebLogic doesn't scale well for more than 2
    CPUs per box.
    If we are using a 4-6 CPU box then in this case it makes more sense to have 2/3
    instances per server communicating via memory and not multicast.

    > Just wondering whether a WebLogic cluster can be configured to not use IP multicast
    > if the clustered servers are running on the same multi-CPU box, as I would have
    > thought that in-memory communication would be a faster option in this case.
    Multicast with a TTL of zero is basically implemented by the OS as a shared
    memory approach, so there is usually very little difference in performance.
    > This is again because we were told that WebLogic doesn't scale well for more than 2
    > CPUs per box.
    > If we are using a 4-6 CPU box then in this case it makes more sense to have 2/3
    > instances per server communicating via memory and not multicast.
    The scalability on various numbers of CPUs differs greatly. It is vastly
    improved on more recent versions of WebLogic with more recent versions of
    the JVM. Back on JVM 1.2 with WL 5.x and earlier, you would have to run
    multiple instances in order to "soak" a box. The best way to determine if
    this is still the case is to use a load test that (with everything else
    equal) will soak the box with multiple JVMs but not with a single one.
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "r g" <[email protected]> wrote in message
    news:3f181388$[email protected]..

  • Interrupt in a multi-cpu machine

    Hi,
    I want to know how interrupts are handled in a multiprocessor environment. Can anyone tell me, or point me to a doc to check?
    Will interrupts be sent to the different CPUs equally, or are they given to one default CPU?
    Thanks!
    Yong

    I've seen, however, that most operating systems like processor affinity and
    won't go out of their way to spread threads across CPUs due to the CPU cache
    synchronization expense. Running two WLS instances on a dual-processor
    machine will give you more throughput.
    Mike Reiche <[email protected]> wrote in message
    news:3bcf1d58$[email protected]..
    > You don't need two instances of WL to take advantage of two CPUs.
    >
    > Each WL instance in a cluster requires its own IP address. You can refer
    > to them by their IP address.
    >
    > Mike
    >
    > "jyothi" <[email protected]> wrote:
    > >hi,
    > >if we need to plan a cluster on a multi-cpu machine..how does
    > >the installation of wls go..do we need to install two instances of WLS
    > >on the machine or do we have to install only one instance??
    > >
    > >also wrt wls6.0, through the admin console running on another machine..we
    > >create new entries for machines first and then we create new servers..how
    > >do we associate the two servers running on the same multi-cpu machine
    > >with the machine name??
    > >
    > >thanks
    > >jyothi
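    On the original question about interrupt distribution: a quick, hedged way to see
    how Solaris actually spreads interrupts across CPUs is to watch the per-CPU
    interrupt columns in mpstat:
    # per-CPU statistics every 5 seconds; the intr/ithr columns show how many
    # interrupts (and interrupts handled as threads) each CPU is taking
    mpstat 5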
              

  • Performance problem - mutexes with multi-cpu machine

    Hi!
    My company is developing a multi-threaded server program for a
    multi-CPU machine which communicates with an Oracle database on a separate
    machine. We use Solaris 7, Workshop 5 CC and the pthreads API.
    We tested our program on a 4-CPU E4500 with a 2-CPU E420 Oracle server.
    We upgraded the E4500 from 4 to 8 CPUs and to our surprise, instead of a
    performance improvement we got performance degradation (the 8-CPU run is
    about 5% slower than the 4-CPU run).
    After a long investigation we found out that under stress load, most of the
    time our program performs lwp_mutex-related operations.
    With truss -c, the statistics showed 160 seconds in mutex operations and
    about 2 seconds in read/write in the Oracle client-side library.
    Here is output of truss for example:
    19989 29075/5: 374.0468 0.0080 lwp_mutex_lock(0x7F2F3F60) = 0
    19990 29075/31: 374.0466 0.0006 lwp_mutex_wakeup(0x7F2F3F60) = 0
    19991 29075/5: 374.0474 0.0006 lwp_mutex_wakeup(0x7F2F3F60) = 0
    19992 29075/30: 374.0474 0.0071 lwp_mutex_lock(0x7F2F3F60) = 0
    19993 29075/30: 374.0484 0.0010 lwp_mutex_wakeup(0x7F2F3F60) = 0
    19994 29075/31: 374.0483 0.0017 lwp_mutex_lock(0x7F2F3F60) = 0
    19995 29075/5: 374.0492 0.0018 lwp_mutex_lock(0x7F2F3F60) = 0
    19996 29075/31: 374.0491 0.0008 lwp_mutex_wakeup(0x7F2F3F60) = 0
    19997 29075/5: 374.0499 0.0007 lwp_mutex_wakeup(0x7F2F3F60) = 0
    19998 29075/30: 374.0499 0.0015 lwp_mutex_lock(0x7F2F3F60) = 0
    19999 29075/5: 374.0507 0.0008 lwp_mutex_lock(0x7F2F3F60) = 0
    20000 29075/30: 374.0507 0.0008 lwp_mutex_wakeup(0x7F2F3F60) = 0
    20001 29075/5: 374.0535 0.0028 lwp_mutex_wakeup(0x7F2F3F60) = 0
    20002 29075/30: 374.0537 0.0030 lwp_mutex_wakeup(0x7F2F3F60) = 0
    20003 29075/31: 374.0537 0.0046 lwp_mutex_lock(0x7F2F3F60) = 0
    20004 29075/5: 374.0547 0.0012 lwp_mutex_lock(0x7F2F3F60) = 0
    20005 29075/31: 374.0546 0.0009 lwp_mutex_wakeup(0x7F2F3F60) = 0
    20006 29075/5: 374.0554 0.0007 lwp_mutex_wakeup(0x7F2F3F60) = 0
    20007 29075/30: 374.0557 0.0020 lwp_mutex_lock(0x7F2F3F60) = 0
    20008 29075/31: 374.0555 0.0009 lwp_mutex_wakeup(0x7F2F3F60) = 0
    20009 29075/5: 374.0564 0.0010 lwp_mutex_lock(0x7F2F3F60) = 0
    20010 29075/30: 374.0564 0.0007 lwp_mutex_wakeup(0x7F2F3F60) = 0
    20011 29075/5: 374.0572 0.0008 lwp_mutex_wakeup(0x7F2F3F60) = 0
    20012 29075/28: 374.0574 0.0170 lwp_mutex_lock(0x7F2F3F60) = 0
    20013 29075/31: 374.0575 0.0020 lwp_mutex_wakeup(0x7F2F3F60) = 0
    We have several questions:
    1. We always get the same mutex address, 0x7F2F3F60, even with different
    binaries. It looks as if all the threads wait on one magic
    mutex. Why?
    2. We read in an article on unixinsider.com that on Solaris, when a mutex is
    unlocked, all the threads waiting on this mutex are woken up. It also looks that
    way from the truss output. What is the solution for this problem? unixinsider.com
    recommends a native Solaris read-write lock with all threads as writers.
    Is there any other solution? Should it improve performance?
    3. We heard that Solaris 8 has a better pthreads implementation using a
    one-level threading model, where threads are one-to-one with
    LWPs, rather than the two-level model that is used in the
    standard libthread implementation, where user-level threads are
    multiplexed over possibly fewer LWPs. Do mutexes in this library
    behave in the "Solaris 7" way, or does it put the thread to sleep when it unlocks
    the mutex? Is it possible to use this library on Solaris 7?
    4. Is there a plug-in solution like mtmalloc or hoard for new/delete that changes
    the pthread mutexes?
    Thank you in advance for your help,
    Alexander Indenbaum

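    On question 4: mtmalloc is a drop-in malloc replacement, so one low-effort
    experiment (a hedged sketch, not something suggested in this thread) is to
    interpose it without relinking; since operator new/delete normally sit on top
    of malloc/free, this covers C++ allocations too. "./server" is a placeholder
    for the actual binary:
    # interpose the multi-threaded allocator for one run of the server
    LD_PRELOAD=/usr/lib/libmtmalloc.so.1 ./server
    Note that this only helps if the contended lock belongs to the allocator; a
    mutex inside the Oracle client library would be unaffected.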

  • Solaris 8 on Compaq laptop

    Hello,
    Could you share your experience with Solaris 8 on Compaq laptops?
    I am going to buy an Armada M700 - are there any problems with Solaris on it?
    What network card do you recommend to use ?
    Thanks,
    -- Kevin

    Take a look at the HCL...this one is valid for S9 also.
    http://soldc.sun.com/support/drivers/hcl/8/202/toc.html
    Lee

  • Solaris 8 on Compaq Armada E500

    Has anyone installed Solaris 8 on a Compaq Armada E500? I am having problems with the graphics card, an ATI RAGE Mobility-P AGP card.
    Are any other graphics cards compatible with this system?

    The first thing to do with a Compaq is make sure the latest and greatest ROMPaq has been applied to the board and all devices. Next, download the latest and greatest Solaris DCA diskette and use it for the install boot. I am making the assumption you are using the CD this time around. If that doesn't work, you will have to remove all but the bare minimum of hardware, try the install, and then add each device one by one. The DCA diskettes are updated regularly, so the latest one may solve the issue.
    Lee

  • Creating database on multi cpu machines

    Hello:
    I want to issue the CREATE DATABASE statement on a machine that has multiple CPUs.
    I would like to know what parameters, if any, need to be specified in the init.ora file, whether anything specific needs to be done in the CREATE DATABASE statement, and whether there are other considerations.
    Thanks,

    You can utilize these parameters if you have multi-CPU servers:
    parallel_max_servers integer 5
    parallel_min_percent integer 0
    parallel_min_servers integer 0
    parallel_server boolean FALSE
    parallel_server_instances integer 1
    parallel_threads_per_cpu integer 2
    Regards,
    http://askyogesh.com
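    A small, hedged illustration of how you might inspect these settings and what the
    corresponding init.ora lines look like; the values shown are placeholders, not
    recommendations:
    # show the current parallel-query settings
    echo 'show parameter parallel' | sqlplus -s '/ as sysdba'
    # example init.ora entries (illustrative values only):
    #   parallel_max_servers = 8
    #   parallel_threads_per_cpu = 2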

  • Exchange 2013 2 Node Multi role Servers with DAG issues connecting OWA users

    Hi
    I am on a job at the moment where I have 2 Exchange 2013 multi-role servers. Both are CAS and Mailbox servers. I have 2 databases, one called MBXDB01 and the other MBXDB02. MBXDB01 is on Server 1 and MBXDB02 on Server 2.
    I have created a DAG and included both databases. Active copy of MBXDB01 is on Server 1 and MBXDB02 on Server 2
    I have configured the external and internal URLs of all virtual directories on both servers to be the same publicly accessible FQDN. I have assigned the trusted cert to IIS and all other services on both servers. I have modified internal split-brain DNS
    to point the FQDN used to both Server 1 and Server 2 IP addresses with a TTL of 30 seconds, and likewise for autodiscover.
    All Exchange connectivity tests come back green and good from external, and from Outlook Test-Autoconfiguration the autodiscover information is displayed correctly.
    The problem I am having is that when a user accesses the FQDN from a web browser, i.e. owa.domain.com/owa, they get the login screen. This could be from either Server 1 or 2 depending on DNS round robin. In this example let's say the user is accessing OWA on SERVER
    1 and their mailbox lives on SERVER 2.
    In this scenario, when they log in they get an ":( Oops. Something went wrong" page, and the exception is this:
    A problem occurred while you were trying to use your mailbox.
    X-OWA-Error: Microsoft.Exchange.Data.Storage.UserHasNoMailboxException
    X-OWA-Version: 15.0.847.32
    X-FEServer: SERVER1
    X-BEServer: SERVER2
    The URL provides a little more info
    /auth/errorfe.aspx?httpCode=500&msg=861904327&owaError=Microsoft.Exchange.Data.Storage.UserHasNoMailboxException&owaVer=15.0.847.32&be=SERVER2&ts=130398071193518373
    However, if the user accesses OWA via the private FQDN of SERVER 2 i.e https://SERVER2/owa they are able to access their mailbox.
    It is driving me nuts.
    Has anyone got any suggestions? I am tearing my hair out here
    Thanks
    One very frustrated field engineer :)

    Hi,
    To narrow down the cause, I recommend the following troubleshooting:
    1. Please double-check the DNS entries for the host name used in the OWA URL.
    2. In a user's local hosts file, add an entry so that the host name used in the OWA URL points to Server 2's IP address, then try to log in to OWA again.
    3. Check your event log and see if there are any errors about OWA.
    If you have any questions, please feel free to let me know.
    Thanks,
    Angela Shi
    TechNet Community Support
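    As a concrete (hypothetical) illustration of step 2 on a Windows test client, with
    10.0.0.2 standing in for Server 2's real address:
    # append to C:\Windows\System32\drivers\etc\hosts on the test client:
    #   10.0.0.2    owa.domain.com
    # then confirm which address the client now resolves:
    nslookup owa.domain.com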

  • Site Resilience and Multi Role Servers without Load Balancers

    I am designing an Exchange 2013 Site Resilient Scenario with two Multi Role Exchange Servers per Site. There are Two Sites. 
    This is a small organization, 400 users and all must be virtualized.  A single forest, Two Sites 50/50
    users each Site. There is  a third Site for Witness Server collocation. 
    Our goal is to have redundancy per individual Site and Site Resiliency. Multi Role or dedicated CAS is my question. I don't have a load balancer; budget is a key factor. If dedicated CAS were chosen, we would have eight Exchange servers total.
    My dream design is precisely two multi Role Servers per Site, four total. One DAG stretched across both sites. One third site for witness folder collocation.
    Zero load balancers; my understanding is that 'Single Global Namespace Support', round robin and one stretched DAG in Exchange 2013 will do the magic and keep users connected even when the local datacenter is down (further manual DNS changes would
    obviously be necessary after the crash to avoid sporadic connectivity loss).
    My question is:
    1. Will all my goals be accomplished by deploying only four Exchange Multi Role servers (two per Site) in these locations? Or would having dedicated CAS make a difference?
    2. Even though I don't want a load balancer (NLB or a virtual/hardware load balancer), is this a must?
    3. In this scenario I intend to use nothing but Standard versions of Exchange 2013 and Standard versions of Windows 2012/R2. Is there any reason why having Enterprise Edition would improve or validate my design?
    Thanks for your help.
    This is the scenario 

    Ya Standard edition now does clustering.  The only difference with datacenter is the number of VMs you are licensed for under Hyper-V.
    As for load balancing/HA, we set up a generic cluster resource ip/dns name and use that to point exchange clients to.  Technically this is not a supported scenario from Microsoft, but we have been using it since 2010 and know of a ton of other people
    doing it as well.  It's kind of the unofficial poor man's HA.  No load balancing, but with our load (and yours) it doesn't matter that only one server at a time is handling CAS.  Works great and automatically fails over if you reboot the active
    node.  You can of course still split the active databases out across multiple nodes.

  • Repeating Job Can Hang In "Started" Status On Multi-Cpu Windows Machine

    Dear All,
    I am stuck with a known bug, Bug 3092358, "Repeating Job Can Hang In "Started" Status On Multi-Cpu Windows Machine". The workaround is mentioned in Note 307448.1 on MetaLink.
    In the above doc, for Fix 3, another option that worked is:
    upgrade to EM 9.2.0.6 and apply the Apr05 CPU.
    I searched MetaLink for the patch to upgrade EM from 9.2.0.1 to 9.2.0.6 but was not able to find it. Can someone tell me where I can get this patch?
    OS: Windows 2003 Server
    DB: 9.2.0.1
    OWB: 9.2
    Please do let me know if there are any other workarounds.
    Thanks for your help.
    Rgds,
    Satya.

    While I dunno the specifics of NT threads, one way to spread one thread across multiple processors is by simply round-robining the thread between the processors over time, i.e. thread A spends 0.5 seconds executing on CPU 1, for the next 0.5 s the thread is moved over to CPU 2 and executed there, then it is moved back to CPU 1 and so on...
    I think the Convex SPP used this model, the motivation perhaps being to spread load evenly among all CPUs. It made sense in a multi-user multi-CPU system with a really fast bus.
    cheers
    -Ragu
    You know you've been spending too much time on the computer when your friend misdates a check, and you suggest adding a "++" to fix it.

  • The process of Apache can not be killed on multi-CPU on Solaris 9

    Hi All,
    We run kill [process number] and kill -p [process number] on Solaris 9 and the Apache process stays alive for
    more than 10 seconds. A user then sees a message like "Cannot stop Administration Server. Please kill process $hpid manually and
    try again."
    We are using 1.3.19-rev2 of Apache.
    Here is the extraction from our shell script.
    pidfile="./WebServer/logs/httpd.pid"
    if ($ps | grep "$hpid" >/dev/null) ; then kill $hpid ; sleep 10 ; else rm -f ./WebServer/logs/httpd.pid ; fi
    # try again with more force
    if ($ps | grep "$hpid" >/dev/null) ; then kill -9 $hpid ; sleep 10 ; else rm -f ./WebServer/logs/httpd.pid ; fi
    # give up.
    if ($ps | grep "$hpid" >/dev/null)
    then
    echo
    echo Cannot stop Administration Server. Please kill process $hpid manually and try again.
    We can reproduce the issue on a sun4u Sun Fire V490 (2 CPUs, 8GB RAM), but we cannot reproduce it on a Sun Blade (1 CPU).
    This kind of error only happens occasionally on the Sun Fire V490, but we never see the phenomenon on the Sun Blade.
    regards,

    Does
    # pkill httpd
    # pkill -2 httpd
    # pkill -9 httpd
    work ok?
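    If the underlying problem is just that httpd sometimes needs longer than the fixed
    "sleep 10" to exit, a hedged alternative is to poll for the process instead of
    sleeping a fixed time (sketch only; it reuses $hpid from the script above):
    kill "$hpid"
    i=0
    # wait up to 30 seconds for the process to actually go away
    while kill -0 "$hpid" 2>/dev/null && [ "$i" -lt 30 ]; do
        sleep 1
        i=`expr $i + 1`
    done
    # escalate only if it is still there
    kill -0 "$hpid" 2>/dev/null && kill -9 "$hpid"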

  • SAP Performance on Sun T2000 multi-core servers.

    Hi guys,
    On some of the newer Sun servers, the performance isn't quite as good as you would expect.
    When you are running a specific job, let's say patching using SAINT for instance, the process works as expected, but the disp+work process seems to be allocated to just one of the server's CPUs rather than being distributed across the server's multiple cores, and doesn't seem to be much, if any, quicker.
    I'm sure some of our zone settings in S10 must be wrong etc., but we have followed the documentation from SAP precisely.
    Am I missing some Solaris functionality, or do we have to tell SAP to use multiple cores?
    Just interested in other people's experiences on the newer Sun servers.
    Regards
    James

    An ABAP work process is single-threaded. Basically that means that the speed at which any ABAP program runs is, CPU-wise, dependent only on the speed of a single CPU.
    An ABAP system can't leverage the multi-core, multi-thread architecture of the new processors within a single process. You will see, for example, a significant performance increase if you install a Java engine, since those engines have multiple concurrent threads running and can therefore be processed in parallel, as opposed to the ABAP part.
    What you can do to speed up an import is to set the parameter
    PARALLEL
    in the STMS configuration. Set the number to the number of cores you have available. This will increase the import speed, since multiple R3trans processes are forked. However, during the XPRA still only one work process will be used.
    Markus
