Sun Fire 1280 capacity planning

Hello, I'm doing capacity planning for Sun Fire 1280 hardware, and I need to clarify a few blind spots in my knowledge :-)
I'm not able to find any information on the bus speed (max transfer rate) for this specific hardware. Can anybody help with this?
Also, looking at the stats, I can see that the I/O wait reported for the CPU and for the disk is the same.
Can I safely assume that the bus allows a single "transaction" at a time (contention)? To be read: if the CPU is transferring data to RAM, another simultaneous operation (e.g. USB to SCSI) is not possible.
Of course we're talking about microseconds, but just to clarify.
Thanks in advance!
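For reference, the per-CPU wait and per-disk activity can be sampled separately with the standard Solaris tools, which helps tell scheduler wait apart from a genuinely saturated interconnect or disk (the interval and count below are arbitrary):

# mpstat 5 3        # per-CPU statistics; the 'wt' column is the wait-for-I/O figure
# iostat -xn 5 3    # per-device statistics; 'wsvc_t'/'asvc_t' show queue and service times

If the disks show long service times while the CPUs mostly sit in wait, the bottleneck is more likely the disks than the system bus.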

Hanoi,
The x4500 has numerous disks in it. Adding up the sizes of the disks yields 48TB of raw disk. If you were to format these disks and install an operating system, you'd still have around 48TB of disk.
However, you may want to create a RAID array with the disks, either to improve performance (by striping the data) or to improve reliability (by adding redundancy information that allows you to recover from a single or dual disk failure).
Typically, folks will use RAID 5, which both stripes the data and reserves space for parity. With this level, you would subtract the size of one or more disks (depending on the protection level), since that capacity is held back to contain the parity information.
The bottom line is that you should have much more than 30TB available if you use most RAID levels that do not include mirroring of disks (where each disk has another whole disk copying its data, reducing your usable capacity by half).
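As a rough worked example, assuming the 48TB raw figure comes from 48 x 1TB drives (adjust for the actual drive size and layout):

48 x 1TB raw                                   = 48TB
RAID 5 in six 8-disk groups (1 parity each)    = 48TB - 6TB  = ~42TB usable
Dual parity (RAID 6 style) in the same groups  = 48TB - 12TB = ~36TB usable
Mirroring (RAID 1/10)                          = 48TB / 2    = 24TB usable

Hot spares and filesystem overhead will reduce these numbers a little further.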

Similar Messages

  • Sun Fire V250 Error while booting

    Dear all,
    I have one Sun Fire V250 which was down for a few days, and when I booted it now it gives the following messages. Is it a problem with the memory modules? It has 2 DIMMs of 1 GB each.
    Thanks in advance
    sc>
    sc> poweron
    SC Alert: Host System has Reset
    sc> con
    Invalid command.  Type 'help' for list of commands.
    sc> console
    Enter #. to return to ALOM.
    1>Init CPU
    1>     UltraSPARC[TM] IIIi, Version 2.4
    1>DMMU
    1>DMMU TLB DATA RAM Access
    1>DMMU TLB TAGS Access
    1>IMMU Registers Access
    1>IMMU TLB DATA RAM Access
    1>IMMU TLB TAGS Access
    1>Init mmu regs
    1>Setup L2 Cache
    1>L2 Cache Control = 00000000.00f04400
    1>     Size = 00000000.00100000...
    1>Scrub and Setup L2 Cache
    1>Setup and Enable DMMU
    1>Setup DMMU Miss Handler
    1>Test and Init Temp Mailbox
    1>CPU Tick and Tick Compare Registers Test
    1>CPU Stick and Stick Compare Registers Test
    1>Setup Int Handlers
    0>Setup Int Handlers
    0>Send Int CPU 1
    1>Send Int to Master CPU
    0>Initialize I2C Controller
    0>MB:     Part-Dash-Rev#:  3753130-02-50     Serial#:  020322
    0>Set CPU/System Speed
    0>MCR Timing index = 00000000.00000002
    0>..
    0>Send MC Timing CPU 1
    0>Probe Dimms
    1>Probe Dimms
    1>Init Mem Controller Regs
    0>Init Mem Controller Regs
    1>Set JBUS config reg
    0>Set JBUS config reg
    0>IO-Bridge unit 0 init test
    0>IO-Bridge unit 1 init test
    0>Do PLL reset
    0>Setting timing to 8:1 10:1, system frequency 160 MHz, CPU frequency 1280 MHz
    0>Soft Power-on RST thru SW
    0>CPUs present in system: 0 1
    0>
    0>Resume selftest...
    0>Init SB
    0>Init CPU
    0>Init mmu regs
    0>Setup L2 Cache
    0>L2 Cache Control = 00000000.00f04400
    0>     Size = 00000000.00100000...
    0>Setup and Enable DMMU
    0>Setup DMMU Miss Handler
    0>Timing is 8:1 10:1, sys 159 MHz, CPU 1279 MHz, mem 127 MHz.
    0>     UltraSPARC[TM] IIIi, Version 2.4
    1>Init CPU
    1>     UltraSPARC[TM] IIIi, Version 2.4
    1>Init mmu regs
    1>Setup L2 Cache
    1>L2 Cache Control = 00000000.00f04400
    1>     Size = 00000000.00100000...
    1>Setup and Enable DMMU
    1>Setup DMMU Miss Handler
    1>Timing is 8:1 10:1, sys 159 MHz, CPU 1279 MHz, mem 127 MHz.
    0>Initialize I2C Controller
    1>Init Mem Controller Sequence
    0>Init Mem Controller Sequence
    0>IO-Bridge unit 0 init test
    0>IO-Bridge unit 1 init test
    0>Select Bank Config
    0>Probe and Setup Memory
    0>INFO:     1024MB Bank 0, Dimm Type X4
    0>INFO:     1024MB Bank 1, Dimm Type X4
    0>INFO:     No memory detected in Bank 2
    0>INFO:     No memory detected in Bank 3
    0>
    0>Data Bitwalk on Master
    0>     Test Bank 0.
    0>
    0>ERROR: TEST = Data Bitwalk on Master
    0>H/W under test = CPU0: Bank0 DIMM0, Motherboard
    0>Repair Instructions: Replace items in order listed by 'H/W under test' above
    0>MSG = Pin 79 failed on CPU0: Bank0 DIMM0, Motherboard
    0>END_ERROR
    0>     Test Bank 1.
    0>
    0>ERROR: TEST = Data Bitwalk on Master
    0>H/W under test = CPU0: Bank1 DIMM0, Motherboard
    0>Repair Instructions: Replace items in order listed by 'H/W under test' above
    0>MSG = Pin 79 failed on CPU0: Bank1 DIMM0, Motherboard
    0>END_ERROR
    0>
    0>ERROR: TEST = Data Bitwalk on Master
    0>H/W under test = CPU, Memory, Motherboard
    0>Repair Instructions: Replace items in order listed by 'H/W under test' above
    0>MSG =
          *** Test Failed!! ***
    0>END_ERROR
    0>
    0>ERROR: TEST = Data Bitwalk on Master
    0>H/W under test = CPU, Memory, Motherboard
    0>Repair Instructions: Replace items in order listed by 'H/W under test' above
    0>MSG = No good memory available on master CPU 0, rolling over to new Master.
    0>END_ERROR
    1>Soft Power-on RST thru SW
    1>CPUs present in system: 0 1
    1>OBP->POST Call with %o0=00000000.05002000.
    1>Diag level set to MIN.
    1>MFG scrpt mode set to NONE
    1>I/O port set to TTYA.
    1>
    1>Start selftest...
    1>Init SB
    1>Init CPU
    1>DMMU
    1>DMMU TLB DATA RAM Access
    1>DMMU TLB TAGS Access
    1>IMMU Registers Access
    1>IMMU TLB DATA RAM Access
    1>IMMU TLB TAGS Access
    1>Init mmu regs
    1>Setup L2 Cache
    1>L2 Cache Control = 00000000.00f04400
    1>     Size = 00000000.00100000...
    1>Scrub and Setup L2 Cache
    1>Setup and Enable DMMU
    1>Setup DMMU Miss Handler
    1>Test and Init Temp Mailbox
    1>CPU Tick and Tick Compare Registers Test
    1>CPU Stick and Stick Compare Registers Test
    1>Set Timing
    1>     UltraSPARC[TM] IIIi, Version 2.4
    0>Init CPU
    0>     UltraSPARC[TM] IIIi, Version 2.4
    0>DMMU
    0>DMMU TLB DATA RAM Access
    0>DMMU TLB TAGS Access
    0>IMMU Registers Access
    0>IMMU TLB DATA RAM Access
    0>IMMU TLB TAGS Access
    0>Init mmu regs
    0>Setup L2 Cache
    0>L2 Cache Control = 00000000.00f04400
    0>     Size = 00000000.00100000...
    0>Scrub and Setup L2 Cache
    0>Setup and Enable DMMU
    0>Setup DMMU Miss Handler
    0>Test and Init Temp Mailbox
    0>CPU Tick and Tick Compare Registers Test
    0>CPU Stick and Stick Compare Registers Test
    0>Setup Int Handlers
    1>Setup Int Handlers
    1>Send Int CPU 0
    0>Send Int to Master CPU
    1>Initialize I2C Controller
    1>MB:     Part-Dash-Rev#:  3753130-02-50     Serial#:  020322
    1>Set CPU/System Speed
    1>MCR Timing index = 00000000.00000002
    1>..
    1>Send MC Timing CPU 0
    1>Probe Dimms
    0>Probe Dimms
    0>Init Mem Controller Regs
    1>Init Mem Controller Regs
    0>Set JBUS config reg
    1>Set JBUS config reg
    1>IO-Bridge unit 0 init test
    1>IO-Bridge unit 1 init test
    1>Do PLL reset
    1>Setting timing to 8:1 10:1, system frequency 160 MHz, CPU frequency 1280 MHz
    1>Soft Power-on RST thru SW
    1>CPUs present in system: 0 1
    1>
    1>Resume selftest...
    1>Init SB
    1>Init CPU
    1>Init mmu regs
    1>Setup L2 Cache
    1>L2 Cache Control = 00000000.00f04400
    1>     Size = 00000000.00100000...
    1>Setup and Enable DMMU
    1>Setup DMMU Miss Handler
    1>Timing is 8:1 10:1, sys 159 MHz, CPU 1279 MHz, mem 127 MHz.
    1>     UltraSPARC[TM] IIIi, Version 2.4
    0>Init CPU
    0>     UltraSPARC[TM] IIIi, Version 2.4
    0>Init mmu regs
    0>Setup L2 Cache
    0>L2 Cache Control = 00000000.00f04400
    0>     Size = 00000000.00100000...
    0>Setup and Enable DMMU
    0>Setup DMMU Miss Handler
    0>Timing is 8:1 10:1, sys 159 MHz, CPU 1279 MHz, mem 127 MHz.
    1>Initialize I2C Controller
    0>Init Mem Controller Sequence
    1>Init Mem Controller Sequence
    1>IO-Bridge unit 0 init test
    1>IO-Bridge unit 1 init test
    1>Select Bank Config
    1>Probe and Setup Memory
    1>INFO: No memory on cpu 1
    1>
    1>ERROR: TEST = Probe and Setup Memory
    1>H/W under test = CPU1 Memory
    1>Repair Instructions: Replace items in order listed by 'H/W under test' above
    1>MSG = No good memory available on master CPU 1, rolling over to new Master.
    1>END_ERROR
    1>ERROR:
    1>     POST toplevel status has the following failures:
    1>          CPU0: Bank0 DIMM0, Motherboard
    1>          CPU0: Bank1 DIMM0, Motherboard
    1>END_ERROR
    1>
    1>ERROR:     No good CPUs OR CPUs with good memory left.  Calling debug menu.
    1>     0     Peek/Poke interface
    1>     1     Dump CPU Regs
    1>     2     Dump Mem Controller Regs
    1>     3     Dump Valid DMMU entries
    1>     4     Dump IMMU entries
    1>     5     Dump Mailbox
    1>     6     Dump IO-Bridge regs unit 0
    1>     7     Dump IO-Bridge regs unit 1
    1>     8     Allow other CPUs to print
    1>     9     Do soft reset
    1>     ?     Help

    I have just experienced this issue while patching Solaris 10 on a V240 system using Live Upgrade.
    The issue was resolved by performing a failsafe boot from the console ("boot -F failsafe"), splitting the (new/alt) root mirror, mounting the primary half, running fsck several times (as filesystem issues were apparent), then mounting the slice, installing the new bootblock using the version of installboot from the new root, and rebuilding the boot archive manually:
    # mount /dev/dsk/c1t0d0s4 /a
    # /a/usr/sbin/installboot /a/usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s4
    # bootadm update-archive -R /a
    I note that the OpenSolaris bug linked to above lists four criteria for reproducing this problem:
    1. Install u6 SUNWCreq (no mkisofs, so we build a UFS boot archive)
    2. Limit /tmp to 512M (forcing the UFS build to happen in /var/run)
    3. Have a separate /var (bootadm.c only lofs nosub mounts / when creating the alt root for the DAP patching build of the boot archive)
    4. Install 139555-08
    The system I was patching met the last three of these, but not the first one (pkginfo does not show SUNWCreq as being installed). However, this was not an OpenSolaris system, which may make a difference.
    Rob
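    As a quick sanity check after rebuilding, assuming the alternate root is still mounted at /a as in the commands above, listing the archive should complete without complaints about a stale or missing archive:
    # bootadm list-archive -R /a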

  • Upgrade of Sun Fire 6800 to Sun Fire E6900 with UltraSPARC IV+ Processors

    We want to upgrade our existing two Sun Fire 6800 servers to Sun Fire E6900 with UltraSPARC IV+ processor boards and memory only. We want to do this through the Sun UAP uniboard promotion.
    So we want to know the compatibility and any additional requirements before going ahead with this activity.
    Tech spec details are mentioned below.
    Existing Sun Fire 6800 server specs:
    Processor : sparcv9 UltraSPARC-III+
    Model : Sun Fire 6800
    Processor Speed : 900 MHz
    RAM : 48 GB
    Processor Count : 24
    Will be upgraded to Sun Fire E6900:
    Upgrade to E6900, 12 x 1.8 GHz US IV+ CPUs, 96GB memory.
    I want to confirm the queries below before placing an order with Sun Microsystems:
    1) Is there a requirement for a box change (chassis), or will Sun Microsystems provide it free of cost with this promotional programme?
    2) Is there any need to add more power supplies for this upgrade, or will additional PSUs be part of this kit?
    3) Is there anything left out in our discussion or planning which needs to be added or upgraded to make the solution compatible?
    Please add your valuable inputs for this upgrade. A quick response will be highly appreciated.
    Thanks in advance ...
    Rajeev Kumar
    email : [email protected]

    Considering the domain name in your email address...
    This is far from the first SF6800 that your company administers which has been through this upgrade.
    Having said that, you need to keep in mind that these forums are NOT a venue for tech support from Sun. They are hosted so that the user community can have general conversations, swap stories, and more.
    So, as for your questions:
    (1) Contact Sun and ask them for an accurate answer (and no, the outer chassis does not get swapped).
    (2) Contact Sun and ask them for an accurate answer (I seem to remember that you must buy whatever PSUs, system controllers, and whatever else may be necessary, in addition to the uniboards).
    (3) I suggest you discuss this plan with your peers at other EDS sites that have already done this. This promotion is not anything new; it seems to be offered every few months or so, since the E6900s first arrived in 2004.
    Expect to do a ton of firmware and OS upgrades so that the system can handle the US-IV+ architecture of the new hardware. That will need to be done before the new hardware is inserted into the chassis.
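    As a rough pre-check before the uniboard swap (standard Solaris commands; the output will of course vary per domain):
    # uname -a            # kernel patch level and architecture
    # cat /etc/release    # Solaris release and update level
    # prtdiag -v          # boards, CPUs and memory as the OS currently sees them
    Compare those against the minimum OS and firmware levels listed in the UltraSPARC IV+ uniboard upgrade documentation before ordering.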

  • How can I shut down a Sun Fire 4800 machine?

    Dear IT Experts,
    I maintain a Sun server. It is a Sun Fire F4800. To log in, I must use HyperTerminal, a serial cable, and a laptop.
    I have a plan to shut down this machine for about 30 minutes for a hardware maintenance check.
    But I cannot shut it down using the command "power-off" at the "ok" prompt (before that, I used "init 0" to get to the "ok" prompt).
    I got this message :
    {13} ok power-off
    ERROR: do_interpret: no such symbol "power-off"
    ERROR: undefined word
    Could anybody tell me what steps I should take to shut down this machine properly, please?
    Here is the machine description :
    - F4800 Server Base - Factory Rack
    - CPU / MEM BD BNDL - 2CPU @750/6GMEM
    - Red. Kit for Sun Fire 4800
    - PCI GIGABIT Ethernet 2.0 Card
    - PCI I/O Assy for F4800-6800
    - Media Tray HD2, DVD1, TP1
    - OPT INT PCI 10/100 Base NIC
    - Continental Europe PWR CRD KIT
    - 15 m Fiber Cable (multi-mode) and connector
    - JNI 64 Bit PCI adapter Non-OFC
    - Sun Fire Cabinet
    - Solaris 8 Std English - Only
    - OPT PWR Cord for ENTERPR. (INT)
    - SUN Trunking Software v 1.2.1
    - WS SB_100PGX64 128/15GB/CDROM
    - NORTHAMERICAN COUNTRY KIT
    Please help this newbie person.
    Thanks for any response
    Regards,
    Ferianto

    Dear All,
    Sorry if I am asking this question on this forum again. Yesterday I tried to shut down the Sun Fire 4800 machine. It is the same machine which I asked all of you about before.
    At the "ok" prompt, I typed #. in order to go to the SC (System Controller) to shut down the machine (as I understand it, the "ok" prompt only shuts things down at the OS level).
    But I got an error message. These are the commands that I typed, but they still did not work:
    {12} ok ~#
    ERROR: do_interpret: no such symbol "~#"
    ERROR: undefined word
    {12} ok ~.
    ERROR: do_interpret: no such symbol "~."
    ERROR: undefined word
    {12} ok #.
    ERROR: do_interpret: no such symbol "#."
    ERROR: undefined word
    {12} ok ~#
    ERROR: do_interpret: no such symbol "~#"
    ERROR: undefined word
    {12} ok #
    ERROR:
    I also tried to type this command (I got it from the Sun Fire 4800 manual reference):
    ok ctrl-]
    I connect to the Sun Fire 4800 using a serial cable (DB25) to a USB port on my laptop, using either HyperTerminal or TeraTerm:
    - Bits per second: 9600
    - Data bits: 8
    - Parity: None
    - Stop bits: 1
    - Flow control: None
    I can see the screen and typed "init 0" to get to the "ok" prompt, but it still does not work. What is happening? What should I do? For this case, is there any influence from the keyboard layout (because I use my laptop keyboard)?
    Note: I also tried typing "init 5" to shut down the server, but it still goes to the "ok" prompt.
    Please suggest...
    Thanks for any help
    Regards,
    Ferianto
    Edited by: Ferianto on Mar 10, 2008 9:05 PM
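    For what it's worth, on the midframe Sun Fire systems (3800-6800 class) domain power is controlled from the System Controller rather than from the OBP "ok" prompt, which is why "power-off" is an undefined word there. A rough outline, from memory (verify the exact syntax and the domain letter against your SC documentation):
    init 0                   (from Solaris: brings the domain down to the ok prompt)
    setkeyswitch -d a off    (from the SC platform shell: powers off the boards assigned to domain A)
    How you get from the domain console back to the SC shell depends on how you are connected, so check the escape sequence configured on your SC.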

  • Session problem on clustered sun fire with SP6.1

    Hi All,
    I am having a problem with a web application on a clustered system.
    In the following I describe the current system setup.
    I will then describe the setup of the web application.
    Following this, I explain the trouble I'm having with sharing sessions in this environment.
    Finally, I pose the key question.
    SYSTEM SETUP:
    Our group has a clustered server setup. I believe the two machines are Sun Fire systems running Solaris 9. Let me know if you need more information concerning the HW.
    To manage the cluster, we use Sun Cluster software. Let me know if you need to know the version number. I do not have it handy. We just recently setup the system (in the past 6 months).
    Also, we have the newest Sun Web Server, 6.1, installed. We used the default installation; the only thing we changed was the dynamicreloadinterval variable from "-1" to "60".
    WEB APPLICATION:
    We have a fairly simple MVC web application setup, which works well on a single-server system. Essentially, the web application (1) asks for a certain input via a web form, (2) takes this input and uses it to get information from a database, (3) puts this information into an HttpSession attribute, and (4) redirects the response to a JSP. The JSP gets the HttpSession attribute (populated in step 3) and prints it out.
    So, the HTML form posts to the Servlet, the Servlet gets data from the database and populates a session attribute, and the JSP gets that session attribute and prints it out.
    Between each transition from HTML, Servlet and JSP, the currently processing server may switch. I.e. the HTML may be served, for instance, by ClusterServer1, the Servlet may be handled, for instance, by ClusterServer2, and the JSP may be served at random by either.
    THE TROUBLE I'M HAVING
    Sometimes, my JSP is able to find the session attribute and print out whatever I put into the session during the Servlet step. However, sometimes, the JSP will print out "null" (the session attribute isn't available to the JSP).
    For the HTML and the JSP, I have the page print out what server it is being served from. Thus, I can tell that the HTML is served from, for instance, ClusterServer1 while JSP is served from, for instance, ClusterServer2. Sometimes, the same server serves both.
    KEY QUESTIONS:
    Why would this be happening?
    How can I have a session stick to the user and be shared across both servers?
    What other information would you need to provide an answer concerning this issue?
    I appreciate your efforts very much!
    Matthias Edrich
    dailysun

    Hi All,
    Hi Elving,
    I read through the documentation and have the following questions:
    (1) It seems that I can share sessions amongst both servers if I configure the web server to store sessions in a persistent manner, such as in a file or in a database. Is this correct?
    (2) To enable this persistent storage of a session, I would need to change the Session Manager used. Is this correct?
    (3) If yes, I have the choice between the following managers. Please correct me if I have misunderstood the options.
    - PersistentManager: Instead of keeping session information in memory, this manager saves session information in a file on the server in a directory, which I specify within sun-web.xml.
    - IWSSessionManager: With this manager, I can store sessions in a defined database or file on the server.
    - MMapSessionManager: This manager also stores sessions in a file on the server.
    (4) What is the difference between PersistentManager and MMapSessionManager if my web server is running in single-process mode?
    (5) If MMapSessionManager is a file-based manager, where do I specify to what directory the related file is stored, as I do in PersistentManager?
    (6) What are the advantages and disadvantages of the Persistent/IWS/MMap managers?
    (7) In looking at SessionManagers, am I even barking up the right tree? It seems like these would help me share sessions across clustered servers.
    (8) Finally, I guess my plan of action would be the following:
    -> Identify what session manager to use
    -> Include a sun-web.xml file in /WEB-INF containing the manager-specific info
    -> Reload my web application
    -> done...
    Is this correct?
    Elving, I appreciate your help. Thanks!
    dailysun

  • GoldenGate and Veridata Capacity Planning/sizing

    Hello,
    Are there any capacity planning guides available for Veridata and for GoldenGate?
    What are the pertinent metrics to gather to aid in capacity planning?
    Thanks,
    Mac McDermid
    Edited by: user13291419 on Oct 17, 2012 12:55 PM

    I faced the application error below after I inserted a new connection with the correct GoldenGate connection and data source information and clicked Finish.
    Application Error
    javax.faces.FacesException: Error calling action method of component with id form:next
    at org.apache.myfaces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:74)
    at javax.faces.component.UICommand.broadcast(UICommand.java:106)
    at javax.faces.component.UIViewRoot._broadcastForPhase(UIViewRoot.java:90)
    at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:164)
    at org.apache.myfaces.lifecycle.LifecycleImpl.invokeApplication(LifecycleImpl.java:316)
    at org.apache.myfaces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:86)
    at javax.faces.webapp.FacesServlet.service(FacesServlet.java:106)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at com.goldengate.veridata.ui.filter.WelcomeTokenFilter.doFilter(WelcomeTokenFilter.java:61)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at org.apache.myfaces.component.html.util.ExtensionsFilter.doFilter(ExtensionsFilter.java:92)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at com.goldengate.veridata.ui.filter.SessionUserFilter.doFilter(SessionUserFilter.java:115)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at com.goldengate.veridata.ui.filter.AjaxFilter.doFilter(AjaxFilter.java:66)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at org.apache.myfaces.component.html.util.ExtensionsFilter.doFilter(ExtensionsFilter.java:122)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at com.goldengate.veridata.ui.filter.Utf8Filter.doFilter(Utf8Filter.java:28)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:563)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:879)
    at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
    at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
    at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
    at java.lang.Thread.run(Unknown Source)
    Caused by: javax.faces.el.EvaluationException: Exception while invoking expression #{addConnectionWizardUI.getNextStep}
    at org.apache.myfaces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:153)
    at org.apache.myfaces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:63)
    ... 39 more
    Caused by: java.lang.NullPointerException
    at com.goldengate.veridata.entity.ConnectionComparisonFormat.<init>(ConnectionComparisonFormat.java:13)
    at com.goldengate.veridata.entity.ConnectionDatatypeInfo.<init>(ConnectionDatatypeInfo.java:63)
    at com.goldengate.veridata.entity.Connection.<init>(Connection.java:84)
    at com.goldengate.veridata.dao.ConnectionDAOWebServices.findByName(ConnectionDAOWebServices.java:222)
    at com.goldengate.veridata.dao.ConnectionDAOWebServices.handleVersionControlInfo(ConnectionDAOWebServices.java:185)
    at com.goldengate.veridata.dao.ConnectionDAOWebServices.insert(ConnectionDAOWebServices.java:144)
    at com.goldengate.veridata.bu.ConnectionManagerImpl.insert(ConnectionManagerImpl.java:73)
    at com.goldengate.veridata.ui.AddConnectionWizardUI.createConnection(AddConnectionWizardUI.java:211)
    at com.goldengate.veridata.ui.AddConnectionWizardUI.getNextStep(AddConnectionWizardUI.java:120)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.myfaces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:129)
    ... 40 more
    Your response is highly appreciated.
    Regards

  • Enterprise level capacity planning

    I am trying to look at capacity planning at an enterprise level and wish to get summarized views of captured data across a number of hosts and instances. I would like to be able to see the tablespace stats for all of our instances in one graph or report, and not have to view each one separately. The OEM capacity planner does not seem to provide any such views, and from what I have discovered in the repository it looks as though separate tables are created to hold captured data for each instance, e.g. tablespace data for prod01 will be in table A, and tablespace data for xcpt02 will be in table B.
    Has anybody got any ideas on how to use the Capacity Planner for Enterprise Level Capacity Planning?

    Hi,
    About packaging: always deploy your application in ear format using WebLogic (not a single war (web module) + a single jar (EJB module)). Ref http://edocs.bea.com/wls/docs61/programming/packaging.html#1051556
    "WebLogic Server can bypass the intermediate RMI classes because the EJB client and implementation are in the same JVM." If you have a large data exchange between the EJB and web tiers you're going to save a lot of processing time (including garbage-collection time to clean up RMI object copies). For our application we saved up to 7% just by using ear format for deployment.
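    As a rough illustration of what such a packaging looks like when listed (the module names here are hypothetical):
    $ jar tf myapp.ear
    META-INF/application.xml
    myapp-web.war
    myapp-ejb.jar
    The application.xml descriptor ties the WAR and the EJB JAR together so that WebLogic deploys them into the same JVM.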
    "Raghu Arni" <[email protected]> wrote:
    Hi,
    I am looking for document(s) that describe any real-world experiences in the deployment of applications using the WebLogic server.
    I hope to get the following information:
    Server sizing (only Solaris on Sun machines)
    Performance tuning parameters for improved scaling on multi-processor machines
    Actual clustering scenarios, both OS level and application level.
    etc.
    Any pointers are appreciated,
    A

  • Operations are not getting dispatched in the capacity planning table

    Hi Experts,
    I am using the capacity planning table (graphical) to level the capacities and sequence the process orders. The start dates of my orders are well in the future and sufficient capacity is also available at the resources. But when I go to CM25, select one order and click on dispatch, the system does not dispatch it. This is the case for almost all the overall profiles.
    The surprising thing is that this function was working fine earlier without any issues. I haven't made any configuration changes that would affect capacity planning.
    What could be the reason for this? Any thoughts, please?
    One more thing: is there any option to avoid the capacity planning step if I am using R/3 and manage it some other way?
    Appreciate your earlier reply
    Thanks & Regards
    Prathib

    Most likely you didn't define the rowsource key properly. Please look under the <install>/errors folder for any file there.
    Also, please read the documentation. There is information there about how to troubleshoot a loading problem. Please always read the documentation; we would appreciate feedback on it.
    http://download.oracle.com/docs/cd/E17236_01/epm.1112/iop_user_guide/frameset.htm?launch.html

  • On Sun fire v490 - Solaris 10 with Oracle 8.1.7.4 & Sybase 12.0

    Hi,
    We are going to upgrade our server with this configuration -
    Sun Fire V490     2 x 1.05 GHz UltraSPARC IV CPU
    8096MB RAM     2 x73GB local disk
    2x FC 2GB Sun/QLogic HBAs
    DAT72
    On one machine we will have Sun Solaris 10 with
    Oracle DB 8.1.7.4, and the second one will be Sun Solaris 10 with Sybase DB 12.0.0.6.
    Now our question is: the Sun Fire has dual-core CPUs; will the O/S and databases (Oracle and Sybase) view the proposed system as a true 4-CPU platform? Will parameters used to tune the database, such as Sybase max online engines, still operate in the same manner as before?
    Our old machine configuration was: Sun E450, 4 x 400MHz CPUs, 1024MB RAM, 2 x 18GB and 8 x 36GB disks
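    For what it's worth, a quick way to see how Solaris enumerates the dual-core UltraSPARC IV chips (standard Solaris commands; the comments describe the expected shape of the output, not output captured from this machine):
    # psrinfo        # one line per logical CPU the scheduler sees; 4 expected with 2 x US IV
    # psrinfo -vp    # groups the logical CPUs by physical chip (2 chips, 2 cores each)
    Oracle and Sybase generally size themselves on whatever CPU count the OS reports here.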

    Questions on Oracle and Sybase should be directed to a database forum; this forum is for Sun hardware support.
    Here is a link to a DB forum I look at from time to time:
    http://www.dbforums.com/index.php
    The topic of tuning Oracle or Solaris is way beyond the scope of this forum. I have attempted to go into it before but didn't get any feedback, and I would only like to spend lots of time on it if I was being paid! On the memory side, keep in mind that Oracle 9i 64-bit can address a maximum of 2^64 bytes (16777216 TB) of memory; prior to that, the DBA had to define memory parameters in init.ora. To be honest, the last time I worked with an Oracle 8 database I shut down (permanently) an HP K-class server that had been migrated to Oracle 9i on Solaris by an Oracle consultant, and I can't remember all the tuning tricks, etc.

  • "Memory  Sizing" error on Sun Fire V65x

    Hello.
    I have some trouble with my old Sun Fire V65x and I hope that you guys can help me out.
    I had 2.5GB RAM installed in my server, working flawlessly: 2 x 256MB and 2 x 1GB modules, with the 256MB modules in Bank 1 and the 1GB modules in Bank 2.
    To get rid of the boot-up warning about a wrong memory configuration I moved the two 1GB modules to Bank 3 (as stated in the manual), but that was when the problem started.
    In this configuration the server won't get to POST. I tried moving the 1GB modules around between Banks 1, 2 and 3 without any result.
    The server will not get to POST. No beeps, no nothing. When turning the server on, it starts up, the fans run at maximum speed and that's all: no picture, no POST, and the system warning LED turns red. After a couple of minutes in this state the fans slow down for a couple of seconds and then go back to 100% (repeating cycle).
    The POST diagnostics LEDs at the back of the main board indicate POST code 13h, "Memory sizing". The main board does not report any defective DIMMs; also worth mentioning is that the two 256MB DIMMs work flawlessly in all banks.
    I have tried:
    Resetting the memory config in BIOS (using the 2 x 256MB DIMMs and then installing the two 1GB modules).
    Moving the memory modules around between different banks, both alone and with the two 256MB DIMMs present.
    Clearing the CMOS settings.
    Searching manuals and Google for hours without an answer to my problem or to what "memory sizing" means in this situation.
    System:
    Sun Fire V65x
    1 CPU
    If anyone knows the answer to my dilemma and is willing to help me, I would be really grateful.
    Sadly I do not have the possibility to test the memory modules in another system.
    Best regards. Erik Järlestrand, Sweden.

    Hardware. A reboot will probably not do anything. From what I can tell, it's a CPU cache problem. If you do anything, shut the system down completely, turn the power off for a minimum of 20 seconds to let any residual electricity drain away, then turn it back on. If the problem returns, you'll most likely need to get either a new CPU module or a new V100. I don't know if the CPU can be removed from the V100 motherboard, but since there are jumper settings for the speed I would assume that it can.

  • Machine capacity planning and efficiency

    Dear Guru,
    I have a number of machines and I want their individual capacity and efficiency. Can we see that in any report, or is there any tcode to see the capacity of all machines? What is capacity leveling?
    Is there any configuration for capacity planning?
    Thanks in advance

    Hi, the following configuration steps are involved in capacity planning (SPRO -> IMG -> Production -> Capacity Requirements Planning):
    1) Define time units
    2) Define capacity category
    3) Set up capacity planner
    4) Define parameters
    5) Define standard value key
    6) Define move time matrix
    7) Define setup matrix
    8) Define control key
    9) Define shift sequence
    10) Define key for performance efficiency rate
    11) Define formula parameters
    12) Specify scheduling type
    13) Set up production scheduler group
    14) Select automatically
    15) Define scheduling parameters for production orders
    16) Define scheduling parameters for networks
    17) Define reduction strategies for planned/production orders
    18) Define reduction strategies for network/process orders
    19) Define control profile
    20) Define selection profile
    21) Define time profile
    22) Define evaluation profile
    23) Define strategy profile
    You can follow the above configuration steps for capacity planning. Yes, you need to activate all your work centers for finite scheduling at the bottom of the capacity view.
    Use the standard SAP configuration for the capacity leveling profile: define the time profile in OPD2, the strategy profile in OPDB and the overall profiles in OPD0. Then activate all your work centers for finite scheduling at the bottom of the capacity view in the work center (CR02). Then run MRP with scheduling option 2, lead time scheduling and capacity planning (MD02). You will get your capacity requirements in CM01, or CM25/CM21. In REM you can use MF50.
    OR refer to the steps below.
    Capacity planning evaluation: use this procedure to evaluate the capacity of a work center (resource). The same transaction is used for finding the capacity overload on the work center. Select Logistics -> Production -> Capacity Planning -> Evaluation -> Work Center View -> Load and go to the Capacity Planning Selection screen (CM01).
    Through capacity planning, standard overview: the system shows week-wise overload in red; if there is no overload, the system shows the normal color. Date-wise capacity can be reviewed; if there is a capacity overload, the system shows a red indication and the percentage overload can be viewed.
    Through capacity planning, standard overview, details: date-wise, planned/process order-wise capacity requirements.
    Through capacity planning, standard overview, graphical: yellow indicates capacity lying unused, red bars indicate a capacity overload, and blue bars indicate available capacity.
    Use this procedure to level/schedule the capacity of a work center (capacity planning - levelling/scheduling): select Logistics -> Production -> Capacity Planning -> Levelling -> Work Center View and go to the Planning Table (Graphical) screen (CM21).
    Key points:
    - This would be used for shop floor production scheduling.
    - Capacity scheduling is mandatory for correct results of the Master Production Schedule and Material Requirements Planning.
    For settings:
    - Define formulas for resources (OP54): IMG -> Production Planning for Process Industries -> Master Data -> Resource -> Capacity requirements planning -> Formulas for resources -> Define formulas for resources
    - Define formula parameters for resources/work centres (OP51): IMG -> Production Planning for Process Industries -> Master Data -> Resource -> Capacity requirements planning -> Formulas for resources -> Define formula parameters for resources
    - Standard value, define parameters (OP7B): IMG -> Production Planning for Process Industries -> Master Data -> Resource -> General Data -> Standard value -> Define Parameters
    - Define standard value key (OP19): IMG -> Production Planning for Process Industries -> Master Data -> Resource -> General Data -> Standard value -> Define Standard Value
    First configure the settings below in capacity planning:
    1) Define time units: SPRO -> Production -> Capacity Requirements Planning -> Master Data -> Define Time Units (OPCF)
    2) Define capacity category: SPRO -> Production -> Capacity Requirements Planning -> Master Data -> Capacity Data -> Define Capacity Category
    3) Set up capacity planner: SPRO -> Production -> Capacity Requirements Planning -> Master Data -> Capacity Data -> Set Up Capacity Planner
    4) Define parameters: SPRO -> Production -> Capacity Requirements Planning -> Master Data -> Work Center Data -> Standard Value -> Define Parameters (OP7B)
    5) Define move time matrix: SPRO -> Production -> Capacity Requirements Planning -> Master Data -> Work Center Data -> Define Move Time Matrix (OPCN)
    6) Define setup matrix: SPRO -> Production -> Capacity Requirements Planning -> Master Data -> Routing Data -> Define Setup Time Matrix
    7) Define control key: SPRO -> Production -> Capacity Requirements Planning -> Master Data -> Routing Data -> Define Control Key (OPCG)
    8) Define shift sequence: SPRO -> Production -> Capacity Requirements Planning -> Operations -> Define Shift Sequences (OP4A)
    9) Define key for performance efficiency rate: SPRO -> Production -> Capacity Requirements Planning -> Operations -> Available Capacity -> Define Key Performance Efficiency Rate (OPDU)
    11) Specify scheduling type: SPRO -> Production -> Capacity Requirements Planning -> Operations -> Scheduling -> Specify Scheduling Type (OPJN)
    12) Set up production scheduler group: SPRO -> Production -> Capacity Requirements Planning -> Operations -> Scheduling -> Set Up Production Scheduler Group (OPCH)
    13) Select automatically: SPRO -> Production -> Capacity Requirements Planning -> Operations -> Scheduling -> Task List Type -> Select Automatically (OPJF)
    14) Define scheduling parameters for production orders: SPRO -> Production -> Capacity Requirements Planning -> Operations -> Scheduling -> Define Scheduling Parameters for Production Orders (OPU3)
    15) Define reduction strategies for planned/production orders: SPRO -> Production -> Capacity Requirements Planning -> Operations -> Scheduling -> Reduction Strategies -> Define Reduction Strategies Planned-/Production Order
    16) Define reduction strategies for network/process orders: SPRO -> Production -> Capacity Requirements Planning -> Operations -> Scheduling -> Reduction Strategies -> Define Reduction Strategies for Network/Process Order
    17) Define selection profile: SPRO -> Production -> Capacity Requirements Planning -> Evaluation -> Profiles -> Define Selection Profiles
    Hope this is clear to you.
    Regards
    Alok

  • Capacity planning and scheduling production orders based on available capacity

    Hi Gurus,
    I am trying to use capacity planning. When the production order is created, it takes the order start date as the requirement date. Then when I do the capacity check it says capacity is not available. After finite scheduling it proposes to shift operations by a certain number of days.
    I need the capacity to be checked in MRP and the planned orders to have start dates based on available capacity; currently all the planned orders are starting on the requirement date irrespective of available capacity. I tried using the finite scheduling option in MD02, but it still does not take into account existing production orders when setting the dates of the planned orders.
    Regards
    Abhi

    Dear Abhishek,
    1) This is standard SAP behaviour for MRP: only the material availability check is carried out in MRP, and procurement proposals are created.
    2) Capacities are not checked (considered) in MRP in standard R/3.
    3) Your need can only be met through APO.
    Regards
    Madhu

  • How can I connect to a Sun Fire V125's serial management port with HyperTerminal?

    Hi there!
    My c.o. got a Sun Fire V125 server. My mission is to finish setting up the server. But even though I followed the Sun Fire V125's documentation, I cannot connect to this server through the serial management port using Windows XP HyperTerminal.
    My steps were:
    Step 1. Take the original RJ45 network cable from the V125 box and put one end into the Serial MGT port of the V125.
    Step 2. Attach the other end of the RJ45 cable to the DB-9 converter, then plug the DB-9 converter into the PC's serial port (9 pins).
    Step 3. Start the PC (Windows XP SP2), and then create a HyperTerminal connection on COM1 as:
    Bits/sec: 9600, Data bits: 8, Parity: None, Stop bits: 1, Flow control: Xon/Xoff
    Then pause the connection; make it hang up.
    Step 4. Push the Sun Fire V125 power button. After one minute, resume HyperTerminal and connect to the V125.
    The above are all my steps. Is something incorrect? If there is no problem, why can't I get the "sc> " prompt?
    The only thing I can do is type input from the keyboard.
    I understood that the "sc> " prompt should appear after you start the HyperTerminal connection with the correct steps.
    BTW, I entered "#." followed by the Enter key, but it doesn't work!
    Has anybody else met this problem like mine? Or can you give me some hints?
    Thank you for reading.
    Sincere regards,
    Maqintoshi. 2008.2.7 pm4:10
    Edited by: maqintoshi on Feb 6, 2008 11:51 PM

    I'm sorry for replying so late.
    I solved it a long time ago.
    I finally found that I had made a mistake: I had connected the serial cable coming from the PC
    to the RJ45 network (NIC) port on the Sun server :-@
    If you still can not get the prompts, check these parameters of your hyperterminal:
    connect a terminal or a terminal emulator (PC or workstation) to the SC serial management port.
    Configure the terminal or terminal emulator with these settings:
    * 9600 baud
    * 8 bits
    * No parity
    * 1 Stop bit
    * No handshaking

  • Oracle VM Server 2.0 for Sun Fire v240?

    Hi all,
    I have been searching and trying all day to install Oracle VM Server for SPARC on a 64-bit Sun Fire V240.
    I still cannot find information on whether it is supported or not.
    I have Oracle Solaris 10 5.10 Generic_142909-17 sun4u sparc;
    the CPUs are UltraSPARC-IIIi (portid 0 impl 0x16 ver 0x34 clock 1503 MHz).
    I installed OVM_Server_SPARC-2_0 from https://edelivery.oracle.com/
    Anyway, the ldmconfig command reports that this is not supported on my platform!
    The release notes for Logical Domains Manager 1.2 and 1.3 and VM Server 2.0 list the supported hardware. I cannot see the V240, but I do see:
    Supported Platforms:
    Sun Fire and SPARC Enterprise T1000 Servers
    Sun Fire and SPARC Enterprise T2000 Servers
    Probably this "Sun Fire" is not the Sun Fire V240 but the Sun Fire T1000?!
    What do you think: is my server supported for any of these versions? Is virtualization possible?
    Also, I cannot download version 1.3 because it has moved to the My Oracle Support section, but it was free?
    Need your help and support :)
    Thanks a lot.

    A T1000 server does not show V240.
    uname -a on T1000 output below
    bash-3.00# uname -a
    SunOS xxxx 5.10 Generic_142909-17 sun4v sparc SUNW,SPARC-Enterprise-T1000
    Thanks,
    Sudhir
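    As a quick check, the deciding factor is the sun4u/sun4v distinction, since Logical Domains and Oracle VM Server for SPARC only run on sun4v (CMT) platforms:
    # uname -m       # prints sun4u on a V240, sun4v on the T1000/T2000-class machines
    A V240 reports sun4u, so ldmconfig is right that the platform is not supported; on that box, virtualization would have to be done with Solaris Zones instead.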

  • How to install Oracle VM server on Sun Fire v100 server

    Hi All
    I have just a Sun Fire V100 Server.
    How can I install Oracle VM Server on a Sun Fire V100 server?
    Thank you and best regards,
    Thiensu

    user8248216 wrote:
    I have installed Oracle VM Server successfully on a PC with a Core 2 Duo 2.93GHz, but Oracle VM Server does not detect the onboard network card.
    Your NIC needs to be supported by Oracle Linux 5 Update 3 to work with Oracle VM Server 2.2. If it's too new, then it will not be seen. Also, please start a new topic for each question. This new question is not relevant to the original topic.
    Edited by: Avi Miller on Aug 19, 2011 12:46 PM
