Supermicro AOC-SASLP-MV8 (Marvell 6480 chip)

Does anyone know if the Supermicro AOC-SASLP-MV8, with the Marvell 6480 chip, will work with OVM 2.2? It is an 8-port RAID controller.
Thanks.

It does claim to be supported in Red Hat Enterprise Linux, btw.

Similar Messages

  • Storage Spaces UI missing disks when a controller reports the same UniqueID for all attached disks (e.g. Supermicro AOC-SASLP-MV8)

    Summary: The Storage Spaces UI has several problems when there are more than 21 physical disks available to Storage Spaces.
    I have 28 SATA disks connected over 6 controllers. 2 are used for an Intel motherboard RAID1 for OS (PhysicalDisk0), so that leaves 26 data disks for Storage Spaces. [The plan is to get to 36 data disks in due course by adding disks (this 36-bay chassis: http://www.supermicro.com/products/chassis/4U/847/SC847A-R1400LP.cfm)]
    Initially, there were 23 data disks (5x 1TB, 1x 640GB, 14x 500GB, 3x 250GB) as PhysicalDisk1-23 (in that order), which I put into a storage pool. I created a parity virtual disk over all 23 disks. It looks like it is working fine, albeit very slowly on writes.
    I've now added 3 more 4TB disks, as PhysicalDisk24-26, and taken them offline, and have now noticed errors in the Storage Pools UI in the Server Manager. For example:
    * No more than 21 disks ever show up in the "Physical Disks" area in the lower right. When the 23 disks are connected, only the first 21 show up in the pool I created. With 26 disks connected, only the first 20 show up in the pool, and only 1 more of the new 3 (PhysicalDisk26) shows up in the Primordial group.
    * In the Properties of the parity virtual disk created over the 23 disks, the disks are shown incorrectly. Again, only 21 disks are shown, and PhysicalDisk26 is incorrectly shown as part of the virtual disk. See image.
    * Using the New Storage Pool Wizard, I cannot add more than 1 of the new 3 disks to a new storage pool (only PhysicalDisk26 is available). And the details incorrectly refer to PhysicalDisk21. See image (a WDC WD2500JD-22H is a 250GB disk, not a 4TB disk).
    Thus I cannot use the new disks in a new storage pool.
    According to the blog post at http://blogs.msdn.com/b/b8/archive/2012/01/05/virtualizing-storage-for-scale-resiliency-and-efficiency.aspx:
    Q) What is the minimum number of disks I can use to create a pool? What is the maximum?
    You can create a pool with only one disk. However, such a pool cannot contain any resilient spaces (i.e. mirrored or parity spaces). It can only contain a simple space, which does not provide resiliency to failures. We do test pools comprising multiple hundreds of disks – such as you might see in a datacenter. There is no architectural limit to the number of disks comprising a pool.
    However, the UI currently does not seem to correctly work with more than 21 physical disks. Please advise.
    Using Server 2012 RC.
    Hardware: Supermicro X8SAX (BIOS v2.0), Intel i7-920 2.67GHz, 6x 2GB DDR3-1333 (certified Crucial CT25664BA1339.16SFD)
    Disk controllers: 2x RAIDCore BC4852 (PCI-X, final 3.3.1 driver) (15 ports used), 2x Supermicro AOC-SASLP-MV8 (PCIe, 4.0.0.1200 Marvell driver to allow >2TB disks) (6 ports used), Sil 3114 (PCI, latest 1.5.20.3 driver) (1 port used), motherboard Intel in RAID mode (4 ports used for data, plus 2 for OS RAID1).
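
    Since the UI is unreliable here, one way to cross-check what the pool actually contains is PowerShell. A minimal sketch, assuming the Server 2012 storage cmdlets (the counts should match the disks described above):
    # Disks in created (non-primordial) pools vs. disks still unpooled
    Get-StoragePool -IsPrimordial $false | Get-PhysicalDisk | Measure-Object
    Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Measure-Object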

    An update. I added 16x SATA disks across 2x Supermicro AOC-SASLP-MV8. All 16 disks report the same UniqueID.
    I have 25 disks in the pool now (23 as parity; 2 as journal, added via PowerShell). 10 of these are on the two AOC-SASLP-MV8 controllers. Only the first 16 disks show up in the UI, so 9 are missing from the UI - which is consistent with this UI bug where only one disk per UniqueID shows up. PowerShell does work to manage the SS.
    PS C:\Users\administrator.TROUNCE> Get-PhysicalDisk | format-list FriendlyName, UniqueId, ObjectId, BusType
    FriendlyName : PhysicalDisk6
    UniqueId     : 00280000004000004FB116493C169A1A
    ObjectId     : {7ab38e00-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk7
    UniqueId     : 00280000004000001AE48E5088028D0D
    ObjectId     : {7ab38e02-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk8
    UniqueId     : 002800000040000020C9A6680224E32F
    ObjectId     : {7ab38e04-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk9
    UniqueId     : 0028000000400000FDE73E7254A60C4C
    ObjectId     : {7ab38e06-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk23
    UniqueId     : 0050430000000000
    ObjectId     : {7ab38e08-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk22
    UniqueId     : 0050430000000000
    ObjectId     : {7ab38e0a-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk21
    UniqueId     : 0050430000000000
    ObjectId     : {7ab38e0c-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk20
    UniqueId     : 0050430000000000
    ObjectId     : {7ab38e0e-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk5
    UniqueId     : 0028000000400000272BA74A52309853
    ObjectId     : {7ab3900f-ab87-11e1-bbbd-002590520253}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk19
    UniqueId     : 0050430000000000
    ObjectId     : {7ab38e10-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk4
    UniqueId     : 00280000004000009DE164099941430A
    ObjectId     : {7ab39011-ab87-11e1-bbbd-002590520253}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk18
    UniqueId     : 0050430000000000
    ObjectId     : {7ab38e12-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk11
    UniqueId     : 0028000000400000967EB0559AB4E351
    ObjectId     : {7ab39013-ab87-11e1-bbbd-002590520253}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk17
    UniqueId     : 0050430000000000
    ObjectId     : {7ab38e14-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk24
    UniqueId     : 0050430000000000
    ObjectId     : {7ab38e16-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk10
    UniqueId     : 0028000000400000B22A722C8AD2557B
    ObjectId     : {df23f916-c19f-11e1-bbf5-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk16
    UniqueId     : 0028000000400000DA4D24536A847E52
    ObjectId     : {7ab38e19-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk15
    UniqueId     : 00280000004000005DEDFF007783A242
    ObjectId     : {7ab38e1b-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk14
    UniqueId     : 002800000040000018C9CF6EBE605911
    ObjectId     : {7ab38e1d-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk13
    UniqueId     : 0028000000400000B64436290D155A48
    ObjectId     : {7ab38e1f-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk0
    UniqueId     : IDE\DiskOS1.0.00__\4&180adc7b&0&0.0.0:Trounce-Server2
    ObjectId     : {df23f925-c19f-11e1-bbf5-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk31
    UniqueId     : 0050430000000000
    ObjectId     : {df241daf-c19f-11e1-bbf5-002590520253}
    BusType      : RAID
    FriendlyName : PhysicalDisk32
    UniqueId     : 0050430000000000
    ObjectId     : {df241db2-c19f-11e1-bbf5-002590520253}
    BusType      : RAID
    FriendlyName : PhysicalDisk27
    UniqueId     : 0050430000000000
    ObjectId     : {df241cbe-c19f-11e1-bbf5-002590520253}
    BusType      : RAID
    FriendlyName : PhysicalDisk28
    UniqueId     : 0050430000000000
    ObjectId     : {df241cc1-c19f-11e1-bbf5-002590520253}
    BusType      : RAID
    FriendlyName : PhysicalDisk34
    UniqueId     : 0050430000000000
    ObjectId     : {df241dc4-c19f-11e1-bbf5-002590520253}
    BusType      : RAID
    FriendlyName : PhysicalDisk29
    UniqueId     : 0050430000000000
    ObjectId     : {df241cca-c19f-11e1-bbf5-002590520253}
    BusType      : RAID
    FriendlyName : PhysicalDisk33
    UniqueId     : 0050430000000000
    ObjectId     : {df241dcf-c19f-11e1-bbf5-002590520253}
    BusType      : RAID
    FriendlyName : PhysicalDisk30
    UniqueId     : 0050430000000000
    ObjectId     : {df241cd3-c19f-11e1-bbf5-002590520253}
    BusType      : RAID
    FriendlyName : PhysicalDisk2
    UniqueId     : 002800000040000037638531D4A17419
    ObjectId     : {7ab38df8-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk3
    UniqueId     : 0028000000400000AB7400464090110C
    ObjectId     : {7ab38dfa-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
    FriendlyName : PhysicalDisk1
    UniqueId     : IDE\DiskWDC_WD6400AAKS-00A7B2___________________01.03B01\4&180adc7b&0&0.1.0:Trounce-Server2
    ObjectId     : {7ab38dfc-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : RAID
    FriendlyName : PhysicalDisk12
    UniqueId     : 00280000004000005396CC47AA8AD97B
    ObjectId     : {7ab38dfe-ab87-11e1-bbbd-806e6f6e6963}
    BusType      : Fibre Channel
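
    Since PowerShell handles these disks correctly even when the UI does not, here is a minimal sketch of adding the new disks to a pool by object rather than through the wizard (the pool name "Pool" and the 3TB size filter are illustrative assumptions):
    # Disks eligible for pooling, with the identifiers that matter
    Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, UniqueId, ObjectId, Size -AutoSize
    # Select the new 4TB disks by size and add them as objects, sidestepping
    # the duplicate-UniqueId confusion in the UI
    $new = Get-PhysicalDisk -CanPool $true | Where-Object { $_.Size -gt 3TB }
    Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $new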

  • Can I make a hardware raid with the on board Marvell SE9128 chip on p67a-gd65?

    Hi,
    As you all know, the RAID 0/1/10/5/JBOD on the P67 chipset is pure software ("fake") RAID. I saw some SATA RAID PCIe 2.0 x1 cards on sale which have the Marvell SE9128 chip. So my question is: can I make a hardware RAID with the onboard Marvell SE9128 chip on the P67A-GD65?
    Thanks
    --pepe

    Quote from: Stu on 06-November-11, 02:31:26
    Hardware RAID explained:
    http://www.pcguide.com/ref/hdd/perf/raid/conf/ctrlHardware-c.html
    What do you want to say with this? I know the difference between a software (fake) and a hardware RAID. Your post doesn't answer the question. The P67 chipset has integrated software RAID. The Marvell SE9128 is also found on some hardware RAID cards. That's why I'm still wondering whether on this board it provides hardware RAID or not.

  • Slow performance with storage pools

    We also encounter performance problems with storage pools.
    The RC is somewhat faster than the CP version.
    Hardware: Intel S1200BT (test) motherboard with an LSI 9200-8e SAS 6Gb/s HBA connected to 12 ST91000640SS disks. Heavy problems with "bursts".
    Using the ARC 1320IX-16 HBA card is somewhat faster and looks more stable (fewer bursts).
    Inserting an ARC 1882X RAID card increases speed by a factor of 5–10.
    Hence hardware RAID on the same hardware is 5–10 times faster!
    We noticed that Resource Monitor becomes unstable (unresponsive) while testing.
    There are no heavy processor loads while testing.
    JanV

    Based on some testing, I have several new pieces of information on this issue.
    1. Performance limited by controller configuration.
    First, I tracked down the underlying root cause of the performance problems I've been having. Two of my controller cards are RAIDCore PCI-X controllers, which I am using for 16x SATA connections. These have fantastic performance for physical disks that are initialized with RAIDCore structures (so they can be used in arrays, or even as JBOD). They also support non-initialized disks in "Legacy" mode, which is what I've been using to pass the entire physical disk through to SS. But for some reason, occasionally (but not always), the performance on Server 2012 in Legacy mode is terrible - 8MB/sec read and write per disk. So this was not directly a SS issue.
    So given that my SS pools were built on top of disks, some of which were on the RAIDCore controllers in Legacy mode, the performance of virtual disks in the prior configuration was limited by some of the underlying disks having this poor performance. This may also have caused the unresponsiveness of the entire machine, if the Legacy mode operation had interrupt problems. So the first lesson is: check the entire physical disk stack under the configuration you are using for SS first.
    My solution is to use all RAIDCore-attached disks with the RAIDCore structures in place, so the performance is more like 100MB/sec read and write per disk. The problems with this are (a) a limit of 8 arrays/JBOD groups that can be presented to the OS (for 16 disks across two controllers), and (b) the loss of a little capacity to the RAIDCore structures.
    However, the other advantage is the ability to group disks as JBOD or RAID0 before presenting them to SS, which provides better performance and efficiency due to limitations in SS.
    Unfortunately, this goes against the advice at http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx, which says "RAID adapters, if used, must be in non-RAID mode with all RAID functionality disabled." But it seems necessary for performance, at least on RAIDCore controllers.
    2. SS/virtual disk performance guidelines. Based on testing different configurations, I have the following suggestions for parity virtual disks:
    (a) Use disks in SS pools in multiples of 8 disks. SS has a maximum of 8 columns for parity virtual disks, but it will use all disks in the pool to create the virtual disk. So if you have 14 disks in the pool, it will use all 14 disks with rotating parity, but still with 8 columns (1 parity slab per 7 data slabs). Unexpectedly, the write performance of this is a little worse than if you were just to use 8 disks. Also, the efficiency of being able to fully use different-sized disks is much higher with multiples of 8 disks in the pool.
    I have 32 underlying disks but a maximum of 28 disks available to the OS (due to the 8 array limit for RAIDCore). But my best configuration for performance and efficiency is when using 24 disks in the pool.
    (b) Use disks of as similar a size as possible in the SS pool.
    This is about the efficiency of being able to use all the available space. SS can use different-sized disks with reasonable efficiency, but it can't fully use the last hundred GB of the pool with 8 columns if there are different-sized disks and there is not a multiple of 8 disks in the pool. You can create a second virtual disk with fewer columns to soak up this remaining space. However, my solution has been to put my smaller disks on the RAIDCore controller, and group them as RAID0 (for equal sized) or JBOD (for different sized) before presenting them to SS.
    It would be better if SS could do this itself rather than needing a RAID controller to do it. E.g. you have 6x 2TB and 4x 1TB disks in the pool. Right now, SS will stripe 8 columns across all 10 disks (for the first 10TB /8*7), then 8 columns across 6 disks (for the remaining 6TB /8*7). But it would be higher performance and a more efficient use of space to stripe 8 columns across 8 disk groups, configured as 6x 2TB and 2x (1TB + 1TB JBOD).
    (c) For maximum performance, use Windows to stripe different virtual disks across different pools of 8 disks each (see the PowerShell sketch after point (d)).
    On my hardware, each SS parity virtual disk appears to be limited to 490MB/sec reads (70MB/sec/disk, up to 7 data disks with 8 columns) and usually only 55MB/sec writes (regardless of the number of disks). If I use more disks - e.g. 16 disks - this limit is still in place. But you can create two separate pools of 8 disks, create a virtual disk in each pool, and stripe them together in Disk Management. This then doubles the read and write performance, to 980MB/sec read and 110MB/sec write.
    It is a shame that SS does not parallelize virtual disk access across multiple 8-column groups that are on different physical disks, and that you need to work around this by striping virtual disks together. Effectively you are creating a RAID50 - a Windows RAID0 of SS RAID5 disks. It would be better if SS could natively create and use a RAID50 for performance. There doesn't seem to be any advantage in not doing this, as with the 8-column limit SS is using 2/16 of the available disk space for parity anyhow.
    You may pay a space-efficiency penalty if you have unequal-sized disks by going the striping route. SS's layout algorithm seems optimized for space efficiency, not performance. Though it would be even more efficient to have dynamic striping / variable column width (like ZFS) on a single virtual disk, to fully use the space at the end of the disks.
    (d) A journal does not seem to add much performance. I tried a 14-disk configuration, both with and without dedicated journal disks. Read speed was unaffected (as expected), but write speed only increased slightly (from 48MB/sec to 54MB/sec). This was the same as what I got with a balanced 8-disk configuration. It may be that dedicated journal disks have more of an advantage under random writes; I am primarily interested in sequential read and write performance.
    Also, the journal only seems to be used if it is in the pool before the virtual disk is created. Journal disks don't seem to be used for existing virtual disks if they are added to the pool after the virtual disk is created.
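    As a concrete illustration of guideline (c), a minimal PowerShell sketch; the pool and virtual disk names are placeholders, and it assumes the Server 2012 storage cmdlets (where the subsystem is named "Storage Spaces on <host>"):
    # Two pools of 8 disks each, one 8-column parity space per pool
    $ss = Get-StorageSubSystem -FriendlyName "Storage Spaces*"
    $d1 = Get-PhysicalDisk -CanPool $true | Select-Object -First 8
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName $ss.FriendlyName -PhysicalDisks $d1
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Parity1" -ResiliencySettingName Parity -NumberOfColumns 8 -UseMaximumSize
    $d2 = Get-PhysicalDisk -CanPool $true | Select-Object -First 8
    New-StoragePool -FriendlyName "Pool2" -StorageSubSystemFriendlyName $ss.FriendlyName -PhysicalDisks $d2
    New-VirtualDisk -StoragePoolFriendlyName "Pool2" -FriendlyName "Parity2" -ResiliencySettingName Parity -NumberOfColumns 8 -UseMaximumSize
    # Then, in Disk Management, create a striped (RAID0) volume across the two
    # resulting virtual disks - the software "RAID50" described above.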
    Final configuration
    For my configuration, I have now configured my 32 underlying disks over 5 controllers (15 over 2x PCI-X RAIDCore BC4852, 13 over 2x PCIe Supermicro AOC-SASLP-MV8, and 4 over motherboard SATA), as 24 disks presented to Windows. Some are grouped on my RAIDCore card to get as close as possible to 1TB per disk, given various limitations. I am optimizing for space efficiency and sequential write speed, which are the effective limits for use as a network file share.
    So I have: 5x 1TB, 5x (500GB+500GB RAID0), (640GB+250GB JBOD), (3x 250GB RAID0), and 12x 500GB. This gets me 366MB/sec reads (note: for some reason, this is worse than the 490MB/sec when just using 8 disks in a virtual disk) and 76MB/sec writes (better than the 55MB/sec on an 8-disk group). On space efficiency, I'm able to use all but 29GB of the pool in a single 14,266GB parity virtual disk.
    I hope these results are interesting and helpful to others!

  • Mdadm boot slow to assemble drives with newer installs

    I had a problem where my RAID1 installs from this year would pause about 5 seconds per mdadm entry, yet my old installs from March 2014 would go through all 4 mdadm entries in less than a second. After fussing for a long time with the new installs, I finally broke (slowed) one of my old installs, which immediately revealed what the problem was.
    I didn't forget to pick partition type FD or FD00. I have my install sequence in a text file so I rarely forget things. I install through PuTTY pasting all the way so it's fast and mistakes are rare.
    devices of the form /dev/md/* take about 5 seconds to create.
    devices of the form /dev/md{0..127} create very fast.
    "mdadm -E --scan" always creates devices of the form /dev/md/{0..127}. In the past I was very meticulous about fixing the device names to the more compact form. I never missed so I never found out there was a substantial speed difference. I just like the the more compact look of /dev/md1 in my boot menu. Eventually I progressed to UUID and LABEL but persisted with the compact form. Then I discovered "mdadm --name" so I left mdadm do it's own thing. What could possibly go wrong?
    I also noticed that the mdadm instructions changed from using "mdadm" to "mdadm_udev". mdadm_udev seems to assemble faster than mdadm.
    To fix: boot once to see if you're slow, fix any usage of /dev/md/* in your boot scripts or fstab, fix the form in /etc/mdadm.conf, and re-run mkinitcpio...
    I still see that the new installs pause at udev more than the old installs, but instead of 20s vs. <1s, it's now 5s vs. <1s, which is much better.
    Looking further, I compared mkinitcpio.conf and noticed that autodetect is now installed by default. I took that out, and though my list of drivers went way down, the pause didn't change. Then I compared "lsinitcpio -a /boot/initramfs-linux.img" and saw that fsck.zfs was loading; since ZFS is only on a data drive, I uninstalled it and rebuilt the initramfs. The pause didn't change.
    Now "lsinitcpio -a /boot/initramfs-linux.img" and "lsinitcpio /boot/initramfs-linux.img | sort" compare the same, yet the pause remains. I know something is lying, because the slow-pause computer has a SAS controller and the fast-pause computer doesn't, and I know that I must rebuild the initramfs when the SAS controller brand changes or it won't boot without Fallback. The mptsas driver is hiding in there somewhere. So I unpacked both initrd files and compared all files by content with Total Commander.
    Only one difference: mdadm.conf
    I can't boot without the SAS controller so I can't check to see if that's the source of the pause. I suspect it is because I can see the NumLock blink halfway through the pause and I know the SAS controller isn't concerned with that.
    To solve this quandary I made a quickie install on a SATA-only computer. I used the fast form of /dev/md* in mdadm.conf and left autodetect in mkinitcpio.
    It's fast, but there's still about a half second pause right there at udev. Out goes autodetect. Still about a half second.
    Then, back at the slow-pause computer, I swapped the Marvell AOC-SASLP-MV8 for an LSI SAS2008 9211-8i. See Zorinaq for what makes this possible. The pause at udev dropped to about 1s, which is only a bit slower than SATA. Not really a win, since the LSI BIOS boot takes much longer. The good news is that I didn't need to rebuild the initramfs to change controllers. autodetect is toast and I get one more degree of freedom!
    I guess that's as good as it gets. Fortunately the mdadm --name option doesn't create /dev/md/* entries so I don't need to fix that.
    The Internet is strangely silent on this issue. I guess you're all just used to slow boots with mdraid.
    This is a bug and needs to be reported. Either mdadm scan needs to stop producing the slow form, or the slow form needs to be made fast.

    severach wrote: I didn't forget to pick partition type FD or FD00.
    I wouldn't use this partition type anymore. (raid autodetect is deprecated)
    severach wrote: Then I discovered "mdadm --name" so I let mdadm do its own thing.
    mdadm --name never worked well for me. I don't see many people using it.
    I prefer a minimal mdadm.conf that only specifies the md number and UUID.
    ARRAY /dev/md0 UUID=627125a5:abce6b82:6c738e49:50adadae
    I don't notice any delays, but in my case the RAID is assembled in the background while in the foreground LUKS is waiting for me to type a passphrase.

  • Module for Marvell PATA

    To the kernel devs:
    By googling, I believe there is an experimental patch for the Marvell PATA driver in current kernels. Could you please enable it as a module in the next Arch kernel release?
    I bought an Intel DP965LT motherboard for a Core 2 CPU; it has the 965 Express (ICH8) chipset with the Marvell PATA chip. I thought the board used the JMicron chip. I installed 0.8alpha3 on the SATA drive with no problems. However, dmesg shows no PATA or IDE devices detected.
    lspci:
    IDE interface: Marvell Technology Group Ltd. Unknown device 6101.
    If somebody knows how to solve this problem, please advise.
    Thanks,
    cascat

    Thanks for the quick replies.
    I will wait for 2.6.20 stable.  Linus wrote that he will release it upon his return from Australia if there were no showstoppers in rc5.
    cascat

  • Deal-breakers for real use of Solaris 11 Express

    I run Solaris 10 U9 for my home 12TB NAS box - based on Supermicro H8SSL-i2 motherboard (ServerWorks HT1000 Chipset and Dual-port Broadcom BCM5704C) and their 8-port SATA2 PCI-X card (AOC-SAT2-MV8). It's a great (but aging) platform and a rock solid OS with the unbeatable ZFS volume manager/filesystem.
    However, despite my willingness to run Solaris 11 Express in this role, I can't because of these deal-breakers:
    1) Lack of a full-featured installer that allows me to lay out or preserve existing partitions the way I want. Making /var a separate file system is a must. Ideally, I'd be able to run multiple versions of Solaris on the same box by customizing grub, and use my ZPOOLs on either Solaris 10 or 11 Express while I learn the new OS.
    2) Lack of support for the Broadcom BCM5704C dual-port gigabit NIC (and others), which work wonderfully under Solaris 10, but are badly broken under Solaris 11 Express. I know I could disable the on-board Broadcom NICs and go buy an Intel card - but why the need for this? Won't there be a fix for Broadcom NICs?
    3) Lack of support for modern, generic, server-class motherboards and PCI-e multi-port SATA/SAS cards. I wonder about the future for Solaris without support for modern, affordable x64 server hardware.
    Maybe I'm missing the point and Solaris 11 Express is only intended to be run as a virtual machine under VBox or VMware. But it would sure be nice to be able to run it on my real hardware - even if it is just a small hobbyist rig. Any suggestions?
    Regards,
    Mike

    In Solaris 11, you get a separate /var by default. If you update from Solaris 11 Express to Solaris 11, this transition doesn't happen automatically. If you decide to tackle it on your own, you need to be sure that it is done in a way that beadm, pkg, and other consumers of libbe will handle properly. I would recommend something along the lines of the following. This is untested and may break your system - prove it out somewhere unimportant first.
    Do the work in a new boot environment so you reduce the likelihood that you will break things in an unrecoverable way.
    # beadm create sepvar
    # beadm mount sepvar /mnt
    Figure out the name of the root dataset of the new boot environment, then create a var dataset as a child of it.
    # rootds=$(zfs list -H -o name /mnt)
    # zfs create -o mountpoint=/var -o canmount=noauto $rootds/var
    Mount this new /var and migrate the data.
    # mkdir /tmp/newvar
    # zfs mount -o mountpoint=/tmp/newvar $rootds/var
    # cd /mnt/var
    # mv $(ls -A) /tmp/newvar
    Unmount and remount.
    # umount /tmp/newvar
    # beadm unmount sepvar
    # beadm mount sepvar /mnt
    At this point /mnt/var should be a separate dataset from /mnt. The contents of /mnt/var should look just like the contents of /var, aside from transient data that has changed while you were doing this. Assuming that is the case, you should be ready to activate and boot the new boot environment.
    # beadm activate sepvar
    # beadm unmount sepvar
    # init 6

  • AirTunes over Ethernet with Time Capsule or AirPort Extreme

    Can the previous-generation Time Capsule or AirPort Extreme bridge an Ethernet client to an AirTunes network?
    Current setup: I have an aluminum iMac wired to my Time Capsule; currently, I use it as a music server by connecting to several AirPort Express clients over the iMac's built-in AirPort for AirTunes support.
    Problem: I have three 802.11g clients dragging down my wifi network. I would like to dedicate the iMac's internal network to the g clients so I don't have to use compatibility mode for my n network. However, I will lose AirTunes speaker support when I use Internet Sharing over the built-in AirPort. AirTunes does not appear to support the Time Capsule Ethernet client.
    Nodes:
    -AE 802.11g (Airtunes only)
    -AE 802.11n (Airtunes and USB printer)
    -1st gen Time Capsule (Internet gateway, "creating network" 802.11g/n)
    -iMac 8,1 (Music server via internal wifi, wired TC client)
    -Macbook 4,1 (wifi client)
    -iPhone
    -Canon MP620 (USB to iMac, wifi to Macbook)


  • Poor internet speed with time capsule or Airport extreme

    I'm trying to find a way to increase the throughput of my 1st-generation Time Capsule. I'm getting a 33% drop in speed over wireless compared to wired connections.
    Current setup:
    Virgin Media XXL broadband, 100Mbit down, 5Mbit up.
    Modem: standard Virgin Media modem (not the Superhub), connected to an Apple Time Capsule (500GB) @ 1000Mbit full duplex.
    Time Capsule reset to defaults and then configured to use the 5GHz band, channel 36 (no other networks in the area use this band; tested using WiFi Scanner and iStumbler).
    Tested down/up speed using www.vmspeed.com.
    Wired connection: 105Mb/sec download, 4.9Mb/sec upload.
    Wireless connection: 73.14Mb/sec download, 4.7Mb/sec upload.
    I have tried testing with a 3rd-gen AirPort Extreme with the same setup and get the same results.
    Tested using a 2008 MacBook running Lion 10.7.3 and also a 2010 MacBook unibody running Snow Leopard 10.6.8. Both laptops connect to the wireless network at 270Mb/sec and both exhibit the same drop in speed over wireless; only one laptop was connected to the network at a time.
    Is there a limit to the Time Capsule/AirPort Extreme wireless throughput?

    Further to Bob's comments:
    A Gen1 TC will be using a Marvell wireless chip, and your 2008 and 2010 MacBooks will use Atheros and/or Broadcom cards. Just open System Profiler and look for the info on the AirPort card. We find that mixtures of wireless chipsets, especially older draft-N and later N products, can give very varied results.
    The very fact that you are linking at 270 and not 300Mbps shows some reduction from the theoretical max speed, and really, to get over 100Mbps with any wireless you need a perfect setup: matched wireless chipsets, etc.
    Do a test uploading and downloading a file to the TC to see if the LAN speed is better than the internet speed.
    In reality I think you are doing especially well. We see loads of people complaining here about slow internet who are getting less than 10% of the speed they get directly when routed through the TC. And on most occasions the limit in speed is not really going to affect what you do, as the real links to the internet are not that fast.

  • MOVED: K9N4 Sli-f Replacement Bridge

    This topic has been moved to AMD64 nVidia boards.
    https://forum-en.msi.com/index.php?topic=126440.0


  • K9N4 Sli-f Replacement Bridge

    I live in Houston, Texas, and I purchased the K9N4 SLI-F a little over a year ago. Since then I have been using an 8800 GTX, but lately I have been experiencing major overheating issues, so I've decided to purchase a new CPU cooler and a new GPU, and because of unbeatable deals on Newegg.com I have decided to purchase two GTS 250 1GB cards... but I seem to have misplaced my SLI bridge. I did receive one when I purchased my motherboard, but I never really thought I'd use it (go me!) and I never tried to keep track of it. Is there any way I can get a replacement? I have already put in a request to customer service via email, but if it's anything like other companies I know it will be a while until I receive a reply.

    http://computers.pricegrabber.com/accessories/Supermicro-AOC-SLIB-SLI-BRIDGE-DUAL/m24516191.html
    http://www.ncixus.com/products/28972/SLI-12CM/Arctic%20Cooling/

  • SATA 1/2 vs. 3/4

    Hi Guys,
    I OC'ed my system (see below) to 240x11 with 1:1 mem/CPU, HTT 4x, 2.5-3-3-7-1T, 1.47 vcore. I couldn't get above this, despite trying looser timings, more voltage, etc., and then I read about the SATA 1/2 lock problem on the K8N Neo2. I have SATA drives on 1/2, though I don't boot from either of them. Sadly, I did eventually experience data corruption on channel 1 and had to reformat.
    I've read in other forums that a northbridge SATA channel (1/2) is faster than an external channel (3/4) because it's native, whereas an external channel has to communicate with the chipset over the PCI bridge. My question is: are the performance gains I would get by switching to SATA 3/4 (enabling higher clocks) worth the hit I would take in data transfer?
    Thanks for any advice!

    None of the four SATA ports is connected to or run off the PCI bus.
    The nForce3 250 chipset by NVIDIA has a physical layer (PHY) built in for 2 SATA ports.
    Some manufacturers call these ports 1 & 2 and some (MSI) call them 3 & 4.
    The PHY-integrated ports are the two closest to the AGP port and CPU socket on all NF3 250/Ultra motherboards, because that is part of the NVIDIA reference design.
    The NF3 chipset has a SATA-link bridge so motherboard manufacturers can use an external PHY for 2 additional SATA ports; this PHY chip is made by Marvell in most cases.
    The ports are still integrated on-chip (NF3 chipset) even if they use the Marvell PHY chip.
    The problem is that the chipset can't control the clock reference for the Marvell PHY the way it can for its internal PHY, so the SATA link frequency of these two Marvell ports scales with the HTT clock.

  • Is clustering possible with this setup?

    Is it possible to use:
    Windows Server 2012 Datacenter
    2 Supermicro AS-2022G-URF4+ R servers with
    Supermicro AOC-USAS2-L8i RAID cards and SAS drives
    for a failover cluster for Hyper-V?
    At the moment we are using mirrored drives for the operating system, with 4 drives free for the cluster itself if possible, and two slots open.

    Is it possible to create shared storage with this setup that will work with the cluster, or do I need a SAN?
    At the moment you need to deploy third-party software that does something like the following: take a bunch of locally attached disks and "mirror" them with a software component. If you already have an all-SAS setup (which you do), you can deploy what's called a "Clustered RAID Controllers" config, replacing the SAS HBAs you use now. See:
    LSI Syncro
    http://www.lsi.com/products/shared-das/pages/default.aspx
    The 8i does not need SAS JBODs to create a fault-tolerant config; the 8e needs external enclosures.
    P.S. I would not deploy any "magic hardware", as it's vendor lock-in. You can change software vendors easily, but I'm not aware of anybody else doing things similar to what LSI does.
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.
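    Separate from the shared-storage question, forming the two-node Hyper-V failover cluster itself can be scripted. A minimal sketch with placeholder node/cluster names and IP, assuming the Failover Clustering feature on Server 2012:
    # Install the feature, validate the nodes, then create the cluster
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
    Test-Cluster -Node "node1", "node2"
    New-Cluster -Name "HVCluster" -Node "node1", "node2" -StaticAddress "192.168.1.50"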

  • Wireless comm problem with Time Capsule or AirPort Extreme using an iMac G5

    I bought a Time Capsule and a 20" iMac G5 in January 2009... installed all the components and created a wireless network... MacBook, iPhone and iPod Touch connected perfectly... the iMac connected, but started to drop communications with the Time Capsule... Safari, iTunes and other internet-access programs had problems connecting 100% with the web... After reviewing all the posts here and testing/trying several parameters, I found out that you need to define WPA/WPA2 password protection with more than 13 characters... FYI, the Time Capsule is configured using 802.11n (b/g compatible)...


  • Switch cisco catalys 2960

    Hi all,
    I have a problem involving a Cisco Catalyst 2960 switch.
    I am using a device which includes a Marvell 88E1111 PHY chip. The device can send and receive PTP packets to and from my PC.
    Now, I want to connect the device and the PC to the Cisco Catalyst 2960 switch, which will help me trace all of the packets in the network. The test scenario is below:
    - Switch: Cisco Catalyst 2960
    - Tracer: Wireshark software
    - PC: Windows 7 64-bit, plugged into switch port 1 (interface 1)
    - Device: FPGA board, plugged into switch port 2 (interface 2); operating mode: 1000Mbps, full duplex, no auto-negotiation, no auto power efficient-ethernet
    - Interface 2 of the switch is statically set with the device's MAC address, which ensures the switch knows the device's MAC.
    I am stuck on a problem: although the RJ45 TX status LED is on, no packets are sent to the switch. I have no idea what is wrong here.
    Could you give me some advice, please?

    It means the IOS image used is corrupt.
    Go to the Cisco website and download the IOS image again. Once the download is complete, compare the MD5 hash value of the downloaded file against the MD5 hash value found on the Cisco website.
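    To compare hashes on the PC side before copying the image to flash, a minimal sketch (the file name is a placeholder; Get-FileHash needs PowerShell 4.0 or later, and older systems can use certutil -hashfile instead):
    # MD5 of the downloaded image; compare the Hash value against cisco.com
    Get-FileHash -Algorithm MD5 "C:\Downloads\c2960-image.bin" | Format-List Algorithm, Hash, Path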
