Solaris 10 Max Partition size

Hi,
I would like to know the maximum partition size that Solaris 10 can support/create.
We have a Sun StorEdge 6920 system with 8 TB of storage, built on 146 GB hard disks.
Is it possible to create a 4 TB partition?
If not, any suggestions are appreciated.

Look into EFI; it allows file systems to be built to multi-terabyte sizes if you need them.
Per Sun:
"Multi-terabyte file systems, up to 16 Tbyte, are now supported under UFS, Solaris Volume Manager, and VERITAS's VxVM on machines running a 64-bit kernel. Solaris cannot boot from a file system greater than 1 Tbyte, and the fssnap command is not currently able to create a snapshot of a multi-terabyte file system. Individual files are limited to 1 Tbyte, and the maximum number of files per terabyte on a UFS file system is 1 million.
The Extensible Firmware Interface (EFI) disk label, compatible with the UFS file system, allows for physical disks exceeding 1 Tbyte in size. For more information on the EFI disk label, see System Administration Guide: Basic Administration on docs.sun.com."
Found this at:
http://www.sun.com/bigadmin/features/articles/solaris_express.html
So you may want to look into EFI, which is a different way of labeling and partitioning the disk.
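For example, on Solaris 10 you could put an EFI label on the LUN and build a multi-terabyte UFS on it. A minimal sketch, assuming the 6920 volume shows up as c2t1d0 (a placeholder device name):

format -e c2t1d0                 # expert mode; when labeling, choose the EFI label
newfs -T /dev/rdsk/c2t1d0s0      # -T creates a UFS that can hold/grow beyond 1 TB
mount /dev/dsk/c2t1d0s0 /export/big

Keep the caveats above in mind: you cannot boot from it, and fssnap cannot snapshot it.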

Similar Messages

  • Max GPT partition size for Windows 2008R2

Is the max partition size 256 TB minus 64 KB, for a 64 KB cluster size? What is the max?

    Hi Jared,
From Windows Server 2003 SP1 and Windows XP x64 Edition onward, a maximum raw partition of
18 exabytes can be supported. (Windows file systems are currently limited to 256 terabytes each.)
GPT disks support partitions of up to 18 exabytes (EB) in size and up to 128 partitions per disk.
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn640535(v=vs.85).aspx
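As a hedged illustration (disk 1 and the drive letter are placeholders), creating a large GPT data volume with a 64 KB allocation unit from an elevated diskpart session looks roughly like this:

diskpart
select disk 1
convert gpt
create partition primary
format fs=ntfs unit=64K quick
assign letter=E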
    Regards,
    Rafic

  • Permanent and temporary partition sizes exceed maximum size allowed

I received error 818:
permanent and temporary partition sizes exceed maximum size allowed on this platform
I'm currently using the 32-bit version of TimesTen running on Solaris, and I increased one of my data stores to
PermSize=1700
TempSize=350
which is around 2 GB.
The system has 16 GB of memory, and the shared memory limit is set to 0xF0000000, which is almost the 32-bit maximum.
Does anyone know whether this error is a system limit (32-bit), or whether it can be fixed by tuning a system parameter? The system limits section does not have any information on the partition size limit. Does anyone have any idea?

That's correct; the 1 GB limit is specific to 32-bit HP-UX. All other 32-bit Unix/Linux platforms have a maximum size limit of 2 GB, and as Jim mentioned, this limit is such that:
    PermSize + TempSize + LogBufMB + ~20MB < 2 GB
    If you need anything larger you need a 64-bit O/S and 64-bit TimesTen.
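To make that arithmetic concrete, here is a hedged sys.odbc.ini sketch for a 32-bit data store that stays under the 2 GB ceiling (the DSN name and path are placeholders, and other required attributes are omitted):

[big_ds]
DataStore=/data/big_ds
PermSize=1500
TempSize=300
LogBufMB=64
# 1500 + 300 + 64 + ~20 MB overhead = ~1884 MB, under the 2 GB cap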
    Chris

  • Need info about max HDD size available for Satellite Pro M30-813

    Hello,
The following question is mainly addressed to authorized Toshiba support personnel. What exactly is the limitation on the maximum size of an internal HDD that I can use with my Satellite Pro M30-813?
Recently, I bought and installed a Seagate 160 GB SATA drive, onto which I successfully installed WXP Pro and ran it for quite a while with no problems. More recently, I was copying a large amount of data from an external hard drive to my new internal disk; as the files were being copied, with about 50 GB of free space left, I experienced a Windows "delayed write failed" error and a massive partition failure with no possibility of recovering data. The system would no longer boot and the whole MBR was damaged. As a result, I lost all data on my new disk.
Although I realize that Toshiba is not responsible for additional hardware that I use with my laptop and that is not officially supported by Toshiba, I am certain that as an end user of a Toshiba product I have the right to know the max HDD size limitation for my notebook model. Therefore, I request that a Toshiba technical support representative give me a straight official answer to my question.
    Thank you in advance,
    Andrejs
    (You may also contact me privately at my e-mail address)

Hi Andrew
> The following question is mainly to be addressed to authorized Toshiba support personnel
I think you are in the wrong place if you are looking for an answer from authorized Toshiba support.
This is a Toshiba user-to-user forum! Here you will meet Toshiba notebook owners and enthusiasts who share knowledge and try to solve problems, but nobody from Toshiba :(
I can share my experience with the Satellite M30 and its HDD upgrade possibilities.
To my knowledge, the Sat M30 supports 40 GB, 60 GB and 80 GB HDDs for sure.
In my opinion you could use a 100 GB HDD, but bigger HDDs will not run and function correctly.
So switch to a smaller HDD size and enjoy the notebook!
I've googled a little and found compatible HDDs and their part numbers:
    HITACHI GBC000Z810 -> 80GB
    HITACHI GBC00014810 -> 80GB
    TOSHIBA HDD2188B -> 80GB
    HITACHI G8C0000Z610 -> 60GB
    HITACHI G8BC00013610 -> 60GB
    TOSHIBA HDD2183 -> 60GB
    TOSHIBA HDD2184 -> 60GB
    I hope this could help you a little bit!
    Best regards

  • How to set desired partition size on Satellite C850-12D with W7 Pro?

    Hi all,
I recently got a new laptop, a Satellite Pro C850-12D with Seven Pro: the disk (500 GB) is initially partitioned like this:
- a small hidden one, 1.46 GB
- the C: one, 449.47 GB (visible to the user)
- a hidden one, 14.83 GB
I would like to reduce the C: partition to 100 GB in order to create a new one for DATA.
So I began the process using the Seven disk-management tools, but the space available to shrink seems limited to a max of 226.685 GB: C: remains greater than or equal to 233.570 GB.
I tried defragmenting C:, but its minimal size remains 233.570 GB.
Is there a method to overcome this limit and achieve my goal of 100 GB?

    Hi
Before changing any partitions on the HDD, I strongly recommend creating a recovery disk!
The Toshiba Recovery Media Creator helps you create such a disk, and it is needed in case something goes wrong with your HDD.
Back to the partition issue:
The Windows shrink limit usually comes from unmovable system files (the MFT, pagefile, and similar) sitting in the middle of the partition, which the built-in tool will not relocate. You will need to use a 3rd-party tool like *Gparted* in order to change the partition size.

  • Max File size in UFS and ZFS

    Hi,
Can anyone share the maximum file size that can be created on Solaris 10 UFS and ZFS?
And what is the maximum size of a file compressed using tar/gzip?
    Regards
    Siva

from 'man ufs':
A sparse file can have a logical size of one terabyte. However, the actual amount of data that can be stored in a file is approximately one percent less than one terabyte because of file system overhead.
As for ZFS, well, it's a 128-bit filesystem, and the maximum size of a file or directory is 2^64 bytes, which works out to 16 exbibytes (about 18.4 exabytes), even though my calculator gave up on calculating it.
http://www.sun.com/software/solaris/ds/zfs.jsp
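If you want to check the arithmetic yourself, bc handles it easily (a quick shell sketch):

echo '2^64' | bc              # 18446744073709551616 bytes
echo '2^64 / 1024^6' | bc     # 16, i.e. 16 exbibytes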
    .7/M.

  • Max heap size limits

    I've been looking around for information on the max heap size limits on Sun's JVMs but can't seem to find any information. Just by testing, it seems like the max heap size for Windows 2000 can vary from 1.3G to 1.6G depending upon the machine (JDK 1.4). Does anybody know where I could find actual documentation that describes the limits for Sun's VMs on Windows (2000 and Advanced Server), Linux, and Solaris? I'm about to file this as a documentation bug against the JDK.
    God bless,
    -Toby Reyelts

There was an older thread in the forums that had some info on this - my quick search failed to locate it, so you might want to spend some time looking. The basic problem is memory address space fragmentation by the OS: the OS locates items (DLLs, mapped regions) in memory and effectively constrains heap growth to the unfragmented area that the heap starts in. While there may be more "unused" memory, it's not contiguous. There is also some info in MS's MSDN documentation regarding this condition, with information on the various OSs. I think Linux has a similar "condition".
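Absent official documentation, one practical way to find the ceiling on a given machine is to probe it with java -version, which starts and immediately exits the VM; a hedged shell sketch (the sizes are just sample points):

for m in 1200 1300 1400 1500 1600 1700 1800; do
  java -Xmx${m}m -version > /dev/null 2>&1 && echo "${m}m ok" || echo "${m}m failed"
done

The largest value that still starts the VM approximates the contiguous address space available for the heap on that OS and machine.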

  • Strange: Mac screen menu-bar requires max-heap-size to be set.

    I planned to omit the max-heap-size attribute in the line of my jnlp file
    <j2se version="1.6+" max-heap-size="256m" />
    The idea was that with Java 1.6 the heap size is set automatically
    according to the client's RAM.
Unfortunately, the Macintosh screen menu-bar works if and only if the max-heap-size attribute is present.
This is on Mac OS X 10.4 with Java 1.5.
Strange, since the Mac runs Java 1.5 and I am talking about settings for 1.6.
    The jnlp passed ValidateJNLP at http://mindprod.com/jgloss/jnlp.html#VALIDATION
    Here is another post stating that attributes in JNLP have side effects on Mac's screen menu-bar:
    http://lists.apple.com/archives/Java-dev/2008/Jul/msg00122.html
    Here is my jnlp:
    w3m -dump http://www.bioinformatics.org/strap/strap.jnlp
    Is there an explanation for this?
    Christoph

user10289576 wrote:
> I would not blame Macintosh.
> The error might still be in Sun's code.
Could be. But to fix the Mac VM it would require the following:
1. Find it in the Sun VM.
2. Fix it in the Sun VM.
3. Move the changes to the Mac VM... somehow.
> If the jdk were smaller, less redundant and clearer, then
> OpenJDK could possibly be compiled on a Mac.
Not sure what that means, since there are likely OS-level calls that must be implemented somewhere in the API and which are specific to the Mac.
Just as there are differences between Windows/Solaris/Linux which Sun accounted for.
And that is what Apple did to make it work on the Mac, and what someone (someone who likes the Mac) will need to continue doing with the public release (expulsion?), which is the form that Java will have going forward.
> The main thing that should be improved on a Macintosh
> is to directly allow for Linux and Solaris executables such that the Linux JDK could work directly on a Mac.
The main thing, again, is that Apple is no longer supporting Java on the Mac.
And Apple, not Sun and certainly not Oracle, were the ones that created the Mac Java VM.
So the main thing at this point is that ALL future directions Java takes on the Mac depend not on Apple but on the Macintosh community. That includes features as well as fixes.

  • Partition size limitation?

I have run into the following problem when trying to set up a new large hard disk on my Solaris 9 machine:
If I use 100% of the space (200 GB) on the disk to create my Solaris partition and then proceed to define the slices, I always get a warning message when performing the label command from the partition prompt:
Warning: no backup labels
when in fact I used the "Free Hog" option to set it up and there is a backup slice.
If I proceed to run newfs, the process hangs and eventually the entire machine locks up.
If I reduce the Solaris partition to 25% (50 GB) and follow the same steps, I do not get any warning message and the newfs process finishes normally.
I experience the exact same problem at 50% (100 GB). So my limit is somewhere between 50 and 100 GB for the partition size. Is there some kind of workaround for this? I have a Solaris 8 machine that uses a 100 GB partition without any problems, so it seems unlikely that I am hitting some kind of OS limitation...
The hard drive is an ATA-133 (Maxtor), however it's running at ATA-100 due to motherboard chipset limitations... I don't know if this could be causing a problem or not...

I'd try to install Solaris 9 from the CD "Software 1 of 2" (that is, not using the installation CD).
At the stage where the video card, keyboard and mouse are configured, select "bypass". The installer
then runs in ASCII mode.
Now, in case of a system hang, I hope that you are able to see some kernel error message and
may get a clue as to what the problem is.
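Before reinstalling, it may also be worth checking what the label itself claims; a hedged diagnostic sketch (c0d0 is a placeholder for the 200 GB ATA disk):

prtvtoc /dev/rdsk/c0d0s2      # print the geometry and slice table recorded in the label
format c0d0                   # then run "verify" at the format> prompt to re-read the label

If the geometry reported there disagrees with the drive's real capacity, the missing backup labels and the newfs hang are more likely a labeling or driver large-disk issue than a filesystem size limit.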

  • Max-stack-size - default_stksize

    Hi,
    First, sorry for my english ^^
I'm new to Solaris. I installed Solaris 10 on a Sun V490, using the Core Network install.
Why, with the default settings, do OS daemons and processes fail like this:
Jun 17 14:50:10 unknown genunix: [ID 883052 kern.notice] basic rctl process.max-stack-size (value 8683520) exceeded by process 353
In this example I generated the error with the format command, but I get this error with other commands and daemons, like nscd.
I had the same problem with the value max-file-descriptor. The values set by projmod for the system project did not seem to take effect, so I used the "old" parameters rlim_fd_cur and rlim_fd_max. Now it's OK.
I found the parameter default_stksize in the Sun documentation and put this in my /etc/system file:
set default_stksize=16384
At boot time I get no error message for this value, but the max-stack-size value is unchanged:
    prctl -n process.max-stack-size 130
    process: 130: /usr/sbin/nscd
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    process.max-stack-size
    basic 8,28MB - deny 130
    privileged 127MB - deny -
    system 2,00GB max deny -
    nscd is in the system project:
    ps -p 130 -o project
    PROJECT
    system
Thanks in advance; any idea is welcome.
    Guillaume

    Hi Prasad,
Block size is the number of objects processed in parallel in the background by the application. This is normally a configuration activity, to be set up in line with Basis.
In the block size, we enter the number of objects to be processed per block by the CIF comparison/reconciliation during data selection in SAP APO or in the partner system.
If you increase the block size, the memory required also increases; this has a positive effect on performance. If processes are cancelled due to lack of memory, you can decrease the block size to save memory.
If you do not enter anything here, the system determines the block size dynamically from the number of objects that actually exist and the maximum number of work processes available.
Normally, when you execute a job in the background, it picks up an application server automatically, or uses a manually defined server. With parallel processing, more than one job of the same identity can be triggered at a time by defining application servers. But too much parallel processing activity will affect performance.
One also needs to define the parallel processes to control system behaviour. A parallel processing profile is defined for parallel processing of background jobs; you then assign these profiles to variants in the applications.
    Regards
    R. Senthil Mareeswaran.
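Coming back to the Solaris question: as far as I know, the /etc/system tunable default_stksize sets kernel thread stack sizes, not the process.max-stack-size resource control, which would explain why the prctl output did not change. The usual persistent fix goes through the project database; a hedged sketch (16MB is just an example value):

projmod -s -K 'process.max-stack-size=(basic,16MB,deny)' system
# only processes started in the project after the change pick it up; verify with:
prctl -n process.max-stack-size -i project system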

  • Split Max partition

I have a 20 GB table which has been partitioned by range (on a yearly and quarterly basis).
Most of the data has been stored in the MAXVALUE partition, and the segment size is increasing fast.
Now I want to get rid of the MAXVALUE partition and divide the table into more partitions which would capture data yearly. This way I can keep only 2 years' worth of data on production, and a partition can be dropped easily after being moved to the data warehouse.
Kindly suggest. The partitions I want to split off are yearly, like:
    "PARTITION "2008" VALUES LESS THAN (TO_DATE(' 2009-01-01 00:00:00',
    TABLESPACE "LLS_DATA01" NOCOMPRESS"
    Below is the table desc
    CREATE TABLE "LLS"."test"
    (     "STARTDATE" DATE NOT NULL ENABLE,
         "CALLID" CHAR(9 BYTE) NOT NULL ENABLE,
         "CUSTOMERLLSID" NUMBER,
         "CRCLIENTLLSID" NUMBER,
         "EAPLLSID" NUMBER,
         "EAPCOMMROOMID" NUMBER,
         "LANGID" VARCHAR2(5 BYTE),
         "CRCLIENTID" VARCHAR2(20 BYTE),
         "PERSONALCODE" VARCHAR2(255 BYTE),
         "PIN" NUMBER,
         "SPECIALPROMOCODE" VARCHAR2(4 BYTE),
              "PREBILLESTIMATE" NUMBER,
         CONSTRAINT "LANID" CHECK ( LANID IN ('A', 'B','C','E','P') ) ENABLE,
         CONSTRAINT "TRUE_OR_FALSE_IC1" CHECK (InterpreterLunchAdjustmentMade IN ('T', 'F') ) ENABLE,
         CONSTRAINT "TRUE_OR_FALSE_IC2" CHECK (DontBillCustomer IN ('T', 'F') ) ENABLE,
         CONSTRAINT "TRUE_OR_FALSE_IC3" CHECK (DontPayInterpreter IN ('T', 'F') ) ENABLE,
         CONSTRAINT "TRUE_OR_FALSE_IC4" CHECK (RecordChanged IN ('T', 'F') ) ENABLE,
         CONSTRAINT "TRUE_OR_FALSE_IC5" CHECK (LogicalDelete IN ('T', 'F') ) ENABLE,
         CONSTRAINT "XPKINTERPRETATIONCALLS" PRIMARY KEY ("INTERPRETATIONSTARTDATE", "CALLID")
    TABLESPACE "LLS_INDX01" ENABLE
    TABLESPACE "U12_DATA"
    PARTITION BY RANGE ("STARTDATE")
    (PARTITION "2007Q3" VALUES LESS THAN (TO_DATE(' 2007-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    TABLESPACE "LLS_INTCALLS_DATA01" NOCOMPRESS ,
    PARTITION "2007Q4" VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00',
    TABLESPACE "LLS_DATA01" NOCOMPRESS ,
    PARTITION "CALLSMAX" VALUES LESS THAN (MAXVALUE)
    TABLESPACE "LLS_INTCALLS_DATA01" NOCOMPRESS ) ;

    Go to Morgan's Library: www.morganslibrary.org/reference.html
    Select PARTITIONING
    Search for "Split Partition"
    If you like the library ... bookmark the page.
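For reference, a hedged sketch of the kind of statement the library documents, splitting a yearly 2008 partition out of CALLSMAX (the credentials are placeholders; tablespace names follow the DDL above):

sqlplus lls/yourpassword <<'EOF'
ALTER TABLE "LLS"."test"
SPLIT PARTITION "CALLSMAX" AT (TO_DATE('2009-01-01', 'YYYY-MM-DD'))
INTO (PARTITION "2008" TABLESPACE "LLS_DATA01",
      PARTITION "CALLSMAX")
UPDATE GLOBAL INDEXES;
EOF

Repeat the split for each year you want carved out of CALLSMAX, oldest first; once a yearly partition has been archived to the warehouse it can simply be dropped.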

  • Does the jvm allocate complete max heap size initially?

    Does the JVM allocate the memory for the entire max heap size up front, or does it start with the specified minimum size and increase and grab more later if needed?
    The reason this is being posted is because we have a number of jboss servers that are being run. Some don't require large heap sizes, others do. If we use the same large max heap size for all would all the memory get allocated up front, or possibly a smaller initialization portion?

    I have done the test with Solaris, Linux and WinXP.
    Test with -Xms512M
I wrote a simple Java program with the minimum heap size set to -Xms512m, then executed it on the Solaris and WinXP platforms. The memory usage of the Java process was 6 MB on WinXP and 9 MB on Solaris, rather than 512 MB. The JVM does not allocate the configured minimum size of 512 MB at the start of process execution.
Reason:
If you ask the OS for 512 MB it'll say "here it is", but pages won't actually be allocated until your app actually touches them.
If the allocation were not being made at the start of the process, the concept of a minimum heap size would seem unnecessary. But the garbage collection log shows the minimum heap size as what was configured using the -Xms option.
Test with -Xms1024M
The JVM arguments were set to -Xms1024m -Xmx1024m, but the used memory observed using Windows perfmon was 573 MB.
6.524: [Full GC 6.524: [Tenured: 3081K->10565K(967936K), 0.1949291 secs] 52479K->10565K(1040512K), [Perm : 12287K->12287K(12288K)], 0.1950893 secs]
Reason:
Optimization is something that operating systems do. The JVM allocates the memory in its address space and initializes all data structures according to -Xms. In any way that the JVM can measure, the allocation from the OS is complete. But the OS doesn't physically assign a page to the app until the first store instruction touches it. Almost all modern OSs do this.
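The same effect is easy to observe from a shell; a hedged sketch (YourApp is a placeholder class):

java -Xms512m -Xmx512m YourApp &
ps -o pid,vsz,rss -p $!    # vsz shows the ~512 MB reservation; rss only the pages actually touched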
    Hope this is helpful.

  • [ SOLVED ] Partition Sizes For New Arch Install ?

    Hi All
I have just picked up a new 250 GB HDD and I'm going to do a clean install of Arch & Openbox. I would like to know what people would suggest for the various partition sizes:
/ (root)
swap
etc.
I have 4 GB of RAM, so I'm not sure if swap is even required!
    Many Thanks

+1 on KlavKalashj's advice!
When the installer warns you about there not being a /boot partition (besides the swap), just select ignore; it's no problem at all!
If you do want a swap partition (and don't hibernate), then I would say max 1 GB, imho.
For an Openbox-based install, I would say 10 GB is more than enough (mine's 5 GB, although with a tiling WM: Musca).
I don't keep the package cache forever growing, but empty it at times when I know everything is OK. There are also scripts available which clean up the cache, e.g. keeping only one revision back of each package; see the sketch below.
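A couple of hedged examples of that cleanup (paccache ships in the pacman-contrib package and may not be installed by default):

sudo pacman -Sc     # remove cached versions of packages that are no longer installed
sudo paccache -rk1  # keep only the most recent cached version of each package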

  • Sophos AV max scanning size / timeout

    Hi,
I haven't found any changeable settings for the max scanning size or scanning timeout on an S160 v7.1.3 with Sophos AV.
In the GUI under "Security Services --> Anti-Malware" it shows "Object Scanning Limits: Max. Object Size: 32 MB".
I'm not able to change it; this parameter seems not to belong to Sophos AV.
I can change it only after enabling Webroot or McAfee first.
The CLI has no commands for adjusting AV settings.
How can I control the max scanning size or scanning timeout with Sophos AV?
Does it have fixed values?
    Does anyone have an idea, how it works?
    Kind regards,
    Manfred

    With administrator rights, the value should be editable.  The object size is applied to all scanners which have been licensed and enabled on the appliance.
    ~Tim

  • "max-pool-size"   what is it good for?

SCreator, simple CRUD use:
After a while I get:
"Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connection"
Which is odd, because it's just me using the server/database. It looks like every time I run a test, another connection is leaked.
Do I have to restart the server? Is there a way to say "it's only me, reuse a single connection"?
Why does "connection pooling" make life harder?
Can I turn it off?
    cheers
    cts

I got the same error in my JSC project. I searched for a few days and found the solution: I had made a mistake in my page navigation. I forgot a slash in <to-view-id>.
A bad example:
<navigation-rule>
<from-view-id>/*</from-view-id>
<navigation-case>
<from-outcome>page13</from-outcome>
<to-view-id>page13.jsp</to-view-id>
</navigation-case>
</navigation-rule>
A good example:
<navigation-rule>
<from-view-id>/*</from-view-id>
<navigation-case>
<from-outcome>page13</from-outcome>
<to-view-id>/page13.jsp</to-view-id>
</navigation-case>
</navigation-rule>
With this mistake, afterRenderedResponse() was never called, and the ResultRowSet was never closed.
    Korbben.
