Partition size limitation?

I have run into the following problem while trying to set up a new large hard disk on my Solaris 9 machine:
If I use 100% of the space on the disk (200 GB) to create my Solaris partition and then proceed to define the slices, I always get a warning when running the label command from the partition prompt:
Warning: no backup labels
This happens even though I used the "Free Hog" option to set it up and there is a backup slice.
If I then run newfs, the process hangs and eventually the entire machine locks up.
If I reduce the Solaris partition to 25% (50 GB) and follow the same steps, I get no warning message and newfs finishes normally.
I experience the exact same problem at 50% (100 GB), so my limit is somewhere between 50 and 100 GB for the partition size. Is there some kind of workaround for this? I have a Solaris 8 machine that uses a 100 GB partition without any problems, so it seems unlikely that I am hitting some kind of OS limitation...
The hard drive is an ATA-133 (Maxtor), though it runs at ATA-100 due to motherboard chipset limitations... I don't know whether this could be causing a problem or not...
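One boundary worth ruling out with large ATA disks of this era is the 28-bit LBA addressing limit. The quick computation below shows where it sits; treat this as a hypothesis to check against the driver's 48-bit LBA support, not a confirmed diagnosis (and note it does not by itself explain a failure below 128 GiB):

```python
# Capacity addressable with 28-bit LBA and 512-byte sectors -- the classic
# ATA limit that older IDE drivers (and some BIOSes) cannot see past.
SECTOR_SIZE = 512          # bytes per sector
LBA28_SECTORS = 2 ** 28    # highest sector count a 28-bit LBA can address

limit_bytes = LBA28_SECTORS * SECTOR_SIZE
print(limit_bytes)             # 137438953472
print(limit_bytes / 10 ** 9)   # ~137.4 (decimal GB)
print(limit_bytes / 2 ** 30)   # 128.0 (GiB)
```

The "no backup labels" warning on the full-disk layout would at least fit this picture, since the backup label is written near the end of the disk, which on a 200 GB drive lies beyond that boundary.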

I'd try installing Solaris 9 from the CD "Software 1 of 2" (that is, not from the installation CD).
At the stage where the video card, keyboard and mouse are configured, select "bypass"; the installer
then runs in ASCII mode.
Now, in case of a system hang, I hope you will be able to see a kernel error message and
get a clue what the problem is.

Similar Messages

  • Partition size limitations?

    I attached a 2 TB disk to my AirPort Extreme. It has a 600 GB partition and a 1.4 TB partition. I can see and use the 600 GB partition, but not the 1.4 TB partition. Also, I have six partitions (not counting the 1.4 TB partition) attached to the AirPort, and all are working fine except the 1.4 TB one. Are there issues with either the size of the partitions or the number of them?
    Thanks.

    According to Apple, you can only use a single-partition drive: http://support.apple.com/kb/HT2426
    But Apple also says you can't do Time Machine backups to an AEBS-connected drive, and I'm doing both!
    The only issue I've had was selecting a partition for use with Time Machine, and changing various names helped. It's probably worth a try:
    Start with your System Name at the top of the System Preferences > Sharing panel.
    It must not be blank; it should not be more than 25 characters long; and you should avoid punctuation, spaces, and unusual characters.
    Then click Edit and make the Local Hostname match it.
    If that doesn't help, apply the same rules to the name of your Airport Extreme and network.
    I know, it makes no sense, but often works anyway!

  • How to set desired partition size on Satellite C850-12D with W7 Pro?

    Hi all,
    I recently got a new laptop, a Satellite Pro C850-12D with Windows 7 Pro; the disk (500 GB) is initially partitioned like this:
    - a small hidden one, 1.46 GB
    - the C: one, 449.47 GB (visible to the user)
    - a hidden one, 14.83 GB
    I would like to reduce the C: partition to 100 GB in order to create a new one for DATA.
    So I began the process using the Windows 7 disk-management tools, but the available shrink space seems limited to a maximum of 226.685 GB: C: remains greater than or equal to 233.570 GB.
    I tried defragmenting C:, but its minimal size remains 233.570 GB.
    Is there a method to overcome this limit and achieve my goal of 100 GB?

    Hi
    Before changing any partitions on the HDD I strongly recommend creating a recovery disk!
    The Toshiba Recovery Media Creator helps you create such a disk, and it is needed in case something goes wrong with your HDD.
    Back to the partition issue:
    You will need to use a 3rd-party tool like *Gparted* in order to change the partition size.

  • Disk Utility -- Smallest (minimum) partition size

    I viewed an earlier discussion on this topic that was answered incorrectly, but I couldn't add this information to it because it had already been archived. However, I was unable to find it anywhere else on the web, so I thought it important to get on the record.
    Disk Utility in Leopard DOES enforce a minimum partition size. It is 1GB.
    This is documented in the Apple Service Training course "Leopard Tools and Techniques."

    V.K. wrote:
    it looks like DU will not allow you to create a partition smaller than one tenth of the disk size.
    when I do it on a 200 GB drive the smallest size it lets me make is 20 GB, and for a 500 GB drive it's 50 GB.
    I guess that's a safeguard against creating too many partitions.
    Message was edited by: V.K.
    I thought so. Like most filesystems, all the information must be stored in the boot sector (or whatever it's called with BSD), and the total number of files is related to the sizes and total number of partitions, so like FAT32 or even NTFS, HFS has its practical and theoretical limits.
    I believe HFS has two boot records or equivalents, one for backup, and this adds to the storage-space issue.
    Going back to the old 1.44MB floppies, the unformatted size was over 1.7 MB and Norton and MS took advantage of that, but not all disk readers were happy with what they did. MS once shipped Windows on a set of 1.7 MB floppies. Of course this was BC (before CDs).
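    V.K.'s observation can be condensed into a quick check. Note the one-tenth rule below is inferred from the two data points in this thread, and the 1 GB floor from the service-training note above; neither comes from official Apple documentation:

```python
def du_min_partition_gb(disk_size_gb, floor_gb=1):
    """Smallest partition Disk Utility appears to allow: one tenth of the
    disk, but never below the 1 GB floor (both values inferred from the
    discussion above, not from an official spec)."""
    return max(disk_size_gb / 10, floor_gb)

print(du_min_partition_gb(200))  # 20.0 -- matches V.K.'s 200 GB drive
print(du_min_partition_gb(500))  # 50.0 -- matches the 500 GB drive
print(du_min_partition_gb(5))    # 1 -- the documented floor kicks in
```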

  • Solaris 10 Max Partition size

    Hi,
    I would like to know the maximum partition size that Solaris 10 can support/create.
    We have a Sun StorEdge 6920 system with 8 TBytes, based on 146-GByte hard disks.
    Is it possible to create a 4-TByte partition?
    If not, any suggestions are appreciated.

    Look into EFI; it allows file systems to be built to large TB sizes if needed.
    Per Sun:
    "Multi-terabyte file systems, up to 16 TByte, are now supported under UFS, Solaris Volume Manager, and VERITAS's VxVM on machines running a 64-bit kernel. Solaris cannot boot from a file system greater than 1 TByte, and the fssnap command is not currently able to create a snapshot of a multi-terabyte file system. Individual files are limited to 1 TByte, and the maximum number of files per terabyte on a UFS file system is 1 million.
    The Extensible Firmware Interface (EFI) disk label, compatible with the UFS file system, allows for physical disks exceeding 1 TByte in size. For more information on the EFI disk label, see System Administration Guide: Basic Administration on docs.sun.com."
    Found this at:
    http://www.sun.com/bigadmin/features/articles/solaris_express.html
    So you may want to look into EFI, which is a different way of labeling the disk.
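    The figures quoted above can be turned into a quick feasibility check for the proposed 4-TByte slice. This is only a sketch of the limits from the Sun article (terabyte is taken as decimal here; the article does not say binary vs decimal, and the helper name is made up):

```python
# Limits quoted in the Sun article above.
UFS_MAX_FS_TB = 16   # multi-terabyte UFS file systems supported up to 16 TB
LABEL_MAX_TB = 1     # past 1 TB a disk needs an EFI label, not SMI/VTOC

def check_slice(size_tb):
    """Return the caveats that apply to a UFS slice of the given size."""
    notes = []
    if size_tb > UFS_MAX_FS_TB:
        notes.append("exceeds 16 TB UFS limit")
    if size_tb > LABEL_MAX_TB:
        notes.append("requires an EFI disk label")
    return notes or ["fits a standard SMI-labeled configuration"]

print(check_slice(4))   # the proposed 4 TB slice: fine for UFS, needs EFI
```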

  • Max GPT partition size for Windows 2008R2

    Is the max partition size 256 TB − 64 KB for a 64 KB stripe size? What is the max?

    Hi Jared,
    Starting with Windows Server 2003 SP1, Windows XP x64 Edition, and later versions, a maximum raw
    partition size of 18 exabytes is supported. (Windows file systems are currently limited to 256 terabytes each.)
    GPT disks support partitions of up to 18 exabytes (EB) in size and up to 128 partitions per disk.
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn640535(v=vs.85).aspx
    Regards,
    Rafic
    If you found this post helpful, please give it a "Helpful" vote.
    If it answered your question, remember to mark it as an "Answer".
    This posting is provided "AS IS" with no warranties and confers no rights! Always test ANY suggestion in a test environment before implementing!
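    The "256 TB − 64 K" figure in the question follows directly from NTFS's cluster addressing: a volume is limited to 2^32 − 1 clusters, so with 64 KB clusters the maximum is exactly one cluster short of 256 TB. A quick check of that arithmetic:

```python
KB = 1024
TB = 1024 ** 4

clusters = 2 ** 32 - 1     # NTFS addresses clusters with 32 bits
cluster_size = 64 * KB     # 64 KB clusters, the largest classic size

max_volume = clusters * cluster_size
print(max_volume == 256 * TB - 64 * KB)   # True: 256 TB minus one cluster
```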

  • Disk Utility - smallest partition size

    I have a new 500 GB external drive. Using Disk Utility I created a GUID partition scheme and created 4 partitions: 3 x Mac OS Extended (Journaled) and 1 x FAT32 (initially all the same size). Disk Utility doesn't let me resize the 1st partition (the highest one) or the 4th partition (the lowest one) smaller than about 50 GB (the divider with 2 arrows changes to a divider with 1 arrow). (I don't know whether the other partitions can be sized smaller than about 50 GB, since I want them to stay large and I am already using the disk.) I didn't find anything about this in help, books, or on the internet.
    Is there a limit in Disk Utility on minimum partition size? (It looks like about 10% of the drive size.)


  • A less than 1 TB limit on partition sizes?

    Can anyone here confirm that the AirPort Extreme 802.11n has a size limit when it comes to drive partition sizes?
    I have found that it will see a drive over 1 TB but will not mount it. If you break the drive into 2 partitions, 999 GB + the rest, it will see and mount both.
    Can someone from Apple confirm this limitation please?
    Has anyone on this board been able to use a USB disk with a partition greater than 1 TB?
    Please reply to this post with the drive sizes you are using so maybe we can see what works and what does not.
    Thanks All!

    Yes, after sleeping on it I decided to partition the drive. Now, with two partitions of about 700 GB each, it mounts both, no problem. That was an easy fix... I just wish Apple had included this information with the AirPort Extreme.

  • Permanent and temporary partition sizes exceed maximum size allowed

    I received error 818:
    permanent and temporary partition sizes exceed maximum size allowed on this platform
    I'm currently using the 32-bit version of TimesTen running on Solaris, and I increased one of my data store sizes
    to
    PermSize=1700
    TempSize=350
    which is around 2 GB.
    The system has 16 GB of memory, and the shared memory is set to 0xF0000000, which is almost the 32-bit maximum.
    Does anyone know whether this error is a system limit (32-bit), or whether it can be fixed by tuning a system parameter? The system limits section does not have any information on the partition size limit. Does anyone have any ideas?

    That's correct; the 1 GB limit is specific to 32-bit HP-UX. All other 32-bit Unix/Linux platforms have a maximum size limit of 2 GB, and as Jim mentioned this limit is such that:
    PermSize + TempSize + LogBufMB + ~20MB < 2 GB
    If you need anything larger you need a 64-bit O/S and 64-bit TimesTen.
    Chris
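    Chris's rule of thumb can be checked against the sizes from the original post. LogBufMB is not given in the thread, so the 8 MB used below is purely a placeholder assumption:

```python
def fits_32bit(perm_mb, temp_mb, log_buf_mb, overhead_mb=20, limit_mb=2048):
    """PermSize + TempSize + LogBufMB + ~20 MB must stay under 2 GB on
    32-bit Unix/Linux platforms (per the rule quoted above)."""
    return perm_mb + temp_mb + log_buf_mb + overhead_mb < limit_mb

# The poster's settings: PermSize=1700, TempSize=350.  Even with a
# hypothetical small 8 MB log buffer the total already breaks the limit.
print(fits_32bit(1700, 350, 8))    # False -> consistent with error 818
print(fits_32bit(1500, 350, 8))    # True  -> would fit under 2 GB
```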

  • Running out of heap with size-limited cache

    I'm experimenting with creating a size-limited cache using Coherence 3.6.1 and not having any luck. I'm creating four 1GB storage-enabled cache nodes and then a non-storage-enabled process to populate them with test data. When I monitor the cache nodes w/ visualvm and run my test case to populate the cache, I just see the heaps get larger and larger until the processes either just stop responding, or I get an OutOfMemoryError. I've tried slowing down and even stopping, waiting for awhile, and then restarting the test case populating the caches to see if that helps, but it doesn't seem to make a difference.
    A related question: since Coherence doesn't seem to be able to recover from the OOME, is there any way to configure it to end the process when that happens instead of just sitting there?
    Thanks.
    Stack Trace:
    SRVCoherence[93191:9953 0] 2011/06/29 00:39:04 1,015.5 MB/191.38 KB ERROR Coherence   -2011-06-29 00:39:04.679/449.883 Oracle Coherence GE 3.6.1.0 <Error> (thread=DistributedCache:PartitionedPofCache, member=4):
    java.lang.OutOfMemoryError: Java heap space
         at com.tangosol.util.SegmentedHashMap.grow(SegmentedHashMap.java:893)
         at com.tangosol.util.SegmentedHashMap.grow(SegmentedHashMap.java:840)
         at com.tangosol.util.SegmentedHashMap.ensureLoadFactor(SegmentedHashMap.java:809)
         at com.tangosol.util.SegmentedHashMap.putInternal(SegmentedHashMap.java:724)
         at com.tangosol.util.SegmentedHashMap.put(SegmentedHashMap.java:418)
         at com.tangosol.util.SimpleMapIndex.insertInternal(SimpleMapIndex.java:236)
         at com.tangosol.util.SimpleMapIndex.insert(SimpleMapIndex.java:133)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.updateIndex(PartitionedCache.CDB:20)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ResourceCoordinator.processEvent(PartitionedCache.CDB:74)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ResourceCoordinator.finalizeInvokeSingleThreaded(PartitionedCache.CDB:56)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ResourceCoordinator.finalizeInvoke(PartitionedCache.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.processChanges(PartitionedCache.CDB:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPutAllRequest(PartitionedCache.CDB:68)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$PutAllRequest.onReceived(PartitionedCache.CDB:85)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:680)
    Cache Config:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>Event</cache-name>
                <scheme-name>EventScheme</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <distributed-scheme>
                <scheme-name>EventScheme</scheme-name>
                <service-name>PartitionedPofCache</service-name>
                <partition-count>16381</partition-count>
                <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                </serializer>
                <backing-map-scheme>
                    <local-scheme>
                        <high-units>300m</high-units>
                        <low-units>200m</low-units>
                        <unit-calculator>BINARY</unit-calculator>
                        <eviction-policy>LRU</eviction-policy>
                    </local-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
            <invocation-scheme>
                <service-name>InvocationService</service-name>
                <thread-count>5</thread-count>
                <autostart>true</autostart>
            </invocation-scheme>
        </caching-schemes>
    </cache-config>

    Hi Timberwolf,
    Some tips to keep in mind when configuring a cache to be size-limited:
    - High units are per storage node. For instance, 5 storage nodes with a high-units configuration of 10mb can theoretically store up to 50mb. However this implies a perfect distribution which is normally not the case.
    - High units are per named cache. If two caches are mapped to a scheme and the scheme specifies a high units of 10mb, then each cache will have a high units of 10mb, meaning 20mb of storage total. This is especially important to keep in mind when using a wildcard (*) naming convention.
    - For backing maps, high units only take primary storage into account. With a backup count of 1 (the default) a high-units setting of 10mb really means that up to 20mb of data will be stored on that node for that cache.
    - The entire heap of a dedicated storage node cannot be used for storage. In other words, a 1gb heap cannot hold 1gb of cache data. At the very most, ~75% can be used for cache storage (around 750mb.) If the storage node is performing grid functions (such as entry processors, filters, or aggregations) then the total high-units setting should be less than 75%.
    A sample configuration could look like this: 1gb JVM with 700mb dedicated storage. Half of the storage is primary and half is backup. Therefore, the total high-units for that node should not exceed 350mb.
    Hope it helps.
    Thanks,
    Cris
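    The sizing rules above can be condensed into a rough calculator. This is only a sketch: the 75% usable-heap figure is the rule of thumb from this reply, not a Coherence constant, and the function name is mine:

```python
def max_high_units_mb(heap_mb, backup_count=1, usable_fraction=0.75):
    """Rough per-node high-units budget for a dedicated storage node.

    Only ~75% of the heap is usable for cache data, and each primary
    entry is mirrored backup_count times, so the high-units setting
    (which counts primary storage only) must be divided accordingly.
    """
    usable = heap_mb * usable_fraction
    return usable / (1 + backup_count)

# The worked example from the reply: 1 GB JVM, ~700 MB for storage,
# backup-count 1 -> high-units should not exceed ~350 MB.
print(max_high_units_mb(1024, usable_fraction=700 / 1024))  # 350.0
```

    By the same arithmetic, the poster's 300 MB high-units on a 1 GB heap implies up to 600 MB of primary-plus-backup cache data before any overhead, which is consistent with the OutOfMemoryError seen.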

  • DLP Attachment Scanning - Size Limitations

    Is it documented anywhere what the attachment file size limitations are for DLP scanning? In the ESA configuration documentation I read:
    "To scan attachments, the content scanning engine extracts the attachment for the RSA Email DLP scanning engine to scan."
    Can you identify which scanning engine is referenced by "content scanning engine" and what the maximum attachment size it can process is? Also, are those settings modifiable, and is there some indication of the performance impact if they are increased to a maximum of 50 MB per attachment?
    I know you can make some modifications in the DLP policy; however, it is our desire to DLP-scan every document sent, up to our allowable maximum email size.
    If large attachments cannot be scanned we may be forced to reduce our maximum message/attachment file size.
    We are currently using AsyncOS 7.5.1-102 and will be moving to 7.6.0 when it goes GA.

    Hello David,
    The content scanning engine in question is the same AsyncOS scanning engine responsible for Message and Content Filter scanning. The maximum size of attachment to scan for this engine is controlled by your 'scanconfig' settings, as configured in the IronPort CLI. The default 'maximum size of attachment to scan' is 5 MB.
    IronPort1.example.com>scanconfig
    There are currently 6 attachment type mappings configured to be SKIPPED.
    Choose the operation you want to perform:
    - NEW - Add a new entry.
    - DELETE - Remove an entry.
    - SETUP - Configure scanning behavior.
    - IMPORT - Load mappings from a file.
    - EXPORT - Save mappings to a file.
    - PRINT - Display the list.
    - CLEAR - Remove all entries.
    - SMIME - Configure S/MIME unpacking.
    []> setup
    1. Scan only attachments with MIME types or fingerprints in the list.
    2. Skip attachments with MIME types or fingerprints in the list.
    Choose one:
    [2]>
    Enter the maximum depth of attachment recursion to scan:
    [5]>
    Enter the maximum size of attachment to scan:
    [5242880]>
    <...>
    Any message larger than this limit will be skipped by the scanning engine. This means that pertinent DLP policies and filters would not match that message. Naturally, allowing larger messages to be scanned carries performance risks, as more system resources are required to complete the content scanning.
    Regards,
    -Jerry
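    The scanconfig value is entered in bytes, so it is easy to sanity-check the default shown in the prompt above and to compute the value for the hypothetical 50 MB ceiling the question asks about:

```python
MB = 1024 * 1024

default_scan_limit = 5 * MB    # scanconfig default shown above
proposed_limit = 50 * MB       # the 50 MB per-attachment target

print(default_scan_limit)   # 5242880  -- matches the [5242880]> prompt
print(proposed_limit)       # 52428800 -- value to enter for 50 MB
```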

  • Exchange 2013 Mail Size Limits

    I am having an issue with setting the max send and receive size on Exchange 2013. I keep getting the following error when I attempt to send a 20 MB file from the server to an internal Exchange account OR when I attempt to send a 20 MB file from the Exchange
    server to an external account:
    #550 5.3.4
    ROUTING.SizeLimit; message size exceeds fixed maximum size for route ##
    I have checked the mail sizes and below is the report.  I currently have both send and receive set to 100MB.  Is there some other setting in 2013 that I am not aware of?
    AnonymousSenderToRecipientRatePerHour                       : 1800
    ClearCategories                                             : True
    ConvertDisclaimerWrapperToEml                               : False
    DSNConversionMode                                           : UseExchangeDSNs
    ExternalDelayDsnEnabled                                     : True
    ExternalDsnDefaultLanguage                                  :
    ExternalDsnLanguageDetectionEnabled                         : True
    ExternalDsnMaxMessageAttachSize                             : 100 MB (104,857,600 bytes)
    ExternalDsnReportingAuthority                               :
    ExternalDsnSendHtml                                         : True
    ExternalPostmasterAddress                                   :
    GenerateCopyOfDSNFor                                        :
    HygieneSuite                                                : Standard
    InternalDelayDsnEnabled                                     : True
    InternalDsnDefaultLanguage                                  :
    InternalDsnLanguageDetectionEnabled                         : True
    InternalDsnMaxMessageAttachSize                             : 100 MB (104,857,600 bytes)
    InternalDsnReportingAuthority                               :
    InternalDsnSendHtml                                         : True
    InternalSMTPServers                                         :
    JournalingReportNdrTo                                       : <>
    LegacyJournalingMigrationEnabled                            : False
    LegacyArchiveJournalingEnabled                              : False
    LegacyArchiveLiveJournalingEnabled                          : False
    RedirectUnprovisionedUserMessagesForLegacyArchiveJournaling : False
    RedirectDLMessagesForLegacyArchiveJournaling                : False
    MaxDumpsterSizePerDatabase                                  : 18 MB (18,874,368 bytes)
    MaxDumpsterTime                                             : 7.00:00:00
    MaxReceiveSize                                              : 100 MB (104,857,600 bytes)
    MaxRecipientEnvelopeLimit                                   : 500
    MaxRetriesForLocalSiteShadow                                : 2
    MaxRetriesForRemoteSiteShadow                               : 4
    MaxSendSize                                                 : 100 MB (104,857,600 bytes)
    MigrationEnabled                                            : False
    OpenDomainRoutingEnabled                                    : False
    RejectMessageOnShadowFailure                                : False
    Rfc2231EncodingEnabled                                      : False
    SafetyNetHoldTime                                           : 2.00:00:00
    ShadowHeartbeatFrequency                                    : 00:02:00
    ShadowMessageAutoDiscardInterval                            : 2.00:00:00
    ShadowMessagePreferenceSetting                              : PreferRemote
    ShadowRedundancyEnabled                                     : True
    ShadowResubmitTimeSpan                                      : 03:00:00
    SupervisionTags                                             : {Reject, Allow}
    TLSReceiveDomainSecureList                                  : {}
    TLSSendDomainSecureList                                     : {}
    VerifySecureSubmitEnabled                                   : False
    VoicemailJournalingEnabled                                  : True
    HeaderPromotionModeSetting                                  : NoCreate
    Xexch50Enabled                                              : True

    Hello Landfish,
    Good Day...
    The output shows that the size limit set for receive and send is 100 MB, but other settings may still be limiting you. You can follow the steps below to resolve the issue.
    There are basically three places where you can configure default message size limits on Exchange:
    Organization transport settings
    Send/receive connector settings
    User mailbox settings.
    To check your server’s current limit you can open Exchange Management Shell
    Try the below commands to check the Message quota size limit
    get-transportconfig | ft maxsendsize, maxreceivesize
    get-receiveconnector | ft name, maxmessagesize
    get-sendconnector | ft name, maxmessagesize
    get-mailbox Administrator |ft Name, Maxsendsize, maxreceivesize
    To change the above size limits based on your requirements:
    Set-TransportConfig -MaxSendSize 200MB -MaxReceiveSize 500MB (Size is based on your requirement)
    Attachment size limit
    To set up the rule you can use the below PowerShell cmdlet, as the method is quite simple
    New-TransportRule -Name LargeAttach -AttachmentSizeOver 20MB -RejectMessageReasonText "Message attachment size over 20MB - email rejected."
    For More info
    https://technet.microsoft.com/en-us/library/bb124708(v=exchg.150).aspx
    Remember to mark this post as helpful if you find my contribution useful, or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you. Check out my latest blog posts @ Techrid.com
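    As a sanity check, the byte counts in the Get-TransportConfig output above line up with binary megabytes, which is how Exchange reports them:

```python
MB = 1024 * 1024

# Values from the transport-config output above.
print(100 * MB)  # 104857600 -> "100 MB (104,857,600 bytes)"
print(18 * MB)   # 18874368  -> "18 MB (18,874,368 bytes)", the dumpster size
```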

  • How to Get Around the Memo Size Limitations in CR ?

    I am using Crystal Reports 2008, a SQL database, and ASP.NET in Visual Studio 2010 for Team Foundation, with the Crystal viewer embedded in a web page. All current updates and patches are installed.
    The database has memo fields up to 164,000 characters in length. The viewer shows them fine, but in reports designed to print this information, only part of the memo field appears.
    This happens with RTF, text, and HTML formatted data from within the database field.
    I have read that there is a limitation on the size of a memo field that Crystal Reports will print (65,534 characters).
    I actually received a Crystal Reports error box when I tried to concatenate multiple substring fields in a formula.
    Does anyone have any suggestions or ideas for a workaround?
    Due to legal considerations, this data has to be output as it was input, so it can't be hacked up. It can be parsed and merged again, but I really don't want to try to write SQL procedures to parse HTML code into readable multiple pieces based on variable-length tags within large memo fields.
    Please offer any and every suggestion.
    Thanks to all!
    Thanks to all  ! !
    Edited by: Ludek Uher on Oct 21, 2010 1:31 PM

    yes sir,
    I already did, but I didn't receive any answers... Memo Field Size Limitations with Crystal Reports 2008?
    Thanks for your help.
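    If server-side parsing is off the table, one possible workaround is to split the memo into chunks below Crystal's 65,534-character string limit before binding, placing the pieces in consecutive report fields. The sketch below is a hypothetical helper, not a Crystal API, and note the post's caveat still applies: naive splitting can cut an HTML tag in half, so chunk boundaries may need adjusting before rendering:

```python
CR_STRING_LIMIT = 65534  # Crystal Reports' per-string ceiling noted above

def split_memo(text, limit=CR_STRING_LIMIT):
    """Split a memo into limit-sized chunks without altering content,
    so the pieces concatenate back verbatim (a legal requirement per
    the original post)."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

memo = "x" * 164000                # the longest memo mentioned above
parts = split_memo(memo)
print(len(parts))                  # 3 chunks
print("".join(parts) == memo)      # True: lossless round trip
```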

  • HT5639 I am attempting to install windows 7 ultimate on a mid2012 production macbook pro using Bootcamp 5 in Mountain Lion. Whenever I set the partition size and tell it to continue, the program quits unexpectedly. I have done this about 5 times now. Sugg

    The lead-in above pretty much states the problem. I am attempting to install Windows 7 from a disk, but that is not the issue. Boot Camp 5 just quits and reopens. I then repeat. Help.

    I am about to run specialized GIS software that may or may not need the full machine resources. I am an attorney who uses QuickBooks for my accounting. The Mac version does not do as much as the Windows version. This is from both Apple and Quick Books professionals. I am using Parallels Desktop version 8 at this time. It does not support QuickBooks Windows version per Parallels. Any other questions? I am a highly competent PC user who is new to Macs. I am entitled to my own configuration choices. That said, I know when I need help which is now.
    As to the free space issue, I have 665.18 GB free out of 749.3 GB on the drive. I am not trying to run the 32-bit version; I know that it requires the 64-bit version. Besides, it does not get that far in the process. As I said, Boot Camp Assistant terminates unexpectedly as soon as you hit the continue button after setting the partition size. Therefore I conclude that it does not have a chance to see which version of Windows I am using. I am using the Windows 7 Ultimate 64-bit version in the virtual engine (to use Parallels speak), but again Boot Camp would not see that since it is not running. It can't run at the moment because Apple just installed a new hard drive; I have just restored the data from Time Machine, and Parallels needs to be reactivated, which I have deliberately not done yet. With all of this additional information, do you have any further suggestions? Thanks for your time and interest in my issue.

  • 1.8TB Hitachi HDS722020:  "Partition size is not a multiple of 4K"

    I picked up a 1.8 TB Hitachi HDS722020 external USB drive for an enlarged Time Machine disk. The box says that it's usable on a Mac, but Disk Utility doesn't seem to like it.
    When I attempt to erase it, it sits there, indefinitely as far as I can tell, trying to format it, saying "Partition size is not a multiple of 4K." I am able to format it as MS-DOS FAT32, but not in any Mac OS (Extended) format.
    Since it seems to be complaining about partition sizes, I was able to split it into one partition containing almost all of the disk, and another "junk" partition that I presumably won't use very often. Unfortunately, the smallest "junk" partition it would let me create was about 190 GB.
    So, waddaya folks think? Should I return it to the store as a bad disk, or is there any reason to think that it's behaving in some sense correctly?

    Hello mr88cet
    In Disk Utility, highlight the external hard drive's first icon in the left-side list and select the Partition tab. Then decide on the number of partitions you want, click the Options box, and set the Partition Map Scheme to GUID Partition Table. You should then be able to continue on with any multiple-partition sizing (if more than 1 was selected) and format the drive Mac OS Extended (Journaled)...
    Just follow the Disk Utility prompts, it's really easy after you've done it a time or two.
    Also see > http://www.kenstone.net/fcphomepage/partitioningtiger.html
    Dennis
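    The error message itself is simple arithmetic and easy to check for any proposed partition size. The sketch below only illustrates the 4K-alignment test Disk Utility appears to be applying; it says nothing about why Disk Utility computed a misaligned size for this particular drive:

```python
BLOCK = 4096  # the 4K boundary the error message complains about

def is_4k_multiple(size_bytes):
    """True if the size is an exact multiple of 4096 bytes."""
    return size_bytes % BLOCK == 0

def round_down_4k(size_bytes):
    """Largest 4K-aligned size not exceeding the requested one."""
    return size_bytes - size_bytes % BLOCK

print(is_4k_multiple(4096 * 1000))        # True
print(is_4k_multiple(4096 * 1000 + 512))  # False
print(round_down_4k(4096 * 1000 + 512))   # 4096000
```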
