BerkeleyDB cache size and Solaris

I am having problems trying to scale up an application that uses BerkeleyDB 4.4.20 on Sun SPARC servers running Solaris 8 and 9.
The application has 11 primary databases and 7 secondary databases.
In different instances of the application, the size of the largest primary database
currently ranges only from 2 MB to 10 MB, but these will grow rapidly over the
course of the semester.
The servers have 4-8 GB of RAM and 12-20 GB of swap.
Succinctly: when the primary databases are small, the application runs as expected.
But as the primary databases grow, the following counterintuitive phenomenon
occurs. With modest cache sizes, the application starts up, but it throws
std::exceptions saying "not enough space" when it attempts to delete records
via a cursor, and it also crashes randomly with DB_RUNRECOVERY.
But when the cache size is increased, the application
will not even start up; instead, it fails with std::exceptions saying there
is insufficient space to open the primary databases.
Here is some data from a server that has 4GB RAM with 2.8 GBytes free
(according to "top") when the data was collected:
Each entry below shows the DB_CONFIG set_cachesize line, the pool and
individual-cache sizes reported by db_stat -m, and the result.

set_cachesize 0 67108864 1 (pool 80 MB, individual cache 8 KB):
    Starts, but crashes and cannot delete by cursor because of insufficient space.
set_cachesize 0 134217728 1 (pool 160 MB, individual cache 8 KB):
    Same as the case above.
set_cachesize 0 268435456 1 (pool 320 MB, individual cache 8 KB):
    Does not start; says there is not enough space to open a primary database.
set_cachesize 0 536870912 1 (pool 512 MB, individual cache 16 KB):
    Does not start; not enough space to open a primary database (although it
    mentions a different primary database than before).
set_cachesize 1 073741884 1 (pool 1 GB 70 MB, individual cache 36 KB):
    Does not start; not enough space to open a primary database (again a
    different primary database than previously).
set_cachesize 2 147483648 1 (pool 2 GB 140 MB, individual cache 672 KB):
    Does not start; not enough space to open a primary database (again a
    different primary database than previously).
I should also mention that the application is written in Perl and uses
the Sleepycat::Db Perl module to interface with the BerkeleyDB C++ API.
Any help in interpreting this data, and in tuning the interaction with
Solaris if that is where the problem lies, would be greatly appreciated.
Sincerely,
Bill Wheeler, Department of Mathematics, Indiana University, Bloomington.

Having found answers to my questions, I think I should document them here.
1. On the matter of the error message "not enough space": this message
apparently originates from Solaris. When a process (e.g., an Apache child)
requests additional (virtual) memory (via either brk or mmap) such that the
total (virtual) memory allocated to the process would exceed the process's
resource limit (set with the setrlimit(2) call; the shell's ulimit adjusts
the same limits), the Solaris kernel rejects the request and returns the
error ENOMEM. Somewhat cryptically, the text for this error is "not enough
space" (in contrast, for instance, to "not enough virtual memory").
Apparently, when the BerkeleyDB cache size is set too large, a process
(e.g., an Apache child) that attempts to open the environment and databases
may request a total memory allocation that exceeds the system limit.
Then Solaris will reject the request and return the ENOMEM error.
Within Solaris, the only solutions are apparently
(i) to decrease the cache size, or
(ii) to raise the resource limits via setrlimit (for example, with the shell's ulimit before the server is started); a sketch of checking and raising the limits follows below.
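
Since the same ENOMEM can come from either the data-segment limit (brk/malloc) or the total address-space limit (mmap), here is a minimal C sketch, under the assumption of a Solaris or other POSIX system, that prints both limits and tries to raise the soft data limit to the hard limit before the environment is opened. The last line also confirms that strerror(ENOMEM) is the "Not enough space" text that surfaces in the std::exceptions.

/*
 * Hedged sketch only: report and raise per-process memory limits.
 * RLIMIT_DATA caps brk()/malloc() growth; RLIMIT_AS (a.k.a. RLIMIT_VMEM
 * in Solaris headers) caps total address space, including the mmap'd
 * BerkeleyDB cache regions.
 */
#include <sys/resource.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

static void show_limit(const char *name, int resource)
{
    struct rlimit rl;
    if (getrlimit(resource, &rl) == 0)
        printf("%s: soft=%llu hard=%llu\n", name,
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
}

int main(void)
{
    struct rlimit rl;

    show_limit("RLIMIT_DATA", RLIMIT_DATA);
    show_limit("RLIMIT_AS", RLIMIT_AS);

    if (getrlimit(RLIMIT_DATA, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;      /* raise soft limit to the hard limit */
        if (setrlimit(RLIMIT_DATA, &rl) != 0)
            fprintf(stderr, "setrlimit: %s\n", strerror(errno));
    }

    printf("ENOMEM reads as: \"%s\"\n", strerror(ENOMEM));
    return 0;
}

For an Apache child the limits are inherited from the parent, so in practice they have to be raised (e.g., ulimit -d / ulimit -v) in the environment that starts httpd rather than in the child itself.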
2. On the matter of the DB_RUNRECOVERY errors, the cause appears
to have been the use of the DB_TXN_NOWAIT flag in combination with
code that was mishandling some of the resulting, complex situations.
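
For anyone who lands here with the same DB_RUNRECOVERY symptom: the sketch below is not the fix that was actually applied, only the standard pattern for code that uses DB_TXN_NOWAIT. With that flag, an operation that cannot acquire a lock immediately returns DB_LOCK_DEADLOCK (or DB_LOCK_NOTGRANTED if the environment uses DB_TIME_NOTGRANTED) instead of blocking, and the only safe response is to abort the whole transaction and retry it from the top; mishandling those return codes is presumably the sort of problem referred to above. The do_work callback is a hypothetical placeholder for the application's own reads, writes, and cursor deletes.

/* Hedged sketch of a retry loop around a DB_TXN_NOWAIT transaction
 * (BerkeleyDB C API). */
#include <db.h>

int run_txn_with_retry(DB_ENV *dbenv, int (*do_work)(DB_TXN *), int max_tries)
{
    int ret, tries;

    for (tries = 0; tries < max_tries; tries++) {
        DB_TXN *txn;

        if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, DB_TXN_NOWAIT)) != 0)
            return ret;                 /* could not even start a transaction */

        if ((ret = do_work(txn)) == 0)
            return txn->commit(txn, 0); /* success */

        (void)txn->abort(txn);          /* always abort on any failure */

        if (ret != DB_LOCK_DEADLOCK && ret != DB_LOCK_NOTGRANTED)
            return ret;                 /* a real error: give up */
        /* lock conflict: fall through and retry the whole transaction */
    }
    return DB_LOCK_DEADLOCK;            /* still conflicting after max_tries */
}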
Sincerely,
Bill Wheeler

Similar Messages

  • Camera Raw cache size and location

    I've been using ACR for about 3 years now, and today it occurred to me that I'm not that sure I know what the cache is actually used for. I mean, I know what a cache is, but what is ACR caching, and why?
    I use Bridge for most of my photo work, displaying high quality thumbnails and previews, often full-screen ("monitor-sized previews" by default). Bridge has its own cache, which I understand contains jpegs of thumbnails, previews, and 100% views. Does ACR's cache have any bearing on Bridge's behaviour and performance, especially with thousands of raw images?
    What sort of size and location of ACR cache should I be using - what factors do I need to consider?

    Hi,
    I'm still not sure what size of cache I should use, and what the advantages and disadvantages of bigger or smaller ACR caches are.
    Maybe Eric can say something in depth about the impact of a large cache. I'm not sure about that myself, but I don't believe a larger cache is a real disadvantage, as long as the disk on which it resides is maintained (defragmented often). Since Bridge and ACR work hand in hand here, it may depend on your workflow and on your hardware. Sorry, I can't give a clear answer here.
    As I said, I would make it just large enough for a certain number of raw files, or a few weeks' work. That is, if there is a chance I'd touch a given raw file again within six weeks, then six weeks times the number of images I shoot per day.
    I'm not sure if you are a pro. I'm not, and I usually don't have many new images each month. I've always been fine with the default settings.
    I don't work with Bridge; I open one raw file after another in ACR, or maybe 20+ in a row when doing a panorama ;-)
    But Bridge has to do a lot of work on the fly when it generates the previews for all new raw files in a folder, like "read the image from HDD", "process it with ACR", "write Bridge previews and metadata to HDD", "write to the ACR cache" - not necessarily in this order.
    And it seems it always does this plus some additional steps (not only when reading a raw file for the first time), because it needs, for example, to check whether there is already processing information in the ACR database or an XMP file for that raw file, or whether such information has been created in the meantime. That is, the ACR settings could have been changed after Bridge first generated its cache data, perhaps because the raw file was opened directly in Photoshop or by another application.
    This process is fairly processor-dependent, especially now that ACR uses more sophisticated processing in version 6, and, depending on ACR defaults, this can be significantly slower than in previous versions.
    And Bridge eats a huge amount of memory as well - it seems to hold a lot of information in memory before flushing it to disk.
    After reading this post of yours I did a test with my 12k raw files the other day. I tried to let Bridge create previews for all of them in one step, but after about half an hour it ended up complaining that there was not enough memory.
    Bridge appears to generate previews unnecessarily sometimes, because every image is already cached.
    See above. Bridge/ACR checks for updated processing information in XMP files or the ACR database, and for metadata updates. If it finds some, it may need to rebuild its cached image.
    Why would I need to reset the cache with a new monitor profile? Surely this is not applied to cache JPEGs?
    You are right, sorry, I mixed things up here :-(  I meant ACR camera profiles. Aside from that, I changed my monitor profile a lot over the last weeks while doing some tests, and after that Bridge often refused to start until the cache was purged…
    Your suggestion of a larger cluster size is very interesting, and I may try this when I have some spare time.
    All my disks meant for storing large files (images, music, scratch, temp files) are formatted with a larger cluster size. It speeds things up a bit.
    Supposedly Lightroom's catalogue system is superior, but I am yet to be convinced that putting all my XMP data in one basket is a good idea.
    Of course LR shares ACR's cache. I don't know much about LR, but I believe the XMP data is not kept only in one basket there, right? It is written to the images and, in the case of raw files, to XMP sidecar files; otherwise there wouldn't be any interoperability between LR and Bridge. Similar to Bridge, LR stores previews in its own "cache", which is likewise a folder- and file-based "database". Check "Preferences".
    But as I said, I use another DAM, in whose database all the data needed for searching is kept; I also write all metadata (IPTC, XMP) to the images, even for raw files. Because of this I don't need a large ACR cache and also don't have XMP files lying around, which I hate. When I say database, I mean a real database, not a collection of files kept on disk. ;-)
    Aside from other advantages, I can take my database with me on vacation and work on my images as I do at home. I have all my keywords and categories with me in one file, and when I'm back, I just copy the database to my desktop.

  • Question of Berkeley DB "cache size"

    quote:
    Set the size of the shared memory buffer pool, that is, the size of the cache.
    The cache should be the size of the normal working data set of the application, with some small amount of additional memory for unusual situations. (Note: the working set is not the same as the number of pages accessed simultaneously, and is usually much larger.)
    The default cache size is 256KB, and may not be specified as less than 20KB. Any cache size less than 500MB is automatically increased by 25% to account for buffer pool overhead; cache sizes larger than 500MB are used as specified. The current maximum size of a single cache is 4GB. (All sizes are in powers-of-two, that is, 256KB is 2^18 not 256,000.)
    The database environment's cache size may also be set using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "set_cachesize", one or more whitespace characters, and the cache size specified in three parts: the gigabytes of cache, the additional bytes of cache, and the number of caches, also separated by whitespace characters. For example, "set_cachesize 2 524288000 3" would create a 2.5GB logical cache, split between three physical caches. Because the DB_CONFIG file is read when the database environment is opened, it will silently overrule configuration done before that time.
    This method configures a database environment, including all threads of control accessing the database environment, not only the operations performed using a specified Environment handle.
    This method may not be called after the environment has been opened. If joining an existing database environment, any information specified to this method will be ignored.
    This method may be called at any time during the life of the application.
    Parameters:
    cacheSize The size of the shared memory buffer pool, that is, the size of the cache.
    The question:
    I have a host with 16 GB of memory in total.
    I don't understand what this part of the documentation means.
    What is the maximum cache size that can be set?
    4 GB? 16 GB?
    Or cacheCount (4) * 4 GB = 16 GB?
    My Email: [email protected]

    What version of Berkeley DB are you using?
    I'm a little confused about what you are quoting. Most of your quote seems to be from DB_ENV->set_cachesize(), but set_cachesize does not have a parameter named cacheSize. The parameters for set_cachesize are gbytes, bytes and ncache.
    You use set_cachesize to specify the logical cache that you can optionally split into more than one physical region. The maximum size of the logical cache is 4GB and there is only one logical cache. You specify the total size of the logical cache with the gbytes and bytes parameters. If you set ncache to a value greater than 1, you split this logical cache into separate physical regions. So, for example, if you specify (gbytes=2, bytes=0, ncache=2) you will have a logical cache of 2GB that internally is split into 2 separate physical regions of 1GB each.
    You can read more about the memory pool cache in the Reference Guide sections "Selecting a cache size" and "Configuring the memory pool".
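    For reference, here is a minimal sketch of that configuration through the C API; the 2 GB / two-region figures and the "/path/to/env" home directory are only illustrative, and the equivalent DB_CONFIG line would be "set_cachesize 2 0 2". Note that set_cachesize must be configured before the environment is opened and is ignored when joining an existing environment.

    #include <db.h>
    #include <stdio.h>

    int main(void)
    {
        DB_ENV *dbenv;
        int ret;

        if ((ret = db_env_create(&dbenv, 0)) != 0) {
            fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
            return 1;
        }

        /* 2 GB logical cache split into two ~1 GB physical regions:
         * gbytes = 2, bytes = 0, ncache = 2. */
        if ((ret = dbenv->set_cachesize(dbenv, 2, 0, 2)) != 0) {
            fprintf(stderr, "set_cachesize: %s\n", db_strerror(ret));
            dbenv->close(dbenv, 0);
            return 1;
        }

        /* "/path/to/env" is a placeholder environment home directory. */
        if ((ret = dbenv->open(dbenv, "/path/to/env",
                               DB_CREATE | DB_INIT_MPOOL, 0)) != 0) {
            fprintf(stderr, "env open: %s\n", db_strerror(ret));
            dbenv->close(dbenv, 0);
            return 1;
        }

        dbenv->close(dbenv, 0);
        return 0;
    }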
    If you have other Berkeley DB questions that are not specific to replication, you should direct them to the general Berkeley DB forum where you will have the benefit of a wider set of Berkeley DB experts:
    Berkeley DB
    Paula Bingham
    Oracle

  • Default Adobe Drive cache size is only 128MB

    The Adobe Drive cache size defaults to 128MB. This doesn't seem a very logical value, as a single file may easily be larger than that. Is there a reason it's so small? Most users would probably benefit from a larger cache size, and today's hard drives should easily allow one.
    Would it make sense to have a default cache size of 5-10GB? Maybe depending on the amount of free disk space available during installation?

    Hello.
    To help automatically clear up some cache from Firefox, click on each of the images from left to right. Now at least you won't have to constantly do it yourself.
    Also, to help you with your space issues, download Clean Master from the Google Play app Store: https://play.google.com/store/apps/details?id=com.cleanmaster.mguard this app will clean up hidden cache and useless files on your phone, helping free up space.
    And as for why Firefox keeps reverting to the default cache in "about:config", I do not know. We are sorry for any inconveniences that this has caused you. But please try doing what was mentioned above to help with your issue.
    Hope this helps!

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, and we are working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and got this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, like the following:
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the settings below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my question is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it out, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.

  • Trying to change the cache size of FF 3.6 from 75 MB to a larger size, it only applies on a per-session basis. I check about:config and the changes have been applied, but when I restart FF it has reset itself to 75 :(

    As per the question, I tried to up the cache from 75 MB to 300 MB, but it resets after I restart Firefox. I have tried changing to various cache sizes, but to no avail.
    -=EDIT=-
    It must be something to do with the profile, as when I set up a new profile in the manager, the cache size problem no longer appears. But now, how do I repair my profile?

    OK, nothing in that text file helped, but the original file it was based on pointed me in the direction that it might be an extension. The only extensions I have are NoScript and FasterFox Lite version....
    I have now traced the fault to FasterFox... if you are not familiar with FasterFox, it speeds up internet connections in Firefox... several of the options are presets... but when I selected custom, it gave me the option of a cache setting, which was set to 75 MB.
    I have now changed that cache setting in FasterFox to 300 MB and it is now persistent in Firefox on restart.
    Hopefully this information will be helpful to other people in the future who suffer the same problem.
    Thanks for your help TonyE, it's greatly appreciated.

  • When I put Firefox in offline mode and then click on pages saved in history, it can't load any pages or any images. I put the cache size to 250 MB but the problem is the same; it saves history for two months, but can't load pages.


    Hi there,
    When I inspect your site in browser tools, I'm getting 404 errors from your page:
    [Error] Failed to load resource: the server responded with a status of 404 (Not Found) (jquery-2.0.3.min.map, line 0)
    [Error] Failed to load resource: the server responded with a status of 404 (Not Found) (edge.4.0.0.min.map, line 0)
    BarnardosIreland wrote:
    I would have thought that publishing should give a complete package that doesn't need any further edits to the code and can just be directly ftp'ed to the web - is this correct?
    In general, you are correct - but your server also needs to be properly configured (and those errors above lead me to think it may not be) to serve the file types that you're uploading - but it could be something else entirely. Can you zip up your composition folder, upload it to your Creative Cloud files, set it to share, and then post a link here so I can download it? If you'd rather not share it publicly, can you PM me with a link to your composition files?
    Thanks,
    Joe

  • Swapping and Database Buffer Cache size

    I've read that setting the database buffer cache size too large can cause swapping and paging. Why is this the case? More memory for sql data would seem to not be a problem. Unless it is the proportion of the database buffer to the rest of the SGA that matters.

    Well, I am always a defender of the large DB buffer cache. Setting a bigger DB buffer cache alone will not in any way hurt Oracle performance.
    However ... as the buffer cache grows, the time to determine which blocks need to be cleaned increases. Therefore, at a certain point the benefit of a larger cache is offset by the time needed to keep it synced to disk. After that point, increasing the buffer cache size can actually hurt performance. That's the reason why Oracle has checkpoints.
    A checkpoint performs the following three operations:
    1. Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk.
    It's the DBWR process that writes all modified database blocks back to the datafiles.
    2. The latest SCN is written (updated) into the datafile header.
    3. The latest SCN is also written to the controlfiles.
    The following events trigger a checkpoint.
    1. Redo log switch
    2. LOG_CHECKPOINT_TIMEOUT has expired
    3. LOG_CHECKPOINT_INTERVAL has been reached
    4. The DBA requests one (ALTER SYSTEM CHECKPOINT)

  • Write-Behind Caching and Limited Internal Cache Size

    Let's say I have a write-behind cache and configure its internal cache to be of a fixed limited size, e.g. 10000 units. What would happen if more than 10000 units are added to the write-behind cache within the write-delay period? Would my CacheStore's storeAll() get all of the added values or would some of the values be missed because of the internal cache size limitation?

    Hi Denis,
    > If an entry is removed while it is still in the
    > write-behind queue, it will be removed from the queue
    > and CacheStore.store(oKey, oValue) will be invoked
    > immediately.
    >
    > Regards,
    > Dimitri
    Dimitri,
    Just to confirm that I understand it right: if there is a queued update to a key which is then remove()-ed from the cache, the following happens:
    First, CacheStore.store(key, queuedUpdateValue) is invoked.
    Afterwards, CacheStore.erase(key) is invoked.
    Both happen synchronously with the remove() call.
    I expected that only erase would be invoked.
    BR,
    Robert

  • Time measuring and cache size

    Hi,
    This has been posted in C forum, but not much activity there.
    I have two questions.
    1. Is it possible to obtain the level 1 and 2 cache sizes from within a C/C++ program? You can do that with fpversion on the command line.
    2. If I have a multi-threaded program, I want to measure the time taken from within it. Currently I use getrusage; however, it includes the time for all child threads. How do I get the time for the main thread only? The command-line tool "times" seems to be able to do that. I do not want wall-clock time but CPU time. This is possible on SGI.
    Thanks in advance.
    Erling

    > 1. Is it possible to obtain the level 1 and 2 cache sizes from within a C/C++ program? You can do that with fpversion on the command line.
    Yes!
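    On the second question, a Solaris-specific note: gethrvtime() (declared in <sys/time.h>) returns the virtual (CPU) time of only the calling LWP/thread, in nanoseconds, whereas getrusage(RUSAGE_SELF) aggregates every thread in the process. A minimal sketch, with a throwaway loop standing in for the real work being timed:

    /* Solaris-only sketch: CPU time of the calling thread via gethrvtime(). */
    #include <sys/time.h>
    #include <stdio.h>

    static volatile double sink;

    int main(void)
    {
        hrtime_t start, end;
        long i;

        start = gethrvtime();            /* virtual (CPU) time of this LWP, in ns */
        for (i = 0; i < 50000000L; i++)  /* stand-in for the work to be measured */
            sink += (double)i * 0.5;
        end = gethrvtime();

        printf("main-thread CPU time: %.3f s\n", (double)(end - start) / 1e9);
        return 0;
    }

    For the cache sizes, I am not aware of a standard C-level API on Solaris 8/9; running fpversion (or parsing prtpicl -v output) from the program is the pragmatic route.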

  • Disk size in Solaris 10

    I have some confusion about the disk subsystem in Solaris, and I am trying to clear it up through this forum.
    I have recently installed Solaris 10 on one SPARC box. After I installed it, format gives the output below.
    0 root wm 19491 - 29648 4.88GB (10158/0/0) 10239264
    1 swap wu 0 - 4062 1.95GB (4063/0/0) 4095504
    2 backup wm 0 - 29648 14.25GB (29649/0/0) 29886192
    From the above output, is the size of my disk 14 GB, or is it 14+2+5 = 21 GB?
    I am trying to learn ZFS, so I want another partition on this disk so that I can create a ZFS filesystem on it.
    I went to single-user mode using the CD. From the above "format" output I assumed I had 21 GB of disk and 14 GB of free space, so I created another partition of 14 GB. Now the format command gives the output below.
    0 root wm 19491 - 29648 4.88GB (10158/0/0) 10239264
    1 swap wu 0 - 4062 1.95GB (4063/0/0) 4095504
    2 backup wm 0 - 29648 14.25GB (29649/0/0) 29886192
    3 reserved wm 0 - 29127 14.00GB (29128/0/0) 29361024
    When I was creating the ZFS filesystem, it gave me a warning that the partition I specified spans into the root partition (the first partition), and it told me to use the "-f" option.
    With "-f", it was created successfully.
    If I now assume that the size of my disk is only 14 GB, then:
    (1) How come two partitions point to the same area on the disk?
    (2) How come two different filesystems point to the same area?
    Please, can anyone clarify my doubts? Thank you.

    Assuming a standard labeled disk, it is standard practice to have section/slice 2 be the 'whole disk' for purposes of 'backup'. That would tend to indicate you have a 14GB disk. A prtvtoc /dev/dsk/c?t?d?s2 (change the ?s to the right values) will give a little more on the disk geometry.
    In the display from format, column 4 is the start cylinder of the partition and column 5 is the end cylinder. From the first set of output it looks like cylinders 4063 to 19490 are not allocated.
    In the second set you have created a new slice (section 3) that overlaps both sections 0 and 1 - which is generally considered to be bad!

  • Cached Queries and Memory

    I recently learned about the cachedWithin attribute of the
    cfquery tag. I have found a few places I wish to use this
    attribute.
    I am looking at the Caching page of the ColdFusion
    Administrator and see the "Maximum number of cached queries" is set
    at 100.
    My question is: is there any way for me to see which query
    result sets are currently cached? I suspect we cycle through 100
    cached queries fairly quickly; on average it probably takes
    less than 10 minutes. Therefore, even if I set the cachedWithin
    attribute to one hour, unless the query is called again within
    10 minutes the cached results will not be used. I would like to
    figure out when the cached result set is actually used.
    In addition, I wonder if anyone has suggestions on how to
    determine the ideal value of the "Maximum number of cached
    queries". I realize the large the number the more memory which will
    be required and also it depends on the size of the result sets. We
    have 1024 mb of memory available. It appears our memory usage it
    usually around 50 mb. My thought was to slowly increase the cache
    size but if someone has a rule of thumb I would appreciate it.
    I guess I will add yet another question. On the Java and JVM
    Settings page there are 2 fields "Min JVM Heap Size" and "Max JVM
    Heap Size". We have the max set at 1024 and the min is blank. I
    think I read somewhere we should set the min and max to the same
    number. Anyone have any comments?
    Thanks, Franz

    Regarding query caching, I have two suggestions.
    1. Go with the default and cease all thought on the topic.
    2. Any query that includes user input should not be
    cached.

  • The iPad won't sync some photos, saying the file can't be read by the iPad; however, it will sync some photos taken at the same time which are the same size and file type. Why does it reject some and accept others?


    Hi there. I'm having the same problem: my iPad won't import some photos from a folder, saying that they can't be read. They are all JPEGS and some photos taken at the same time have synched fine, but out of a folder with 200 photos, it only lets me synch 37. I'm synching albums created via Photoshop Elements 6, which has worked fine until now.
    I've tried deleting all photos and re-synching, and have also deleted the iPod Photo Cache, but it hasn't made a difference.
    The iPad auto-updated to the latest version of iTunes, so maybe that's what's causing it?
    Any advice gratefully received!

  • How do I install dual-boot Solaris 8 and Solaris 9 on one hard disk ?

    I tried to install Solaris 8 and Solaris 9 on the same disk using CDs, but
    the second installation overwrote the first Solaris, which had been installed
    previously on a partition half the size of the disk.
    How do I install two Solaris releases on one hard disk?
    Thanks
    Yakov

    There are no special tricks needed to get Solaris to dual-boot on the same drive. Just allocate and pick the free slices not used by the first Solaris install when you put in the second install. Technically speaking, there is nothing preventing you from running seven separately bootable Solaris instances on the same drive (one of the 8 available slices is the overlap -- slice 2), provided you use a swap file on a root partition instead of reserving a whole slice for swap.

  • Dual Booting Windows and Solaris

    Hi
    How do I dual-boot Windows and Solaris?
    Do I install Windows first and then Solaris, or do it the other way around?
    How do I make sure that Windows and Solaris appear in my boot options?
    Is there a guide on doing this?
    Thanks
    Liam

    Hey, I did a quick Google search for you. I haven't tried this method myself, but it sounds reasonable.
    The text below is from the following link:
    http://www.hccfl.edu/pollock/AUnix1/DualBoot.htm
    "Solaris boot loader
    Partition the drive to leave at least 2GB of space available for Solaris;
    more drive space is desirable.
    As with Linux, install Windows first then Solaris.
    Do not use the Installation CD but boot and install
    from Software CD 1.
    If you accept the default partitioning scheme which
    the installer provides you will soon run out of space in
    your / and /usr partitions since only enough space is
    allocated to install the system.
    All extra space is allocated to /export/home.
    A typical installation on a 4.5GB partition might look
    something like this:
    Filesystem Size Used Avail Use% Mounted on
    /dev/dsk/c0d0s0 900M 536M 310M 64% /
    /dev/dsk/c0d0s1 334M 109M 192M 36% /var
    swap 671M 8.0k 671M 1% /var/run
    swap 671M 8.0k 671M 1% /tmp
    /dev/dsk/c0d0s5 845M 222M 565M 29% /opt
    # (FAT32 partition):
    /dev/dsk/c0d0p0:1 5.0G 3.3G 1.6G 66% /c
    /dev/dsk/c0d0s7 1.1G 92M 954M 9% /export/home
    /dev/dsk/c0d0s4 752M 225M 474M 33% /usr/local
    The Solaris boot selector enables you to choose either
    Solaris or Windows with Solaris as the default.
    (I prefer grub or lilo!)
    To mount FAT under Solaris:
    # mount -F pcfs /dev/dsk/c0d0p0:c /dos (or ":1"?)
    And the vfstab file:
    /dev/dsk/c0d0p0:c - /dos pcfs - yes -
    To create a GRUB boot floppy, follow these steps:
    $ mkfs -t ext2 /dev/fd0
    $ mount /dev/fd0 /mnt/fd0
    $ mkdir /mnt/fd0/boot /mnt/fd0/boot/grub
    $ cp /boot/grub/stage[12] /boot/grub/grub.conf \
    > /mnt/fd0/boot/grub
    $ /sbin/grub --batch <
    Hope this helps!
    /Oscar
