Oracle's recommended Swap Size for Solaris 10 Oracle 11gR2 (11.2.0.2)

I have a Solaris server with 16 GB of RAM that will subsequently be zoned into three zones to support three separate database instances, one for each zone. Given that physically there is only 16 GB of RAM, my understanding and experience is that, at 1.5-2 times the roughly 4 GB of RAM per zone, I should have at least 6 GB of swap per local zone.
Please advise. Thanks

yakub21 wrote:
I have a Solaris server with 16 GB of RAM that will subsequently be zoned into three zones to support three separate database instances, one for each zone. Given that physically there is only 16 GB of RAM, my understanding and experience is that, at 1.5-2 times the roughly 4 GB of RAM per zone, I should have at least 6 GB of swap per local zone.
Please advise. Thanks
I'd look at it like this .....
Your physical memory will be used roughly as follows:
- Let's say 1 GB for the OS ...
- 1.5 GB for the ZFS ARC cache if you're using ZFS.
- The sum of your PGAs and SGAs should fit within the remaining RAM.
- And you may want a little more to be safe and for other things: monitoring software agents, EM, etc.
So that you can use more virtual memory than you have physical memory, you will probably want to add swap, and for a 16 GB machine this would probably be 4 GB, 8 GB, or perhaps up to 16 GB, depending on whether you will occasionally have higher virtual memory demands or whether people put big files in /tmp. In general, swap -s can be used to monitor a shortage, though it can be fooled if the ZFS ARC cache is not limited.
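As a hedged aside (the value below is purely illustrative, not a recommendation): on Solaris 10 the ZFS ARC can be capped with the zfs_arc_max tunable in /etc/system, which takes effect after a reboot, e.g.:
* /etc/system -- cap the ZFS ARC at 1 GB (0x40000000 bytes); illustrative value only
set zfs:zfs_arc_max = 0x40000000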
The situation for zones is no different unless you are using zone.max-swap and a physical memory cap.
If you have a zone with a physical memory cap of 5 GB and a zone.max-swap of 8 GB, it is as if the zone has a swap file of 3 GB, and you have to make sure:
- The global zone is able to give that amount of memory to the non-global zone.
- The global zone can provide the zone.max-swap from its virtual memory.
- That the SGA + PGA, /tmp usage and everything else will fit in zone.max-swap ... use swap -s in the zone to monitor.
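A rough sketch of how caps like the 5 GB / 8 GB example above could be set with zonecfg on Solaris 10 8/07 or later (the zone name dbzone1 and the values are illustrative only):
# zonecfg -z dbzone1
zonecfg:dbzone1> add capped-memory
zonecfg:dbzone1:capped-memory> set physical=5g
zonecfg:dbzone1:capped-memory> set swap=8g
zonecfg:dbzone1:capped-memory> end
zonecfg:dbzone1> commit
zonecfg:dbzone1> exit
The swap property here maps to the zone.max-swap resource control; swap -s inside the zone then monitors usage against it.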

Similar Messages

  • Knowing the total swap size in Solaris

    Dear all
    Can anyone help me find out the total swap size (a constant value) in Solaris 10?
    I tried swap -s and used (used + available) as the total swap,
    but it keeps changing, as it is the current swap usage at that time.
    My intention is to use this formula in a script, as I found that Solaris uses memory as physical + swap
    (physical memory + swap size),
    i.e. total physical + total swap.
    What is the best way to find out the actual total swap size in Solaris?

    I tried swap -s and used (used + available) as the total swap,
    but it keeps changing, as it is the current swap usage at that time.
    That's to be expected. See "How does Solaris Operating System calculate available swap? (Doc ID 1010585.1)"
    My intention is to use this formula in a script, as I found that Solaris uses memory as physical + swap
    (physical memory + swap size),
    i.e. total physical + total swap.
    What is the best way to find out the actual total swap size in Solaris?
    If you want to know the size of the swap device, then look at the device(s) being used for swap and check the size of the slice or zvol. That'll tell you the starting value. The available will fluctuate with system usage. See Doc ID 1010585.1
    What exactly are you trying to achieve with your script?
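    If the goal is the configured total rather than the moving figures from swap -s, a hedged sketch for a script (summing the 512-byte blocks column of swap -l, plus prtconf for physical memory) could be:
    # /usr/sbin/prtconf | grep "Memory size"
    # /usr/sbin/swap -l | awk 'NR > 1 { blocks += $4 } END { printf("swap total: %d MB\n", blocks * 512 / 1048576) }'
    The first command reports installed RAM (e.g. "Memory size: 8192 Megabytes"); the second skips the swap -l header line and converts the blocks column to megabytes, a figure that only changes when swap devices are added or removed.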

  • Is there a recommended (maximum) size for a catalog?

    I'm loading all my photos into PSE9 Organizer on my new MacBook Pro and was wondering if there is a recommended maximum size for a catalog? On my PC I separated my catalogs by year. My 2011 folder contains over 11,000 photos and is 71GB. Is that pushing the limits for efficiency or could I combine years?

    One catalog for all of your photos makes sense. Alternatively, as Ken said, catalogs by relatively non-overlapping subject matter areas make sense. I still see no scenario where catalogs by years makes sense, unless you can honestly say that you can remember the years of all of your 68000 photos.
    Tags and albums let you organize in ways that are nearly impossible using folders. If your daughter is named Jennifer, for example, and you assign the Jennifer tag to all photos that contain her image, then later when you want to search for pictures of Jennifer, you simply click the tag and boom, there are the photos instantaneously. You don't need to know what folder the photos are in, nor do you need to know what date the photos were taken. If you want photos of Jennifer over the years at Christmas, this is a simple search once you tag the photos, and nearly impossible using folders. The possibilities are endless. You let PSE remember where the photos are, so you don't have to. You let PSE do the searching, instead of you searching the folders.

  • Max UFS filesystems size for Solaris 2.X

    What are the max limits on UFS filesystem sizes for Solaris 2.6, 8 and 9?
    Thanks

    It's called a "search". ;) SunSolve Info Doc 76856.
    Solaris 2.6 - 8 -- 1 TB limit
    Solaris 9 and higher - 16 TB limit
    Individual files, however, can only be about 1,012 GB.

  • /tmp partition size for Solaris 10

    Hi,
    We are trying to install Oracle Enterprise Manager 10g Release 5 on a Solaris 10 machine and the install is failing. Looking at the installation guide for OEM 10.2.0.5, I find the /tmp requirement for AIX and HP-UX, but for Solaris it is not mentioned. How do I find what the /tmp requirement is for Solaris 10?
    $ uname -a
    SunOS xxxxxxx Generic_144500-19 sun4v sparc sun4v
    This is the error shown when the emctl secure agent command is run from the command line:
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Thanks !!

  • Maximum recommended file size for public distribution?

    I'm producing a project with multiple PDFs that will be circulated to a group of seniors aged 70 and older. I anticipate that some may be using older computers.
    Most of my PDFs are small, but one at 7.4 MB is at the smallest size I can output the document as it stands. I'm wondering if that size may be too large. If necessary, I can break it into two documents, or maybe even three.
    Does anyone with experience producing PDFs for public distribution have a sense of a maximum recommended file size?
    I note that at http://www.irs.gov/pub/irs-pdf/ the Internal Revenue Service hosts 2,012 PDFs, of which only 50 are 4 MB or larger.
    Thanks!

    First open the PDF and use the Optimizer to examine it.
    A lot of times when I create PDFs I end up with a half-dozen copies of the same fonts and font faces. If you remove all the duplicates, that will reduce the file size tremendously.
    Another thing is to reduce the DPI of any graphics; even for printing they don't need to be any larger than 200 DPI.
    If they are going to be viewed on a computer screen only, no more than 150 DPI tops, and if you can get by with 75 DPI that will be even better.
    Once you set up the optimization, save the file under a different name and see what size it turns out. Those two things can sometimes reduce file size by as much as two-thirds.

  • Maximum Core File Size for Solaris 8

    I believe that RLIM_INFINITY on Solaris for core file sizes comes to 2 GB.
    Is there any mechanism (patch, etc.) by which this can be increased?

    If your application is 32-bit, then the core file size would be limited to 4 GB (by default), and if your application is
    64-bit, then the core file size would be limited to unsigned long max (by default).
    -Saurabh
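    A hedged aside, since the thread asks about raising the limit: within those per-bitness ceilings, the limit a process actually inherits is usually the shell's soft limit, which can be inspected and raised (up to the hard limit) with ulimit, e.g.:
    $ ulimit -c              # current soft limit on core file size
    $ ulimit -Hc             # hard limit
    $ ulimit -c unlimited    # raise the soft limit for this shell and its children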

  • Recommended Block Size For RAID 0

    I am setting up a RAID configuration (Striping, no Parity, Mac G5, OS-X) and was curious what the recommended Block Size should be. Content is primarily (but not limited to) Images created with Adobe Photoshop CS2 and range in size from 1.5MB to >20MB. The default for OS-X is 32K chunks of data.
    Drives are External FW-400.
    Many thanks, and Happy Holidays to all!

    If it is just scratch, run some benchmarks with it set to 128k and 256k and see how it feels with each. The default is too small, though some find it acceptable for small images. For larger files you want larger - and for PS scratch you definitely want 128 or 256k.

  • Recommended Image Size for Contact Pictures?

    Hi All,
    I want to edit up some photos of family members to attach their images to my iPhone contacts... Does anyone know what the preferred resolution size is for the iPhone for optimal appearance?
    I've tried to do this with outlook before, and I hate the way it dithers the images to a low quality.... I'm hoping that the iPhone doesn't do this and so I wanted to experiment with how they would appear...
    But instead of making the images just any random size, I wanted to crop them to the preferred image/pixel size so that my photoshopped images look like what I expect them to show up like...
    If you could advise with the documented or known pixel resolution size used by the iPhone it would be appreciated!

    Thanks, but I don't believe that the contacts come up full screen, do they?
    Usually there is an optimal image size for certain things like contact photos, and I am just trying to help ensure that all my photos are the same size and don't vary...
    I might be a little carried away with how I am formatting them, but just hoping someone already knows this answer!

  • Recommended heap sizes for Dev/Live/Shared CF boxes

    Hi all
    I've searched through all the relevant past posts, but most of them were posted before CF8.0.1 going 64-bit, so the situation has changed slightly.
    What are people's opinions on heap sizes for various server setups? I have my dev server - I leave this on 512MB and that's fine for most of the stuff I do. We generally up this to a gig on our live servers (which only run 3-4 small sites), and historically we've not gone above this on our Shared servers (which host anywhere up to 400 sites) because of the 32-bit limit.
    However, now we're running CF9 on 64-bit Server 2008, we've obviously got a lot more RAM available. The obvious choice is to ramp it up high, but I've also heard that causes issues of its own - the Java GC then has far more work to do when it's called, and as the memory fills it's harder to get a contiguous block of RAM, so performance can suffer.
    Does anyone have any experience of running Shared servers, or busy sites? The Server Monitor is great for tuning maximum threads and suchlike, but I can't really sit in front of it for hours waiting for the heap to fill. Also, our live servers are only Standard edition, so there's no Server Monitor.
    Thoughts?
    Ta
    O.

    I have my dev server - I leave this on 512MB and that's fine for
    most of the stuff I do. We generally up this to a gig on our live
    servers (which only run 3-4 small sites), and historically we've not
    gone above this on our Shared servers (which host anywhere up to 400
    sites) because of the 32-bit limit.
    However, now we're running CF9 on 64-bit Server 2008, we've obviously got a lot more RAM available. The obvious choice is to ramp it up high, but I've also heard that causes issues of its own - the Java GC then has far more work to do when it's called, and as the memory fills it's harder to get a contiguous block of RAM, so performance can suffer.
    Leave the memory setting at the default value, until testing requires you to do otherwise, or your sites unravel some unforeseen memory demand. It's a risk our SysAdmin colleagues are prepared to take. With the coming of 64-bit, they've been on the alert, and have always cattle-prodded us whenever we've been tempted to push the button to go higher.
    Our defaults almost coincide with yours: 512 MB for 32-bit and just over 1 GB for 64-bit. So what you yourself have just said is my own basic wisdom on the subject, too: if it ain't broke, don't try to fix it.
    [postscript: I work full-time at one company, part-time at another. By pure coincidence, both use the same JVM memory settings.]
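    For reference, and only as a hedged sketch (the exact path and the surrounding arguments vary by install type and are not taken from this thread): the heap ceiling discussed above is the -Xmx value in ColdFusion's jvm.config, e.g. cf_root/runtime/bin/jvm.config in a stand-alone install:
    # jvm.config -- java.args is a single long line; only the heap flags are shown here
    java.args=-server -Xms512m -Xmx512m -XX:MaxPermSize=192m
    Raising -Xmx is then just a matter of editing that value and restarting the ColdFusion service.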

  • Recommended Movie Sizes for Keynote

    Can someone tell me the best way to compress a movie, step by step in Quicktime, so that the movies will play smoothly in Keynote? I think I'm placing movies that are too large, and consequently, they are playing with a jerky motion, skipping frames, from within Keynote.
    I made the movie sequences in Final Cut Pro, and exported from there to movies (.mov) with an 800 x 600 size. But I don't understand resolution issues, or bit rates, or any of those other details, that might make the movie play more smoothly while retaining a clear, clean look, when projected.
    My movie sequences in Keynote are anywhere from 1-10 minutes in length, and the sizes of the movies are anywhere between 200 MB and almost 1 GB.
    Any ideas?
    Uri

    I regularly incorporate videos into my presentations. I encode them using h.264, and save them as MP4 files. I specify an average bitrate around 1200. I also do not embed the files into Keynote.
    One thing I noticed: If I have the ethernet connection plugged in, I get stuttering video, but if it's unplugged, it generally works better. I have no idea why.
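    The thread doesn't name a specific encoder, but purely as a hedged illustration of those settings (H.264 video in an MP4 container, roughly 1200 kbps average bitrate, 800 x 600 frame), a command-line encode with ffmpeg would look something like:
    ffmpeg -i sequence.mov -c:v libx264 -b:v 1200k -s 800x600 -c:a aac -b:a 128k sequence.mp4
    The input file name and audio bitrate are placeholders; the point is simply that a lower video bitrate and a frame size matched to the slide are what keep playback smooth.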

  • What is the recommended GB size for a computer to run Photoshop CS6?

    How much installed memory (RAM) and what system type are needed to successfully run CS6?

    Have you read through the system requirements?
    System requirements | Photoshop

  • How to determine SWAP size ??? (Netra 440/ Solaris 10)

    Hi Friends
    I am having trouble with the following outputs:
    # prtdiag
    System Configuration: Sun Microsystems sun4u Sun Fire V440
    System clock frequency: 177 MHZ
    Memory size: 8GB
    # /usr/sbin/swap -l
    swapfile dev swaplo blocks free
    /dev/vx/dsk/bootdg/swapvol 291,7000 16 16373168 13437120
    # swap -s
    total: 6495832k bytes allocated + 236328k reserved = 6732160k used, 6942968k available
    # df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/vx/dsk/bootdg/rootvol 62516603 34004373 27887064 55% /
    /devices 0 0 0 0% /devices
    ctfs 0 0 0 0% /system/contract
    proc 0 0 0 0% /proc
    mnttab 0 0 0 0% /etc/mnttab
    swap 6962904 1608 6961296 1% /etc/svc/volatile
    objfs 0 0 0 0% /system/object
    /platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1 62516603 34004373 27887064 55% /platform/sun4u-us3/lib/libc_psr.so.1
    /platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1 62516603 34004373 27887064 55% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
    fd 0 0 0 0% /dev/fd
    swap 6965712 4416 6961296 1% /tmp
    swap 6961344 48 6961296 1% /var/run
    swap 6961296 0 6961296 0% /dev/vx/dmp
    swap 6961296 0 6961296 0% /dev/vx/rdmp
    /dev/vx/dsk/bgw1dg/vol01 203768832 129151641 70294743 65% /var/opt/BGw/Server1
    /dev/vx/dsk/ora1dg/vol01 10480640 5502642 4666959 55% /var/opt/mediation/ora
    /dev/dsk/c1t2d0s2 70589121 219550 69663680 1% /mnt
    At first I would have said that the total swap size is 6.6 GB, but now I am in doubt since I see several swap lines in the df output.
    Please help:
    1. What is the total swap size for the machine?
    2. Why are there 4 lines mentioning swap in the df output?
    3. With 8 GB of RAM on this machine, do I need to increase the swap?
    4. If yes, how do I increase it?
    Thanks in advance !

    Swap can be a confusing thing. It's an overloaded term that people often use to mean somewhat different things.
    1. Determine the swap allocated for the system with swap -l. The number of blocks minus the swaplo value will give you the true count. Blocks are 512 bytes, so do the math to get a GB value (there's a worked example after this list).
    2. df output reflects file systems that are currently in use. In Solaris 10 there are a few virtual file systems that are backed by swap space itself. You'll notice all these swap fs components have the same size because they use the swap as a common pool.
    3. Sizing swap shouldn't be based strictly on the amount of physical memory available. While there are guidelines for doing precisely that, those guidelines are given for the general case (i.e., no information on the apps that will be running). Monitoring demand is key. swap -s and vmstat will both give you a good general view of the demand for swap.
    But, if you're a guidelines sort of person:
    http://docs.sun.com/app/docs/doc/817-5093/fsswap-31050?a=view
    4. The easiest way to increase swap is to make files that can be added to the overall swap space. I generally prefer this approach from the ground up, because it means I never have to dedicate disk space to a swap region. There are different steps for doing it with UFS and ZFS. Here's the ufs way:
    a) Pick a UFS file system with space on a disk that isn't hot. Make a swap file and add it to the swap space like so:
    - # mkfile 1024m /filesys2/swapfile
    - # swap -a /filesys2/swapfile
    Then check the outcome with swap -l and swap -s.
    In many cases I will add a swap file to a customer's system and remove the partition-based swap (swap -d) that they have. There is no appreciable difference in performance and the flexibility of moving swap can come in handy when a system has to accept a different workload with a minimum of downtime.
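    As the worked example promised in point 1, using the swap -l output from the question above (rough arithmetic only):
    (16373168 - 16) blocks x 512 bytes = 8,383,053,824 bytes, or roughly 7.8 GB of configured swap device.
    The swap -s totals (6732160k used + 6942968k available, about 13 GB) come out larger because Solaris virtual swap also counts the portion of physical memory available for reservations, which is why the two commands never quite agree.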

  • O.S. / Hard Drive Size for NIO Server/Client's load testing...

    Hi All
    I am currently load testing an NIO server/client to see what would be the maximum number of connections that could be reached, using the following PC: P4, 3 GHz, 1 GB RAM, Windows XP SP2, 110 GB hard drive.
    However, I would like to test the Server/Client performance on different OS's:
    Which would be the best possible option from the following:
    1. Partition my current drive (using e.g. Partition Magic) into, e.g.:
    - Win XP: 90 GB
    - Win Server 2000: 10 GB
    - Linux: 5 GB
    - Shared/Data: 5 GB
    2. Install a separate hard drive for the different operating systems.
    3. Use a disk caddy to swap test hard drives in and out.
    4. Anything else?
    - Would the Operating System's hard drive size affect the Server/Client's performance, e.g. affecting the number of connections, number of File Handles, the virtual memory, etc.?
    Many Thanks,
    Matt

    You can use a partition on the same HDD or use a second HDD; a disk caddy works well if it's direct IDE or SCSI. If it's USB, no, it will be too slow; maybe if you have FireWire, but I still don't recommend it.
    Be careful: if you don't have any experience installing Linux, you may create multiple partitions on your disk without knowing it, because Linux ext partitions are not visible to Windows.
    The recommended disk size for Fedora is 10 GB; this is roughly the amount of data that a full installation will put on your HDD.

  • Stack size for native thread attaching to JVM

    All:
    I have a native thread (see below, FailoverCallbackThread) that attaches to the JVM and does a Java call through JNI. The stack size for the native thread is 256KB.
    at psiUserStackBangNow+112()@0x20000000007a96d0
    at psiGuessUserStackBounds+320()@0x20000000007a8940
    at psiGuessStackBounds+48()@0x20000000007a8f60
    at psiGetPlatformStackInfo+336()@0x20000000007a9110
    at psiGetStackInfo+160()@0x20000000007a8b40
    at psSetupStackInfo+48()@0x20000000007a5e00
    at vmtiAttachToVMThread+208()@0x20000000007c88b0
    at tsAttachCurrentThread+896()@0x20000000007ca500
    at attachThread+304()@0x2000000000751940
    at genericACFConnectionCallback+400(JdbcOdbc.c:4624)@0x104b1bc10
    at FailoverCallbackThread+512(vocctx.cpp:688)@0x104b8ddc0
    at start_thread+352()@0x20000000001457f0
    at __clone2+208()@0x200000000030b9f0
    This causes a stack overflow in the Oracle JRockit JVM. (It does not cause an overflow with the Oracle Sun JDK.) Is there a recommended stack size for this use case for JRockit? Is there a way to compute it roughly?
    Platform: Itanium 64 (Linux)
    java version "1.5.0_06"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_06-b05)
    BEA JRockit(R) (build R26.4.0-63-63688-1.5.0_06-20060626-2259-linux-ia64, )
    mp

    How do I find the default heap size, the stack size for a thread, and the number of threads per JVM/process supported?
    The number of threads is OS, OS install, and JVM version specific. That information is also not very useful: if you create the maximum number of threads that your application can create, you will run out of memory, because threads require memory. And it is unlikely to run very well either.
    The default heap size and stack size are documented in the documentation for the tools that come with the Sun JDK.
    And how do the above things vary for each OS, and how do I find out?
    Threads vary by OS and OS install. The others do not (at least not with the Sun JVM).
    If I get "OutOfMemoryError: unable to create new native thread" ...
    Most of the time that indicates a design problem in your code. At the very least, you should consider using a thread pool instead.
    I found in one forum that in Linux you can create a maximum of 894 threads. Is it true?
    Seems high, since in Linux each thread is a new process, but it could be.
