Impact of project.max-shm-memory configuration in Solaris 10

Dear All,
I'm not sure if this is an error or if it was purposely configured this way.
The current kernel setting for project.max-shm-memory is *400GB*, while the hardware only has 8 GB of RAM, and SGA_MAX_SIZE is set to 5 GB (the Oracle database is 10g).
Based on your experience, will this configuration have any impact in the long run?
project: 1: user.root
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
project.max-shm-memory
        privileged     *400GB*      -   deny                                 -
        system          16.0EB    max   deny                                 -
Suggestions and advice are much appreciated.
Thanks and Best Regards,
Eric Purwoko

Hi Helios,
Thanks! The recommendation is 4294967295, but my SGA max size and target are 5 GB. Will it cause problems if I set project.max-shm-memory lower than the SGA?
Thanks for the link too. I guess I had better put those settings in /etc/system as well.
But now I'm wondering what the best value is, given my SGA max configuration.
Best Regards,
Eric
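
A minimal sketch of how such a cap could be tightened, assuming the user.root project from the output above and a 6 GB value chosen to leave headroom over the 5 GB SGA (the limit must not end up lower than SGA_MAX_SIZE, or the instance will fail to allocate its shared memory, as threads below illustrate):
# projmod -sK "project.max-shm-memory=(priv,6G,deny)" user.root
# prctl -n project.max-shm-memory -i project user.root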

Similar Messages

  • Shminfo_shmmax in /etc/system does not match project.max-shm-memory

    If I specify 'shminfo_shmmax' in /etc/system and have the system defaults in /etc/project (no change is made), the size of 'project.max-shm-memory' comes out roughly 100 times larger than 'shminfo_shmmax'.
    # more /etc/system     (sets shmmax to 16 MB)
    set shmsys:shminfo_shmmax=16000000
    # prctl -n "project.max-shm-memory" -i project user.root
    which displays:
    project: 1: user.root
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      1.49GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    1.49GB is roughly 100 times larger than 'SHMMAX'. If I add more entries to /etc/system like below, max-shm-memory becomes even larger.
    # more /etc/system
    set shmsys:shminfo_shmmax=16000000
    set semsys:seminfo_semmni=2000
    set shmsys:shminfo_shmmni=2000
    set msgsys:msginfo_msgmni=2048
    After I reboot with the above /etc/system and no change to /etc/project (all defaults, no values added):
    # prctl -n "project.max-shm-memory" -i project user.root
    project: 1: user.root
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      29.8GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    Can anyone shed some light on how to configure SHMMAX in /etc/system correctly?

    We saw similar behavior and opened a case with Sun.
    The problem turns out to be that the mapping from the (deprecated) /etc/system tunables to the (new) project resource controls isn't always one-to-one.
    For example, project.max-shm-memory gets set to shmsys:shminfo_shmmax * shmsys:shminfo_shmmni.
    The logic here is that under the /etc/system tunings you might have wanted the maximum number of segments, each of the maximum size, so the system has to be able to handle that. Makes sense to some degree. I think Sun updated one of their info docs on the conversion at the end of our case to make this clearer.
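
    A sketch of the arithmetic with the numbers from the posts above (this only works out if shminfo_shmmni was effectively 100 before being raised to 2000, which is an assumption the observed values fit):
    16000000 bytes x 100 segments  = 1,600,000,000 bytes  ≈ 1.49GB
    16000000 bytes x 2000 segments = 32,000,000,000 bytes ≈ 29.8GB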

  • Prctl -n project.max-shm-memory -i process $$

    Hi all,
    when I execute the command "prctl -n project.max-shm-memory -i process $$",
    its output is:
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      5.85TB      -   deny                                 -
    That is 5.85TB, while RAM is only 32GB.
    How can I change it for the oracle user?

    What does your /etc/project file say?
    Mine is (showing oracle user):
    oracle:100::oracle::process.max-sem-nsems=(priv,300,deny);project.max-sem-ids=(priv,100,deny);project.max-shm-ids=(priv,512,deny);project.max-shm-memory=(priv,8589934592,deny)
    That is 8 GB of shared memory allowed for the oracle project (max-shm-memory).
    Change it using
    projmod -sK "project.max-shm-memory=(priv,8G,deny)" oracle
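
    A quick way to confirm the change took effect (a sketch, using the same project name as above):
    # prctl -n project.max-shm-memory -i project oracle
    Note that projmod rewrites /etc/project, so the new limit survives reboots, but processes already running in the project keep the old value until they are restarted.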

  • How to set kernel parameter max-shm-memory automatically at startup

    Hi,
    We have an 11.1.0.7 database on a Solaris 10 SPARC 64-bit server. We have the max-shm-memory settings below:
    -bash-3.00# prctl -n project.max-shm-memory -i project default
    project: 3: default
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      50.0GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    -bash-3.00# prctl -n project.max-shm-memory -i project system
    project: 0: system
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      50.0GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    Whenever we restart the db, the second one is lost:
    bash-3.00$ prctl -n project.max-shm-memory -i project default
    project: 3: default
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      50.0GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    bash-3.00$ prctl -n project.max-shm-memory -i project system
    prctl: system: No controllable process found in task, project, or zone.
    So our sysadmin has to configure them again whenever we restart our db. How could I make this happen automatically at startup, without configuring it again from the command prompt?
    Thanks,
    Hatice

    OK, it is clear now. I have one more question. When I check the system project I get the error below:
    # prctl -n project.max-shm-memory -i project system
    prctl: system: No controllable process found in task, project, or zone.
    The document says the reason for the message reported above is that there are no active processes belonging to the project.
    But that seems impossible for us, because according to our project settings it's for the root user:
    bash-3.00$ cat /etc/project
    system:0::root::project.max-shm-memory=(priv,53687091200,deny)
    user.root:1::::
    noproject:2::::
    default:3::oracle::project.max-shm-memory=(priv,53687091200,deny)
    group.staff:10::::
    oracle:100::::project.max-shm-memory=(priv,53687091200,deny)
    Is it because I am checking as the oracle user and don't have sufficient privileges, or is there something wrong with it?
    Thanks.
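
    For what it's worth, the message usually just means that no process is currently running in the queried project. A sketch of how to check which project a shell actually belongs to (the output line is a made-up example):
    $ id -p
    uid=100(oracle) gid=100(dba) projid=3(default)
    Since root's login shell lands in user.root rather than system by default, querying project system can fail even though the /etc/project entry is valid; the entry itself persists across reboots regardless.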

  • Max-shm-memory Problem - out of memory. No space left on device

    Hi Everyone,
    First time post. I'm a UNIX SA trying to troubleshoot the following problem on Solaris 10:
    SQL> startup pfile=inittest1.ora
    ORA-27102: out of memory
    Solaris-AMD64 Error: 28: No space left on device
    SQL>
    /u01/app/oracle/admin/dd00lod1/udump/dd00lod1_ora_25782.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /u01/app/oracle/product/10.2.0.4
    System name:    SunOS
    Node name:      reuxeuux1089
    Release:        5.10
    Version:        Generic_147441-10
    Machine:        i86pc
    Instance name: dd00lod1
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 0
    Unix process pid: 25782, image: oracle@reuxeuux1089
    skgm warning: ENOSPC creating segment of size 0000000005800000
    fix shm parameters in /etc/system or equivalent
    We have tried modifying the max-shm-memory settings, but no joy! Please assist if you can.
    Thanks
    Amreek
    prctl -n project.max-shm-memory -i project 100
    project: 100: ORACLE
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged       124GB      -   deny                                 -
            system          16.0EB    max   deny

    Consider reading The Fine Manual: the Installation Guide.
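
    Given that the 124GB limit dwarfs the 0000000005800000 (~88 MB) segment in the trace, one thing worth ruling out (a sketch, not a diagnosis) is running out of shared memory identifiers rather than memory:
    # prctl -n project.max-shm-ids -i project 100
    # ipcs -a
    shmget() also returns ENOSPC when project.max-shm-ids is exhausted, and ipcs will show any segments left behind by earlier failed startups.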

  • Max-shm-memory - definition

    Hi Guys,
    I'm trying to get a clear definition of project.max-shm-memory. Let's say it's set to 4GB for user.oracle. Does this mean that the maximum amount of shared memory available to the project is 4GB, or that the maximum shared memory segment created by ANY process in the project can be 4GB (i.e. 2 processes could create 2 separate 4GB segments)? I'm pretty sure it's the former, but I wanted to check.
    Thanks,
    Tony

    Even though Sun says many of the kernel tunables are now obsolete in /etc/system, some, like shmmax, will actually still work if set in the global zone. The default is 1/4 of system memory.
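
    A quick way to sanity-check that 1/4-of-RAM default on a given box (a sketch; the memory line is an example):
    # prtconf | grep Memory
    Memory size: 8192 Megabytes
    # prctl -n project.max-shm-memory -i project default
    With 8 GB of RAM, the reported privileged value should be on the order of 2 GB unless someone has overridden it in /etc/project.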

  • Max-shm-memory (Solaris 11.1)

    I wonder if someone can give a definitive answer on this, or point me to somewhere that does?
    If I have a 64GB RAM server running Solaris 11.1 that will run a single Oracle instance, what is the correct setting for max-shm-memory?
    Should it be 64GB, or something a bit smaller? Or something a lot smaller?
    I have read the installation documentation, but it gives examples of 2GB, 4GB and so on. They don't seem relevant to a 64GB+ server.

    Thank you, but that document doesn't answer my questions.
    Specifically, it states at one point that "project.max-shm-memory=(privileged,51539607552,deny); ... sets a limit of 48GB per shared memory segment" (which is true, as far as I understand it). But it then goes on to say in the next breath that "The project.max-shm-memory limit is the __total__ shared memory size for your project. -- ie maximum total of all your segments.". Which, to my mind, contradicts its first statement.
    The article then also goes on to give an example where "This system has 8GB of memory..." and shows the author setting "projmod -s -K "project.max-shm-memory=(privileged,4GB,deny)" 'user.oracle'"... so are we to deduce that you should set max-shm-memory to 50% of your physically-available RAM? Or not??
    I had actually read this before I posted. It's the same sort of document I see over and over on this subject: lots of examples, some contradictory, but none stating what principles should govern the setting of max-shm-memory, and none stating what the consequences of (for example) allocating 100% of physically available RAM as max-shm-memory would be.
    So thank you for the reference, but my question still stands: can someone provide a definitive answer on what the setting here should be? It's a 64GB server and will run nothing but Oracle database, with a single instance. max-shm-memory should be set to.... what, exactly?

  • Solaris Environment Manager maxes out memory.

    We have a forte system in production, running on two Solaris 2.6 boxes. Each
    has its own environment and they are connected. Our environments are called
    "Data Center" and "Call Center". Our Forte memory settings are fairly
    standard. We start at 20MB and max out at 64MB. We also have Forte keepalive
    enabled, with the Forte recommended values. We are running Forte 3.0.L.3 and
    using Oracle 8.0.5 for our database (as if it mattered). Each environment
    has 1 or 2 NT Server boxes in it, and the clients are all NT4 using a model
    node. Our Solaris machines have the patch installed that fixes the problem
    that Peggy Adrian posted about earlier in the year.
    Three times now in the past week, the environment manager for the data
    center has suddenly started chewing up memory. When the data center manager
    gets to about 64MB, the call center environment manager starts to do the
    same thing. Soon after, both environments die a hideous death, and we have
    to go in and start the node managers again.
    Please note that we always export our environments to a ".edf" file and
    bootstrap the environment manager (removing the environment repository)
    whenever we install a new copy of the Forte applications, or whenever our
    node manager dies (which seems far more frequent than I would like).
    Interestingly, the environment repository hadn't grown any after the
    environment manager "runs up the curtain and joins the choir invisible".
    If anyone can throw some light on this, and point us in the direction to
    look to solve this problem it would be much appreciated.
    Nick.

    Hi,
    @Sunny: Thanks for the response; the referenced note was already checked and the parameters are in sync as per the note.
    @Mohit: SAP wouldn't proceed to the create database step if the Oracle software was not installed. Thanks for the response.
    @Markus: Thanks, I agree with you, but I have a doubt in this area. Isn't project.max-shm-memory the new parameter we need to set in the local zone, rather than using shmsys:shminfo_shmmax in /etc/system? Do we still need to maintain this parameter in /etc/system in the global zone?
    As per the Sun doc, the parameters below are obsolete as of Solaris 10.
    The following parameters are obsolete.
    ■ shmsys:shminfo_shmmni
    ■ shmsys:shminfo_shmmax
    As per your suggestion, do we need to set the parameters below in that case? Please clarify.
    Parameter                  Replaced by Resource Control     Recommended Value
    semsys:seminfo_semmni      project.max-sem-ids              100
    semsys:seminfo_semmsl      process.max-sem-nsems            256
    shmsys:shminfo_shmmax      project.max-shm-memory           4294967295
    shmsys:shminfo_shmmni      project.max-shm-ids              100
    Also, the contents of /etc/release:
    more /etc/release
    Solaris 10 10/08 s10s_u6wos_07b SPARC
    Regards,
    Sitarama.
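
    A sketch of applying the replacement resource controls from the table above with projmod in the local zone (the project name user.oracle is an assumption; substitute whatever project your oracle user runs under):
    # projmod -sK "project.max-sem-ids=(priv,100,deny)" user.oracle
    # projmod -sK "process.max-sem-nsems=(priv,256,deny)" user.oracle
    # projmod -sK "project.max-shm-memory=(priv,4294967295,deny)" user.oracle
    # projmod -sK "project.max-shm-ids=(priv,100,deny)" user.oracle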

  • Help with memory allocation on Solaris 10

    I am trying to run an FEM analysis that requires at least 6.5 GB of memory according to the software's logging. I have increased the per-project memory limit to 12 GB; the machine has 8 GB of RAM and 6 GB of swap. But the analysis fails every time with "out of memory".
    Here's a list of limits I've taken with prctl for this process. I am hoping that someone is better off with Solaris kernel parameters than I am.
    process: 5301: bin/msengine MODAL_MJU_10 -i . -w . -solram 1042 -modeltype mda
    process.max-port-events privileged 65536 - deny -
    process.max-port-events system 2147483647 max deny -
    process.max-msg-messages privileged 8192 - deny -
    process.max-msg-messages system 4294967295 max deny -
    process.max-msg-qbytes privileged 65536 - deny -
    process.max-msg-qbytes system 18446744073709551615 max deny -
    process.max-sem-ops privileged 512 - deny -
    process.max-sem-ops system 2147483647 max deny -
    process.max-sem-nsems privileged 512 - deny -
    process.max-sem-nsems system 32767 max deny -
    process.max-address-space privileged 18446744073709551615 max deny -
    process.max-address-space system 18446744073709551615 max deny -
    process.max-file-descriptor basic 256 - deny 5301
    process.max-file-descriptor privileged 65536 - deny -
    process.max-file-descriptor system 2147483647 max deny -
    process.max-core-size privileged 9223372036854775807 max deny -
    process.max-core-size system 9223372036854775807 max deny -
    process.max-stack-size basic 10485760 - deny 5301
    process.max-stack-size privileged 137988707188736 - deny -
    process.max-stack-size system 137988707188736 max deny -
    process.max-data-size privileged 18446744073709551615 max deny -
    process.max-data-size system 18446744073709551615 max deny -
    process.max-file-size privileged 9223372036854775807 max deny,signal=XFSZ -
    process.max-file-size system 9223372036854775807 max deny -
    process.max-cpu-time privileged 18446744073709551615 inf signal=XCPU -
    process.max-cpu-time system 18446744073709551615 inf none -
    task.max-cpu-time system 18446744073709551615 inf none -
    task.max-lwps system 2147483647 max deny -
    project.max-contracts privileged 10000 - deny -
    project.max-device-locked-memory privileged 528614912 - deny -
    project.max-port-ids privileged 8192 - deny -
    project.max-shm-memory privileged 12884901888 - deny -
    project.max-shm-ids privileged 128 - deny -
    project.max-msg-ids privileged 128 - deny -
    project.max-sem-ids privileged 128 - deny -
    project.max-crypto-memory privileged 2114459648 - deny -
    project.max-tasks system 2147483647 max deny -
    project.max-lwps system 2147483647 max deny -
    project.cpu-shares privileged 1 - none -
    zone.max-lwps system 2147483647 max deny -
    zone.cpu-shares privileged 1 - none -

    Is the code 32-bit or 64-bit?
    A 32-bit application has a max address space of about 3.7 GB.
    64-bit apps are effectively unlimited.
    tim
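
    Two quick checks on Solaris (a sketch; the binary path is taken from the process listing above):
    $ file bin/msengine
    $ isainfo -b
    file reports whether the executable is ELF 32-bit or 64-bit, and isainfo -b reports the bitness the OS is running. If the solver is a 32-bit binary, no project limit will get it past its ~4 GB address-space ceiling.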

  • Ora-27102: out of memory error on Solaris 10 SPARC zone install of Oracle 10

    All
    I'm stuck!! I've tried for days now, and can't get this working.
    I'm getting the classic "ora-27102: out of memory" error. However, my memory settings seem fine.
    I'm running Solaris 10 in a zone, installing Oracle 10.2.0.1.
    I've changed the max-memory to 6 GB, but I still get this error message.
    Could it be possible that this error message means something else entirely?
    Thank you for your assistance!!
    Anne

    Hi V
    Thanks for the response.
    I fixed the problem. It turns out it was because the physical memory for the box is 16 GB, but my max-shm-memory was only at 6 GB. I upped it to 8 GB, and everything worked.
    I'm sure you were going to tell me to do just that! I found another post that explained it.
    Thanks!
    Anne
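
    For anyone hitting the same wall, that kind of bump is typically the projmod one-liner pattern shown elsewhere in this thread (a sketch; the project name user.oracle is an assumption):
    # projmod -sK "project.max-shm-memory=(priv,8G,deny)" user.oracle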

  • Memory Configuration & Upgrade Recommendations - 2006 Mac Pro

    Hello,
    It's been a long time since I last posted here. Nice to see that everyone is still around. I have a memory configuration and upgrade I am now looking at. I'm also interested in how to use Boot Camp and install 64-bit Windows 7 when I have EFI32. It appears that many of you build a new Windows 7 install DVD?
    Here is my current configuration:
    A1: 2GB
    A2: 2GB
    B1: 1GB
    B2: 1GB
    A3: 512 MB
    A4: 512 MB
    B3: 512 MB
    B4: 512 MB
    I'm interested in comparing the 4GB vs. 2GB modules and what are the best options for maxing out this Mac Pro. I have forensic software that uses Oracle that I want to put on Windows 7 or a 64-bit Windows server. The system requirements say 16-32GB in a 64-bit environment for Oracle.
    I also want to use CS4, Lightroom, Aperture, Capture One Pro, and FC HD on Snow Leopard.
    I would appreciate input on configuration and memory quantities for what appears to now be an ancient Mac Pro and how to use a 64-bit OS on Bootcamp.
    Thank you for your time and input!
    Michael

    2006: 4 memory chips is ideal. Obviously that only takes you to 16GB max.
    2008: 8 memory chips (don't read about the 2008 then!)
    I've seen people report that their 4GB DIMMs run cooler, too, and heat is always an issue when it comes to FB-DIMMs. They use a lot of power (watts) and are one of the largest sources of heat.
    What you have now, 2s and 1s and 512s, doesn't get you to 16GB or above, and as I posted you can get to 20GB by replacing the 4 x 512s.
    I'd try 4 x 4GB, which it seems you have to buy anyway, and see how heat and performance are. Only then consider adding back the 2s and 1s to get to 20GB.
    In the end, I'd sell the RAM you have if it makes a difference.

  • The disk configuration is not in sync with the in-memory configuration. Software RAID 1 reactivation

    While trying to reactivate disk 3 of a RAID mirror with failed redundancy, I get the error "The disk configuration is not in sync with the in-memory configuration." The drive is accessible, but I have no idea which of drives 2 and 3 is in use; drive 2 has unspecified errors, but there is the option to reactivate disk 3.
    Does anyone have any idea what this means?
    I am running on an HP ProLiant ML350 G5 with Windows Server 2012
    Thanks

    Hi,
    Since Disk 2's status is Errors, we cannot reactivate disk 3. Please try to reactivate disk 2 and check the results.
    If the disk does not return to the Online status and the volume does not return to the Healthy status, there may be something wrong with the disk. You should replace the failed mirror disk region.
    For more detailed information, please refer to the article below:
    Volume status descriptions
    http://technet.microsoft.com/en-us/library/cc739417(v=ws.10).aspx#BKMK_2
    Best Regards,
    Mandy 

  • I have not been able to use iTunes for several months. Every time I open iTunes, it freezes my computer such that there is about a minute between each action. I am running iTunes 11 on Mac OS 10.6.8 and have a computer with maxed-out memory.

    I have not been able to use iTunes for several months. Every time I open iTunes, it freezes my computer such that there is about a minute between each action. I am running iTunes 11 on Mac OS 10.6.8 and have a computer with maxed-out memory. Help! I can't access my iTunes content.


  • /etc/profile: line 28: ulimit: max locked memory: cannot modify limit: Oper

    Hi;
    I wrote a shell script that checks tablespace sizes, and it works well (I used it before for another client). Now I'm trying to run it on a new server and I get this error:
    /etc/profile: line 28: ulimit: max locked memory: cannot modify limit: Operation not permitted in /var/spool/mail/root
    Does anyone have an idea what the problem is?
    Thanks

    Well, check line 28 of /etc/profile and see what command it is trying to execute. If it's a ulimit -l command, check whether the value it is trying to set the limit to is higher than the current ulimit value for the current user. If it is, that operation is not allowed. You can increase the limit for the current user (at login time) by modifying /etc/security/limits.conf. This is documented in the various guides, notes, and whitepapers that discuss installing Oracle database on Linux.
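
    A minimal sketch of the kind of entries involved (the user name and values are hypothetical examples; memlock values are in KB):
    # /etc/security/limits.conf
    oracle   soft   memlock   3145728
    oracle   hard   memlock   3145728
    The user has to log out and back in for new limits to take effect.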

  • SQL Server max server memory

     Hi,
    I know that the max server memory property sets the physical memory limit for the buffer pool. But some say that it is a VAS (virtual address space) limit for the buffer pool. I know that the VAS includes both physical memory and the page file, which would mean that max server memory is not equal to the max physical memory for the buffer pool.
    Which one is true?

    Both are true, depending on the memory model your SQL Server is using.
    In the conventional memory model, max server memory is a limit on the buffer pool's committed memory (the buffer pool can live in RAM or the page file), so there is no guarantee the buffer pool will always stay in physical memory; it can be paged out to the page file when there is memory pressure.
    In the lock pages and large pages memory models, the buffer pool can't be paged and is always placed in RAM, so there it is a limit on the buffer pool in RAM.
    I assume you are referring to the page file as virtual memory and RAM as physical memory.
    Read http://mssqlwiki.com/2013/03/26/sql-server-lock-pages-in-memory/ and http://mssqlwiki.com/sqlwiki/sql-performance/basics-of-sql-server-memory-architecture/ and you will get clarity.
    Thank you,
    Karthick P.K
