Shared memory in Solaris 10 with Oracle

Hi, I am a newbie to Solaris and I have some questions on shared memory and Oracle in Solaris.
My questions might seem odd; however, please do read and try to answer. Thanks in advance.
1) If a Solaris server has, say, 40GB of RAM, what would be the maximum size of a shared memory segment on this machine?
I know that if the server has 40GB then the max shared memory size is 10GB, i.e. one fourth of RAM, but I am not sure.
2) What is the maximum size of a shared memory segment in Solaris that a root user can define?
(I know that it's somewhere near 14 GB, but I am not very sure.)
3) Assume I have created a user X and allocated, say, a 10GB shared memory limit for this user.
I log in to Solaris as X; now, can I increase the size of the shared memory that this user can use?
I have a situation where the root user created a user named DBA and allocated some 15GB for this DBA user as the max SHM limit.
Now the DBA user has set the max limit for shared memory to 1TB, which is causing a hell of a lot of problems on the system.
I am not very sure of the concept. I am new to this product and facing this problem. Please advise.
Thanks,
Krishnakanth (Simply KK)

Not sure why your "oracle" user (the owner who will be creating the instance) has been assigned the project user.root. I would say create a separate project, maybe "dba", and grant the user who will be creating the Oracle instance access to this project.
Then try to issue the command:
prctl -n project.max-shm-memory -v 8gb -r -i project dba
and check whether you are still facing the problem.
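For example, a minimal sketch of that setup (the user name "oracle", the project name "dba" and the 8 GB cap are placeholders taken from the command above, not recommendations):

```shell
# create a "dba" project whose member "oracle" gets an 8 GB total
# shared-memory cap
projadd -U oracle -K "project.max-shm-memory=(privileged,8G,deny)" dba

# verify the limit on the project
prctl -n project.max-shm-memory -i project dba
```

These are Solaris-specific administration commands; run them as root and check the resulting entry in /etc/project.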
Edited by: somjitdba on Apr 2, 2009 9:54 PM

Similar Messages

  • [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL

    Hi All,
    I am running an SSIS solution that runs 5 packages in sequence.  Only one package fails because of the error:
    [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available.  To resolve, run this package as an administrator, or on the system's console.
    I have added myself to the performance counters group. 
    I am running windows 7 with SSIS 2008.
    Any ideas would be appreciated.  I have read that some have disabled the warning, but I cannot figure out how to disable a warning. 
    Thanks.
    Ivan

    Hi Ivan,
    A package would not fail due to the warning itself; the warning just means the account executing it is not privileged to load the performance counters, and it can thus safely be ignored.
    To fix it, visit: http://support.microsoft.com/kb/2496375/en-us
    So, the package either has an error or it actually runs.
    Arthur My Blog

  • SSIS.Pipeline : Warning: Could not open global shared memory to communicate with performance DLL

    I am getting the following warning for my SSIS08 package:
    Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available.  To resolve, run this package as an administrator, or on the system's console.
    I did check "Warning in SSIS 2008", but didn't find any solution.
    The package processes data and executes fine, but why do I see this warning? When I run this package on my machine I see no such warning; it's only when I deploy it to our DEV SSIS server that I get this warning.
    Thanks,
    Sonal

    You have to include the account you use to execute the package in the performance counters group.
    SSIS Tasks Components Scripts Services | http://www.cozyroc.com/
    Could you tell me where the "performance counters" group is? I cannot find it.
    Thanks!

  • Sun Solaris with Oracle 8i &java 1.3

    I am going to work on Sun Solaris with Oracle 8i
    on the Java platform.
    Are there any major issues with this combination?
    Please help me out at -> [email protected]
    Thanks in advance
    oviya

    I've been able to get Apache_1.3.(9|11|12)+PHP-(3.0.1[45]|4.0b4pl1)+Oracle-8.1.5.0.2 to work (OCI8 API calls) but with much difficulty.
    Compiling was pretty straightforward. I used APXS to load the PHP module. However, when I start Apache with the PHP modules loaded and added, Apache would die (no hints, etc.) What I did was start Apache without PHP loaded, reconfigure in PHP once Apache started and THEN restart apache. This works for me. Kinda tedious, though. And the time when Apache is running without support for PHP would allow anyone to see my PHP code with my database password.
    (Incidentally, how should I configure Apache so it doesn't serve files which end in .php[3] unless it was through the PHP module?)
    I've emailed the Apache OCI8 module developers about it (Stig and Thies) so I hope it'll be resolved soon.

  • Installation BOBJ XI 3 on UNIX ( Sun Solaris ) with Oracle database

    Hi All,
    I want to deploy and install BOBJ XI Enterprise 3.1 on Sun Solaris with an Oracle database.
    My question is: do we need to add some Oracle licensing in the BOBJ XI Enterprise server, or do we just use a connection from Sun Solaris to the Oracle database?
    Pls Advise,
    Rgds,
    Denny

    Hi Denny,
    the risk is the same whatever DB server you are using. Once you have a setup where more than one application accesses the server, overload caused by one application can influence the performance of the others. You have to check if your DB vendor offers tools to monitor and limit the load on your DB server. But we are talking hypothetically now. I would recommend checking what the actual load on your Oracle DB server is. If the server is not working at the limit (let's say over 80% all the time), then I think you can try to install BOBJ into an existing Oracle instance on the Oracle server. You can reduce the risk and make maintenance easier if you set up a dedicated Oracle instance on your DB server just for the BOBJ repository.
    Regards,
    Stratos

  • How do I remove a DB from shared memory in Solaris 10?

    I'm having trouble removing  an in-memory database placed in shared memory. I set SHM key and cache size, and then open an environment with flags: DB_CREATE | DB_SYSTEM_MEM | DB_INIT_MPOOL | DB_INIT_LOG | DB_INIT_LOCK | DB_INIT_TXN. I also set the flag DB_TXN_NOSYNC on the DbEnv. At the end, after closing all Db and DbEnv handles, I create a new DbEnv instance and call DbEnv::remove. That's when things get weird.
    If I have the force flag set to 0, then it throws an exception saying "DbEnv::remove: Device busy". The shared memory segments do not get removed in this case (checking with `ipcs -bom`).
    When the force flag is set to a non-zero value, the shared memory is released, but the program crashes saying "Db::close: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery".
    What am I doing wrong?

    This is curious, since a simple program similar to what is described is known to work. I've modified the standard sample program examples/cxx/EnvExample.cpp C++ to use an in-memory database, DB_SYSTEM_MEM, and DB_TXN_NOSYNC. The "Device busy" symptom occurs if the close of the environment handle is bypassed. I have not been able to reproduce the DB_RUNRECOVERY error.
    How does the program's use of Berkeley DB differ from what is provided in EnvExample.cpp?
    Is it possible to send me the relevant portions of it?
    Regards,
    Charles Koester
    Oracle Berkeley DB

  • Shared memory:  apache memory usage in solaris 10

    Hi people, I have set up a project for the apache user ID and set the new equivalent of shmmax for the user via projadd. In apache I crank up StartServers to 100 but the RAM is soon exhausted - apache appears not to use shared memory under Solaris 10. Under the same version of apache on Solaris 9 I can fire up 100 apache StartServers with little RAM usage. Any ideas what can cause this / what else I need to do? Thanks!

    > a) How or why does solaris choose to share memory between processes from the same program invoked multiple times if that program has not been specifically coded to use shared memory?
    Take a look at 'pmap -x' output for a process.
    Basically it depends on where the memory comes from. If it's a page loaded from disk (executable, shared library) then the page begins life shared among all programs using the same page. So a small program with lots of shared libraries mapped may have a large memory footprint but have most of it shared.
    If the page is written to, then a new copy is created that is no longer shared. If the program requests memory (malloc()), then the heap is grown and it gathers more private (non-shared) page mappings.
    > Simply: if we run pmap / ipcs we can see a shared memory reference for our oracle database and ldap server. There is no entry for apache. But the total memory usage is far far less than all the apache procs' individual memory totted up (all 100 of them, in prstat.) So there is some hidden sharing going on somewhere that solaris (2.9) is doing, but not showing in pmap or ipcs. (Virtually no swap is being used.)
    pmap -x should be showing you exactly which pages are shared and which are not.
    > b) Under solaris 10, each apache process takes up precisely the memory reported in prstat - add up the 100 apache memory details and you get the total RAM in use. Crank up the number of procs any more and you get out-of-memory errors, so it looks like prstat is pretty good here. The question is - why on solaris 10 is apache not 'shared' but it is on solaris 9? We set up all the usual project details for this user (in /etc/projects), but I'm guessing now that these project tweaks where you explicitly set the shared memory for a user only take effect for programs explicitly coded to use shared memory, e.g. the oracle database, which correctly shows up as a shared memory reference in ipcs.
    > We can fire up thousands of apaches on the 2.9 system without running out of memory - both machines have the same RAM! But the binary versions of apache are exactly the same, and the config directives are identical. Please tell me that there is something really simple we have missed!
    On Solaris 10, do all the pages for one of the apache processes appear private? That would be really, really unusual.
    Darren

  • Read & Write Shared memory with Java

    Hi,
    I have to read and write four integers from/to shared memory with Java, using the Posix package (the shared memory is already created).
    I've been looking on Google for any examples that attach, read, and write to shared memory using that package, but I couldn't find anything (that's the reason for writing in this forum).
    Posix package link
    http://www.bmsi.com/java/posix/docs/Package-posix.html
    Operating system
    SuSE 10
    Could anyone help me with any example, please?
    Thank you very much
    Oscar

    Hi, I can't post any code because I have no idea about POSIX and shared memory.
    I come from the web world and my company sent me, two weeks ago, to the space department, which uses Unix as its operating system.
    The first thing I have to do is what I posted, but I don't even know how to start doing it, because this is almost the first time I have heard about reading and writing shared memory.
    Java is a high-level, platform-independent language (but this kind of thing, I think, is platform-dependent, in this case on the operating system), and I have only been working with this operating system for a short time.
    That's the trouble - I don't know how to start working with this.
    Thanks again

  • Locate shared memory segments outside of pool 10

    Dear All,
    When I start my SAP through STARTSAP it shows started successfully, but I am not able to log on to the system.
    Oracle is coming up without any issues, but no dialog process is running.
    I am facing the below errors when running sappfpar check pf=START_DVEBMGS00_SAPDEV:
    ***ERROR: Size of shared memory pool 10 too small
    ================================================================
    SOLUTIONS: (1) Locate shared memory segments outside of pool 10
    with parameters like: ipc/shm_psize_<key> =0
    SOLUTION: Increase size of shared memory pool 10
    with parameter: ipc/shm_psize_10 =56000000
    ***ERROR: Size of shared memory pool 40 too small
    ================================================================
    SOLUTIONS: (1) Locate shared memory segments outside of pool 40
    with parameters like: ipc/shm_psize_<key> =0
    SOLUTION: Increase size of shared memory pool 40
    with parameter: ipc/shm_psize_40 =62000000
    I tried the above by giving the recommended values 56000000 and 62000000 to ipc/shm_psize_10 and ipc/shm_psize_40 respectively, but it's not working.
    My O/S is Linux SuSE 9.0 with Oracle 9i.
    Is this related to sysctl.conf?
    Help!
    Regards

    Dear Manoj,
    my ERP2005 EhP4 Unicode system has
    ipc/shm_psize_10             = 156000000
    ipc/shm_psize_40             = 132000000
    try these values, they are at least high enough.
    Regarding your question with sysctl.conf. If the error is "shm_psize too small", then it has probably nothing to do with sysctl.conf.
    Thanks,
      Hannes
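    For reference, these parameters live in the instance profile; a sketch using the values from the reply above (the profile name comes from the thread, and the sizes must of course be adjusted to your own system):

    ```
    # in the instance profile, e.g. START_DVEBMGS00_SAPDEV
    ipc/shm_psize_10 = 156000000
    ipc/shm_psize_40 = 132000000
    ```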

  • NOT ENOUGH SHARED MEMORY

    Hi,
    I am facing a "NOT ENOUGH SHARED MEMORY" problem even with 1-2 users in Portal 3.0.6.x.
    The init.ora settings are:
    shared_pool_size=104857600
    large_pool_size=614400
    java_pool_size=20971520
    Is it due to hanging sessions?
    How do I see the hanging sessions inside the database?
    Can you tell me what is going wrong?
    Thanks
    Vikas

    > Think I got it figured out. Oracle 10g XE doesn't have a DB_HANDLE initialization parameter. The problem is that the initialization parameters are located in $ORACLE_HOME/dbs/spfileXE.ora, but sqlplus is looking for initORCL.ora.
    You mean the instance is looking for initORCL.ora and not for the SPFILE, or ;-)
    > So does anyone besides Faust know
    Sorry, again me ;-)
    > how to configure sqlplus to look for spfileXE.ora instead of initORCL.ora? I can't find an SQL*Plus setting that will do this.
    How to set the SPFILE, and more around it, you can find here:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/create.htm#i1013934
    Cheers!

  • Q_CAT:1494: ERROR: xa_open() - failed to get shared memory

    I am getting this error while booting Tuxedo.
    My question is how do I calculate shared memory on Solaris, and then which parameter to increase in the /etc/system file.
    Thanks

    Vaibhav,
    You need to make sure that the product of SHMMAX and SHMSEG is at least
    equal to the total amount of shared memory needed by Tuxedo and any other
    applications running on your machine.
    "tmboot -c" will tell you the amount of shared memory required for the
    Tuxedo bulletin board, but this does not include any space required by /Q.
    To get /Q shared memory requirements, you can use the qmadmin subcommand
    "qspacelist". If you have more than one queuespace on your system, you must
    do this for each queuespace on your system, and add this total to the
    bulletin board requirements. If any other applications on your machine use
    shared memory, add their requirements as well. In case requirements change
    in the future, it is good to add a comfortable amount of padding to this
    sum.
    Ed
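    As a back-of-the-envelope sketch of the sizing advice above (all numbers are hypothetical placeholders, not measurements):

    ```shell
    # hypothetical figures: bulletin board size from "tmboot -c",
    # one queuespace size from the qmadmin "qspacelist" subcommand
    BB=2000000
    QSPACE=5000000
    # add ~20% padding for future growth
    TOTAL=$(( (BB + QSPACE) * 12 / 10 ))
    echo "$TOTAL"   # 8400000 - SHMMAX * SHMSEG must be at least this
    ```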

  • Solaris 10, Oracle 10g, and Shared Memory

    Hello everyone,
    We've been working on migrating to Solaris 10 on all of our database servers (I'm a UNIX admin, not a DBA, so please be gentle) and we've encountered an odd issue.
    Server A:
    Sun V890
    (8) 1.5Ghz CPUs
    32GB of RAM
    Server A was installed with Solaris 10 and the Oracle data and application files were moved from the old server (the storage hardware was moved between servers). Everything is running perfectly, and we're using the resource manager to control the memory settings (not /etc/system).
    The DBAs then increased the SGA of one of the DBs on the system from 1.5GB to 5GB and it failed to start (ORA-27102). According to the information I have, the maximum shared memory on this system should be 1/4 of RAM (8 GB, which actually works out to 7.84 GB according to prctl). I verified the other shared memory/semaphore settings are where they should be, but the DB would not start with a 5 GB SGA. I then decided to just throw a larger max shared memory segment at it, so I used projmod to increase project.max-shm-memory to 16GB for the project Oracle runs under. The DB now starts just fine. I cut it back down to 10GB for project.max-shm-memory and the DB still starts OK. I ran out of downtime window, so I couldn't continue refining the settings.
    Running 'ipcs -b' and totalling up the individual segments showed we were using around 5GB on the test DB (assuming my addition is correct).
    So, the question:
    Is there a way to correlate the SGA of the DB(s) into what I need the project.max-shm-memory to? I would think 7.84GB would be enough to handle a DB with 5GB SGA, but it doesn't appear to be. We have some 'important' servers getting upgraded soon and I'd like to be able to refine these numbers / settings before I get to them.
    Thanks for your time,
    Steven

    To me, setting a massive shared memory segment just seems to be inefficient. I understand that Oracle is only going to take up as much memory (in general) as the SGA. And I've been searching for any record of really large shared memory segments causing issues but haven't found much (I'm going to contact Sun to get their comments).
    The issue I am having is that it doesn't make sense that the DB with a 5GB SGA is unable to startup when there is an 8GB max shared memory segment, but a 10GB (and above) seems to work. Does it really need double the size of the SGA when starting up, but 'ipcs' shows it's only using the SGA amount of shared memory? I have plans to cut it down to 4GB and test again, as that is Oracle's recommendation. I also plan to run the DB startup through truss to get a better handle on what it's trying to do. And, if it comes down to it, I'll just set a really big max shared memory segment, I just don't want it to come back and cause an issue down the road.
    The current guidance on Metalink still seems to be suggesting a 4GB shared memory segment (I did not get a chance to test this yet with the DB we're having issues with).
    I can't comment on how the DBA specifically increased the SGA as I don't know what method they use.
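    Totting up the individual segments from ipcs, as described above, can be scripted; a sketch (it assumes shared memory lines start with "m" and that the SEGSZ column requested with -b is the last field, which may differ between Solaris releases):

    ```shell
    # sum the sizes (bytes) of all shared memory segments and print GB
    ipcs -b | awk '/^m/ { total += $NF } END { printf "%.2f GB\n", total / (1024^3) }'
    ```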

  • Oracle 11g problem with creating shared memory segments

    Hi, I'm having some problems with the Oracle listener; when I try to start it or reload it I get the following error messages:
    TNS-01114: LSNRCTL could not perform local OS authentication with the listener
    TNS-01115: OS error 28 creating shared memory segment of 129 bytes with key 2969090421
    My system is a: SunOS db1-oracle 5.10 Generic_144489-06 i86pc i386 i86pc (Total 64GB RAM)
    Current SGA is set to:
    Total System Global Area 5344731136 bytes
    Fixed Size 2233536 bytes
    Variable Size 2919238464 bytes
    Database Buffers 2399141888 bytes
    Redo Buffers 24117248 bytes
    prctl -n project.max-shm-memory -i process $$
    process: 21735: -bash
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    project.max-shm-memory
    privileged 64.0GB - deny
    I've seen that a solution might be to "make sure that system resources like shared memory and heap memory are available for the LSNRCTL tool to execute properly."
    I'm not exactly sure how to check whether there are enough resources.
    I've also seen a solution stating:
    "Try adjusting the system-imposed limits such as the maximum number of allowed shared memory segments, or their maximum and minimum sizes. In other cases, resources need to be freed up first for the operation to succeed."
    I've tried to modify the "max-sem-ids" parameter and set it to the recommended 256, without any success, and I've kind of run out of ideas as to what the error can be.
    /Regards

    I see, I do have the max-shm-ids quite high already so it shouldn't be a problem?
    user.oracle:100::oracle::process.max-file-descriptor=(priv,4096,deny);
    process.max-stack-size=(priv,33554432,deny);
    project.max-shm-memory=(priv,68719476736,deny)
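    As a quick sanity check, the 68719476736 bytes in the project entry above is exactly 64 GiB, matching the prctl output:

    ```shell
    # convert the project.max-shm-memory value to GiB
    echo $(( 68719476736 / 1024 / 1024 / 1024 ))   # 64
    ```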

  • Oracle Error 'ORA-27102: out of memory' - Shared memory parameters correct.

    Advice please!
    We’ve recently shut down our Oracle test server in order to increase file system capacity.  When we rebooted some of the databases wouldn’t start up.  It started the first 4 instances and then errored out saying “ORA-27102: out of memory“.
    I’m pretty sure it’s nothing to do with the file system because we actually reverted back to the old file system and the databases still wouldn’t start.  I think it’s more likely that something’s gone awry whilst the databases were actually running, and the problem has only manifested itself once we stopped and restarted them.
    I have researched the error and found this article and similar ones: http://var-adm.blogspot.co.uk/2013/04/adjust-solaris-10-shared-memory-to.html
    Everything suggests that Oracle is trying to create a larger shared memory segment than is allowed.  The thing is, we’ve never changed our shared memory settings, and one minute it was working, the next it isn’t.  To confirm this I checked the shared memory, which is as follows:
    sswift4# prctl -n project.max-shm-memory $$
    process: 926: bash
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      7.64GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    As suggested in the above article, I checked the alert log and found the ‘WARNING: EINVAL’ message which is as follows:
    WARNING: EINVAL creating segment of size 0x000000005e002000
    Converting this to decimal, it’s trying to create something of 1.5 GB, well within the shared memory settings, which suggests that this isn’t the problem.
    We are running Oracle 10g and 11g on Solaris 10 Sparc. The error does not seem to be instance specific, we have 8 instances on this box all with SGA max of 2000m. The server has 32GB of memory available.
    Any advice would be helpful.
    Thanks in advance.
    Debs
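    Incidentally, the segment size from the EINVAL warning can be converted in the shell (the result matches the roughly 1.5 GB figure above):

    ```shell
    # hex size from the alert log -> bytes -> GiB
    printf '%d\n' 0x000000005e002000                        # 1577066496
    awk 'BEGIN { printf "%.2f GiB\n", 1577066496 / 2^30 }'  # 1.47 GiB
    ```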

    Thanks for your quick responses - we have now resolved the issue.
    The shared memory value was set on the command line but not saved.
    Therefore, once we rebooted, it lost the configuration. This has been altered by our UNIX admin and all DBs have started without issue.
    Thanks
    Debs

  • Solaris 10 shared memory config/ora 11g

    The Oracle 11g install guide for SPARC Solaris 10 is very confusing with respect to shared memory, and my system does not seem to be using memory correctly - lots of swapping on an 8GB real-memory system.
    The doc says to set /etc/system to:
    shmsys:shminfo_shmmax project.max-shm-memory 4294967296
    but infers that this is not used.
    Then, the doc states to set a project shared mem value of 2GB:
    # projmod -sK "project.max-shm-memory=(privileged,2G,deny)" group.dba
    Why is this number different?
    By setting it to 2G as documented, Oracle did not work at all, and so I found Note:429191.1
    on Solaris 10 memory, which hints that these numbers should be big:
    % prctl -n project.max-shm-memory -r -v 24GB -i project oracle_dss
    % prctl -n project.max-shm-memory -i project oracle_dss
    project: 101: oracle_dss
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    project.max-shm-memory
    privileged 24.0GB - deny -
    system 16.0EB max deny
    Is there some logic to how to get Solaris 10 and Oracle 11 to hold hands? The install doc does not seem to contain it.

    > system does not seem to be using memory correctly, lots of swapping on an 8GB real memory system.
    We could start (for example) with this question - how big is your SGA, i.e. how much of the 8GB RAM does your SGA take?
    > The doc says to set /etc/system to:
    > shmsys:shminfo_shmmax project.max-shm-memory 4294967296
    > but infers that this is not used.
    From the documentation:
    In Solaris 10, you are not required to make changes to the /etc/system file to implement the System V IPC. Solaris 10 uses the resource control facility for its implementation. However, Oracle recommends that you set both resource control and /etc/system parameters. Operating system parameters not replaced by resource controls continue to affect performance and security on Solaris 10 systems.
    > Then, the doc states to set a project shared mem value of 2GB:
    > # projmod -sK "project.max-shm-memory=(privileged,2G,deny)" group.dba
    > Why is this number different?
    It's an example of how "to set the maximum shared memory size to 2 GB".
    > By setting it to 2G as documented oracle did not work at all
    The docs say:
    On Solaris 10, verify that the kernel parameters shown in the following table are set to values greater than or equal to the recommended value shown.
    If your SGA was greater than 2G, I'm not wondering why "oracle did not work at all".
    So for a 4GB SGA (for example) you need to allow allocation of 4GB of shared memory.
    Note: shmsys:shminfo_shmmax != project.max-shm-memory. "project.max-shm-memory" is the replacement for "shmsys:shminfo_shmmax", but the function of these parameters differs:
    "The project.max-shm-memory resource control limits the total amount of shared memory of one project, whereas previously, the shmsys:shminfo_shmmax parameter limited the size of a single shared memory segment."
    Relevant link to Sun docs: http://docs.sun.com/app/docs/doc/819-2724/chapter1-33
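    To illustrate the difference just described (the project name, project ID and values below are hypothetical examples, not recommendations):

    ```
    # old /etc/system style - caps the size of a SINGLE shared memory segment:
    set shmsys:shminfo_shmmax=4294967296

    # Solaris 10 resource control - caps the TOTAL shared memory of one project;
    # set via: projmod -sK "project.max-shm-memory=(privileged,24G,deny)" oracle_dss
    # resulting /etc/project entry (24G stored as bytes):
    oracle_dss:101::oracle::project.max-shm-memory=(privileged,25769803776,deny)
    ```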
