ORA-27102: out of memory on Solaris 10 when running dbca

Hi,
This is a new box T2000 running the latest update of Solaris 10.
I installed 10gR2 and its latest patch 4547817.
When running dbca, at the point where it tries to clone the database, it fails with ORA-27102.
I have already set shared memory limits for the oracle account using prctl.
projadd -U oracle -K "project.max-shm-memory=(priv,4294967295B,deny)" user.oracle
I tried increasing the limit to 8GB. It didn't work. The system has 32GB of memory.
When I trussed the process I saw quite a few calls to shmget failing with EINVAL.
Any ideas how I can fix this?
Thanks,
..Senthil

I had the same problem and managed to solve it. The problem is with your shared memory settings.
By default, Oracle 10 will allocate 40% of the total system RAM to the SGA and PGA.
On my T2000 with 16GB of RAM this equates to ~6.5GB. Using the value of 4GB for the SHM as stated in the Oracle installation docs is therefore too low. I set the value to 8GB and the database created without problems.
On your 32GB machine, assuming you are using the default 40% allocation, your minimum shared memory requirement will be ~13GB. Working on the principle of setting it a little higher to get things working (you can always reduce it later), I would set 'project.max-shm-memory' to 16GB and try re-running DBCA.
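If it helps, something along these lines should do it (this assumes the user.oracle project from your projadd already exists; the 16GB figure is just the suggestion above):
# persist the new cap in /etc/project
projmod -s -K "project.max-shm-memory=(privileged,16GB,deny)" user.oracle
# verify what the oracle user's project actually gets
prctl -n project.max-shm-memory -i project user.oracle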

Similar Messages

  • Getting ORA-27102: out of memory while creating DB using DBCA

    Hi All,
    I am working on Oracle version 11.2.0.3 on Linux. I am trying to create a new database using dbca and getting the error "ORA-27102: out of memory".
    Please find the DB version and OS-level parameter info below, and let me know what I need to do to overcome this issue.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    $uname -a
    Linux greenlantern1a 2.6.18-92.1.17.0.1.el5 #1 SMP Tue Nov 4 17:10:53 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
    $cat /etc/sysctl.conf
    kernel.shmall = 2097152
    kernel.shmmax = 4294967295
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.core.rmem_default = 4194304
    net.core.wmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_max = 1048576
    fs.file-max = 6815744
    fs.aio-max-nr = 1048576
    net.ipv4.ip_local_port_range = 9000 65500
    $free -g
    total used free shared buffers cached
    Mem: 94 44 49 0 0 31
    -/+ buffers/cache: 12 81
    Swap: 140 6 133
    $ulimit -l
    32
    $ipcs -lm
    ------ Shared Memory Limits --------
    max number of segments = 4096
    max seg size (kbytes) = 4194303
    max total shared memory (kbytes) = 8388608
    min seg size (bytes) = 1
    Please let me know if you need any other details.
    Thanks in advance.

    Ok, first, let's set aside the issue of hugepages for a moment. (Personally, IMHO, if you're doing manual memory management and you're not using hugepages, you're doing it wrong.)
    Anyhow, looking at your SHM parameters:
    kernel.shmall = 2097152
    kernel.shmmax = 4294967295
    kernel.shmmni = 4096
    Let's take those in reverse order:
    1.) shmmni - This is the max number of shared memory segments you can have on your system, regardless of the size of each segment.
    2.) shmmax - Contrary to popular belief, this is NOT the max amount of shared memory you can allocate system wide! This is the max size, in bytes, of a single shared memory segment. You currently have it set to 4GB-1. This is probably fine. Even if you wanted an SGA larger than 4GB, having shmmax set to this wouldn't hurt you. Oracle would simply allocate multiple shared memory segments until it had allocated enough memory for the SGA. There's really no harm there, unless this parameter is set really low, causing a huge number of tiny shared memory segments to be allocated.
    3.) shmall - This is the real system-wide shared memory limit. This number is the total amount of shared memory you're permitted to allocate, system wide, expressed in pages. The page size here is the native OS page size, which is 4096 bytes, so this is 2097152 * 4096 = 8589934592, or 8GB. So, 8GB is the maximum amount of memory that can currently be allocated to shared memory on your machine (a quick sanity check of that arithmetic is sketched just below).
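    Just as a sanity check on that arithmetic (the 32GB target below is only an example, not a recommendation for your box):
    getconf PAGE_SIZE
    # for a 32GB system-wide cap: 32 * 1024 * 1024 * 1024 / 4096 = 8388608 pages,
    # i.e. kernel.shmall = 8388608 in /etc/sysctl.conf, reloaded with:
    sysctl -p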
    So, having said all that, you haven't mentioned how many, if any, other Oracle databases are running on the server or their sizes. Secondly, we have no idea what memory sizing parameters you have set on the database that you're trying to create, that's getting the error.
    So, if you can provide more details, in terms of how many other databases are already on this server, and their SGA sizes, and the parameters you've chosen for the database that's failing to create, perhaps we can help more.
    Finally, if you're not using SGA_TARGET or MEMORY_TARGET, you really need to take the time to configure hugepages. Particularly if you've got a server that has as much memory as you do, and you're planning to have non-trivially sized SGA (10s of GB), then you really want to configure hugepages.
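    Roughly, the hugepages setup looks like this (the numbers are only an illustration for a hypothetical 12GB SGA; size them to your real SGA):
    # with 2MB hugepages (see Hugepagesize in /proc/meminfo): 12GB / 2MB = 6144 pages, plus headroom
    # /etc/sysctl.conf:
    #   vm.nr_hugepages = 6200
    #   vm.hugetlb_shm_group = <gid of the oracle user's group>
    # /etc/security/limits.conf (memlock is in KB and must cover the SGA):
    #   oracle soft memlock 12582912
    #   oracle hard memlock 12582912
    sysctl -p
    grep Huge /proc/meminfo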
    Hope that helps,
    -Mark

  • ORA-27102: out of memory,  when I try to increase the SGA in 10gR2 linux 64

    Hi, I'm trying to increase the sga_max_size parameter, but when I start up the DB the following error appears:
    ORA-27102: out of memory... The maximum value is approximately 3.6 GB; if I increase it beyond this value, the DB does not start and this error appears.
    My DB version is 10.2.0.4.0
    My OS: Enterprise Linux Enterprise Linux AS release 4 (October Update 6)
    The linux Kernel: 2.6.9-67.0.0.0.1.ELsmp x86_64 GNU/Linux
    Physical memory: 14 GB
    My init.ora:
    orcl10.__db_cache_size=2113929216
    orcl10.__java_pool_size=33554432
    orcl10.__large_pool_size=16777216
    orcl10.__shared_pool_size=301989888
    orcl10.__streams_pool_size=16777216
    *.audit_file_dest='/home/oracle/product/bbdd/admin/orcl10/adump'
    *.background_dump_dest='/home/oracle/product/bbdd/admin/orcl10/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='/home/oracle/product/oradata/orcl10/control01.ctl','/home/oracle/product/oradata/orcl10/control02.ctl','/home/oracle/product/oradata/orcl10/control03.ctl'
    *.core_dump_dest='/home/oracle/product/bbdd/admin/orcl10/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='orcl10'
    *.db_recovery_file_dest='/home/oracle/product/bbdd/flash_recovery_area'
    *.db_recovery_file_dest_size=96636764160
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=orcl10XDB)'
    *.job_queue_processes=10
    *.open_cursors=300
    *.pga_aggregate_target=830472192
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.session_cached_cursors=50
    *.sga_max_size=3670016000
    *.sga_target=3670016000
    *.shared_pool_size=267386880
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='/home/oracle/product/bbdd/admin/orcl10/udump'
    My sysctl.conf
    # Kernel sysctl configuration file for Enterprise Linux
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename.
    # Useful for debugging multi-threaded applications.
    kernel.core_uses_pid = 1
    # Kernel parameters for rac-oracle
    kernel.shmall = 2097152
    kernel.shmmax = 10737418240
    kernel.shmmni = 4096
    # semaphores: semmsl semmns semopm semmni
    kernel.sem = 256 32000 100 128
    #kernel.msgmnb = 65535
    #kernel.msgmni = 2878
    #kernel.msgmax = 8192
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    Any ideas on how to resolve this problem? Thank you.

    Sorry, the previous init.ora was old; this is the current init.ora file, with shared_pool_size=0:
    orcl10.__db_cache_size=3271557120
    orcl10.__java_pool_size=33554432
    orcl10.__large_pool_size=16777216
    orcl10.__shared_pool_size=335544320
    orcl10.__streams_pool_size=0
    *.audit_file_dest='/home/oracle/product/bbdd/admin/orcl10/adump'
    *.background_dump_dest='/home/oracle/product/bbdd/admin/orcl10/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='/home/oracle/product/oradata/orcl10/control01.ctl','/home/oracle/product/oradata/orcl10/control02.ctl','/home/oracle/product/oradata/orcl10/control03.ctl'
    *.core_dump_dest='/home/oracle/product/bbdd/admin/orcl10/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='orcl10'
    *.db_recovery_file_dest='/home/oracle/product/bbdd/flash_recovery_area'
    *.db_recovery_file_dest_size=96636764160
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=orcl10XDB)'
    *.job_queue_processes=10
    *.open_cursors=300
    *.pga_aggregate_target=2147483648
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.session_cached_cursors=50
    *.sga_max_size=3670016000
    *.sga_target=3670016000
    *.shared_pool_size=0
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='/home/oracle/product/bbdd/admin/orcl10/udump'
    This is the result of the free command:
    [oracle@edubi dbs]$ free
    total used free shared buffers cached
    Mem: 14366428 13844616 521812 0 65468 11599248
    -/+ buffers/cache: 2179900 12186528
    Swap: 10482404 44200 10438204
    The result of cat /proc/meminfo:
    [oracle@edubi dbs]$ cat /proc/meminfo
    MemTotal: 14366428 kB
    MemFree: 123396 kB
    Buffers: 67076 kB
    Cached: 11823056 kB
    SwapCached: 1504 kB
    Active: 9671280 kB
    Inactive: 3965444 kB
    HighTotal: 0 kB
    HighFree: 0 kB
    LowTotal: 14366428 kB
    LowFree: 123396 kB
    SwapTotal: 10482404 kB
    SwapFree: 10438432 kB
    Dirty: 868 kB
    Writeback: 56 kB
    Mapped: 5582756 kB
    Slab: 350380 kB
    CommitLimit: 17665616 kB
    Committed_AS: 13194688 kB
    PageTables: 212384 kB
    VmallocTotal: 536870911 kB
    VmallocUsed: 267732 kB
    VmallocChunk: 536603127 kB
    HugePages_Total: 0
    HugePages_Free: 0
    Hugepagesize: 2048 kB
    The result of the vmstat command:
    [oracle@edubi dbs]$ vmstat
    procs -----------memory---------- ---swap-- -----io---- system ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    1 1 43896 58332 67220 11686176 0 0 166 195 15 10 2 0 94 3
    thank you

  • ORA-27102: out of memory. Failed to install Oracle 10gR2 on Solaris 10

    Hi, I want to install Oracle on my Solaris machine. I have 2.5GB RAM and a swap file of more than 5GB. But the ORA-27102: out of memory error occurred at the Oracle Database Configuration Assistant step of the installation (Copying database files / Creating and starting Oracle instance).
    The only warnings are below:
    Checking kernel parameters
    Checking for BIT_SIZE=64; found BIT_SIZE=64. Passed
    Checking for shmsys:shminfo_shmmax=4294967295; found no entry. Failed <<<<
    Checking for shmsys:shminfo_shmmni=100; found no entry. Failed <<<<
    Checking for semsys:seminfo_semmni=100; found no entry. Failed <<<<
    Checking for semsys:seminfo_semmsl=256; found no entry. Failed <<<<
    Check complete. The overall result of this check is: Failed <<<<
    Problem: The kernel parameters do not meet the minimum requirements (see above).
    What's the reason and how may I resolve it?
    Thanks!

    I set some kernel parameters, like set shmsys:shminfo_shmmax=4294967295 and so on, restarted the computer, ran dbca again and configured in advanced mode. There is a stage that asks for the size of the flash recovery area, for which the default is 2048M. WOW, out of memory on my PC, of course.
    So I think it was this parameter value during the installation process that brought on the ORA-27102 problem. However, during the first installation there was no prompt for me to enter the parameter, or I didn't see it.
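    For anyone else hitting this, the /etc/system entries corresponding to the failed checks above would look roughly like this (these are only the minimum values the installer was checking for, not tuned recommendations; a reboot is needed for them to take effect):
    set shmsys:shminfo_shmmax=4294967295
    set shmsys:shminfo_shmmni=100
    set semsys:seminfo_semmni=100
    set semsys:seminfo_semmsl=256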

  • ORA-27102: out of memory error on Solaris 10 SPARC zone install of Oracle 10

    All
    I'm stuck!! Tried for days now, and can't get this working.
    I'm getting the classic "ORA-27102: out of memory" error. However, my memory settings seem fine.
    I'm running Solaris 10 on a Zone, while installing Oracle 10.2.0.1.
    I've changed the max-memory to 6 GB, but still I get this error message.
    Could it be possible that this error message means something else entirely?
    Thank you for your assistance!!
    Anne

    Hi V
    Thanks for the response.
    I fixed the problem. Turns out it was because the physical memory for the box is 16 GB, but my max-shm-memory was only at 6 GB. I upped it to 8 GB, and everything worked.
    I'm sure you were going to tell me to do just that! I found another post that explained it.
    Thanks!
    Anne

  • Oracle Error 'ORA-27102: out of memory' - Shared memory parameters correct.

    Advice please!
    We’ve recently shut down our Oracle test server in order to increase file system capacity. When we rebooted, some of the databases wouldn’t start up. It started the first 4 instances and then errored out saying “ORA-27102: out of memory”.
    I’m pretty sure it’s nothing to do with the file system because we actually reverted back to the old file system and the databases still wouldn’t start.  I think it’s more likely that something’s gone awry whilst the databases were actually running, and the problem has only manifested itself once we stopped and restarted them.
    I have researched the error and found this article and similar:  http://var-adm.blogspot.co.uk/2013/04/adjust-solaris-10-shared-memory-to.html 
    Everything suggests that Oracle is trying to create a larger shared memory segment than is allowed.  The thing is, we’ve never changed our shared memory settings, and one minute it was working, the next it isn’t.  To confirm this I checked the shared memory, which is as follows:
    sswift4# prctl -n project.max-shm-memory $$
    process: 926: bash
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      7.64GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    As suggested in the above article, I checked the alert log and found the ‘WARNING: EINVAL’ message which is as follows:
    WARNING: EINVAL creating segment of size 0x000000005e002000
    Converting this to decimal, it's trying to create a segment of about 1.5 GB, well within the shared memory settings, which suggests that this isn't the problem.
    We are running Oracle 10g and 11g on Solaris 10 SPARC. The error does not seem to be instance-specific; we have 8 instances on this box, all with an SGA max of 2000M. The server has 32GB of memory available.
    Any advice would be helpful.
    Thanks in advance.
    Debs

    Thanks for your quick responses - we have now resolved the issue.
    The shared memory value was set on the command line but not saved.
    Therefore, once we rebooted, it lost the configuration. This has been altered by our UNIX admin and all DBs have started without issue.
    Thanks
    Debs
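    For anyone else bitten by this: a prctl command only changes the running value, so to survive a reboot the limit also has to be written to /etc/project. A quick check and a persistent fix might look like this (the project name here is just an example):
    grep max-shm-memory /etc/project
    projmod -s -K "project.max-shm-memory=(privileged,8GB,deny)" user.oracle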

  • ORA-27102: out of memory SVR4 Error: 22: Invalid argument

    Hi all,
    I'm doing an install of a Solaris 10.2, Oracle 10.2 system. During the Create Database phase, I am getting;
    ORA-27102: out of memory SVR4 Error: 22: Invalid argument
    Doing some research, and reading through the details here:
    Link: http://technopark02.blogspot.com/2006/09/solaris-10oracle-fixing-ora-27102-out.html
    I think my issue is my SHM parameters, reinforced by the repeated entry in the Oracle Alert log:
    WARNING: EINVAL creating segment of size 0x0000000085000000
    fix shm parameters in /etc/system or equivalent
    when the create fails.
    I am not familiar with Solaris' new project mechanism, although from what I have read, it seems to be set up properly.
    Here are my server details:
    # prtconf | grep Mem
    Memory size: 8192 Megabytes
    # prctl -n project.max-shm-memory -i project 200
    project: 200: QBI
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      10.0EB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    And as for Oracle:
    shared_pool_size = 1522029035
    shared_pool_reserved_size = 152202903
    pga_aggregate_target = 2705829396
    sga_max_size = 3439329280
    db_cache_size = 1159641169
    During the course of troubleshooting, I have:
    1. Increased the amount of SHM allocated in the project. I have tried 16GB, 8GB, 10GB, 11GB, etc., to no effect, so I reset it to 10GB (as seen above) and focused my efforts elsewhere.
    2. SHARED_POOL_SIZE - I have decreased this by roughly 75% from the original value, again to no effect.
    3. PGA and SGA sizes - I have increased these from the original values by an increment of 25%.
    Following the advice from the referenced blog (which does a good job of explaining the logic behind the actions), I have determined that the alert log error message is telling me it failed to create a segment of 2231369728 bytes (the decimal conversion of 0x85000000, roughly 2 GB).
    I've increased my project allocation and the PGA sizes; did I just not do it enough?
    Any advice?
    Thanks for any input,
    Troy Shane

    Hi,
    Check the following SAP notes:
    Note 546006 - Problems with Oracle due to operating system errors
    Note 743328 - Composite SAP note: ORA-27102
    regards,
    kaushal

  • ORA-27102: out of memory error associated with SGA increase.

    Hi members,
    We are using Oracle 10.2.0.3 on Windows 2003 Server 32-bit. The total RAM on the box is 32 GB. Current SGA is 1700M. PGA is 700M.
    The issue is with one query that hangs completely when run on this Windows database but runs fine on an Oracle 10.2.0.3 database on Solaris 10. The SGA on Solaris is 3GB; the PGA is 700M. The record counts of the tables that this query uses are the same in both databases.
    Even when no other queries are running in the Windows database, this query still hangs. Is an SGA increase recommended in this situation? I have already increased it to 1700M and the query still runs slow. I don't think the SGA will improve it, and I asked the developer to tune the query. Please let me know some of your thoughts. Is 3GB, PAE, or AWE the recommended approach? I do not want to change them unless I tune the query.
    I tried increasing it to 3GB just to give it a try, and as I expected I ran into the ORA-27102 error, so I brought the SGA back to 1700M. What is the hard limit of the SGA on Windows 2003 Server 32-bit?
    ORA-27102: out of memory
    OSD-00029: additional error information
    O/S-Error: (OS 8) Not enough storage is available to process this command.
    All your suggestions will be appreciated.
    Regards.

    Everything I've been able to read points to enabling PAE. I'm only assuming it's not enabled because you mentioned wanting to tune the query before looking into AWE, PAE, etc. I don't think Oracle knows it has more than 4GB of memory available, or it wouldn't/shouldn't be complaining.
    I think once you enable PAE you will no longer get this error. Otherwise, I believe the limit is 2048MB.
    I would agree that a query hanging on one DB and not on another is related to the amount of memory allocated. For testing, several of our databases have far less memory available than our production databases. Performance changes significantly...especially (obviously) with larger queries.
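    For what it's worth, the usual 32-bit Windows pieces look roughly like this (treat it as a sketch and check the Oracle Windows documentation for your release; the ARC path is whatever your existing OS line is, and the buffer count is only an illustration):
    boot.ini - add /PAE to the OS line, e.g.
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /PAE
    init.ora - with AWE the buffer cache is sized in blocks via db_block_buffers instead of db_cache_size:
    use_indirect_data_buffers=true
    db_block_buffers=262144
    The account running the Oracle service also needs the "Lock Pages in Memory" privilege.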

  • Oracle is not starting (ORA-27102: out of memory)

    Hi All,
    We have successfully installed SAP NetWeaver EHP1 (SR1) (ABAP and Java) with SunOS & Oracle 10 on the same server.
    After installation we stopped the Java instance and tried to start it again, but the database is not starting and it is showing the error:
    SQL> startup
    ORA-27102: out of memory
    SVR4 Error: 22: Invalid argument
    Then we added a project in /etc/project:
    NWJ:500:SAP System NWJ:nwjadm,oranwj::process.max-sem-nsems=(priv,2048,deny);project.max-sem-ids=(priv,1024,deny);project.max-shm-ids=(priv,256,deny);project.max-shm-memory=(priv,18446744073709551615,deny)
    but it doesn't resolve the issue.
    We have even stopped the ABAP instance, including the database, but no use.
    Can anyone provide the solution ASAP?
    Regards,
    Venu

    Hello,
    I've hit this ORA-27102 several times when installing on Solaris 10. The problem was that SAPINST (which runs as root) runs its child processes under the root project, even though they have user ID ora<sid> or <sid>adm. Because the root project often has shared memory limited to 2 GB, Oracle fails with ORA-27102 if it needs more than 2 GB.
    I'm not quite sure that the above problem corresponds with yours, though. Is SAPINST finished, and are you logged in as ora<sid> or <sid>adm and running SQLPLUS manually? If so, make sure that you are indeed running in the correct project. Commands that can help:
    id -p
    ps -e -o pid,comm,project | grep sqlplus
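    If it turns out the shell is under the wrong project, you can start sqlplus under the right one, for example (using the NWJ project from your /etc/project entry):
    newtask -p NWJ sqlplus / as sysdba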
    Regards,
    Mark

  • Database creation = ora-27102 out of memory

    Hi,
    I have a Solaris SPARC 9.5 system.
    Memory size: 16384 Megabytes
         swapfile dev swaplo blocks free
    /dev/dsk/c1t0d0s1 32,25 16 1068464 1068464
         And when I try to create a database with the following configuration
         DUMMY AREA NAME SUM(BYTES)
    2 Shared Pool shared pool 603979776
    3 Large Pool large pool 352321536
    4 Java Pool java pool 33554432
    5 Redo Log Buffer log_buffer 787456
    6 Fixed SGA fixed_sga 731328
    => ORA-27102: out of memory
         Help me please

    The error is reported by Oracle during allocation of the SGA and will happen
    in cases where the kernel parameter SHMMAX/SHM_MAX is not set high enough.
    The SHMMAX kernel parameter decides the maximum size of a shared memory segment
    that can be allocated in the system. Since Oracle implements SGA using shared
    memory, this parameter should be set appropriately. The value of the SHMMAX
    kernel parameter should be higher than the maximum of SGA sizes of the Oracle
    instances used in the server. In cases where the SHMMAX is smaller than the SGA
    size, Oracle tries to fit the entire SGA into a single shared memory segment,
    which will fail, and you will see the warning message in the alert.log.
    The recommended value for this parameter is 4294967295 (4 GB), or the size of
    physical memory, or half the size of physical memory, depending on platform.
    Setting the SHMMAX to recommended value in the kernel parameter configuration
    file and rebooting the server will get rid of the warning messages. See the
    platform specific Oracle installation guide for detailed information on how to
    modify the SHMMAX/SHM_MAX kernel parameter.
    General guidelines for SHMMAX on common platforms
    (check with your vendor for maximum settings):
    Platform: Solaris/Sun - Recommended value: 4 GB or the max SGA size, whichever is higher

  • Create database fails with ORA-27102 -out of memory

    Hi,
    I have a Solaris 10 server with 16 GB RAM. On it there are 10 databases running (8 of them 9.2.0.7 and 2 of them 10.2.0.4), but they have small SGAs, ~300 MB each (some even smaller, 200 MB or so). Now I have to create two more databases on it. When I try to create the DB, it fails with the error:
    Connected to an idle instance.
    ORA-27102: out of memory
    SVR4 Error: 22: Invalid argument
    And the alert log has messages as below:
    Starting ORACLE instance (normal)
    Tue May 26 07:37:39 2009
    WARNING: EINVAL creating segment of size 0x0000000029002000
    fix shm parameters in /etc/system or equivalent
    Also see the output of this command:
    prctl -n project.max-shm-memory -i project user.root
    project: 1: user.root
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
    privileged      3.92GB      -   deny                                 -
    system          16.0EB    max   deny                                 -
    Now I tried to change this with this command (as suggested in the installation guide):
    prctl -n project.max-shm-memory -v 8gb -r -i project user.root
    but still I get the same error. So I referred to Metalink document 399895.1. It says to manually change the settings in /etc/system. This needs a reboot, and I got permission to do the reboot tomorrow. But my question is: what are the values that I should be putting in this file? As suggested in the note, should I put the values below?
    For example, the sample values mentioned in the note, for an /etc/system entry setting SHMMAX = 6GB, are:
    set shmsys:shminfo_shmmax=6442450944
    set semsys:seminfo_semmni=1024
    set semsys:seminfo_semmsl=1024
    set shmsys:shminfo_shmmni=100
    Or should I put some other values (for all the parameters like semmni, semmsl, etc.)? I am not clear which values I should be specifying.
    Thanks

    Wow! Your help comes like an angel's helping hand! Thank you so much. I am not very knowledgeable on Solaris, so a few questions:
    Currently there is no project set up for oracle on this server, so the steps I need to do - to make the changes permanent - are:
    # projadd -c "Oracle" 'user.oracle'
    # projmod -s -K "project.max-shm-memory=(privileged,8GB,deny)" 'user.oracle'
    correct?
    thanks a lot again

  • Dbca failure (9i install, Solaris 64): can't find oratab, and ORA-27102 out of memory

    Trying to install 9i Enterprise Edition on Solaris 9 (64-bit), I get two messages while running the Database Configuration Assistant:
    "/var/opt/oracle/oratab file not found"
    followed later by
    "ORA-27102 out of memory"
    1. I searched the entire system (all drives, directories) but found no such file oratab.
    2. ?
    What to do?
    Thanks,
    Joe

    Yes, the missing oratab problem was solved by running root.sh.
    It had been missed on the first install attempt.
    Thanks for your help!
    Hi,
    I think there is some problem with the installation, because oratab is a file created by root.sh and updated by the Database Configuration Assistant when creating a database.
    So you need oratab in /var/opt/oracle on Solaris, as the Database Configuration Assistant looks there for oratab.
    Either copy this file from somewhere and try that, or try to reinstall.
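    For reference, an oratab entry is one line per database in the form SID:ORACLE_HOME:startup-flag, for example (the SID and path here are only examples):
    orcl:/u01/app/oracle/product/9.2.0:Y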
    Please reply with the following:
    1) Which user and group did you use to install Oracle?
    2) Check whether root.sh is available in your $ORACLE_HOME and/or in $ORACLE_HOME/install/util.
    Yogi

  • Oracle 9i Database installation error ORA-27102: out of memory HELP

    Hello
    Apologies if this post has been answered already, or if I am meant to post some data capture to show what the issue is; however, I am a bit unsure what I need.
    I have downloaded Oracle 9i for my university course, as I need it to do some SQL and Forms building.
    I have had a lot of issues but I have battled through them - however now I am stuck on this one.
    I install Oracle and then the below:
    Install Oracle Database 9.2.0.1.0
    Personal Edition 2.80gb
    General Purpose
    I leave the default port
    Set my database name
    Select the location
    Character set etc
    Then the Database Configuration Assistant starts to install the new database; at 46% I get the error in a pop-up window:
    ORA-27102: out of memory
    How can I resolve this??
    I am a mainframe programmer and not at all in any way a Windows whizz - please could someone help a dummy understand?
    Again thank you all very much

    You have too little RAM on your machine; even if you could successfully create an instance, it's going to be slow as hell.
    When you run DBCA to create the database, instead of actually creating the database you can choose to dump the SQL scripts and files used for database creation to a directory. This gives you a chance to modify the pfile and reduce the SGA parameters. I believe the default SGA of an instance created by DBCA is already beyond your RAM limit.
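    For example, in the generated init.ora you would shrink the SGA components to something your RAM can hold; the values below are only an illustration:
    shared_pool_size=50331648
    db_cache_size=67108864
    java_pool_size=33554432
    large_pool_size=8388608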

  • ORA-27102: out of memory SVR4 Error: 12: Not enough space

    We got an image copy of one of the production servers, which runs on Solaris 9, and our SA guys restored it and handed it over to us (DBAs). There is only one database running on the source server. I have to bring up the database on the new server. While starting up the database I'm getting the following error:
    ====================================================================
    SQL*Plus: Release 10.2.0.1.0 - Production on Fri Aug 6 16:36:14 2010
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup
    ORA-27102: out of memory
    SVR4 Error: 12: Not enough space
    SQL>
    ====================================================================
    ABOUT THE SERVER AND DATABASE
    Server:
    uname -a
    SunOS ush** 5.9 Generic_Virtual sun4u sparc SUNW,T5240
    Database: Oracle 10.2.0.1.0
    I'm giving the "top" command output below:
    Before attempting to start the database:
    load averages: 2.85, 9.39, 5.50 16:35:46
    31 processes: 30 sleeping, 1 on cpu
    CPU states: 98.9% idle, 0.7% user, 0.4% kernel, 0.0% iowait, 0.0% swap
    Memory: 52G real, 239G free, 49M swap in use, 16G swap free
    The moment I run the "startup" command:
    load averages: 1.54, 7.88, 5.20 16:36:44
    33 processes: 31 sleeping, 2 on cpu
    CPU states: 98.8% idle, 0.0% user, 1.2% kernel, 0.0% iowait, 0.0% swap
    Memory: 52G real, 224G free, 15G swap in use, 771M swap free
    And I compared the semaphores and kernel parameters in /etc/system. Both are identical.
    And ulimit -a gives the output below:
    root@ush**> ulimit -a
    time(seconds) unlimited
    file(blocks) unlimited
    data(kbytes) unlimited
    stack(kbytes) 8192
    coredump(blocks) unlimited
    nofiles(descriptors) 256
    memory(kbytes) unlimited
    root@ush**>
    and ipcs shows nothing, as below:
    root@ush**> ipcs
    IPC status from <running system> as of Fri Aug 6 19:45:06 PDT 2010
    T ID KEY MODE OWNER GROUP
    Message Queues:
    Shared Memory:
    Semaphores:
    Finally, the alert log shows nothing but "instance starting"...
    Please let us know where else I should check for the root cause... Thank you.

    and I compared the Semaphores and Kernel Parameters in /etc/system. Both are Identical.
    Are identical initSID.ora or spfile files being used to start the DB?
    Clues indicate Oracle is requesting more shared memory than the OS can provide.
    Do any additional clues exist within the alert_SID.log file?
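    It is also worth checking swap, since SVR4 error 12 (ENOMEM) at startup usually means the SGA allocation could not be backed by available swap/memory:
    swap -s
    swap -l
    prctl -n project.max-shm-memory $$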

  • ORA-27102: out of memory

    I have a Linux x86 64-bit server with Oracle 10.1. The RAM is 16GB, and there are 12 databases running on it, each with a 600M SGA and 100M PGA. I encounter the following error when trying to start a new instance:
    SQL*Plus: Release 10.1.0.4.0 - Production on Mon Jul 23 09:47:42 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup nomount
    ORA-27102: out of memory
    Linux-x86_64 Error: 28: No space left on device
    SQL> !oerr ora 27102
    27102, 00000, "out of memory"
    // *Cause: Out of memory
    // *Action: Consult the trace file for details
    SQL>
    Following are the settings in /etc/sysctl.conf:
    # Oracle Setup
    kernel.sem = 250 32000 100 128
    kernel.shmmax = 8589934592
    kernel.shmmni = 4096
    kernel.shmall = 2097152
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    Thanks in advance.

    Output from "free" and "ipcs" command will be also helpful.
    Message was edited by:
    Ivan Kartik
    I have changed the post because I had posted totally wrong info (maybe I need a coffee break).
    SHMMAX is the setting for the maximum size of a shared memory segment.
    So Terry's post/answer is relevant to this problem.
