Multiple shared memory segments per instance (redhat AS2.1)

We are having some trouble with Oracle 9.2.0 on Red Hat Linux Advanced Server. ipcs shows that quite a lot of shared memory segments are allocated for a single instance, which is strange since there should be only one. Has anyone out there encountered similar problems?
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 12845056 oracle 640 4194304 13
0x00000000 12877825 oracle 640 33554432 13
0x00000000 12910594 oracle 640 25165824 13
0x00000000 12943363 oracle 640 20971520 13
0x00000000 12976132 oracle 640 29360128 13
0x00000000 13008901 oracle 640 29360128 13
0x00000000 13041670 oracle 640 20971520 13
0x00000000 13074439 oracle 640 33554432 13
0xe7f3c788 13107209 oracle 640 33554432 65

The maximum size of the shared memory segment is too small. Since Oracle cannot fit the SGA into one large shared memory segment, it allocates several shared memory segments. The maximum size of your shared memory segments is around 32 MB.
To set the maximum size for a shared memory segment, see http://www.puschitz.com/TuningLinuxForOracle.shtml#SettingSHMMAXParameter
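As a quick illustration (the 512 MB value below is only an example; choose something comfortably larger than your SGA), the current limit can be checked and raised on the fly via /proc, then made permanent in /etc/sysctl.conf:
$ cat /proc/sys/kernel/shmmax
# echo 536870912 > /proc/sys/kernel/shmmax
and, so the setting survives a reboot, add this line to /etc/sysctl.conf and run sysctl -p as root:
kernel.shmmax = 536870912
# sysctl -p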
Werner

Similar Messages

  • How to find which shared memory segment corresponds to which instance.

    If you have two or more instances running on a Unix machine, is there a way to find out which shared memory segments and which semaphores are allocated to each? You can run "ipcs -a", which lists all the shared memory segments and semaphores, but how do you find out which ones belong to which instance?
    Thanks
    Devinder
    PS: I don't know whether this is the correct discussion forum for this, as I could not find anything related to it.

    if you type ipcs -a you could see:
    $ ipcs -a
    IPC status from <running system> as of Tue Jan 30 12:02:20 EST 2001
    Message Queue facility inactive.
    T ID KEY MODE OWNER GROUP CREATOR CGROUP NATTCH SEGSZ CPID LPID ATIME DTIME CTIME
    Shared Memory:
    m 0 0x500008c7 rw-rr-- root root root root 1 68 373 373 11:48:11 11:48:11 11:48:11
    m 1 0xb9359140 rw-r--- oracle dba oracle dba 28 109568000 593 8631 11:29:43 12:01:36 11:49:06
    m 2 0x21d38b58 rw-r--- oracle dba oracle dba 53 83746816 654 8630 12:01:15 12:01:15 11:49:58
    T ID KEY MODE OWNER GROUP CREATOR CGROUP NSEMS OTIME CTIME
    Semaphores:
    s 196608 0xbfb87050 ra-r--- oracle dba oracle dba 54 12:01:37 11:49:06
    s 196609 0xcc222a00 ra-r--- oracle dba oracle dba 104 11:53:31 11:49:59
    Note that CTIME shows when the entry was created; the memory segment (m) and semaphore (s) entries for a given instance are usually created within a second of each other, at 11:49:06 for one instance and 11:49:59 for the other in my case.
    If I look in my alert_log for each instance I will see:
    --Sun Jan 28 11:49:04 2001
    --Starting ORACLE instance (normal)
    and in the other file
    --Sun Jan 28 11:49:58 2001
    --Starting ORACLE instance (normal)
    This lets me know which instance uses which semaphores/memory segments.
    Just be sure to start your instances at least 30 seconds apart so the times are different enough.
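    As an alternative to matching timestamps, Oracle also ships a small utility called sysresv that prints the shared memory and semaphore IDs belonging to whichever instance your environment currently points at. A rough sketch (the SID below is just a placeholder):
    $ ORACLE_SID=PROD1; export ORACLE_SID        # placeholder SID
    $ $ORACLE_HOME/bin/sysresv
    Run it once per instance, switching ORACLE_SID each time, and you get the same mapping without comparing CTIME values.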
    enjoy.

  • Redhat: TT0837: Cannot attach data store shared-memory segment, error 12

    The customer has two systems, one Solaris and one Linux. We have six DSNs, with one DSN's PermSize at 1.85 GB. Both systems are 32-bit. After migrating from TimesTen 6.0 to 11.2, I cannot get replication working on the Linux system for the 1.85 GB DSN. The Solaris system is working correctly. I've been able to duplicate the issue in our lab as well. If I drop the PermSize down to 1.0 GB, replication starts. I've tried changing multiple parameters, including setting up HugePages.
    What else could I be missing?  Decreasing the PermSize is not an option for this customer.   Going to a full 64-bit system is on our development roadmap but is at least a year away due to other commitments.
    This is my current linux lab configuration.
    ttStatus output for the failed Subscriber DSN and a working DynamicDB DSN.    As you can see, the policy is set to "Always" but it has no Subdaemon or Replication processes running.
    Data store /space/Database/db/Subscriber
    There are no connections to the data store
    Replication policy  : Always
    Replication agent is running.
    Cache Agent policy  : Manual
    Data store /space/Database/db/DynamicDB
    There are 14 connections to the data store
    Shared Memory KEY 0x5602000c ID 1826586625 (LARGE PAGES, LOCKED)
    Type            PID     Context     Connection Name              ConnID
    Replication     88135   0x56700698  LOGFORCE                          4
    Replication     88135   0x56800468  REPHOLD                           3
    Replication     88135   0x56900468  TRANSMITTER                       5
    Replication     88135   0x56a00468  REPLISTENER                       2
    Subdaemon       86329   0x08472788  Manager                        2032
    Subdaemon       86329   0x084c5290  Rollback                       2033
    Subdaemon       86329   0xd1900468  Deadlock Detector              2037
    Subdaemon       86329   0xd1a00468  Flusher                        2036
    Subdaemon       86329   0xd1b00468  HistGC                         2039
    Subdaemon       86329   0xd1c00468  Log Marker                     2038
    Subdaemon       86329   0xd1d00468  AsyncMV                        2041
    Subdaemon       86329   0xd1e00468  Monitor                        2034
    Subdaemon       86329   0xd2000468  Aging                          2040
    Subdaemon       86329   0xd2200468  Checkpoint                     2035
    Replication policy  : Always
    Replication agent is running.
    Cache Agent policy  : Manual
    Summary of Perm and Temp Sizes of each system. 
    PermSize=100
    TempSize=50
    PermSize=100
    TempSize=50
    PermSize=64
    TempSize=32
    PermSize=1850    => Subscriber
    TempSize=35     => Subscriber
    PermSize=64
    TempSize=32
    PermSize=200
    TempSize=75
    [SubscriberDir]
    Driver=/opt/SANTone/msc/active/TimesTen/lib/libtten.so
    DataStore=/Database/db/Subscriber
    AutoCreate=0
    DurableCommits=0
    ExclAccess=0
    LockLevel=0
    PermWarnThreshold=80
    TempWarnThreshold=80
    PermSize=1850
    TempSize=35
    ThreadSafe=1
    WaitForConnect=1
    Preallocate=1
    MemoryLock=3
    ###MemoryLock=0
    SMPOptLevel=1
    Connections=64
    CkptFrequency=300
    DatabaseCharacterSet=TIMESTEN8
    TypeMode=1
    DuplicateBindMode=1
    msclab3201% cat ttendaemon.options
    -supportlog /var/ttLog/ttsupport.log
    -maxsupportlogsize 500000000
    -userlog /var/ttLog/userlog
    -maxuserlogsize 100000000
    -insecure-backwards-compat
    -verbose
    -minsubs 12
    -maxsubs 60
    -server 16002
    -enableIPv6
    -linuxLargePageAlignment 2
    msclab3201# cat /proc/meminfo
    MemTotal:       66002344 kB
    MemFree:        40254188 kB
    Buffers:          474104 kB
    Cached:         19753148 kB
    SwapCached:            0 kB
    HugePages_Total:    2000
    HugePages_Free:     2000
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    ## Before loading Subscriber Dsn
    msclab3201# ipcs -m
    ------ Shared Memory Segments --------
    key        shmid      owner      perms      bytes      nattch     status
    0xbc0101d6 1703411712 ttadmin    660        1048576    1
    0x79010649 24444930   root       666        404        0
    ## After loading Subscriber Dsn
    msclab3201# ipcs -m
    ------ Shared Memory Segments --------
    key        shmid      owner      perms      bytes      nattch     status
    0xbc0101d6 1703411712 ttadmin    660        1048576    2
    0x7f020012 1825964033 ttadmin    660        236978176  2
    0x79010649 24444930   root       666        404        0
    msclab3201#
    msclab3201# sysctl -a  | grep huge
    vm.nr_hugepages = 2000
    vm.nr_hugepages_mempolicy = 2000

    The size of these databases is very close to the limit for 32-bit systems and you are almost certainly running into address space issues given that 11.2 has a slightly larger footprint than 6.0. 32-bit is really 'legacy' nowadays and you should move to a 64-bit platform as soon as you are able. That will solve your problems. I do not think there is any other solution (other than reducing the size of the database).
    Chris

  • ORA-27123 unable to attach shared memory segment

    Running Oracle 8.1.5.0.0 on Red Hat 6.0 with kernel 2.2.12, I keep getting the error ORA-27123 (unable to attach shared memory segment) when trying to start up an instance with an SGA larger than about 150 MB. I have modified the shmmax and shmall kernel parameters via the /proc/sys interface. The relevant output of ipcs -l is below:
    ------ Shared Memory Limits --------
    max number of segments = 128
    max seg size (kbytes) = 976562
    max total shared memory (kbytes) = 16777216
    min seg size (bytes) = 1
    This system has 2 GB of physical memory and is doing nothing except Oracle.
    I changed the shmmax and shmall parameters after the instance was created; was there something I needed to do to inform Oracle of the changes?

    Hi JW,
    I had the same problem on my installation.
    The solution is described in the Oracle8i Administrator's Reference on page 1-26, "Relocating the SGA":
    a) determine the valid address range for shared memory with:
    $ tstshm
    in the output, Lowest & Highest SHM indicate the valid address range
    b) run genksms to generate the file ksms.s
    $ cd $ORACLE_HOME/rdbms/lib
    $ $ORACLE_HOME/bin/genksms -b "sga_begin_address" > ksms.s
    c) shut down any instance
    d) rebuild the oracle executable in $ORACLE_HOME/rdbms/lib
    $ make -f ins_rdbms.mk ksms.o
    $ make -f ins_rdbms.mk ioracle
    The result is a new oracle binary that loads the SGA at the address specified as "sga_begin_address".
    regards
    Gerhard

  • 836: Cannot create data store shared-memory segment, error 22

    Hi,
    I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
    I am currently evaluating TimesTen for a global investment organisation. We currently have a large data warehouse, where we utilise summary views and query rewrite, but we have isolated some data that we would like to store in memory and then report on through a J2EE website.
    We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution, but we need to ensure that there are no limits in relation to maximum size. Looking through the documentation, it appears that the only limit on a 64-bit system is the actual physical memory on the box. Sounds good, but we want to prove it, since we would like to see how the application scales when we store about 30 GB (the limit on our UAT environment is 32 GB). The ultimate goal is to see if we can store about 50-60 GB in memory.
    Is this correct? Or are there any caveats in relation to this?
    We have been able to get our data store to hold 8 GB of data, but we want to increase this. I am assuming that the following error message is due to us not changing /etc/system on the box:
         836: Cannot create data store shared-memory segment, error 22
         703: Subdaemon connect to data store failed with error TT836
    Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32 GB of memory and the 12 processors on the box?
    It's quite a big deal for us to bounce the UAT Unix box, so I want to be sure that I have factored in all the changes that would ensure the following:
    * Existing Oracle Database instances are not adversely impacted
    * We are able to create a data store which can fully utilise the physical memory on the box
    * We don't need to change these settings for quite some time, and can still complete our evaluation
    We are currently in discussion with our in-house Oracle team, but need to complete this process before contacting Oracle directly, so help with the above request would help speed this process up.
    The current /etc/system settings are below, and I have put in the current machine's settings as comments at the end of each line.
    Can you please provide the recommended settings to fully utilise the existing 32 GB on the box?
    Machine
    ## I have contrasted the minimum prerequisites for TimesTen and then contrasted it with the machine's current settings:
    SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
    FJSV,SPARC64-V
    System Configuration: Sun Microsystems sun4us
    Memory size: 32768 Megabytes
    12 processors
    /etc/system
    set rlim_fd_max = 1080                # Not set on the machine
    set rlim_fd_cur=4096               # Not set on the machine
    set rlim_fd_max=4096                # Not set on the machine
    set semsys:seminfo_semmni = 20           # machine has 0x42, Decimal = 66
    set semsys:seminfo_semmsl = 512      # machine has 0x81, Decimal = 129
    set semsys:seminfo_semmns = 10240      # machine has 0x2101, Decimal = 8449
    set semsys:seminfo_semmnu = 10240      # machine has 0x2101, Decimal = 8449
    set shmsys:shminfo_shmseg=12           # machine has 1024
    set shmsys:shminfo_shmmax = 0x20000000     # machine has 8,589,934,590. The hexadecimal translates into 536,870,912
    $ /usr/sbin/sysdef | grep -i sem
    sys/sparcv9/semsys
    sys/semsys
    * IPC Semaphores
    66 semaphore identifiers (SEMMNI)
    8449 semaphores in system (SEMMNS)
    8449 undo structures in system (SEMMNU)
    129 max semaphores per id (SEMMSL)
    100 max operations per semop call (SEMOPM)
    1024 max undo entries per process (SEMUME)
    32767 semaphore maximum value (SEMVMX)
    16384 adjust on exit max value (SEMAEM)

    Hi,
    I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
    Your main problem here is that the value of shmsys:shminfo_shmmax in /etc/system is currently set to 8 GB, thereby limiting the maximum size of a single shared memory segment (and hence TimesTen datastore) to 8 GB. You need to increase this to a suitable value (maybe 32 GB in your case). While you are doing that, it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
    You should also make sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
    TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
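    To make that concrete, a rough sketch of how the relevant /etc/system lines might end up looking (the 32 GB shmmax is only an example sized to this machine, and the semaphore values are the documented minimums quoted above; a reboot is needed for /etc/system changes to take effect):
    * allow a single shared memory segment of up to 32 GB
    set shmsys:shminfo_shmmax=34359738368
    * bring the semaphore limits up to the documented minimums
    set semsys:seminfo_semmsl=512
    set semsys:seminfo_semmns=10240
    set semsys:seminfo_semmnu=10240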
    If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
    Regards, Chris

  • Ora- shared memory segment

    Hello, gurus!
    I can connect this way:
    oracle@mypc sqlplus sys/sys1@mydatabase
    While trying to connect this way:
    oracle@mypc sqlplus sys/sys1
    I am getting an error:
    ORA-01034: Oracle not available
    ORA-27123: unable to attach to shared memory segment
    Linux Error: 13 permission denied
    What's wrong?
    I am on LINUX RHEL 4 + Oracle 10.2.0.4
    Thanks in advance.

    user21123, the init.ora and/or spfile have nothing to do with this issue. These errors mean that you are attempting to connect to an Oracle instance that cannot be located. This can be because you have an environment setting error, as the first response mentioned, or because the target instance is not running, as I mentioned.
    Here is the error when the environment variables are set correctly but the database is not started.
    JServer Release 9.2.0.6.0 - Production
    $ print $ORACLE_SID $TWO_TASK
    TRN1
    $ sqlplus mpowel01
    SQL*Plus: Release 9.2.0.6.0 - Production on Fri Aug 22 13:07:48 2008
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Enter password:
    ERROR:
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    IBM AIX RISC System/6000 Error: 2: No such file or directory
    SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
    $ sqlplus /nolog
    SQL*Plus: Release 9.2.0.6.0 - Production on Fri Aug 22 13:08:24 2008
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    > connect / as sysdba
    Connected to an idle instance.
    > startup
    ORACLE instance started.
    Total System Global Area  114468624 bytes
    <snip>
    > exit
    Disconnected from Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.6.0 - Production
    $ sqlplus mpowel01
    SQL*Plus: Release 9.2.0.6.0 - Production on Fri Aug 22 13:11:58 2008
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Enter password:
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.6.0 - Production
    TRN1 >
    HTH -- Mark D Powell --
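    For completeness, a couple of quick, generic checks to confirm which environment the failing session actually picks up and whether any instance is running at all:
    $ env | grep -E 'ORACLE_(SID|HOME)|TWO_TASK'
    $ ps -ef | grep [p]mon
    The [p]mon trick keeps the grep itself out of the listing; if no ora_pmon_<SID> process shows up, the instance simply is not started, which matches the second scenario above.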

  • TNS-01115: OS error 28 creating shared memory segment of 129 bytes

    hi
    We are running Solaris 5.8 with 10 instances of 10.2.0.1 databases, each with its own listener. The system shmmni is 3600, and according to ipcs all of the identifiers are in use, causing the error TNS-01115: OS error 28 creating shared memory segment of 129 bytes.
    The kernel parameters were set to be the same as a similar server we have with the same configuration and more databases, and that box uses only 53 memory segments.
    Does anyone have any ideas as to what would make this happen?

    I wish I could. There was one DB that was not needed, so I just shut it down and stopped the listener, then took an ipcs -m reading. It returned 48 rows, instead of 3603 as it did when that particular DB was up. In my haste I removed the DB, since it was not needed, so I no longer have the logs to research. Too bad on my part.
    Well, at least I have a fix, even if I have no idea why this happened. Thank you for your responses, greatly appreciated.
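    For anyone who hits the same symptom, a rough way to keep an eye on segment consumption against the shmmni ceiling as each database and listener is brought up (this assumes the segments are owned by the oracle user):
    $ ipcs -m | grep oracle | wc -l
    $ /usr/sbin/sysdef | grep -i shmmni
    Watching that count climb after each startup should at least show which instance or listener is responsible for the runaway allocations.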

  • Shared Memory Segments

    I am trying to install the Oracle 8.1.6 database that came with the Solaris 8 Intel package. During the installation it is necessary to set the shared memory parameters. An example of how to do this is found in the Solaris 8 System Admin Guide Vol 2 on page 465. There is a line:
    "set shmsys:shminfo_shmmax = 268435456".
    My question is how "268435456" is computed, or what it is related to. Can someone give me a formula to calculate this from megabytes?
    The Oracle documentation was also vague in this area. Their example simply gave a number as well, a minimal amount required for the database, with no explanation of how it was actually computed.
    Thanks in advance.
    George R. Sealy
    ISS

    Hi,
    As per
    http://docs.sun.com/ab2/coll.707.1/SOLTUNEPARAMREF/%40Ab2PageView/6980?DwebQuery=shmmax&oqt=shmmax&Ab2Lang=C&Ab2Enc=iso-8859-1
    its unit is bytes, so the value above is 256 MB.
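    In other words, you just multiply the number of megabytes out to bytes, e.g.:
    $ expr 256 \* 1024 \* 1024
    268435456
    For a 512 MB limit you would set shmsys:shminfo_shmmax to 536870912, and so on; in practice you want it at least as large as the SGA you plan to configure, so the SGA fits in a single segment.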
    Thanks
    Kalpesh

  • Sysresv returns multiple shared memory IDs for one database

    Shared Memory:
    ID KEY
    8 0x00000000
    9 0x00000000
    10 0x00000000
    13 0x00000000
    14 0xae2ae9d0
    Please see this 'sysresv' output. It returns multiple shared memory IDs for one database (although all of the keys except one are 0). Why is this? Does it matter? It does not seem to be eating up memory. Stopping and starting the database didn't help.

    * System Configuration
    swap files
    swapfile dev swaplo blocks free
    /dev/md/dsk/d101 85,101 16 201342320 201342320
    * Tunable Parameters
    2055864320 maximum memory allowed in buffer cache (bufhwm)
    30000 maximum number of processes (v.v_proc)
    99 maximum global priority in sys class (MAXCLSYSPRI)
    29995 maximum processes per user id (v.v_maxup)
    30 auto update time limit in seconds (NAUTOUP)
    25 page stealing low water mark (GPGSLO)
    1 fsflush run rate (FSFLUSHR)
    25 minimum resident memory for avoiding deadlock (MINARMEM)
    25 minimum swapable memory for avoiding deadlock (MINASMEM)
    CO4P:/opt/oracle:>ipcs -ma
    IPC status from <running system> as of Monday, April 21, 2008 2:50:10 PM PDT
    T ID KEY MODE OWNER GROUP CREATOR CGROUP NATTCH SEGSZ CPID LPID ATIME DTIME CTIME
    Shared Memory:
    m 14 0xae2ae9d0 rw-r--- oracle dba oracle dba 481 24576 20322 25104 14:50:10 14:50:10 11:24:46
    m 13 0 rw-r--- oracle dba oracle dba 481 2030043136 20322 25104 14:50:10 14:50:08 11:24:43
    m 10 0 rw-r--- oracle dba oracle dba 481 2030043136 20322 25104 14:50:10 14:50:08 11:24:40
    m 9 0 rw-r--- oracle dba oracle dba 481 2013265920 20322 25104 14:50:10 14:50:08 11:24:37
    m 8 0 rw-r--- oracle dba oracle dba 481 2063597568 20322 25104 14:50:10 14:50:08 11:24:34
    m 0 0xcace --rw-rw-rw-     root     root     root     root      0          2  2344  2344 14:49:57 14:49:57  9:35:43

  • Max attached shared memory segments

    Hi all.
    I have a client/server model using SunOs 5.8 in which the server creates one shared memory segment in a single process and attaches that segment to all connected clients' shared memory.
    My question is: how many clients can I support with this model? The maximum number of shared memory segments that can be attached is 6 per process according to sysdef, but so far I have attached 8 (meaning I have connected 8 clients to the server and attached the server's shared memory segment in each of the eight clients). Therefore I have 8 attachments to the server's shared memory segment, which is more than 6. The system value of SHMSEG has not been changed. Any info would be appreciated.
    Thanks
    Roy Park
    [email protected]

    I am not sure if I am reading your setup correctly, but it sounds like you may only be allocating one shared memory segment. That one shared memory segment is then attached by all of the clients. If that is the case, you can attach a virtually unlimited number of clients.
    The shmseg parameter limits the number of segments which can be attached by a single process. In other words, if the clients each created a shared memory segment and the server attached to all of those, then the limit would have been reached. I would like to see a little more information if that is the case. (ipcs output to start)
    Alan
    Sun Developer Technical Support
    http://www.sun.com/developers/support

  • Cannot create data store shared-memory segment error

    Hi,
    Here is some background information:
    [ttadmin@timesten-la-p1 ~]$ ttversion
    TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
    Instance admin: ttadmin
    Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
    Group owner: ttadmin
    Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
    PL/SQL enabled.
    [ttadmin@timesten-la-p1 ~]$ uname -a
    Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
    [root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
    68719476736
    [ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
    MemTotal: 148426936 kB
    MemFree: 116542072 kB
    Buffers: 465800 kB
    Cached: 30228196 kB
    SwapCached: 0 kB
    Active: 5739276 kB
    Inactive: 25119448 kB
    HighTotal: 0 kB
    HighFree: 0 kB
    LowTotal: 148426936 kB
    LowFree: 116542072 kB
    SwapTotal: 16777208 kB
    SwapFree: 16777208 kB
    Dirty: 60 kB
    Writeback: 0 kB
    AnonPages: 164740 kB
    Mapped: 39188 kB
    Slab: 970548 kB
    PageTables: 10428 kB
    NFS_Unstable: 0 kB
    Bounce: 0 kB
    CommitLimit: 90990676 kB
    Committed_AS: 615028 kB
    VmallocTotal: 34359738367 kB
    VmallocUsed: 274804 kB
    VmallocChunk: 34359462519 kB
    HugePages_Total: 0
    HugePages_Free: 0
    HugePages_Rsvd: 0
    Hugepagesize: 2048 kB
    extract from sys.odbc.ini
    [cachealone2]
    Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
    DataStore=/u02/timesten/datastore/cachealone2/cachealone2
    PermSize=14336
    OracleNetServiceName=ttdev
    DatabaseCharacterset=WE8ISO8859P1
    ConnectionCharacterSet=WE8ISO8859P1
    [ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
    SwapTotal: 16777208 kB
    Though we have around 140 GB of memory available and 65 GB as the shmmax, we are unable to increase the PermSize to anything more than 14 GB. When I change it to PermSize=15359, I get the following error.
    [ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
    Copyright (c) 1996-2009, Oracle. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    connect "DSN=cachealone2";
    836: Cannot create data store shared-memory segment, error 28
    703: Subdaemon connect to data store failed with error TT836
    The command failed.
    Done.
    I am not sure why this is not working, considering we have got 144GB RAM and 64GB shmmax allocated! Any help is much appreciated.
    Regards,
    Raj

    Those parameters look ok for a 100GB shared memory segment. Also check the following:
    ulimit is a mechanism to restrict the amount of system resources a process can consume. Your instance administrator user (the user who installed Oracle TimesTen) needs to be allocated enough lockable memory to load and lock your Oracle TimesTen shared memory segment.
    This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
    To view the current setting run the OS command
    $ ulimit -l
    and to set it to a value dynamically use
    $ ulimit -l <value>.
    Once changed you need to restart the TimesTen master daemon for the change to be picked up.
    $ ttDaemonAdmin -restart
    Beware: sometimes ulimit is set in the instance administrator's "~/.bashrc" or "~/.bash_profile" file, which can override what's set in /etc/security/limits.conf.
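    A quick way to spot such an override (the file names are the ones mentioned above):
    $ grep -n memlock /etc/security/limits.conf
    $ grep -n ulimit ~/.bashrc ~/.bash_profile 2>/dev/null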
    If this is ok then it might be related to Hugepages. If TT is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TT is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
    -linuxLargePageAlignment 2
    So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
    Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
    PermSize+TempSize+LogBufMB+64MB Overhead
    For example consider a TimesTen database of size:
    PermSize=250000 (unit is MB)
    TempSize=100000
    LogBufMB=1024
    Total Memory = 250000+100000+1024+64 = 351088MB
    The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
    351088/2 = 175544
    As user root edit the /etc/sysctl.conf file
    Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
    vm.nr_hugepages=175544
    Add/modify vm.hugetlb_shm_group = 600
    This parameter is the group id of the TimesTen instance administrator. In the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it’s 600.
    $ id
    uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
    As user root edit the /etc/security/limits.conf file
    Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database. The unit for this value is KB. For example this would be 351088*1024=359514112KB
    oracle hard memlock 359514112
    oracle soft memlock 359514112
    THIS IS VERY IMPORTANT: in order for the above changes to take effect, you need to either shut down the BI software environment (including TimesTen) and reboot, or issue the following OS command to make the changes permanent.
    $ sysctl -p
    Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
    Check Hugepages has been setup correctly, look for Hugepages_Total
    $ cat /proc/meminfo | grep Huge
    Based on the example values above you would see the following:
    HugePages_Total: 175544
    HugePages_Free: 175544
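    If it helps, the same arithmetic can be scripted so the two numbers always stay in step; a small sketch using the example sizes from this post (substitute the values from your own sys.odbc.ini):
    # sizes in MB
    PERM=250000; TEMP=100000; LOGBUF=1024; OVERHEAD=64
    TOTAL_MB=$((PERM + TEMP + LOGBUF + OVERHEAD))
    echo "vm.nr_hugepages = $((TOTAL_MB / 2))"      # 2 MB hugepage size
    echo "memlock (KB)    = $((TOTAL_MB * 1024))"
    With the values above this prints 175544 hugepages and 359514112 KB of memlock, matching the worked example.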

  • Error message: ORA-27125: unable to create shared memory segment Linux-x86_

    Hi,
    I am doing an installation of SAP NetWeaver 2004s SR3 on SuSE Linux 11 / Oracle 10.2,
    but I am facing the following issue in the Create Database phase of SAPInst.
    An error occurred while processing service SAP NetWeaver 7.0 Support Release 3 > SAP Systems > Oracle > Central System > Central System( Last error reported by the step :Caught ESAPinstException in Modulecall: ORA-27125: unable to create shared memory segment Linux-x86_64 Error: 1: Operation not permitted Disconnected
    Please help me to resolve the issue.
    Thanks,
    Nishitha

    Hi Ratnajit,
    I am facing the same error too, but my Oracle instance is not starting.
    Here are the results of the following commands:
    cat /etc/sysctl.conf
    # created by /sapmnt/pss-linux/scripts/sysctl.pl on Wed Oct 23 22:55:01 CEST 2013
    fs.inotify.max_user_watches = 65536
    kernel.randomize_va_space = 0
    ##kernel.sem = 1250 256000 100 8192
    kernel.sysrq = 1
    net.ipv4.conf.all.promote_secondaries = 1
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.promote_secondaries = 1
    net.ipv4.icmp_echo_ignore_broadcasts = 1
    net.ipv4.neigh.default.gc_thresh1 = 256
    net.ipv4.neigh.default.gc_thresh2 = 1024
    net.ipv4.neigh.default.gc_thresh3 = 4096
    net.ipv6.neigh.default.gc_thresh1 = 256
    net.ipv6.neigh.default.gc_thresh2 = 1024
    net.ipv6.neigh.default.gc_thresh3 = 4096
    vm.max_map_count = 2000000
    # Modified for SAP on 2013-10-24 07:14:17 UTC
    #kernel.shmall = 2097152
    kernel.shmall = 16515072
    # Modified for SAP on 2013-10-24 07:14:17 UTC
    #kernel.shmmax = 2147483648
    kernel.shmmax = 67645734912
    kernel.shmmni = 4096
    # semaphores: semmsl, semmns, semopm, semmni
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    And here is mine Limit.conf File
    cat /etc/security/limits.conf
    #<domain>      <type>  <item>         <value>
    #*               soft    core            0
    #*               hard    rss             10000
    #@student        hard    nproc           20
    #@faculty        soft    nproc           20
    #@faculty        hard    nproc           50
    #ftp             hard    nproc           0
    #@student        -       maxlogins       4
    # Added for SAP on 2012-03-14 10:38:15 UTC
    #@sapsys          soft    nofile          32800
    #@sapsys          hard    nofile          32800
    #@sdba            soft    nofile          32800
    #@sdba            hard    nofile          32800
    #@dba             soft    nofile          32800
    #@dba             hard    nofile          32800
    # End of file
    # Added for SAP on 2013-10-24
    #               soft    nproc   2047
    #               hard    nproc   16384
    #               soft    nofile  1024
    #               hard    nofile  65536
    @sapsys                 soft   nofile          131072
    @sapsys                 hard   nofile         131072
    @sdba                  soft  nproc          131072
    @sdba                  hard   nproc         131072
    @dba                 soft    core           unlimited
    @dba                 hard     core          unlimited
                      soft     memlock       50000000
                      hard     memlock       50000000
    Here is mine   cat /proc/meminfo
    MemTotal:       33015980 kB
    MemFree:        29890028 kB
    Buffers:           82588 kB
    Cached:          1451480 kB
    SwapCached:            0 kB
    Active:          1920304 kB
    Inactive:         749188 kB
    Active(anon):    1136212 kB
    Inactive(anon):    39128 kB
    Active(file):     784092 kB
    Inactive(file):   710060 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:      33553404 kB
    SwapFree:       33553404 kB
    Dirty:              1888 kB
    Writeback:             0 kB
    AnonPages:       1135436 kB
    Mapped:           161144 kB
    Shmem:             39928 kB
    Slab:              84096 kB
    SReclaimable:      44400 kB
    SUnreclaim:        39696 kB
    KernelStack:        2840 kB
    PageTables:        10544 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:    50061392 kB
    Committed_AS:    1364300 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:      342156 kB
    VmallocChunk:   34359386308 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:    622592 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    DirectMap4k:       67584 kB
    DirectMap2M:    33486848 kB
    Please let me know where I am going wrong.
    What exactly do you check in the /proc/meminfo output?
    Regards,
    Dipak

  • Oracle 11g problem with creating shared memory segments

    Hi, I'm having some problems with the Oracle listener. When I try to start or reload it I get the following error messages:
    TNS-01114: LSNRCTL could not perform local OS authentication with the listener
    TNS-01115: OS error 28 creating shared memory segment of 129 bytes with key 2969090421
    My system is a: SunOS db1-oracle 5.10 Generic_144489-06 i86pc i386 i86pc (Total 64GB RAM)
    Current SGA is set to:
    Total System Global Area 5344731136 bytes
    Fixed Size 2233536 bytes
    Variable Size 2919238464 bytes
    Database Buffers 2399141888 bytes
    Redo Buffers 24117248 bytes
    prctl -n project.max-shm-memory -i process $$
    process: 21735: -bash
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    project.max-shm-memory
    privileged 64.0GB - deny
    I've seen that a solution might be "Make sure that system resources like shared memory and heap memory are available for LSNRCTL tool to execute properly."
    I'm not exactly sure how to check whether there are enough resources.
    I've also seen a solution stating:
    "Try adjusting the system-imposed limits such as the maximum number of allowed shared memory segments, or their maximum and minimum sizes. In other cases, resources need to be freed up first for the operation to succeed."
    I've tried modifying the "max-sem-ids" parameter and setting it to the recommended 256, without any success, and I've kind of run out of ideas as to what the error could be.
    /Regards

    I see, I do have the max-shm-ids quite high already so it shouldn't be a problem?
    user.oracle:100::oracle::process.max-file-descriptor=(priv,4096,deny);
    process.max-stack-size=(priv,33554432,deny);
    project.max-shm-memory=(priv,68719476736,deny)
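    For reference, the IPC-related project resource controls can be inspected and raised with prctl and projmod; a rough sketch against the user.oracle project shown above (the 256 figure is just the value mentioned earlier in the thread):
    $ prctl -n project.max-shm-ids -i project user.oracle
    $ prctl -n project.max-sem-ids -i project user.oracle
    # projmod -s -K "project.max-shm-ids=(priv,256,deny)" user.oracle
    Note that a projmod change only applies to processes that join the project afterwards, so the listener and database would need to be restarted to pick it up.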

  • Cannot attach data store shared-memory segment using JDBC (TT0837) 11.2.1.5

    Hi,
    I found the thread Cannot attach data store shared-memory segment using JDBC (TT0837) but it can't help me out.
    I encounter this issue on Windows XP, and the application gets its connection from a JBoss data source.
    url=jdbc:timesten:direct:dsn=test;uid=test;pwd=test;OraclePWD=test
    username=test
    password=test
    Error information:
    java.sql.SQLException: [TimesTen][TimesTen 11.2.1.5.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store
    shared-memory segment, error 8 -- file "db.c", lineno 9818, procedure "sbDbConnect"
    at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3295)
    at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3444)
    at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3409)
    at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:813)
    at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1807)
    at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
    at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
    I am confused, because if I use plain JDBC there is no such error:
    Connection conn = DriverManager.getConnection("url", "username", "password");
    Regards,
    Nesta

    I think error 8 is
    net helpmsg 8
    Not enough storage is available to process this command.
    If I'm wrong I'm happy to be corrected. If you reduce the PermSize and TempSize of the datastore (just as a test), does this allow JBoss to load it?
    You don't say whether this is 32bit or 64bit Windows. If it's the former, the following information may be helpful.
    "Windows manages virtual memory differently than all other OSes. The way Windows sets up memory for DLLs guarantees that the virtual address space of each process is badly fragmented. Other OSes avoid this by densely packing shared libraries.
    A TimesTen database is represented as a single contiguous shared segment. So for an application to connect to a database of size n, there must be n bytes of unused contiguous virtual memory in the application's process. Because of the way Windows manages DLLs this is sometimes challenging. You can easily get into a situation where simple applications that use few DLLs (such as ttIsql) can access a database fine, but complicated apps that use many DLLs can not.
    As a practical matter this means that TimesTen direct-mode in Windows 32-bit is challenging to use for those with complex applications. For large C/C++ applications one can usually "rebase" DLLs to reduce fragmentation. But for Java based applications this is more challenging.
    You can use tools like the free "Process Explorer" to see the used address ranges in your process.
    Naturally, 64-bit Windows basically resolves these issues by providing a dramatically larger set of addresses."

  • Locate shared memory segments outside of pool 10

    Dear All,
    When I start my SAP system through startsap it reports that it started successfully, but I am not able to log on to the system.
    Oracle is coming up without any issues, but no dialog process is running.
    I am facing the errors below when I run sappfpar check pf=START_DVEBMGS00_SAPDEV:
    ***ERROR: Size of shared memory pool 10 too small
    ================================================================
    SOLUTIONS: (1) Locate shared memory segments outside of pool 10
    with parameters like: ipc/shm_psize_<key> =0
    SOLUTION: Increase size of shared memory pool 10
    with parameter: ipc/shm_psize_10 =56000000
    ***ERROR: Size of shared memory pool 40 too small
    ================================================================
    SOLUTIONS: (1) Locate shared memory segments outside of pool 40
    with parameters like: ipc/shm_psize_<key> =0
    SOLUTION: Increase size of shared memory pool 40
    with parameter: ipc/shm_psize_40 =62000000
    I tried the above by giving the recommended values 56000000 and 6200000 to ipc/shm_psize_10 and ipc/shm_psize_40 respectively, but it's not working.
    My OS is SuSE Linux 9.0 and Oracle 9i.
    Is this related to sysctl.conf?
    Help!
    Regards

    Dear Manoj,
    my ERP2005 EhP4 Unicode system has
    ipc/shm_psize_10             = 156000000
    ipc/shm_psize_40             = 132000000
    try these values, they are at least high enough.
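    Assuming a standard layout, those two parameters go in as plain lines in the instance profile (or wherever your installation keeps its ipc/shm_psize settings), after which the check can be re-run to confirm the pools now fit; the profile path below is the usual /sapmnt location and may differ on your system:
    ipc/shm_psize_10 = 156000000
    ipc/shm_psize_40 = 132000000
    $ sappfpar check pf=/sapmnt/<SID>/profile/START_DVEBMGS00_SAPDEV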
    Regarding your question about sysctl.conf: if the error is "shm_psize too small", then it probably has nothing to do with sysctl.conf.
    Thanks,
      Hannes
