TNS-01115: OS error 28 creating shared memory segment of 129 bytes
hi
We are operating Solaris 5.8 with 10 instances of 10.2.0.1 databases, each with its own listener. The system shmmni=3600, and ipcs shows all segments in use, causing the error "TNS-01115: OS error 28 creating shared memory segment of 129 bytes" to occur.
The kernel parameters were set the same as on a similar server we have with the same configuration and more databases, yet that box uses only 53 memory segments.
Does anyone have any ideas as to what would make this happen?
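A quick way to compare in-use segments against shmmni is to count the data rows of `ipcs -m`. A minimal sketch (the sample output below is hypothetical; in practice pipe the real `ipcs -m` through the same filter):

```shell
# Hypothetical `ipcs -m` sample; on a live box use: ipcs -m | grep -c '^0x'
sample='------ Shared Memory Segments --------
key        shmid      owner   perms  bytes    nattch status
0xbc0101d6 1703411712 oracle  660    1048576  1
0x79010649 24444930   root    666    404      0'
# Segment entries start with a 0x key field, header lines do not
used=$(printf '%s\n' "$sample" | grep -c '^0x')
echo "segments in use: $used"
```

If the count approaches shmmni (3600 here), something is leaking segments rather than the limit being too low.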
I wish I could. There was one DB that was not needed, so I just shut it down and stopped its listener, then took an ipcs -m reading. It returned 48 rows, instead of the 3603 it did while that particular DB was up. In my haste I removed the DB since it was not needed, so I no longer have the logs to research. Too bad on my part.
Well, at least I have a fix, even if I have no idea why this happened. Thank you for your responses, greatly appreciated.
Similar Messages
-
Oracle 11g problem with creating shared memory segments
Hi, I'm having some problems with the Oracle listener; when I try to start it or reload it I get the following error messages:
TNS-01114: LSNRCTL could not perform local OS authentication with the listener
TNS-01115: OS error 28 creating shared memory segment of 129 bytes with key 2969090421
My system is a: SunOS db1-oracle 5.10 Generic_144489-06 i86pc i386 i86pc (Total 64GB RAM)
Current SGA is set to:
Total System Global Area 5344731136 bytes
Fixed Size 2233536 bytes
Variable Size 2919238464 bytes
Database Buffers 2399141888 bytes
Redo Buffers 24117248 bytes
prctl -n project.max-shm-memory -i process $$
process: 21735: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 64.0GB - deny
I've seen that a solution might be "Make sure that system resources like shared memory and heap memory are available for LSNRCTL tool to execute properly."
I'm not exactly sure how to check that there are enough resources?
I've also seen a solution stating:
"Try adjusting the system-imposed limits such as the maximum number of allowed shared memory segments, or their maximum and minimum sizes. In other cases, resources need to be freed up first for the operation to succeed."
I've tried modifying the "max-sem-ids" parameter and setting it to the recommended 256, without any success, and I've kind of run out of ideas about what the error can be.
/Regards

I see. I do have max-shm-ids quite high already, so it shouldn't be a problem?
user.oracle:100::oracle::process.max-file-descriptor=(priv,4096,deny);
process.max-stack-size=(priv,33554432,deny);
project.max-shm-memory=(priv,68719476736,deny) -
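The project limits above can be sanity-checked by converting the raw byte value to GB, and the shm *ID* limit (which max-sem-ids does not cover) can be inspected separately. A sketch; the project name and the projmod value are assumptions:

```shell
# project.max-shm-memory from the settings above, in bytes
bytes=68719476736
gb=$((bytes / 1024 / 1024 / 1024))
echo "project.max-shm-memory = ${gb} GB"   # matches the 64.0GB shown by prctl
# To inspect the shared memory ID limit instead of the memory limit:
#   prctl -n project.max-shm-ids -i project user.oracle
# and to raise it (run as root; project name and value are assumptions):
#   projmod -s -K "project.max-shm-ids=(priv,2048,deny)" user.oracle
```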
Error message: ORA-27125: unable to create shared memory segment Linux-x86_
Hi,
I am doing an installation of SAP NetWeaver 2004s SR3 on SuSE Linux 11 / Oracle 10.2,
but I am facing the following issue in the Create Database phase of SAPinst.
An error occurred while processing service SAP NetWeaver 7.0 Support Release 3 > SAP Systems > Oracle > Central System > Central System( Last error reported by the step :Caught ESAPinstException in Modulecall: ORA-27125: unable to create shared memory segment Linux-x86_64 Error: 1: Operation not permitted Disconnected
Please help me to resolve the issue.
Thanks,
Nishitha

Hi Ratnajit,
I am facing the same error too, but my Oracle is not starting.
Here are my results of the following command:
cat /etc/sysctl.conf
# created by /sapmnt/pss-linux/scripts/sysctl.pl on Wed Oct 23 22:55:01 CEST 2013
fs.inotify.max_user_watches = 65536
kernel.randomize_va_space = 0
##kernel.sem = 1250 256000 100 8192
kernel.sysrq = 1
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.neigh.default.gc_thresh1 = 256
net.ipv4.neigh.default.gc_thresh2 = 1024
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv6.neigh.default.gc_thresh1 = 256
net.ipv6.neigh.default.gc_thresh2 = 1024
net.ipv6.neigh.default.gc_thresh3 = 4096
vm.max_map_count = 2000000
# Modified for SAP on 2013-10-24 07:14:17 UTC
#kernel.shmall = 2097152
kernel.shmall = 16515072
# Modified for SAP on 2013-10-24 07:14:17 UTC
#kernel.shmmax = 2147483648
kernel.shmmax = 67645734912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
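The shmall/shmmax pair in the sysctl.conf above can be cross-checked: kernel.shmall is in pages, so shmall times the page size should cover shmmax. A sketch with this file's values:

```shell
shmall_pages=16515072        # kernel.shmall (unit: pages)
shmmax_bytes=67645734912     # kernel.shmmax (unit: bytes)
page_bytes=4096              # getconf PAGE_SIZE on x86_64
shmall_bytes=$((shmall_pages * page_bytes))
echo "$shmall_bytes"
# shmall_bytes must be >= shmmax_bytes; with these values they are equal
```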
And here is my limits.conf file:
cat /etc/security/limits.conf
#<domain> <type> <item> <value>
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
# Added for SAP on 2012-03-14 10:38:15 UTC
#@sapsys soft nofile 32800
#@sapsys hard nofile 32800
#@sdba soft nofile 32800
#@sdba hard nofile 32800
#@dba soft nofile 32800
#@dba hard nofile 32800
# End of file
# Added for SAP on 2013-10-24
# soft nproc 2047
# hard nproc 16384
# soft nofile 1024
# hard nofile 65536
@sapsys soft nofile 131072
@sapsys hard nofile 131072
@sdba soft nproc 131072
@sdba hard nproc 131072
@dba soft core unlimited
@dba hard core unlimited
soft memlock 50000000
hard memlock 50000000
Here is my cat /proc/meminfo:
MemTotal: 33015980 kB
MemFree: 29890028 kB
Buffers: 82588 kB
Cached: 1451480 kB
SwapCached: 0 kB
Active: 1920304 kB
Inactive: 749188 kB
Active(anon): 1136212 kB
Inactive(anon): 39128 kB
Active(file): 784092 kB
Inactive(file): 710060 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 33553404 kB
SwapFree: 33553404 kB
Dirty: 1888 kB
Writeback: 0 kB
AnonPages: 1135436 kB
Mapped: 161144 kB
Shmem: 39928 kB
Slab: 84096 kB
SReclaimable: 44400 kB
SUnreclaim: 39696 kB
KernelStack: 2840 kB
PageTables: 10544 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 50061392 kB
Committed_AS: 1364300 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 342156 kB
VmallocChunk: 34359386308 kB
HardwareCorrupted: 0 kB
AnonHugePages: 622592 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 67584 kB
DirectMap2M: 33486848 kB
Please let me know where I am going wrong.
What exactly do you check in the /proc/meminfo output?
Regards,
Dipak -
SAP installation stopped - error: cannot create shared memory
Dear All,
Greetings!
We are trying to install an SAP ECC 6.0 IDES system on Windows 2003 x64 Server and DB2 9.1. During installation, in the Start Instance step, SAPinst gives up because the enqueue server is found in a stopped state when the instance tries to start up.
I found the below given error message from the Developer Trace files of the enqueue server.
[Thr 1384] Sat May 09 18:21:13 2009
[Thr 1384] *** ERROR => ShmDelete: Invalid shared memory Key=34. [shmnt.c 719]
[Thr 1384] *** ERROR => ShmCleanup: ShmDelete failed for Key:34. [shmnt.c 793]
[Thr 1384] initialize_global: enqueue server without replication
[Thr 1384] Enqueue: EnqMemStartupAction Utc=1241873473
[Thr 1384] *** ERROR => [CreateOsShm] CreateFileMapping(37,65 KB) failed with Err=5
ERROR_ACCESS_DENIED: Access is denied. [shmnt.c 2174]
[Thr 1384] *** ERROR => ShmCreate: Create (37,67072,1) failed [shmnt.c 506]
To note - we had a virus attack on the server recently and an anti-virus tool was used to clean the server; after that I found most of the SAP folders in read-only mode.
I suspect this is a possible cause of the ACCESS_DENIED error above. I have currently allocated 28 GB of swap, but the SAP instance does not seem to be able to create shared memory from it.
Num  Pagefile         Min.Size    Max.Size    Avail.Max   Curr.Size
1    c:\pagefile.sys  8192000 K   8192000 K   8192000 K   8192000 K
2    e:\pagefile.sys  10485760 K  10485760 K  10485760 K  10485760 K
3    f:\pagefile.sys  10485760 K  10485760 K  10485760 K  10485760 K
Please help me with your suggestions for the workaround,
- How will I be able to enable the swap size of the server to be used by the SAP instance?
- Is this the effect of the anti-virus or an aspect in windows server to change the folders and files to read-only after a virus attack?
I have tried adding more shared memory, removing the shared memory, restarting the OS and assigning it back, but these didn't prove useful.
Kindly help me with your suggestions.
Thank you
Regards,
Vineeth

Hi,
I would suggest you go to Run > services.msc
and try to manually stop/start the SAP<SID>_<nr> service. Are you able to start it properly? If you get an error here, it means the SAP services cannot start because of a permission problem.
Log in as <sid>adm and re-register the service by running sapstartsrv.exe in <drive>:\usr\sap\SID\sys\exe. After you give the parameters and press OK, wait a while for the 'success' message.
Once that is done, start SAP in the MMC.
Another thread talks about a similar kind of problem:
Shared Memory Creation error when we install NW04S Java Stack.
Regards,
Debasis. -
Shared memory segment: function not implemented
Hi!
I tried to install Oracle8 on a dual pentium II / 233 system
running Suse 6.0 (kernel 2.2.1, glibc6)
Everything went fine, until I got the message
"Database creation failed, see logfile"
The logfile /u01/app/oracle/products/8.0.5/orainst/install.log
tells me the following:
- Entering database actions section.
- Creating initORCL.ora file
- Creating crdb2ORCL.sql database catalog and file creation
script
- ERROR: The 'CREATE DATABASE' statement for the ORCL
database failed.
egrep failed to find 'ORA-' error in the file:
/u01/app/oracle/admin/ORCL/create/crdbORCL.lst
/u01/app/oracle/admin/ORCL/create/crdORCL.lst tells me:
Connected.
ORA-27125: unable to create shared memory segment
Linux Error: 38: Function not implemented
create database "ORCL"
ORA-01034: ORACLE not available
Disconnected.
So, I guess it's something wrong with the kernel. I did the
following:
changed in /usr/src/linux/include/asm/shmparam.h:
#define SHMIDX_BITS 16    (was 15)
#define SHMMNI 100        (was (1<<_SHM_ID_BITS))
#define SHMSEG 10         (was SHMMNI)
checked /usr/src/linux/include/linux/sem.h:
#define SEMMNI 128
#define SEMMSL 32
#define SEMMNS (SEMMNI*SEMMSL)
Compiled a new kernel, rebooted, verified that the right kernel
was loaded, reinstalled Oracle, but it still doesn't work.
Have I missed anything?
thanks
Frank
Hey, I ran into this problem too. Look and see if any db
processes failed to die last time you stopped oracle. I found a
ps_mon daemon still going when the database was down. I killed
it, restarted the database and everything was fine.
StE (guest) wrote:
: Frank Schmitt (guest) wrote:
: : ORA-27125: unable to create shared memory segment
: : Linux Error: 38: Function not implemented
: : create database "ORCL"
: : Compiled new kernel, rebooted, verified that the right kernel
: : was loaded, installed oracle new, but it still doesn't work.
: Silly question, but did you check you had enabled SysV IPC
when
: you configured the kernel?
: -michael
-
Cannot create data store shared-memory segment error
Hi,
Here is some background information:
[ttadmin@timesten-la-p1 ~]$ ttversion
TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
Instance admin: ttadmin
Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
Group owner: ttadmin
Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
PL/SQL enabled.
[ttadmin@timesten-la-p1 ~]$ uname -a
Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
68719476736
[ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
MemTotal: 148426936 kB
MemFree: 116542072 kB
Buffers: 465800 kB
Cached: 30228196 kB
SwapCached: 0 kB
Active: 5739276 kB
Inactive: 25119448 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 148426936 kB
LowFree: 116542072 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 164740 kB
Mapped: 39188 kB
Slab: 970548 kB
PageTables: 10428 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 90990676 kB
Committed_AS: 615028 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 274804 kB
VmallocChunk: 34359462519 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
extract from sys.odbc.ini
[cachealone2]
Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
DataStore=/u02/timesten/datastore/cachealone2/cachealone2
PermSize=14336
OracleNetServiceName=ttdev
DatabaseCharacterset=WE8ISO8859P1
ConnectionCharacterSet=WE8ISO8859P1
[ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
SwapTotal: 16777208 kB
Though we have around 140 GB of memory available and a 64 GB shmmax, we are unable to increase PermSize to anything more than 14 GB. When I changed it to PermSize=15359, I got the following error.
[ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
Copyright (c) 1996-2009, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=cachealone2";
836: Cannot create data store shared-memory segment, error 28
703: Subdaemon connect to data store failed with error TT836
The command failed.
Done.
I am not sure why this is not working, considering we have got 144GB RAM and 64GB shmmax allocated! Any help is much appreciated.
Regards,
Raj

Those parameters look ok for a 100GB shared memory segment. Also check the following:
ulimit - a mechanism to restrict the amount of system resources a process can consume. Your instance administrator user, the user who installed Oracle TimesTen needs to be allocated enough lockable memory resource to load and lock your Oracle TimesTen shared memory segment.
This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
To view the current setting run the OS command
$ ulimit -l
and to set it to a value dynamically use
$ ulimit -l <value>.
Once changed you need to restart the TimesTen master daemon for the change to be picked up.
$ ttDaemonAdmin -restart
Beware: sometimes ulimit is set in the instance administrator's "~/.bashrc" or "~/.bash_profile" file, which can override what's set in /etc/security/limits.conf.
If this is ok then it might be related to Hugepages. If TT is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TT is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
-linuxLargePageAlignment 2
So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
PermSize+TempSize+LogBufMB+64MB Overhead
For example consider a TimesTen database of size:
PermSize=250000 (unit is MB)
TempSize=100000
LogBufMB=1024
Total Memory = 250000+100000+1024+64 = 351088MB
The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
351088/2 = 175544
As user root edit the /etc/sysctl.conf file
Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
vm.nr_hugepages=175544
Add/modify vm.hugetlb_shm_group = 600
This parameter is the group id of the TimesTen instance administrator. On the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it's 600.
$ id
uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
As user root edit the /etc/security/limits.conf file
Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database. The unit for this value is KB. For example this would be 351088*1024=359514112KB
oracle hard memlock 359514112
oracle soft memlock 359514112
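The arithmetic in the worked example above can be recomputed as a script (the sizes are the example's, not universal values):

```shell
# TimesTen hugepages / memlock sizing, from the worked example above
perm_mb=250000; temp_mb=100000; logbuf_mb=1024; overhead_mb=64   # all in MB
total_mb=$((perm_mb + temp_mb + logbuf_mb + overhead_mb))
hugepage_mb=2                              # Hugepagesize = 2048 kB
nr_hugepages=$((total_mb / hugepage_mb))   # value for vm.nr_hugepages
memlock_kb=$((total_mb * 1024))            # value for the memlock lines
echo "$total_mb $nr_hugepages $memlock_kb"
```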
THIS IS VERY IMPORTANT: in order for the above changes to take effect you need to either shut down the BI software environment including TimesTen and reboot, or issue the following OS command to make the changes permanent.
$ sysctl -p
Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
Check Hugepages has been setup correctly, look for Hugepages_Total
$ cat /proc/meminfo | grep Huge
Based on the example values above you would see the following:
HugePages_Total: 175544
HugePages_Free: 175544 -
836: Cannot create data store shared-memory segment, error 22
Hi,
I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
I am currently evaluating TimesTen for a global investment organisation. We currently have a large Datawarehouse, where we utilise summary views and query rewrite, but have isolated some data that we would like to store in memory, and then be able to
report on it through a J2EE website.
We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution but we need to ensure that there are no limits in relation to maximum size. Looking through the documentation, it appears that the
only limit on a 64bit system is the actual physical memory on the box. Sounds good, but we want to prove it since we would like to see how the application scales when we store about 30gb (the limit on our UAT environment is 32gb). The ultimate goal is to
see if we can store about 50-60gb in memory.
Is this correct? Or are there any caveats in relation to this?
We have been able to get our Data Store store 8gb of data, but want to increase this. I am assuming that the following error message is due to us not changing the /etc/system on the box:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
Can somebody from the User community, or an Oracle Times Ten support person recommend what should be changed above to fully utilise the 32gb of memory, and the 12 processors on the box.
Its quite a big deal for us to bounce the UAT unix box, so l want to be sure that l have factored in all changes that would ensure the following:
* Existing Oracle Database instances are not adversely impacted
* We are able to create a Data Store which is able fully utilise the physical memory on the box
* We don't need to change these settings for quite some time, and still be able to complete our evaluation
We are currently in discussion with our in-house Oracle team, but need to complete this process before contacting Oracle directly, but help with the above request would help speed this process up.
The current /etc/system settings are below, and l have put in the current machines settings as comments at the end of each line.
Can you please provide the recommended settings to fully utilise the existing 32gb on the box?
Machine
## I have contrasted the minimum prerequisites for TimesTen and then contrasted it with the machine's current settings:
SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
FJSV,SPARC64-V
System Configuration: Sun Microsystems sun4us
Memory size: 32768 Megabytes
12 processors
/etc/system
set rlim_fd_max = 1080 # Not set on the machine
set rlim_fd_cur=4096 # Not set on the machine
set rlim_fd_max=4096 # Not set on the machine
set semsys:seminfo_semmni = 20 # machine has 0x42, Decimal = 66
set semsys:seminfo_semmsl = 512 # machine has 0x81, Decimal = 129
set semsys:seminfo_semmns = 10240 # machine has 0x2101, Decimal = 8449
set semsys:seminfo_semmnu = 10240 # machine has 0x2101, Decimal = 8449
set shmsys:shminfo_shmseg=12 # machine has 1024
set shmsys:shminfo_shmmax = 0x20000000 # machine has 8,589,934,590; the hexadecimal translates into 536,870,912
$ /usr/sbin/sysdef | grep -i sem
sys/sparcv9/semsys
sys/semsys
* IPC Semaphores
66 semaphore identifiers (SEMMNI)
8449 semaphores in system (SEMMNS)
8449 undo structures in system (SEMMNU)
129 max semaphores per id (SEMMSL)
100 max operations per semop call (SEMOPM)
1024 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)

Hi,
I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
Your main problem here is that the value for shmsys:shminfo_shmmax in /etc/system is currently set to 8 Gb, thereby limiting the maximum size of a single shared memory segment (and hence a TimesTen datastore) to 8 Gb. You need to increase this to a suitable value (maybe 32 Gb in your case). While you are doing that, it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
You should also be sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
Regards, Chris -
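Chris's point about shminfo_shmmax can be checked numerically: the 0x20000000 in the proposed /etc/system lines above would actually *lower* the limit, not raise it.

```shell
proposed=$(printf '%d' 0x20000000)    # value in the proposed /etc/system line
current=8589934590                    # value already on the machine (~8 GB)
needed=$((32 * 1024 * 1024 * 1024))   # for a 32 GB segment
echo "$proposed $current $needed"
# 536870912 (512 MB) < 8589934590 (~8 GB) < 34359738368 (32 GB)
```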
ERROR - ORA-01034: shared memory realm does not exist
Hello! I am a newbie to Oracle on Linux. I have just installed Oracle 10g on Oracle Enterprise Linux version 4 Update 7. The installation was successful and I could
work with sqlplus, isqlplus and Enterprise Manager. When I restarted my machine, I manually started the listener, OEM and isqlplus, which started successfully.
However, when I try to log into OEM and isqlplus, the error message below appears:
ERROR - ORA-01034: ORACLE not available ORA-27101: shared memory realm does not exist Linux Error: 2: No such file or directory
How do I resolve this?
Thanks.

4joey1 wrote:
However,when I try to log into OEM and isqlplus,the error message below appears
ERROR - ORA-01034: ORACLE not available ORA-27101: shared memory realm does not exist Linux Error: 2: No such file or directory

An Oracle instance consists of a number of Oracle server processes (the limbs) and a shared memory area (the brain). Each and every server process participating in that Oracle instance needs to attach to the shared memory area.
The error message you see, states that the server process (launched in order to service your sqlplus/OEM client), failed to find and attach to this shared memory segment.
There are two basic reasons for the failure.
The Oracle instance is not running: there is no shared memory area and there are no Oracle server processes for that instance. Solution: start up the database instance.
The server process was launched with incorrect parameters (ORACLE_SID specifically) and attempted to attach to shared memory that does not exist. Solution: review the TNS/JDBC parameters of the client connection and the configuration of the Oracle listener to ensure that a server process launched to service a client does so with the correct parameters and environment. -
Hi,
For a maintenance activity we restarted the system. When I checked in
ST06, the OS collector is not running, with "warning: cannot create shared memory". I then performed the following commands at OS level:
saposcol -d
Collector > clean
Collector > quit
saposcol -k to stop the collector.
Before restarting
saposcol -d
Collector > leave (You should get a message Shared memory deleted)
Collector > quit
move the coll.put file
saposcol -f (start saposcol)
When I executed the last command, I got a message saying "Cannot create Shared Memory".
Environment:
Windows 2003 cluster environment.
4.6c
oracle 10g
Thanks and Regards
Satya

Hi Sergo,
No, but I restarted SAP, the DB and SAPOSCOL. SAPOSCOL is up and running, but there is a warning message in the log file:
02:27:11 19.04.2010 LOG: ====================================================================================
02:27:11 19.04.2010 LOG: = OS Collector Start
02:27:11 19.04.2010 LOG: ====================================================================================
02:27:11 19.04.2010 LOG: Starting C:\WINDOWS\SapCluster\SAPOSCOL.EXE
02:27:11 19.04.2010 LOG: Saposcol Version is [COLL 20.79 03/08/22 46D - 20.46 NT 04/08/01]
02:27:11 19.04.2010 LOG: Allocate Counter Buffer [10000 Bytes]
02:27:11 19.04.2010 LOG: Allocate Instance Buffer [10000 Bytes]
02:27:11 19.04.2010 LOG: You can ignore :"Index of Title:[Disk Queue Length] not found" on Windows NT 4.0
02:27:11 19.04.2010 LOG: You can ignore :"Index of Title:[Disk Queue Length] not found" on Windows NT 4.0
02:27:11 19.04.2010 LOG: You can ignore :"Index of Title:[Disk Queue Length] not found" on Windows NT 4.0
02:27:11 19.04.2010 LOG: You can ignore :"Index of Title:[Disk Queue Length] not found" on Windows NT 4.0
02:27:11 19.04.2010 LOG: You can ignore :"Index of Title:[Disk Queue Length] not found" on Windows NT 4.0
02:27:11 19.04.2010 LOG: You can ignore :"Index of Title:[Disk Queue Length] not found" on Windows NT 4.0
02:27:11 19.04.2010 LOG: INFO: saposcol's shared memory size is 86420.
02:27:11 19.04.2010 LOG: Connected to existing shared memory !
02:27:11 19.04.2010 LOG: MaxRecords = 637 <> RecordCnt + Dta_offset = 751 + 61
02:27:16 19.04.2010 WARNING: WaitFree: could not set new shared memory status after 5 sec
02:27:16 19.04.2010 WARNING: Cannot create Shared Memory
When I tried to stop the SAPOSCOL service, I got a message saying
"Could not stop the SAPOSCOL service on Local Computer.
Error 1053: The service did not respond to the start or control request in a timely fashion."
and it stopped SAPOSCOL.
Then I am able to start SAPOSCOL.
The strange thing is that in my environment there are 10 OS drives, but now I can see only 3 drives: c, p and q.
Regards
Satya -
Getting Error : Cannot attach data store shared-memory segment,
HI Team,
I am trying to integrate Timesten IMDB in my application.
Machine details
Windows 2003, 32 bit, 4GB RAM.
IMDB DB details
Permanent size 500MB, temp size 40MB.
If I try to connect to the database using ttIsql, it connects fine. But if I try to connect from my Java application I get the following exception.
java.sql.SQLException: [TimesTen][TimesTen 11.2.1.3.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 8 -- file "db.c", lineno 7966, procedure "sbDbCreate"
at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3269)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3418)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3383)
at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:787)
at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1800)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:207)
The maximum permanent size that works with the Java application is 100MB, but that would not be enough for our use.
Could anybody let me know the reason for this error and how to resolve it? Any response would be appreciated.
Thanks in Advance,
Regards,
atul

This is a very common problem on 32-bit Windows. A TimesTen datastore is a single region of 'shared memory' allocated as a shared mapping from the paging file. In 'direct mode', when the application process (in your case either ttIsql or the JVM) 'connects' to the datastore, the datastore memory region is mapped into the process address space. In order for this to happen it is necessary for there to be a free region in the process address space that is at least the size of the datastore. This region must be contiguous (i.e. a single region). Unfortunately, the process memory map in 32-bit Windows is typically highly fragmented, and the more DLLs a process uses the worse this is. Also, JVMs typically use a lot of memory, depending on configuration.
Your options to solve this are really limited to:
1. Significantly reduce the memory used by the JVM (may not be possible).
2. Use a local client/server connection from Java instead of a direct mode connection. To minimise the performance overhead, make sure you use the optimised ShmIpc connectivity rather than TCP/IP. Even with this, there is likely to be a >50% reduction in performance compared to direct mode.
3. Switch to 64-bit Windows, 64-bit TimesTen and 64-bit Java. Even without adding any extra memory to your machine, this will very likely fix the problem.
Option (3) is by far the best one.
Regards,
Chris -
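A rough back-of-envelope for Chris's fragmentation point (every number here is an illustrative assumption, not a measurement of this system):

```shell
# 32-bit Windows gives a process ~2 GB of user address space by default.
user_as_mb=2048
dll_jvm_mb=1200         # hypothetical footprint of DLLs + JVM heap/mappings
best_case_hole_mb=$((user_as_mb - dll_jvm_mb))
echo "largest contiguous hole <= ${best_case_hole_mb} MB"
# Even this is optimistic: the free space is scattered across many small
# gaps, so a 500 MB datastore mapping can fail to find one contiguous
# region while total 'free' memory looks plentiful.
```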
Help with error:"ora-27101 shared memory realm does not exist"
Hello friends of the forum:
I would like some help with an important question; I am very new to this aspect of Oracle.
I installed Oracle 11g on Windows Vista and followed the installation steps from the www.oracle.com start page. Everything worked fine, but after a month, without any change:
The database does not start, and I get the error "ora-27101 shared memory realm does not exist".
At first I solved it momentarily by restarting the Oracle service, but after a while that no longer works, and I cannot connect to the database because of this error.
The database is installed on a laptop; I have 100 GB of free space and 4 GB of memory, so I do not understand why this happens.
What do I have to do to be able to access the Oracle database?
Can anyone help me with this problem? Thanks.
I would appreciate it if you could show me with pictures, as I am new to this aspect of Oracle and want to learn it. Thank you very much.

Hello friend,
The error you are getting, *"ORA-27101: shared memory realm does not exist"*, may be because you are trying to connect to a database which is already shut down, not as a sysdba user but as some other user. It sometimes happens that the database service is already started, but when you try to connect to a shutdown database as a user other than SYS, even one with DBA privileges, you cannot connect.
Try to connect as the SYS user; I hope your problem will be resolved.
Deepak Sharma -
Redhat: TT0837: Cannot attach data store shared-memory segment, error 12
The customer has two systems, one Solaris and one Linux. We have six DSNs, with one DSN's PermSize at 1.85G. Both OS systems are 32-bit. After migrating from TT 6.0 to 11.2, I cannot get replication working on the Linux system for the 1.85G DSN. The Solaris system is working correctly. I've been able to duplicate the issue in our lab too. If I drop the PermSize down to 1.0G, replication starts. I've tried changing multiple parameters, including setting up HugePages.
What else could I be missing? Decreasing the PermSize is not an option for this customer. Going to a full 64-bit system is on our development roadmap but is at least a year away due to other commitments.
This is my current linux lab configuration.
ttStatus output for the failed Subscriber DSN and a working DynamicDB DSN. As you can see, the policy is set to "Always" but it has no Subdaemon or Replication processes running.
Data store /space/Database/db/Subscriber
There are no connections to the data store
Replication policy : Always
Replication agent is running.
Cache Agent policy : Manual
Data store /space/Database/db/DynamicDB
There are 14 connections to the data store
Shared Memory KEY 0x5602000c ID 1826586625 (LARGE PAGES, LOCKED)
Type PID Context Connection Name ConnID
Replication 88135 0x56700698 LOGFORCE 4
Replication 88135 0x56800468 REPHOLD 3
Replication 88135 0x56900468 TRANSMITTER 5
Replication 88135 0x56a00468 REPLISTENER 2
Subdaemon 86329 0x08472788 Manager 2032
Subdaemon 86329 0x084c5290 Rollback 2033
Subdaemon 86329 0xd1900468 Deadlock Detector 2037
Subdaemon 86329 0xd1a00468 Flusher 2036
Subdaemon 86329 0xd1b00468 HistGC 2039
Subdaemon 86329 0xd1c00468 Log Marker 2038
Subdaemon 86329 0xd1d00468 AsyncMV 2041
Subdaemon 86329 0xd1e00468 Monitor 2034
Subdaemon 86329 0xd2000468 Aging 2040
Subdaemon 86329 0xd2200468 Checkpoint 2035
Replication policy : Always
Replication agent is running.
Cache Agent policy : Manual
Summary of the Perm and Temp sizes of each DSN:
PermSize=100
TempSize=50
PermSize=100
TempSize=50
PermSize=64
TempSize=32
PermSize=1850 => Subscriber
TempSize=35 => Subscriber
PermSize=64
TempSize=32
PermSize=200
TempSize=75
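Given the sizes above, a quick back-of-the-envelope check suggests the configured HugePages pool is large enough for the biggest store. This is only a sketch: the 64 MB overhead figure for log buffer and metadata is an assumption, not from the thread.

```shell
# Estimate huge pages needed for the largest store (PermSize=1850, TempSize=35
# from this thread; the 64 MB overhead for log buffer/metadata is assumed).
perm_mb=1850
temp_mb=35
overhead_mb=64
hugepage_kb=2048
need_kb=$(( (perm_mb + temp_mb + overhead_mb) * 1024 ))
need_pages=$(( (need_kb + hugepage_kb - 1) / hugepage_kb ))   # round up
echo "huge pages needed: $need_pages (pool configured: 2000)"
```

Roughly 975 of the 2000 configured huge pages would be needed, so the HugePages pool itself does not look like the bottleneck here.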
[SubscriberDir]
Driver=/opt/SANTone/msc/active/TimesTen/lib/libtten.so
DataStore=/Database/db/Subscriber
AutoCreate=0
DurableCommits=0
ExclAccess=0
LockLevel=0
PermWarnThreshold=80
TempWarnThreshold=80
PermSize=1850
TempSize=35
ThreadSafe=1
WaitForConnect=1
Preallocate=1
MemoryLock=3
###MemoryLock=0
SMPOptLevel=1
Connections=64
CkptFrequency=300
DatabaseCharacterSet=TIMESTEN8
TypeMode=1
DuplicateBindMode=1
msclab3201% cat ttendaemon.options
-supportlog /var/ttLog/ttsupport.log
-maxsupportlogsize 500000000
-userlog /var/ttLog/userlog
-maxuserlogsize 100000000
-insecure-backwards-compat
-verbose
-minsubs 12
-maxsubs 60
-server 16002
-enableIPv6
-linuxLargePageAlignment 2
msclab3201# cat /proc/meminfo
MemTotal: 66002344 kB
MemFree: 40254188 kB
Buffers: 474104 kB
Cached: 19753148 kB
SwapCached: 0 kB
HugePages_Total:    2000
HugePages_Free:     2000
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
## Before loading Subscriber Dsn
msclab3201# ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0xbc0101d6 1703411712 ttadmin 660 1048576 1
0x79010649 24444930 root 666 404 0
## After loading Subscriber Dsn
msclab3201# ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0xbc0101d6 1703411712 ttadmin 660 1048576 2
0x7f020012 1825964033 ttadmin 660 236978176 2
0x79010649 24444930 root 666 404 0
msclab3201#
msclab3201# sysctl -a | grep huge
vm.nr_hugepages = 2000
vm.nr_hugepages_mempolicy = 2000
The size of these databases is very close to the limit for 32-bit systems, and you are almost certainly running into address-space issues, given that 11.2 has a slightly larger footprint than 6.0. 32-bit is really 'legacy' nowadays and you should move to a 64-bit platform as soon as you are able; that will solve your problems. I do not think there is any other solution (other than reducing the size of the database).
Chris -
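The address-space arithmetic behind that answer can be sketched as follows. The reserved figure below is an illustrative assumption; in practice, fragmentation from shared libraries makes the largest contiguous gap smaller still.

```shell
# A 32-bit process has 4 GB of virtual address space; the kernel split,
# code, stacks and shared libraries consume a large slice of it, and the
# store must fit in ONE contiguous gap of what remains.
addr_space_mb=$(( 4 * 1024 ))
reserved_mb=1280            # assumed: kernel split + libraries + heap + stacks
free_mb=$(( addr_space_mb - reserved_mb ))
store_mb=1850
echo "store needs ${store_mb} MB contiguous out of at most ${free_mb} MB free"
```

Even though well over 1850 MB may nominally remain free, a single library mapped in the middle of that range can leave no contiguous gap large enough, which is consistent with replication starting once PermSize is dropped to 1.0 GB.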
ORA-27123 unable to attach shared memory segment
Running Oracle 8.1.5.0.0 on Red Hat 6.0 with kernel 2.2.12, I keep getting the error ORA-27123 "unable to attach shared memory segment" when trying to start up an instance with an SGA larger than about 150 MB. I have modified the shmmax and shmall kernel parameters via the /proc/sys interface. The relevant output of ipcs -l is below:
------ Shared Memory Limits --------
max number of segments = 128
max seg size (kbytes) = 976562
max total shared memory (kbytes) = 16777216
min seg size (bytes) = 1
This system has 2 GB of physical memory and is running nothing except Oracle.
I changed the shmmax and shmall parameters after the instance was created; was there something I needed to do to inform Oracle of the changes?
Hi JW,
I had the same problem on my installation.
The solution is described in the Oracle8i Administrator's Reference on page 1-26, "Relocating the SGA":
a) Determine the valid address range for shared memory with:
$ tstshm
In the output, Lowest & Highest SHM indicate the valid address range.
b) Run genksms to generate the file ksms.s:
$ cd $ORACLE_HOME/rdbms/lib
$ $ORACLE_HOME/bin/genksms -b "sga_begin_address" > ksms.s
c) Shut down any instance.
d) Rebuild the oracle executable in $ORACLE_HOME/rdbms/lib:
$ make -f ins_rdbms.mk ksms.o
$ make -f ins_rdbms.mk ioracle
The result is a new oracle kernel that loads the SGA at the address specified in "sga_begin_address".
regards
Gerhard -
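For what it's worth, the kernel limits posted in the question are already large enough for the failing SGA, which is consistent with the cause being the attach address rather than a size limit. A minimal check, using the sizes from the thread:

```shell
# Compare the failing SGA (~150 MB) against the posted segment limit.
sga_mb=150
max_seg_kb=976562            # "max seg size (kbytes)" from ipcs -l above
sga_kb=$(( sga_mb * 1024 ))
if [ "$sga_kb" -le "$max_seg_kb" ]; then
  echo "SGA of ${sga_mb} MB fits within shmmax; size limits are not the cause"
fi
```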
JLaunchInitAdministration: Can't attach to shared memory segment 69
Hi,
I am trying to bring up an EP system, and I am getting the following error in my dev_sdm and dev_dispatcher logs:
[Thr 1] Wed Nov 10 10:30:25 2010
[Thr 1] JLaunchRequestQueueInit: create named pipe for ipc
[Thr 1] JLaunchRequestQueueInit: create pipe listener thread
[Thr 515] WaitSyncSemThread: Thread 515 started as semaphore monitor thread.
[Thr 258] JLaunchRequestFunc: Thread 258 started as listener thread for np messages.
[Thr 1] *** ERROR => JLaunchInitAdministration: Can't attach to shared memory segment 69 (rc = 7 locking (semaphore/mutex) error) [jlnchxx_mt.c 926]
[Thr 1] *** ERROR => can't initialize JControl Administration [jlnchxx_mt.c 375]
[Thr 1] SigISetIgnoreAction : SIG_IGN for signal 20
[Thr 1] *** ERROR => JsfCloseShm: FiDetachIndex(SESSION) failed (rc = 6 invalid argument) [jsfxxshm_mt. 1243]
[Thr 1] *** ERROR => JsfCloseShm: FiDetachIndex(ALIAS) failed (rc = 6 invalid argument) [jsfxxshm_mt. 1250]
[Thr 1] *** ERROR => JsfCloseShm: FiDetachIndex(SERVICE) failed (rc = 6 invalid argument) [jsfxxshm_mt. 1257]
[Thr 1] *** ERROR => JsfCloseShm: ShmDelete(69) failed (rc = 2 invalid function argument) [jsfxxshm_mt. 1283]
[Thr 1] JLaunchCloseProgram: good bye (exitcode = -1)
It would be nice if someone could give me some direction on how to solve this.
(BTW, I have already run cleanipc; it didn't help.)
Regards,
Neel
It might help, but mine is an EP system (Java only), so I'm not sure if it's applicable in my case.
You should always have the latest kernel on your system, irrespective of the runtime engine (ABAP or Java).
So go ahead and upgrade the kernel to the latest available for your release.
Regards, -
Hi,
I found the thread "Cannot attach data store shared-memory segment using JDBC (TT0837)", but it doesn't solve my problem.
I encounter this issue on Windows XP; the application gets its connection from a JBoss data source:
url=jdbc:timesten:direct:dsn=test;uid=test;pwd=test;OraclePWD=test
username=test
password=test
Error information:
java.sql.SQLException: [TimesTen][TimesTen 11.2.1.5.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store
shared-memory segment, error 8 -- file "db.c", lineno 9818, procedure "sbDbConnect"
at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3295)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3444)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3409)
at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:813)
at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1807)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
What confuses me is that when I connect via plain JDBC instead, there is no such error:
Connection conn = DriverManager.getConnection("url", "username", "password");
Regards,
Nesta
I think error 8 is:
net helpmsg 8
Not enough storage is available to process this command.
If I'm wrong, I'm happy to be corrected. If you reduce the PermSize and TempSize of the data store (just as a test), does that allow JBoss to load it?
You don't say whether this is 32-bit or 64-bit Windows. If it's the former, the following information may be helpful.
"Windows manages virtual memory differently than all other OSes. The way Windows sets up memory for DLLs guarantees that the virtual address space of each process is badly fragmented. Other OSes avoid this by densely packing shared libraries.
A TimesTen database is represented as a single contiguous shared segment. So for an application to connect to a database of size n, there must be n bytes of unused contiguous virtual memory in the application's process. Because of the way Windows manages DLLs this is sometimes challenging. You can easily get into a situation where simple applications that use few DLLs (such as ttIsql) can access a database fine, but complicated apps that use many DLLs can not.
As a practical matter this means that TimesTen direct-mode in Windows 32-bit is challenging to use for those with complex applications. For large C/C++ applications one can usually "rebase" DLLs to reduce fragmentation. But for Java based applications this is more challenging.
You can use tools like the free "Process Explorer" to see the used address ranges in your process.
Naturally, 64-bit Windows basically resolves these issues by providing a dramatically larger set of addresses."
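As a concrete version of the "reduce PermSize as a test" suggestion above, the DSN definition could be temporarily shrunk. The DSN name and sizes below are illustrative only, for diagnosis rather than a recommendation:

```ini
[test]
; temporarily reduced sizes, for diagnosis only
PermSize=256
TempSize=32
```

If the connection succeeds at the smaller size, that points at address-space fragmentation rather than a configuration error.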