Data store space exhausted
Hi all
When trying to prepare a TTCmd with a lot of parameters, I get the following error message:
1117_093719 0000 ERROR in TTCommand.cpp, line 247: Error in TTCmd::Prepare
[TimesTen][TimesTen 6.0.1 ODBC Driver][TimesTen]TT0802: Data store space exhausted -- file "blk.c", lineno 2660, procedure "sbBlkAlloc"
*** ODBC Error/Warning = S1000, TimesTen Error/Warning = 802
[TimesTen][TimesTen 6.0.1 ODBC Driver][TimesTen]TT6221: Temporary data partition free space insufficient to allocate 64720 bytes of memory -- file "blk.c", lineno 2660, procedure "sbBlkAlloc"
*** ODBC Error/Warning = S1000, TimesTen Error/Warning = 6221
According to tt_ref.pdf, I should "Try increasing the size of the appropriate data partition"
Can anyone tell me what I should actually do?
Thanks
Frans
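Since TT6221 complains about the temporary data partition, the "appropriate data partition" to grow here is the temp one, which is sized by the TempSize connection attribute. A minimal sketch of the DSN change (the DSN name and values are illustrative; TempSize is a first-connect attribute, so the new value only takes effect after the data store has been fully unloaded and reconnected):

```ini
[my_dsn]
; ...other attributes unchanged...
PermSize=64   ; permanent partition, in MB
TempSize=64   ; temporary partition, in MB -- raise this for TT6221/TT0802
```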
1. The output of the ttVersion command
C:\>ttversion
TimesTen Release 7.0.2.0.0 (32 bit NT) (tt70_32:17000) 2007-05-04T21:01:22Z
Instance admin: Rajib.Sarkar
Instance home directory: C:\TimesTen\tt70_32
Daemon home directory: C:\TimesTen\tt70_32\srv\info
Access control enabled.
2. The full ODBC settings for the datastore
Data Source Name : timescif
Description : Test
Data Store Path : C:\Temp\Timescif
Log Directory : NULL
Database Character Set : WE8ISO8859P1
Type Mode : 0-Oracle
Check boxes:
Temporary : unchecked
Authenticate TimesTen Client Connection : unchecked
3. The output of the following command run from ttIsql with PassThrough=0.
Command> call ttusers;
< RAJIB.SARKAR , 1, FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF >
< SYSTEM , 1, FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF >
< SYS , 0, 00000000000000000000000000000000 >
< TTREP , 0, 00000000000000000000000000000000 >
< PUBLIC , 0, 03000000000000000000000000000000 >
< CIF , 0, FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF >
< SCOTT , 0, FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF >
7 rows found.
Regards
Similar Messages
-
Redhat: TT0837: Cannot attach data store shared-memory segment, error 12
Customer has two systems, one Solaris and one Linux. We have six DSNs, with one DSN's PermSize at 1.85G. Both OS systems are 32-bit. After migrating from TT6.0 to 11.2, I cannot get replication working on the Linux system for the 1.85G DSN. The Solaris system is working correctly. I've been able to duplicate the issue in our lab as well. If I drop the PermSize down to 1.0G, replication starts. I've tried changing multiple parameters, including setting up HugePages.
What else could I be missing? Decreasing the PermSize is not an option for this customer. Going to a full 64-bit system is on our development roadmap but is at least a year away due to other commitments.
This is my current linux lab configuration.
ttStatus output for the failed Subscriber DSN and a working DynamicDB DSN. As you can see, the policy is set to "Always" but it has no Subdaemon or Replication processes running.
Data store /space/Database/db/Subscriber
There are no connections to the data store
Replication policy : Always
Replication agent is running.
Cache Agent policy : Manual
Data store /space/Database/db/DynamicDB
There are 14 connections to the data store
Shared Memory KEY 0x5602000c ID 1826586625 (LARGE PAGES, LOCKED)
Type PID Context Connection Name ConnID
Replication 88135 0x56700698 LOGFORCE 4
Replication 88135 0x56800468 REPHOLD 3
Replication 88135 0x56900468 TRANSMITTER 5
Replication 88135 0x56a00468 REPLISTENER 2
Subdaemon 86329 0x08472788 Manager 2032
Subdaemon 86329 0x084c5290 Rollback 2033
Subdaemon 86329 0xd1900468 Deadlock Detector 2037
Subdaemon 86329 0xd1a00468 Flusher 2036
Subdaemon 86329 0xd1b00468 HistGC 2039
Subdaemon 86329 0xd1c00468 Log Marker 2038
Subdaemon 86329 0xd1d00468 AsyncMV 2041
Subdaemon 86329 0xd1e00468 Monitor 2034
Subdaemon 86329 0xd2000468 Aging 2040
Subdaemon 86329 0xd2200468 Checkpoint 2035
Replication policy : Always
Replication agent is running.
Cache Agent policy : Manual
Summary of Perm and Temp Sizes of each system.
PermSize=100
TempSize=50
PermSize=100
TempSize=50
PermSize=64
TempSize=32
PermSize=1850 => Subscriber
TempSize=35 => Subscriber
PermSize=64
TempSize=32
PermSize=200
TempSize=75
[SubscriberDir]
Driver=/opt/SANTone/msc/active/TimesTen/lib/libtten.so
DataStore=/Database/db/Subscriber
AutoCreate=0
DurableCommits=0
ExclAccess=0
LockLevel=0
PermWarnThreshold=80
TempWarnThreshold=80
PermSize=1850
TempSize=35
ThreadSafe=1
WaitForConnect=1
Preallocate=1
MemoryLock=3
###MemoryLock=0
SMPOptLevel=1
Connections=64
CkptFrequency=300
DatabaseCharacterSet=TIMESTEN8
TypeMode=1
DuplicateBindMode=1
msclab3201% cat ttendaemon.options
-supportlog /var/ttLog/ttsupport.log
-maxsupportlogsize 500000000
-userlog /var/ttLog/userlog
-maxuserlogsize 100000000
-insecure-backwards-compat
-verbose
-minsubs 12
-maxsubs 60
-server 16002
-enableIPv6
-linuxLargePageAlignment 2
msclab3201# cat /proc/meminfo
MemTotal: 66002344 kB
MemFree: 40254188 kB
Buffers: 474104 kB
Cached: 19753148 kB
SwapCached: 0 kB
HugePages_Total: 2000
HugePages_Free: 2000
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
## Before loading Subscriber Dsn
msclab3201# ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0xbc0101d6 1703411712 ttadmin 660 1048576 1
0x79010649 24444930 root 666 404 0
## After loading Subscriber Dsn
msclab3201# ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0xbc0101d6 1703411712 ttadmin 660 1048576 2
0x7f020012 1825964033 ttadmin 660 236978176 2
0x79010649 24444930 root 666 404 0
msclab3201#
msclab3201# sysctl -a | grep huge
vm.nr_hugepages = 2000
vm.nr_hugepages_mempolicy = 2000
The size of these databases is very close to the limit for 32-bit systems and you are almost certainly running into address space issues, given that 11.2 has a slightly larger footprint than 6.0. 32-bit is really 'legacy' nowadays and you should move to a 64-bit platform as soon as you are able. That will solve your problems. I do not think there is any other solution (other than reducing the size of the database).
Chris -
How to determine what's using data store temp space?
How can one determine what's using data store temp space? We'd like to know which structures are occupying temp space and, if possible, which pid/process connected to TimesTen created them.
Also, is there a procedure that will work if temp space is full?
Recently one of our data stores ran out of temp space. We were unable to run commands like "monitor", "select * from monitor", "select count(*) from my_application_table", etc. These commands failed because they required temp space to run and temp space was full. We killed the application processes; this in turn freed up temp space, and then we were able to run these queries.
Ideally, we'd like to have a procedure to figure out what's using temp space when temp space is full.
The other thing we could do is periodically monitor temp space prior to it filling to determine what's using temp space.
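One way to do that periodic monitoring (a hedged sketch; these column names are from the SYS.MONITOR table in recent TimesTen releases and may differ in older ones):

```
Command> select temp_allocated_size, temp_in_use_size, temp_in_use_high_water
       > from sys.monitor;
```

Note that, as observed above, even this query needs a little temp space to run, so it only works while some remains free; polling it before the partition fills (together with an alert threshold such as the TempWarnThreshold attribute) avoids being locked out.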
-
ORA-15041: diskgroup "DATA" space exhausted
Hi,
We have created a two-node RAC on Linux and the Oracle version is 11.2.0.3. When I try to create tablespaces, some get created and others fail with the error below.
ORA-15041: diskgroup "DATA" space exhausted
I can see free space on my disks; below are the ASM disk details.
GROUP_NUMBER~NAME~TOTAL_MB~FREE_MB~USABLE_FILE_MB
1~DATA~2017128~1701824~784344
My FRA is inside the DATA diskgroup and has 4gb of space; currently the FRA is 85% free.
Can you please help me understand why it gives this error even though there is free space in the ASM disks?
Thanks for your help and time.
Ora_DBA wrote:
SQL> connect sys@+ASM as sysdba
Enter password:
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified
This error has NOTHING to do with RAC.
Basic SQL*Net issue
ORA-12154 ALWAYS only occurs on SQL Client & no SQL*Net packets ever leave client system
ORA-12154 occurs when client requests a connection to some DB server system using some connection string.
The lookup operation fails because the name provided can NOT be resolved to any remote DB.
The analogous operation would be when you wanted to call somebody, but could not find their name in any phonebook.
The most frequent cause for the ORA-12154 error is when the connection alias can not be found in tnsnames.ora.
The lookup operation of the alias can be impacted by the contents of the sqlnet.ora file; specifically DOMAIN entry.
TROUBLESHOOTING GUIDE: ORA-12154 & TNS-12154 TNS:could not resolve service name [ID 114085.1]
http://edstevensdba.wordpress.com/2011/02/26/ora-12154tns-03505/ -
ORA-15041: diskgroup space exhausted
Hi,
I've an Oracle Database 11g Release 11.1.0.6.0 - 64bit Production With the Real Application Clusters option.
Some days ago I was receiving the error:
unable to extend segment in tablespace system
I looked at the tablespace table and this was the situation:
+DATA/evodb/datafile/system.270.720825887 SYSTEM 0,68359375 (GB)
+DATA/evodb/datafile/system.267.717679031 SYSTEM 1,953125 (GB)
+DATA/evodb/datafile/system.256.662113891 SYSTEM 14,775390625 (GB)
All files are autoextensible.
This was the situation of ASM diskgroup/file:
.PATH DN GROUP_NAME HEADER_STA FREE_MB OS_MB TOTAL_MB NAME FAILGROUP
/dev/asm0 0 ONLINELOG MEMBER 3434 6138 6138 ONLINELOG_0000 ONLINELOG_0000
/dev/asm1 0 ARCHIVELOG MEMBER 47529 48127 48127 ARCHIVELOG_0000 ARCHIVELOG_0000
/dev/asm10 8 DATA MEMBER 11412 48127 48127 DATA_0008 DATA_0008
/dev/asm11 9 DATA MEMBER 10995 51199 51199 DATA_0009 DATA_0009
/dev/asm12 0 DATA MEMBER 0 5115 5115 DATA_0010 DATA_0010
/dev/asm2 0 DATA MEMBER 11404 48127 48127 DATA_0000 DATA_0000
/dev/asm210 0 STORE MEMBER 152914 208500 208500 STORE_0000 STORE_0000
/dev/asm211 11 DATA MEMBER 49178 208500 208500 DATA_0011 DATA_0011
/dev/asm212 16 CANDIDATE 0 208500 0
/dev/asm213 12 DATA MEMBER 52546 228352 228352 DATA_0012 DATA_0012
/dev/asm3 1 DATA MEMBER 11390 48127 48127 DATA_0001 DATA_0001
/dev/asm4 2 DATA MEMBER 11398 48127 48127 DATA_0002 DATA_0002
/dev/asm5 3 DATA MEMBER 11403 48127 48127 DATA_0003 DATA_0003
/dev/asm6 4 DATA MEMBER 11401 48127 48127 DATA_0004 DATA_0004
/dev/asm7 5 DATA MEMBER 11403 48127 48127 DATA_0005 DATA_0005
/dev/asm8 6 DATA MEMBER 11402 48127 48127 DATA_0006 DATA_0006
/dev/asm9 7 DATA MEMBER 11389 48127 48127 DATA_0007 DATA_0007
My first doubt is:
why was the tablespace not able to extend itself, given that it had 3 files, 2 of them very small?
Anyway I tried an alter tablespace add datafile, but:
SQL> alter tablespace system add datafile;
alter tablespace system add datafile
*
ERROR at line 1:
ORA-01119: error in creating database file '+DATA'
ORA-17502: ksfdcre:4 Failed to create file +DATA
ORA-15041: diskgroup space exhausted
Why did I get this error? Every member had enough free space, one even 52gb...
I had to add the disk asm212 to the DATA diskgroup. Then I was able to add the datafile.
Thanks in advance,
Samuel
Hi,
I got this result from your query:
ASMDB_GROUP1 0 ASMDB_GROUP1_0000 20480 0 /dev/rhdisk4 UNKNOWN
ASMDB_GROUP1 1 ASMDB_GROUP1_0001 20480 0 /dev/rhdisk5 UNKNOWN
ASMDB_GROUP3 0 ASMDB_GROUP3_0000 20480 0 /dev/rhdisk8 UNKNOWN
ASMDB_GROUP3 1 ASMDB_GROUP3_0001 20480 0 /dev/rhdisk9 UNKNOWN
ASMDB_GROUP4 0 ASMDB_GROUP4_0000 20480 0 /dev/rhdisk10 UNKNOWN
ASMDB_GROUP4 1 ASMDB_GROUP4_0001 20480 0 /dev/rhdisk11 UNKNOWN
It means I don't have any free space, so can I add space to GROUP1?
Can I add space from another diskgroup to diskgroup1?
Do I need to shut down my database?
Thanks -
R/3 User Data store in Portal....?
Hi All,
Can anybody tell me why one would use R/3 as a user data store (UME) in the Portal?
What are its advantages over the Portal UME? In what scenarios can we use R/3 as a user data store in the Portal, and how can we make good use of it?
Any help in this regard would be highly appreciated. Full points will be rewarded for useful answers.
Regards,
Anil Kumar
Hi,
Imagine a scenario where most users need data only from R/3: what a bother it would be to replicate the same users again in the Portal. Look at it from an administration point of view, and role assignment in the SAP backend brings many more issues.
Consider that you have built up a huge list of users and then add the Portal: how tough it would be to keep the Portal user profiles up to date with the backend.
This is the important reason in my view: a single store of users gives optimum use of database space, efficiency, and administrative convenience.
Regards,
Harish -
Hi,
I found the thread Cannot attach data store shared-memory segment using JDBC (TT0837), but it doesn't help me.
I encounter this issue in Windows XP, and application gets connection from jboss data source.
url=jdbc:timesten:direct:dsn=test;uid=test;pwd=test;OraclePWD=test
username=test
password=test
Error information:
java.sql.SQLException: [TimesTen][TimesTen 11.2.1.5.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store
shared-memory segment, error 8 -- file "db.c", lineno 9818, procedure "sbDbConnect"
at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3295)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3444)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3409)
at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:813)
at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1807)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
I am confused because if I use plain JDBC directly, there is no such error:
Connection conn = DriverManager.getConnection("url", "username", "password");
Regards,
Nesta
I think error 8 is:
net helpmsg 8
Not enough storage is available to process this command.
If I'm wrong I'm happy to be corrected. If you reduce the PermSize and TempSize of the datastore (just as a test) does this allow JBOSS to load it?
You don't say whether this is 32bit or 64bit Windows. If it's the former, the following information may be helpful.
"Windows manages virtual memory differently than all other OSes. The way Windows sets up memory for DLLs guarantees that the virtual address space of each process is badly fragmented. Other OSes avoid this by densely packing shared libraries.
A TimesTen database is represented as a single contiguous shared segment. So for an application to connect to a database of size n, there must be n bytes of unused contiguous virtual memory in the application's process. Because of the way Windows manages DLLs this is sometimes challenging. You can easily get into a situation where simple applications that use few DLLs (such as ttIsql) can access a database fine, but complicated apps that use many DLLs can not.
As a practical matter this means that TimesTen direct-mode in Windows 32-bit is challenging to use for those with complex applications. For large C/C++ applications one can usually "rebase" DLLs to reduce fragmentation. But for Java based applications this is more challenging.
You can use tools like the free "Process Explorer" to see the used address ranges in your process.
Naturally, 64-bit Windows basically resolves these issues by providing a dramatically larger set of addresses." -
Getting Error : Cannot attach data store shared-memory segment,
HI Team,
I am trying to integrate Timesten IMDB in my application.
Machine details
Windows 2003, 32 bit, 4GB RAM.
IMDB DB details
Permanent size 500MB, temp size 40MB.
If I try to connect to the database using ttIsql, it connects. But if I try to connect from my Java application, I get the following exception.
java.sql.SQLException: [TimesTen][TimesTen 11.2.1.3.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 8 -- file "db.c", lineno 7966, procedure "sbDbCreate"
at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3269)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3418)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3383)
at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:787)
at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1800)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:207)
The maximum permanent size that works with the Java application is 100MB, but that would not be enough for our use.
Could anybody let me know how to resolve this error, or the reason for it? Any response would be appreciated.
Thanks in Advance,
Regards,
atul
This is a very common problem on 32-bit Windows. A TimesTen datastore is a single region of 'shared memory' allocated as a shared mapping from the paging file. In 'direct mode', when the application process (in your case either ttIsql or the JVM) 'connects' to the datastore, the datastore memory region is mapped into the process address space. In order for this to happen it is necessary for there to be a free region in the process address space that is at least the size of the datastore. This region must be contiguous (i.e. a single region). Unfortunately, the process memory map in 32-bit Windows is typically highly fragmented, and the more DLLs a process uses the worse this is. Also, JVMs typically use a lot of memory, depending on configuration.
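The fragmentation effect described here can be made concrete with a toy calculation (illustrative only: the mapping addresses below are invented, and real ones would come from a tool like Process Explorer). Plenty of address space can be free in total while no single gap is large enough to map the datastore.

```python
def largest_free_gap(mappings, address_space):
    """Return the largest contiguous unmapped range, given (start, size) mappings."""
    gaps, prev_end = [], 0
    for start, size in sorted(mappings):
        gaps.append(start - prev_end)   # gap before this mapping
        prev_end = start + size
    gaps.append(address_space - prev_end)  # gap after the last mapping
    return max(gaps)

MB = 1024 * 1024
user_space = 2048 * MB          # default 32-bit Windows user address space
# Hypothetical DLLs scattered through the address space:
dlls = [(200 * MB, 10 * MB), (700 * MB, 5 * MB), (1300 * MB, 20 * MB)]

free_total = user_space - sum(size for _, size in dlls)
print(free_total // MB)   # → 2013 (over 2 GB free in total...)
print(largest_free_gap(dlls, user_space) // MB)   # → 728 (...but no bigger single gap)
```

So a datastore larger than ~728 MB would fail to attach in this hypothetical process even though ~2 GB of address space is nominally free.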
Your options to solve this are really limited to:
1. Significantly reduce the memory used by the JVM (may not be possible).
2. Use a local client/server connection from Java instead of a direct mode connection. To minimise the performance overhead make sure you use the optimised ShmIpc connectivity rather than TCP/IP. Even with this there is likely to be a >50% reduction in performance compared to direct mode.
3. Switch to 64-bit Windows, 64-bit TimesTen and 64-bit Java. Even without adding any extra memory to your machine this will very likely fix the problem.
Option (3) is by far the best one.
Regards,
Chris -
Cannot reconnect to Timesten Data Store
Hi,
I'm using TimesTen 6.0.1 on Linux Red Hat AS 4.
My application connects well to the data store.
Then when I do:
/sbin/service tt_tt60 restart
My application is disconnected from the DS (normal), but after the data store is back, my application tries to connect again, and it does not work. I see this error log:
Sep 21 17:05:26 intel1 TimesTen Data Manager 6.0.1.tt60[11638]: ODBC_ERROR: sqldbthread 0: 3- ERROR -1 in sqlDBv2/sqldb_api.c, line 154: Error in connecting to the driver
Sep 21 17:05:26 intel1 TimesTen Data Manager 6.0.1.tt60[11638]: ODBC_ERROR: sqldbthread 0: 4- [TimesTen][TimesTen 6.0.1 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 12 -- file "db.c", lineno 8623, procedure "sbDbConnect()" *** ODBC Error/Warning= 08001, Additional Error/Warning= 837
errno 12 points to not enough memory
PermSize in sys.odbc.ini is 1800
If I change PermSize to 1000, it works well, the application reconnects well to the data store. But reducing PermSize would severely affect the max number of entries we can put in the database.
So, is there a way to recover from such a case without changing PermSize?
Christophe
I think we need more information to understand exactly what is happening here. Here are a few observations and suggestions:
1. When the datastore gets 'invalidated' due to the main daemon being stopped (restarted), and your application gets the error (846 or 994), what does it do? It must (a) issue a rollback on all its connections to the DS and (b) issue a disconnect on all its connections to the DS. Until this is done, the shared memory segment for the DS will remain in existence and 'attached' to the application process. Only when the above has been done will the DS segment be released. You could use the O/S ipcs command before and after the daemon restart to see if the DS segment remains or not.
2. The O/S is denying the request to attach a new segment to the application process. The error is ENOMEM, which does not necessarily mean not enough actual memory. Most likely some kernel parameter or process limit is set too low, such that when you try to attach the second segment (since I suspect, as described above, that the first segment is still attached to the application) it fails. Or it may truly be that you do not have enough address space left (is this 32-bit or 64-bit?) to attach the new segment.
I would recommend investigating the issues I describe in (1) above as a first step.
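In application code, the (a) rollback then (b) disconnect sequence from point (1) looks roughly like the sketch below. This is a hedged outline using a stand-in connection object: the rollback-then-disconnect order comes from the advice above, while the function name and reconnect callback are illustrative, not TimesTen API.

```python
def recover_after_invalidation(connections, reconnect):
    """Roll back and close every connection, then open replacements.

    Only after the last connection disconnects is the invalidated
    shared-memory segment released, allowing a fresh attach to succeed.
    """
    for conn in connections:
        conn.rollback()   # (a) roll back open work on the invalidated datastore
        conn.close()      # (b) disconnect; the last close releases the segment
    return [reconnect() for _ in connections]
```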
Chris -
Cannot attach data store shared-memory segment using JDBC (TT0837)
I'm currently evaluating TimesTen during which I've encountered some problems.
All of a sudden my small Java app fails to connect to the TT data source.
Though I can still connect to the data source using ttisql.
Everything worked without problems until I started poking around in the ODBC administrator (Windows 2K).
I wanted to increase permanent data size so I changed some of the parameters.
After that my Java app fails to connect with the following message:
DriverManager.getConnection("jdbc:timesten:direct:dsn=rundata_tt60;OverWrite=0;threadsafe=1;durablecommits=0")
trying driver[className=com.timesten.jdbc.TimesTenDriver,com.timesten.jdbc.TimesTenDriver@addbf1]
SQLException: SQLState(08001) vendor code(837)
java.sql.SQLException: [TimesTen][TimesTen 6.0.4 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 8 -- file "db.c", lineno 8846, procedure "sbDbConnect()"
The TT manual hasn't really provided any good explanation what the error code means.
Obviously I've already tried restoring the original ODBC parameters, without any luck.
Ideas..anyone?
/Peter
Peter,
Not sure if you have resolved this issue or not. In any case, here are some information to look into.
- On Windows 32-bit, the allocation of shared data segments doesn't work the same way as on Unix and Linux. As a result, the maximum TimesTen database size one can allocate is much smaller on the Windows platform than on other platforms.
- Windows error 8 means ERROR_NOT_ENOUGH_MEMORY: not enough storage is available to process this command.
- TimesTen TT0837 says the system was unable to attach a shared memory segment during a data store creation or data store connection operation.
- What was the largest successful perm-size and temp-size you used when allocating the TimesTen database?
* One explanation for why you were able to connect using ttIsql is that it doesn't use many DLLs, whereas your Java application typically has a lot more DLLs.
* As a troubleshooting step, you can try reducing your temp size to a very small value and see if you can connect to the data store. Eventually, you may need to reduce your perm size to get Windows to fit the shared data segment in the process space.
By the way the TimesTen documentation has been modified to document this error as follows:
Unable to attach to a shared memory segment during a data store creation or data store connection operation.
You will receive this error if a process cannot attach to the shared memory segment for the data store.
On UNIX or Linux systems, the shmat call can fail due to one of:
- The application does not have access to the shared memory segment. In this case the system error code is EACCESS.
- The system cannot allocate memory to keep track of the allocation, or there is not enough data space to fit the segment. In this case the system error code is ENOMEM.
- The attach exceeds the system limit on the number of shared memory segments for the process. In this case the system error code is EMFILE.
It is possible that some UNIX or Linux systems will have additional possible causes for the error. The shmat man page lists the possibilities.
On Windows systems, the error could occur because of one of these reasons:
- Access denied
- The system has no handles available.
- The segment cannot be fit into the data section
Hope this helps.
-scheung -
836: Cannot create data store shared-memory segment, error 22
Hi,
I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
I am currently evaluating TimesTen for a global investment organisation. We currently have a large Datawarehouse, where we utilise summary views and query rewrite, but have isolated some data that we would like to store in memory, and then be able to
report on it through a J2EE website.
We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution but we need to ensure that there are no limits in relation to maximum size. Looking through the documentation, it appears that the
only limit on a 64bit system is the actual physical memory on the box. Sounds good, but we want to prove it since we would like to see how the application scales when we store about 30gb (the limit on our UAT environment is 32gb). The ultimate goal is to
see if we can store about 50-60gb in memory.
Is this correct? Or are there any caveats in relation to this?
We have been able to get our Data Store to store 8gb of data, but want to increase this. I am assuming that the following error message is due to us not changing /etc/system on the box:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32gb of memory and the 12 processors on the box?
It's quite a big deal for us to bounce the UAT unix box, so I want to be sure that I have factored in all changes that would ensure the following:
* Existing Oracle Database instances are not adversely impacted
* We are able to create a Data Store which is able to fully utilise the physical memory on the box
* We don't need to change these settings for quite some time, and still be able to complete our evaluation
We are currently in discussion with our in-house Oracle team, but need to complete this process before contacting Oracle directly; help with the above request would speed this process up.
The current /etc/system settings are below, and I have put the current machine's settings as comments at the end of each line.
Can you please provide the recommended settings to fully utilise the existing 32gb on the box?
Machine
## I have taken the minimum prerequisites for TimesTen and contrasted them with the machine's current settings:
SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
FJSV,SPARC64-V
System Configuration: Sun Microsystems sun4us
Memory size: 32768 Megabytes
12 processors
/etc/system
set rlim_fd_max = 1080 # Not set on the machine
set rlim_fd_cur=4096 # Not set on the machine
set rlim_fd_max=4096 # Not set on the machine
set semsys:seminfo_semmni = 20 # machine has 0x42, Decimal = 66
set semsys:seminfo_semmsl = 512 # machine has 0x81, Decimal = 129
set semsys:seminfo_semmns = 10240 # machine has 0x2101, Decimal = 8449
set semsys:seminfo_semmnu = 10240 # machine has 0x2101, Decimal = 8449
set shmsys:shminfo_shmseg=12 # machine has 1024
set shmsys:shminfo_shmmax = 0x20000000 # machine has 8,589,934,590. The hexadecimal translates into 536,870,912
$ /usr/sbin/sysdef | grep -i sem
sys/sparcv9/semsys
sys/semsys
* IPC Semaphores
66 semaphore identifiers (SEMMNI)
8449 semaphores in system (SEMMNS)
8449 undo structures in system (SEMMNU)
129 max semaphores per id (SEMMSL)
100 max operations per semop call (SEMOPM)
1024 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)
Hi,
I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
Your main problem here is that the value for shmsys:shminfo_shmmax in /etc/system is currently set to 8 Gb, thereby limiting the maximum size of a single shared memory segment (and hence TimesTen datastore) to 8 Gb. You need to increase this to a suitable value (maybe 32 Gb in your case). While you are doing that it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other possibly than a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
You should also be sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
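As a quick sanity check on the sizes involved (plain arithmetic, nothing TimesTen-specific): the /etc/system template above sets shmmax to 0x20000000, which is only 512 MiB, while the machine's current value of 8,589,934,590 bytes is the ~8 GiB cap that matches the 8gb datastore limit seen here.

```python
GiB = 1024 ** 3

template_shmmax = 0x20000000          # value in the /etc/system template above
machine_shmmax = 8_589_934_590        # value reported for the box

print(template_shmmax)                # → 536870912 (512 MiB, as noted in the post)
print(round(machine_shmmax / GiB, 1)) # → 8.0 (so segments are capped at ~8 GiB)

# A value that would accommodate a ~32 GiB segment:
print(hex(32 * GiB))                  # → 0x800000000
```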
TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
Regards, Chris -
How to create a data store with PermSize > 512MB on WIN32?
Hi!
How can I create a data store with PermSize > 512MB on WIN32? If I set PermSize > 512MB on WIN32, the data store becomes invalid.
Thanks for the details. As I mentioned, due to issues with the way Windows manages memory and address space it is generally not possible to create a datastore larger than around 700 Mb on WIN32. Sometimes you may be lucky and get close to 1 GB, but usually not. The issue is as follows: on Windows, a TimesTen datastore is a shared mapping created from memory backed by the paging file. This shared mapping must be mapped into the process address space as a contiguous range of addresses. So, if you have a 1 GB datastore then your process needs to have a contiguous 1 GB range of addresses free in order to be able to connect to (map) the datastore. Unfortunately the default behaviour of Windows is to map DLLs into a process address space all over the place, and any process that uses any significant number of DLLs is very unlikely to have a contiguous free address range larger than 500-700 Mb.
This problem does not exist with other O/Ss such as Unix or Linux, nor does it exist with 64-bit Windows. So, if you need to use a cache or datastore larger than around 700 Mb you need to use either 64-bit Windows or another O/S. Note that even on 32-bit Linux/Unix TimesTen datastores are limited to a maximum size of 2 GB. If you need more than 2 GB you need to use a 64-bit O/S.
Chris -
Got diskgroup space exhausted on DR server
Dear Gurus,
This morning I added a datafile to the production database, which is on ASM, but on the DR side that space was not available in the corresponding diskgroup.
I got the following error in the DR alert log.
ORA-01237: cannot extend datafile 34
ORA-01110: data file 34: '+ASMDB/drgcprod/datafile/ins.342.764351039'
ORA-17505: ksfdrsz:1 Failed to resize file to size 1920000 blocks
ORA-15041: diskgroup space exhausted
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Thu Mar 22 00:14:02 2012
SUCCESS: diskgroup ASMIND was dismounted
Thu Mar 22 00:14:02 2012
Errors in file /oracle/app/oracle/admin/drgcprod/bdump/drgcprod_mrp0_585764.trc:
ORA-01237: cannot extend datafile 34
ORA-01110: data file 34: '+ASMDB/drgcprod/datafile/ins.342.764351039'
ORA-17505: ksfdrsz:1 Failed to resize file to size 1920000 blocks
ORA-15041: diskgroup space exhausted
Thu Mar 22 00:14:02 2012
DBVERSION : 10.2.0.4
OS : IBM AIX 5.3
Please help me resolve this error.
Regards,
Vamsi
It would be good if you assign space to the standby system immediately and apply redo recovery on the standby.
Otherwise use the following blog to resolve it:
http://gavinsoorma.com/2009/06/resize-standby-datafile-if-disk-runs-out-of-space/
Also from MOS - Recovery failed with error 1274 ORA-19502 ORA-27063 skgfospo SVR4 Error 28 [ID 221130.1]
Add Datafile To Standby Database Fails With ORA-19502 [ID 224735.1]
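As an outline of the usual fix (a sketch, not a tested procedure: the diskgroup name is taken from the alert log above, while the disk path is hypothetical): first give the DR diskgroup more space, then restart managed recovery so the blocked datafile resize can be replayed.

```sql
-- On the standby ASM instance: add a disk (or otherwise free space) in the
-- diskgroup that filled up; '/dev/rhdisk10' is a hypothetical AIX raw device.
ALTER DISKGROUP ASMDB ADD DISK '/dev/rhdisk10';

-- On the standby database instance: restart managed recovery so the
-- ORA-01237 resize of datafile 34 can be applied.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```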
Hope this helps.
Also, please close your answered threads:
Handle: 844795
Status Level: Newbie (5)
Registered: Mar 16, 2011
Total Posts: 170
Total Questions: 84 (74 unresolved)
https://forums.oracle.com/forums/ann.jspa?annID=718
Edited by: 909592 on Mar 22, 2012 1:01 PM -
Error while activating Data Store Object
Hi Gurus,
When I try to activate a DataStore object, I get the error message:
The creation of the export DataSource failed
No authorization to logon as trusted system (Trusted RC=2).
No authorization to logon as trusted system (Trusted RC=2).
Error when creating the export DataSource and dependent Program ID 4SYPYCOPQ94IXEGA3739L803Z retrieved for DataStore object ZODS_PRA

Hi,
you are facing an issue with your source system 'myself'; check and repair it. Also check that the communication user (normally ALEREMOTE) has all the permissions needed.
kind regards
Siggi -
Want the data store values to be displayed in input field of form
Hi,
I wanted to know whether there is a possibility of displaying the data in an input field instead of an expression field.
In our model I have used a form with Material Type, Plant and Vendor connected to Data Service 1, which gives the output in a chart, and I also have a second input form with a To (0CALMONTH) combo box connected to Data Service 2, which gives the data in a table.
What I want is to also feed the input fields of the first form into Data Service 2, so the data is fetched based on the inputs of both form1 and form2, i.e. including the To field.
We can connect the port from form1 to DS2, but the problem is that we need to click Submit on both form1 and form2, and it does not give the output according to the inputs of both forms together; it only gives the output for whichever form's Submit button was clicked.
I have followed help.sap for the data store procedure.
So I have used a data store where I store the values of form1 and call them in an expression field, which I add and hide in form2 (hidden because the user should not see those input fields).
Formula used in data store for expression field.
IF(CONTAINS(STORE@matltype,@Material_Type),STORE@matltype,STORE@matltype &(IF(LEN(STORE@matltype)>0,'; ',''))&@Material_Type)
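For comparison, a replace-style store formula, sketched with the same functions used above (untested, and reusing the field names from this thread), would overwrite the stored value with the latest non-empty input instead of appending:

```
IF(LEN(@Material_Type)>0, @Material_Type, STORE@matltype)
```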
What is happening is that the value gets concatenated whenever I change the values in the input field. I want the values to be replaced as soon as I change the input field of form1 (if I use the replace function it is not working), and it would also be preferable to use an input field instead of an expression field.
I would also like to know whether there is any alternative solution for the above requirement instead of a data store.
Thank You
K.Srinivas

Hi,
I have Form1 connected to Data Service 1, which displays the data in a chart, and another Form2 connected to Data Service 2, which displays the data in a table.
In form1 there are Material Type, Plant and Vendor, and in form2 I have To (0CALMONTH). So now I also want the Form1 inputs for the table, which gets its data from Data Service 2.
As I said earlier, connecting Form1 to Data Service 2 does not fetch the data correctly, because if I click Submit in form1 I get the data for those 3 inputs, and then I have to click the Submit button in form2 after giving its input, which shows the data accordingly; this does not fulfil the requirement.
So I wanted a solution for that. For that reason I used the data store, following the procedure from help.sap as I said in my message above.
If the data store is the suggested approach, then I want to display the data in a text input field instead of an expression field, and it should replace the previous values as soon as the values change in the data store.
I hope I have been clear; if not, I am ready to explain again.
Thank You
K.Srinivas
Maybe you are looking for
-
How to provide a LINK on normal TEXT field in read only mode
Hi, I have a TEXT item that is used only for URL purposes, so a user can enter something like http://www.abc.com in that field. When I am in read-only mode for that item, I want a link on that text as well. How can I do that? Thanks, Deepak
-
Two OS, Two HD, Two LR, One Computer Conumdrum
Disclaimer: I am very new to LR so please bear with me. Originally I had set up a Windows XP operating system on a single HD on my computer on which was and is installed LR3.3. I added a second HD running Windows 7 and installed a second copy of LR3
-
Tag '1' in "Programs" category
Hello. I have no updates, but tag is on "Programs" category, how can i fix/remove it?
-
Moving contexts between ASA firewalls
Is there a recommended process to move ASA contexts from one firewall to another with the minimal amount of downtime? Can the configuration file be moved from one firewall to another, and the context created on the destination firewall specifying the
-
ITunes file was received damaged
A file I purchased on ITunes was damaged. Sound breaks up for ~1/4 second at 1:24 into the track. How do I get a replacement?