Restoring a Data Store with a Large PermSize to a Smaller PermSize
I have a backup of a data store created by ttBackup. I'd like to restore that data store into a data store with a smaller PermSize. The original data should fit in the new data store. However, I'm encountering errors doing this.
I've done the following:
- Ran ttBackup on the source data store.
- The source data store has a PermSize of 16 GB, but its backup file is only about 4.5 GB.
- On the target machine, I created an entry in the odbc.ini for the new data store, but with a PermSize=6144
- On the target machine, I then ran: ttIsql target_dsn_name
- This works okay and ipcs shows a shared memory segment with a size that correlates to the PermSize=6144.
- Then that data store is ttDestroy'ed to get it out of the way.
- Next I ran: ttRestore -fname source_dsn_name -connstr "DSN=target_dsn_name;Preallocate=1;PermSize=6144;TempSize=120" -dir . -noconn
- This succeeds, but only because "-noconn" was specified.
- When I first try to connect to the data store by running: ttIsql "DSN=target_dsn_name;PermSize=6144;TempSize=120"
- It fails with the following error:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
- The tterror.log contains:
19:59:05.54 Err : : 3810: TT14000: TimesTen daemon internal error: Error 22 creating shared segment, KEY 0x0401a8b2
19:59:05.54 Err : : 3810: -- OS reports invalid shared segment size
19:59:05.54 Err : : 3810: -- Confirm that SHMMAX kernel parameter is set > datastore size
19:59:05.54 Err : : 3820: subd: Error identified in [sub.c: line 3188]
19:59:05.54 Err : : 3820: subd: (Error 836): TT0836: Cannot create data store shared-memory segment, error 22 -- file "db.c", lineno 9342, procedure "sbDbConnect"
19:59:05.54 Err : : 3820: file "db.c", lineno 9342, procedure "sbDbConnect"
19:59:05.54 Warn: : 3820: subd: connect trouble, rc 1, reason 836
19:59:05.54 Err : : 3820: Err 836: TT0836: Cannot create data store shared-memory segment, error 22 -- file "db.c", lineno 9342, procedure "sbDbConnect"
19:59:05.54 Err : : 3810: TT14000: TimesTen daemon internal error: Could not send 'manage' request to subdaemon rc 400 err1 703 err2 836
19:59:06.45 Warn: : 3810: 3820 ------------------: subdaemon process exited
Note that on the target machine, total shared memory is configured for only 10 GB, which is smaller than the size of the original data store.
Hi Brian,
ttRestore cannot be used for this purpose. The restored datastore will always have the PermSize that was in effect when it was backed up; you cannot shrink an existing datastore. If you need to move the tables and data to a store with a smaller PermSize (assuming they will fit, of course) then you need to use ttMigrate instead.
Chris
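The ttMigrate-based approach Chris describes can be sketched roughly as follows. This is only a sketch: the DSN names and file path are placeholders, and you should check the ttMigrate reference for your release before relying on the exact flags.

```shell
# Sketch of the save/restore cycle with ttMigrate (names are placeholders).
# 1. On the source instance: export the objects and data to a migrate file.
ttMigrate -c "DSN=source_dsn_name" /backups/source.migrate

# 2. On the target: define target_dsn_name in odbc.ini with the smaller
#    PermSize (e.g. PermSize=6144), then restore into it.
ttMigrate -r "DSN=target_dsn_name" /backups/source.migrate
```

Unlike ttRestore, this recreates the objects inside a freshly created datastore, so the target's smaller PermSize attribute is honored (provided the data actually fits).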
Similar Messages
-
How do I create a data store with PermSize > 512MB on WIN32?
Hi!
How do I create a data store with PermSize > 512MB on WIN32? If I set PermSize > 512MB on WIN32, the data store becomes invalid.

Thanks for the details. As I mentioned, due to the way Windows manages memory and address space, it is generally not possible to create a datastore larger than around 700 MB on WIN32. Sometimes you may be lucky and get close to 1 GB, but usually not. The issue is as follows: on Windows, a TimesTen datastore is a shared mapping created from memory backed by the paging file. This shared mapping must be mapped into the process address space as a contiguous range of addresses. So, if you have a 1 GB datastore, your process needs a contiguous 1 GB range of free addresses in order to connect to (map) the datastore. Unfortunately, the default behaviour of Windows is to map DLLs all over a process's address space, and any process that uses a significant number of DLLs is very unlikely to have a contiguous free address range larger than 500-700 MB.
This problem does not exist on other operating systems such as Unix or Linux, nor does it exist on 64-bit Windows. So, if you need a cache or datastore larger than around 700 MB, you need to use either 64-bit Windows or another O/S. Note that even on 32-bit Linux/Unix, TimesTen datastores are limited to a maximum size of 2 GB; if you need more than 2 GB you need a 64-bit O/S.
Chris -
How do I create a data store with PermSize = 4096MB on HP-UX 64-bit?
Hi!
I use TimesTen 7.0.2:
TimesTen Release 7.0.2.0.0 (64 bit HPUX/IPF) (tt70_1:17001) 2007-05-02T05:22:15Z
Instance admin: root
Instance home directory: /opt/TimesTen/tt70_1
Daemon home directory: /var/TimesTen/tt70_1
Access control enabled.
I set PermSize = 4096MB for my new data store. Then I tried to create it:
ttIsql -connStr "DSN=tt_rddb1;UID=ttsys;PWD=ttsys;OraclePWD=ttsys;Overwrite=1" -e "exit;"
But the operation failed:
836: Cannot create data store shared-memory segment, error 22.
Can I create a data store of this size on HP-UX 64-bit?

Is largefiles enabled? I believe you can check with fsadm -F vxfs /filesystem
Also, please understand that PermSize is not the only attribute affecting the size of the TimesTen shared memory segment. The actual resulting size is approximately:
PermSize + TempSize + LogBuffSize + Overhead
So you would need to configure shmmax to be > 4 GB. Have you tried setting it to, say, 8 GB (just for testing purposes, to see if it eliminates the error)? -
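Given that formula, a quick sanity check of shmmax against the required segment size can be scripted. The 64 MB overhead used here is an assumed round figure, not an exact value, and the sizes are examples:

```shell
# Rough shared-segment sizing check (the overhead figure is an assumption).
PERM_MB=4096
TEMP_MB=512
LOGBUF_MB=64
OVERHEAD_MB=64   # assumed; real overhead varies by release and settings

NEED_MB=$((PERM_MB + TEMP_MB + LOGBUF_MB + OVERHEAD_MB))
NEED_BYTES=$((NEED_MB * 1024 * 1024))
echo "segment needs about ${NEED_MB} MB (${NEED_BYTES} bytes)"

# Compare with the current kernel limit (Linux path shown).
SHMMAX=$(cat /proc/sys/kernel/shmmax 2>/dev/null || echo 0)
[ "$SHMMAX" -ge "$NEED_BYTES" ] || echo "kernel.shmmax ($SHMMAX) is too small"
```

If shmmax is below the computed figure, a connect attempt fails with the TT836 "error 22" (EINVAL) seen above.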
Hi,
I found the thread "Cannot attach data store shared-memory segment using JDBC (TT0837)", but it didn't help me.
I encounter this issue on Windows XP, and the application gets its connection from a JBoss data source.
url=jdbc:timesten:direct:dsn=test;uid=test;pwd=test;OraclePWD=test
username=test
password=test
Error information:
java.sql.SQLException: [TimesTen][TimesTen 11.2.1.5.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store
shared-memory segment, error 8 -- file "db.c", lineno 9818, procedure "sbDbConnect"
at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3295)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3444)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3409)
at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:813)
at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1807)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
What confuses me is that if I use plain JDBC, there is no such error:
Connection conn = DriverManager.getConnection("url", "username", "password");
Regards,
Nesta

I think error 8 is:
net helpmsg 8
Not enough storage is available to process this command.
If I'm wrong I'm happy to be corrected. If you reduce the PermSize and TempSize of the datastore (just as a test), does this allow JBoss to load it?
You don't say whether this is 32-bit or 64-bit Windows. If it's the former, the following information may be helpful.
"Windows manages virtual memory differently than all other OSes. The way Windows sets up memory for DLLs guarantees that the virtual address space of each process is badly fragmented. Other OSes avoid this by densely packing shared libraries.
A TimesTen database is represented as a single contiguous shared segment. So for an application to connect to a database of size n, there must be n bytes of unused contiguous virtual memory in the application's process. Because of the way Windows manages DLLs this is sometimes challenging. You can easily get into a situation where simple applications that use few DLLs (such as ttIsql) can access a database fine, but complicated apps that use many DLLs can not.
As a practical matter this means that TimesTen direct-mode in Windows 32-bit is challenging to use for those with complex applications. For large C/C++ applications one can usually "rebase" DLLs to reduce fragmentation. But for Java based applications this is more challenging.
You can use tools like the free "Process Explorer" to see the used address ranges in your process.
Naturally, 64-bit Windows basically resolves these issues by providing a dramatically larger set of addresses." -
Redhat: TT0837: Cannot attach data store shared-memory segment, error 12
Customer has two systems, one Solaris and one Linux. We have six DSNs, with one DSN's PermSize at 1.85G. Both systems are 32-bit. After migrating from TT 6.0 to 11.2, I cannot get replication working on the Linux system for the 1.85G DSN. The Solaris system is working correctly. I've been able to duplicate the issue in our lab as well. If I drop the PermSize down to 1.0G, replication starts. I've tried changing multiple parameters, including setting up HugePages.
What else could I be missing? Decreasing the PermSize is not an option for this customer. Going to a full 64-bit system is on our development roadmap but is at least a year away due to other commitments.
This is my current linux lab configuration.
ttStatus output for the failed Subscriber DSN and a working DynamicDB DSN. As you can see, the policy is set to "Always" but it has no Subdaemon or Replication processes running.
Data store /space/Database/db/Subscriber
There are no connections to the data store
Replication policy : Always
Replication agent is running.
Cache Agent policy : Manual
Data store /space/Database/db/DynamicDB
There are 14 connections to the data store
Shared Memory KEY 0x5602000c ID 1826586625 (LARGE PAGES, LOCKED)
Type PID Context Connection Name ConnID
Replication 88135 0x56700698 LOGFORCE 4
Replication 88135 0x56800468 REPHOLD 3
Replication 88135 0x56900468 TRANSMITTER 5
Replication 88135 0x56a00468 REPLISTENER 2
Subdaemon 86329 0x08472788 Manager 2032
Subdaemon 86329 0x084c5290 Rollback 2033
Subdaemon 86329 0xd1900468 Deadlock Detector 2037
Subdaemon 86329 0xd1a00468 Flusher 2036
Subdaemon 86329 0xd1b00468 HistGC 2039
Subdaemon 86329 0xd1c00468 Log Marker 2038
Subdaemon 86329 0xd1d00468 AsyncMV 2041
Subdaemon 86329 0xd1e00468 Monitor 2034
Subdaemon 86329 0xd2000468 Aging 2040
Subdaemon 86329 0xd2200468 Checkpoint 2035
Replication policy : Always
Replication agent is running.
Cache Agent policy : Manual
Summary of Perm and Temp Sizes of each system.
PermSize=100
TempSize=50
PermSize=100
TempSize=50
PermSize=64
TempSize=32
PermSize=1850 => Subscriber
TempSize=35 => Subscriber
PermSize=64
TempSize=32
PermSize=200
TempSize=75
[SubscriberDir]
Driver=/opt/SANTone/msc/active/TimesTen/lib/libtten.so
DataStore=/Database/db/Subscriber
AutoCreate=0
DurableCommits=0
ExclAccess=0
LockLevel=0
PermWarnThreshold=80
TempWarnThreshold=80
PermSize=1850
TempSize=35
ThreadSafe=1
WaitForConnect=1
Preallocate=1
MemoryLock=3
###MemoryLock=0
SMPOptLevel=1
Connections=64
CkptFrequency=300
DatabaseCharacterSet=TIMESTEN8
TypeMode=1
DuplicateBindMode=1
msclab3201% cat ttendaemon.options
-supportlog /var/ttLog/ttsupport.log
-maxsupportlogsize 500000000
-userlog /var/ttLog/userlog
-maxuserlogsize 100000000
-insecure-backwards-compat
-verbose
-minsubs 12
-maxsubs 60
-server 16002
-enableIPv6
-linuxLargePageAlignment 2
msclab3201# cat /proc/meminfo
MemTotal: 66002344 kB
MemFree: 40254188 kB
Buffers: 474104 kB
Cached: 19753148 kB
SwapCached: 0 kB
HugePages_Total: 2000
HugePages_Free: 2000
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
## Before loading Subscriber Dsn
msclab3201# ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0xbc0101d6 1703411712 ttadmin 660 1048576 1
0x79010649 24444930 root 666 404 0
## After loading Subscriber Dsn
msclab3201# ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0xbc0101d6 1703411712 ttadmin 660 1048576 2
0x7f020012 1825964033 ttadmin 660 236978176 2
0x79010649 24444930 root 666 404 0
msclab3201#
msclab3201# sysctl -a | grep huge
vm.nr_hugepages = 2000
vm.nr_hugepages_mempolicy = 2000

The size of these databases is very close to the limit for 32-bit systems, and you are almost certainly running into address-space issues, given that 11.2 has a slightly larger footprint than 6.0. 32-bit is really 'legacy' nowadays and you should move to a 64-bit platform as soon as you are able; that will solve your problems. I do not think there is any other solution (other than reducing the size of the database).
Chris -
703: Subdaemon connect to data store failed with error TT9999
All,
I'm getting the following error whilst trying to connect to a TimesTen DB:
connect "DSN=my_cachedb";
703: Subdaemon connect to data store failed with error TT9999
In the tterrors.log:
16:39:24.71 Warn: : 2568: 3596 ------------------: subdaemon process exited
16:39:24.71 Warn: : 2568: 3596 exited while connected to data store '/u01/ttdata/datastores/my_cachedb' shm 33554529 count=1
16:39:24.71 Warn: : 2568: daRecovery: subdaemon 3596, managing data store, failed: invalidate (failcode=202)
16:39:24.71 Warn: : 2568: Invalidating the data store (failcode 202, recovery for 3596)
16:39:24.72 Err : : 2568: TT14000: TimesTen daemon internal error: Could not send 'manage' request to subdaemon rc -2 err1 703 err2 9999
16:39:24.72 Warn: : 2568: 3619 Subdaemon reports creation failure
16:39:24.72 Err : : 2568: TT14000: TimesTen daemon internal error: Deleting 3619/0x1558650/'/u01/ttdata/datastores/my_cachedb' - from association table - not found
16:39:24.72 Err : : 2568: TT14004: TimesTen daemon creation failed: Could not del from dbByPid internal table
16:39:24.81 Warn: : 2568: child process 3596 terminated with signal 11
16:39:25.09 Err : : 2568: TT14000: TimesTen daemon internal error: daRecovery for 3619: No such data store '/u01/ttdata/datastores/my_cachedb'
I've checked and the datastore does exist and is owned by the timesten UNIX user.
ttversion:
TimesTen Release 11.2.2.2.0 (64 bit Linux/x86_64) (tt1122:53396) 2011-12-23T09:26:28Z
Instance admin: timesten
Instance home directory: /home/timesten/TimesTen/tt1122
Group owner: timesten
Daemon home directory: /home/timesten/TimesTen/tt1122/info
PL/SQL enabled.
Datastore definition from sys.odbc.ini:
[my_cachedb]
Driver=/home/timesten/TimesTen/tt1122/lib/libtten.so
DataStore=/u01/ttdata/datastores/my_cachedb
LogDir=/u01/ttdata/logs
PermSize=40
TempSize=32
DatabaseCharacterSet=AL32UTF8
OracleNetServiceName=testdb
Kernel parameters from sysctl -a:
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
Memory / SWAP:
MemTotal: 2050784 kB
SWAP: /dev/mapper/VolGroup00-LogVol01 partition 4095992
I'm new to TimesTen and I'm planning to evaluate it to see if it could solve an issue we're having. Any suggestions would be much appreciated.
Thanks,
Ian.

Hi Ian,
Can you please answer the following / provide the following information:
1. What are your kernel parameters relating to semaphores set to? Is anything else on the machine using significant numbers of semaphores?
2. Please provide the output of the following shell commands:
ls -ld /u01
ls -ld /u01/ttdata
ls -ld /u01/ttdata/datastores
ls -ld /u01/ttdata/logs
3. Please provide an excerpt of the detailed message log (ttmesg.log) between around 16:38 and 16:40 (i.e. from a little while before the problem until after the problem).
Thanks,
Chris -
Cannot reconnect to Timesten Data Store
Hi,
I'm using TimesTen 6.0.1 on Linux Red Hat AS 4.
My application connects well to the data store.
Then when I do:
/sbin/service tt_tt60 restart
My application is disconnected from the DS (normal), but after the data store comes back, my application tries to connect again and it does not work. I see this error log:
Sep 21 17:05:26 intel1 TimesTen Data Manager 6.0.1.tt60[11638]: ODBC_ERROR: sqldbthread 0: 3- ERROR -1 in sqlDBv2/sqldb_api.c, line 154: Error in connecting to the driver
Sep 21 17:05:26 intel1 TimesTen Data Manager 6.0.1.tt60[11638]: ODBC_ERROR: sqldbthread 0: 4- [TimesTen][TimesTen 6.0.1 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 12 -- file "db.c", lineno 8623, procedure "sbDbConnect()" *** ODBC Error/Warning= 08001, Additional Error/Warning= 837
errno 12 indicates not enough memory
PermSize in sys.odbc.ini is 1800
If I change PermSize to 1000, it works well; the application reconnects to the data store. But reducing PermSize would severely limit the maximum number of entries we can put in the database.
So, is there a way to recover from such a case without changing PermSize?
Christophe

I think we need more information to understand exactly what is happening here. Here are a few observations and suggestions:
1. When the datastore gets 'invalidated' due to the main daemon being stopped (restarted) and your application gets the error (846 or 994), what does it do? It must (a) issue a rollback on all its connections to the DS and (b) issue a disconnect on all its connections to the DS. Until this is done, the shared memory segment for the DS will remain in existence and 'attached' to the application process; only when the above has been done will the DS segment be released. You could use the O/S ipcs command before and after the daemon restart to see whether the DS segment remains.
2. The O/S is denying the request to attach a new segment to the application process. The error is ENOMEM, which does not necessarily mean not enough actual memory. Most likely some kernel parameter or process limit is set too low, so that when you try to attach the second segment (since I suspect, as described above, that the first segment is still attached to the application) it fails. Or it may truly be that you do not have enough address space left (is this 32-bit or 64-bit?) to attach the new segment.
I would recommend investigating the issues I describe in (1) above as a first step.
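The ipcs check suggested in (1) can look like the sketch below. The KEY value is a placeholder borrowed from the earlier log excerpt; substitute the key your own daemon reports:

```shell
# Is the old datastore segment still around, and how many processes are
# attached to it? (KEY is a placeholder; use the one from your own logs.)
KEY=0x0401a8b2
# ipcs -m columns: key shmid owner perms bytes nattch status
ipcs -m | awk -v k="$KEY" '$1 == k { print "segment present, nattch=" $6 }'
```

Run it before the daemon restart and again after the application has issued its rollback and disconnect; the line should disappear (or nattch should drop) once the segment is actually released.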
Chris -
925: Cannot create data store semaphores (Invalid argument)
I'm trying to connect to TimesTen, but I'm getting this error.
I have looked at other similar discussions, but I have not yet been able to solve the problem.
[timesten@atd info]$ ttisql "dsn=tpch"
Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "dsn=tpch";
925: Cannot create data store semaphores (Invalid argument)
703: Subdaemon connect to data store failed with error TT925
The command failed.
Done.
Here is my information.
[tpch]
Driver=/home_sata/timesten/TimesTen/tt1122/lib/libtten.so
DataStore=/home_sata/timesten/TimesTen/tt1122/tpch/tpch
LogDir=/home_sata/timesten/TimesTen/tt1122/tpch/logs
PermSize=1024
TempSize=512
PLSQL=1
DatabaseCharacterSet=US7ASCII
kernel.sem = 400 32000 512 5029
kernel.shmmax=68719476736
kernel.shmall=16777216
[timesten@atd info]$ cat /proc/meminfo
MemTotal: 297699764 kB
MemFree: 96726036 kB
Buffers: 582996 kB
Cached: 155831636 kB
SwapCached: 0 kB
Active: 115729396 kB
Inactive: 78767560 kB
Active(anon): 44040440 kB
Inactive(anon): 8531544 kB
Active(file): 71688956 kB
Inactive(file): 70236016 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 112639992 kB
SwapFree: 112639992 kB
Dirty: 160 kB
Writeback: 0 kB
AnonPages: 38082348 kB
Mapped: 15352480 kB
Shmem: 14489676 kB
Slab: 3993152 kB
SReclaimable: 3826768 kB
SUnreclaim: 166384 kB
KernelStack: 18344 kB
PageTables: 245352 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 261457104 kB
Committed_AS: 74033552 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 903384 kB
VmallocChunk: 34205870424 kB
HardwareCorrupted: 0 kB
AnonHugePages: 35538944 kB
HugePages_Total: 32
HugePages_Free: 32
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 6384 kB
DirectMap2M: 2080768 kB
DirectMap1G: 299892736 kB
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 67108864
max total shared memory (kbytes) = 67108864
min seg size (bytes) = 1

The error message suggests that the system is running out of semaphores, although plenty seem to be configured:
kernel.sem = 400 32000 512 5029
Could it be that other programs on this machine, running as this user, are using semaphores?
Have you made changes to the kernel parameters without making them permanent with
# /sbin/sysctl -p
or a reboot?
If you have run # /sbin/sysctl -p, have you recycled the TT daemon with
$ ttDaemonAdmin -restart
so that TT picks up the new settings?
Tim -
838 Cannot get data store shared memory segment error in Timesten
Hi Chris,
This is Shalini. I have mailed you for the last two days regarding this issue. You asked me for a few details; here are the answers:
1. Have you modified anything after the TimesTen installation? - No, I didn't change anything.
2. What are the three values under physical memory? - Total - 2036 MB, Cached - 1680 MB, Free - 12 MB
3. ttmesg.log and tterrors.log? -
tterrors.log::
12:48:58.83 Warn: : 1016: 3972 Connecting process reports creation failure
ttmesg.log::
12:48:58.77 Info: : 1016: maind got #12.14, hello: pid=3972 type=library payload=%00%00%00%00 protocolID=TimesTen 11.2.1.3.0.tt1121_32 ident=%00%00%00%00
12:48:58.77 Info: : 1016: maind: done with request #12.14
12:48:58.77 Info: : 1016: maind got #12.15 from 3972, connect: name=c:\timesten\tt1121_32\data\my_ttdb\my_ttdb context= 266e900 user=InstallAdmin pass= dbdev= logdev= logdir=c:\timesten\tt1121_32\logs\my_ttdb grpname= access=%00%00%00%00 persist=%00%00%00%00 flags=@%00%00%01 newpermsz=%00%00%00%02 newtempsz=%00%00%00%02 newpermthresh=Z%00%00%00 newtempthresh=Z%00%00%00 newlogbufsz=%00%00%00%02 newsgasize=%00%00%00%02 newsgaaddr=%00%00%00%00 autorefreshType=%ff%ff%ff%ff logbufparallelism=%00%00%00%00 logflushmethod=%00%00%00%00 logmarkerinterval=%00%00%00%00 connections=%03%00%00%00 control1=%00%00%00%00 control2=%00%00%00%00 control3=%00%00%00%00 ckptrate=%06%00%00%00 connflags=%00%00%00%00 newlogfilesz=%00%00@%01 skiprestorecheck=%00%00%00%00 realuser=InstallAdmin conn_name=my_ttdb ckptfrequency=X%02%00%00 ckptlogvolume=%14%00%00%00 recoverythreads=%03%00%00%00 reqid=* plsql=%ff%ff%ff%ff receiverThreads=%00%00%00%00
12:48:58.77 Info: : 1016: 3972 0266E900: Connect c:\timesten\tt1121_32\data\my_ttdb\my_ttdb a=0x0 f=0x1000040
12:48:58.77 Info: : 1016: permsize=33554432 tempsize=33554432
12:48:58.77 Info: : 1016: logbuffersize=33554432 logfilesize=20971520
12:48:58.77 Info: : 1016: permwarnthresh=90 tempwarnthresh=90 logflushmethod=0 connections=3
12:48:58.77 Info: : 1016: ckptfrequency=600 ckptlogvolume=20 conn_name=my_ttdb
12:48:58.77 Info: : 1016: recoverythreads=3 logbufparallelism=0
12:48:58.77 Info: : 1016: plsql=0 sgasize=33554432 sgaaddr=0x00000000
12:48:58.77 Info: : 1016: control1=0 control2=0 control3=0
12:48:58.79 Info: : 1016: ckptrate=6 receiverThreads=0
12:48:58.79 Info: : 1016: 3972 0266E900: No such data store
12:48:58.79 Info: : 1016: daDbConnect failed
12:48:58.79 Info: : 1016: return 1 833 'no such data store!' arg1='c:\timesten\tt1121_32\data\my_ttdb\my_ttdb' arg1type='S' arg2='' arg2type='S'
12:48:58.79 Info: : 1016: maind: done with request #12.15
12:48:58.79 Info: : 1016: maind got #12.16 from 3972, create: name=c:\timesten\tt1121_32\data\my_ttdb\my_ttdb context= 266e900 user=InstallAdmin pass= dbdev= logdev= logdir=c:\timesten\tt1121_32\logs\my_ttdb grpname= persist=%00%00%00%00 access=%00%00%00%00 flags=@%00%00%01 permsize=%00%00%00%02 tempsize=%00%00%00%02 permthresh=Z%00%00%00 tempthresh=Z%00%00%00 logbufsize=%00 %00%02 logfilesize=%00%00@%01 shmsize=8%f4%c9%06 sgasize=%00%00%00%02 sgaaddr=%00%00%00%00 autorefreshType=%01%00%00%00 logbufparallelism=%00%00%00%00 logflushmethod=%00%00%00%00 logmarkerinterval=%00%00%00%00 connections=%03%00%00%00 control1=%00%00%00%00 control2=%00%00%00%00 control3=%00%00%00%00 ckptrate=%06%00%00%00 connflags=%00%00%00%00 inrestore=%00%00%00%00 realuser=InstallAdmin conn_name=my_ttdb ckptfrequency=X%02%00%00 ckptlogvolume=%14%00%00%00 recoverythreads=%03%00%00%00 reqid=* plsql=%00%00%00%00 receiverThreads=%00%00%00%00
12:48:58.79 Info: : 1016: 3972 0266E900: Create c:\timesten\tt1121_32\data\my_ttdb\my_ttdb p=0x0 a=0x0 f=0x1000040
12:48:58.79 Info: : 1016: permsize=33554432 tempsize=33554432
12:48:58.79 Info: : 1016: logbuffersize=33562624 logfilesize=20971520
12:48:58.80 Info: : 1016: shmsize=113898552
12:48:58.80 Info: : 1016: plsql=0 sgasize=33554432 sgaaddr=0x00000000
12:48:58.80 Info: : 1016: permwarnthresh=90 tempwarnthresh=90 logflushmethod=0 connections=3
12:48:58.80 Info: : 1016: ckptfrequency=600 ckptlogvolume=20 conn_name=my_ttdb
12:48:58.80 Info: : 1016: recoverythreads=3 logbufparallelism=0
12:48:58.80 Info: : 1016: control1=0 control2=0 control3=0
12:48:58.80 Info: : 1016: ckptrate=6, receiverThreads=1
12:48:58.80 Info: : 1016: creating DBI structure, marking in flux for create by 3972
12:48:58.82 Info: : 1016: daDbCreate: about to call createShmAndSem, trashed=-1, panicked=-1, shmSeq=1, name c:\timesten\tt1121_32\data\my_ttdb\my_ttdb
12:48:58.82 Info: : 1016: marking in flux for create by 3972
12:48:58.82 Info: : 1016: create.c:338: Mark in-flux (now reason 1=create pid 3972 nwaiters 0 ds c:\timesten\tt1121_32\data\my_ttdb\my_ttdb) (was reason 1)
12:48:58.82 Info: : 1016: maind: done with request #12.16
12:48:58.83 Info: : 1016: maind got #12.17 from 3972, create complete: name=c:\timesten\tt1121_32\data\my_ttdb\my_ttdb context= 266e900 connid=%00%08%00%00 success=N
12:48:58.83 Info: : 1016: 3972 0266E900: CreateComplete N c:\timesten\tt1121_32\data\my_ttdb\my_ttdb
12:48:58.83 Warn: : 1016: 3972 Connecting process reports creation failure
12:48:58.83 Info: : 1016: About to destroy SHM 560
12:48:58.83 Info: : 1016: maind: done with request #12.17
12:48:58.83 Info: : 1016: maind 12: socket closed, calling recovery (last cmd was 17)
12:48:58.85 Info: : 1016: Starting daRecovery for 3972
12:48:58.85 Info: : 1016: 3972 ------------------: process exited
12:48:58.85 Info: : 1016: Finished daRecovery for pid 3972.
I think "Connecting process reports creation failure" is the error shown.
4, DSN Attributes? -
Earlier I used my_ttdb, which is the SYSTEM DSN, and I also tried creating a user DSN, but it is still not working. I will give the my_ttdb DSN attributes.
I am unable to attach the screenshots in this message. Is there any way to attach them? I should not send the reply through mail from my company, so I only sent this message. You can reply to my mailbox id.
Chris, or anybody, please reply to the message below.
I am having a table called Employee. I have created a view called emp_view.
create view emp_view as
select /*+ INDEX (Employee emp_no) */
emp_no,emp_name
from Employee;
we have another index on Employee table called emp_name.
I need to use the emp_name index in emp_view like below.
select /*+ INDEX (<from employee tables> emp_name) */
emp_no
from emp_view
where emp_name='SSS';
In the hint I tried /*+ INDEX (emp_view.Employee emp_name) */ but it still uses the index specified in the emp_view view, i.e. emp_no. Can anyone please help me resolve this issue?
Edited by: user12154813 on Nov 3, 2009 4:21 AM

DSN Attributes for the two paths you gave are mentioned below.
HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\mt_ttdb:
(Default) - (value not set)
AssertDlg - 1
AutoCreate -1
AutorefreshType -1
CacheGridEnable -0
CacheGridMsgWait -60
CatchupOverride -0
CkptFrequency -600
CkptLogVolume -20
CkptRate -6
ConnectionCharacterSet -US7ASCII
Connections -3
Control1 -0
Control2 -0
Control3 -0
DatabaseCharacterSet -US7ASCII
DataStore - C:\TimesTen\tt1121_32\Data\my_ttdb\my_ttdb
DDLCommitBehavior -0
Description - My Timesten Data store
Diagnostics -1
Driver - C:\TimesTen\tt1121_32\bin\ttdv1121.dll
DuplicateBindMode -0
DurableCommits -0
DynamicLoadEnable -1
DynamicLoadErrorMode -0
ExclAccess -0
ForceConnect -0
InRestore -0
Internal1 -0
Isolation -1
LockLevel -0
LockWait -10.0
LogAutoTruncate -1
LogBuffSize -0
LogBufMB -32
LogBufParallelism -0
LogDir -C:\TimesTen\tt1121_32\Logs\my_ttdb
LogFileSize -20
LogFlushMethod -0
Logging -1
LogMarkerInterval -0
LogPurge -1
MatchLogOpts -0
MaxConnsPerServer -4
NLS_LENGTH_SEMANTICS -BYTE
NLS_NCHAR_CONV_EXCP -0
NLS_SORT -BINARY
NoConnect -0
Overwrite -0
PassThrough -0
PermSize -32
PermWarnThreshold -0
PLSQL - value (-1)
PLSQL_CODE_TYPE -INTERPRETED
PLSQL_CONN_MEM_LIMIT - 100
PLSQL_MEMORY_SIZE - 32
PLSQL_OPTIMIZE_LEVEL - 2
PLSQL_TIMEOUT - 30
Preallocate -0
PrivateCommands -0
QueryThreshold - value (-1)
RACCallback -1
ReadOnly -0
ReceiverThreads -0
RecoveryThreads -3
SkipRestoreCheck -0
SMPOptLevel - value(-1)
SQLQueryTimeout - value(-1)
Temporary -0
TempSize -32
TempWarnThreshold - 0
ThreadSafe -1
TransparentLoad- value(-1)
TypeMode -0
WaitForConnect -1
XAConnection -0
HKEY_CURRENT_USER\SOFTWARE\ODBC\ODBC.INI\my_test:
existing values changed from the above dsn attributes:
TypeMode - value(-1)
RecoveryThreads -0
MaxConnsPerServer -5
LogFileSize -4
LogDir -C:\TimesTen\tt1121_32\Logs\my_test
DataStore - C:\TimesTen\tt1121_32\data\my_test\my_test
DatabaseCharacterSet -AL32UTF8
ConnectionCharacterSet -AL32UTF8
CkptFrequency - value(-1)
CkptLogVolume - value(-1)
CkptRate -value(-1)
Connections -0
CacheGridEnable -1
newly added values:
ServerStackSize - 10
ServersPerDSN -2
other attributes are same for both the dsn's.
Please reply as soon as possible.
Thanks,
Shalini.
Edited by: user12154813 on Nov 3, 2009 11:34 PM
Edited by: user12154813 on Nov 3, 2009 11:37 PM -
Cannot create data store shared-memory segment error
Hi,
Here is some background information:
[ttadmin@timesten-la-p1 ~]$ ttversion
TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
Instance admin: ttadmin
Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
Group owner: ttadmin
Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
PL/SQL enabled.
[ttadmin@timesten-la-p1 ~]$ uname -a
Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
68719476736
[ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
MemTotal: 148426936 kB
MemFree: 116542072 kB
Buffers: 465800 kB
Cached: 30228196 kB
SwapCached: 0 kB
Active: 5739276 kB
Inactive: 25119448 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 148426936 kB
LowFree: 116542072 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 164740 kB
Mapped: 39188 kB
Slab: 970548 kB
PageTables: 10428 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 90990676 kB
Committed_AS: 615028 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 274804 kB
VmallocChunk: 34359462519 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
extract from sys.odbc.ini
[cachealone2]
Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
DataStore=/u02/timesten/datastore/cachealone2/cachealone2
PermSize=14336
OracleNetServiceName=ttdev
DatabaseCharacterset=WE8ISO8859P1
ConnectionCharacterSet=WE8ISO8859P1
[ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
SwapTotal: 16777208 kB
Though we have around 140 GB of memory available and a 64 GB shmmax, we are unable to increase the PermSize to anything more than 14 GB. When I changed it to PermSize=15359, I got the following error:
[ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
Copyright (c) 1996-2009, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=cachealone2";
836: Cannot create data store shared-memory segment, error 28
703: Subdaemon connect to data store failed with error TT836
The command failed.
Done.
I am not sure why this is not working, considering we have got 144GB RAM and 64GB shmmax allocated! Any help is much appreciated.
Regards,
Raj

Those parameters look OK for a 100 GB shared memory segment. Also check the following:
ulimit - a mechanism to restrict the amount of system resources a process can consume. Your instance administrator user (the user who installed Oracle TimesTen) needs to be allocated enough lockable memory to load and lock your Oracle TimesTen shared memory segment.
This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
To view the current setting run the OS command
$ ulimit -l
and to set it to a value dynamically use
$ ulimit -l <value>.
Once changed you need to restart the TimesTen master daemon for the change to be picked up.
$ ttDaemonAdmin -restart
Beware sometimes ulimit is set in the instance administrators "~/.bashrc" or "~/.bash_profile" file which can override what's set in /etc/security/limits.conf
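A quick way to compare the current memlock limit with what the datastore needs can be sketched as below; PermSize=15359 is taken from the DSN in this thread, and the check deliberately ignores the additional TempSize/LogBufMB overhead, so treat the result as a lower bound:

```shell
# Sketch: is the lockable-memory limit big enough for a PermSize=15359 MB
# segment? (ulimit -l reports KB; "unlimited" is also a possible answer.
# TempSize/LogBufMB overhead is NOT counted here, so this is a lower bound.)
NEED_KB=$((15359 * 1024))
CUR=$(ulimit -l)
if [ "$CUR" != "unlimited" ] && [ "$CUR" -lt "$NEED_KB" ]; then
    echo "memlock limit ($CUR KB) is below the ~$NEED_KB KB required"
fi
```

Remember that, as noted above, changes to /etc/security/limits.conf only take effect in a fresh login session, and the TimesTen main daemon must be restarted afterwards.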
If this is OK then it might be related to Hugepages. If TimesTen is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TimesTen is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
-linuxLargePageAlignment 2
So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
PermSize+TempSize+LogBufMB+64MB Overhead
For example consider a TimesTen database of size:
PermSize=250000 (unit is MB)
TempSize=100000
LogBufMB=1024
Total Memory = 250000+100000+1024+64 = 351088MB
The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
351088/2 = 175544
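The arithmetic above can be scripted; a minimal sketch using the example sizes (the variable names are illustrative):

```shell
# Total memory (MB) needed by the database, then divide by the 2MB
# hugepage size to get the number of Hugepages to configure
PERMSIZE=250000
TEMPSIZE=100000
LOGBUFMB=1024
OVERHEAD=64
TOTAL_MB=$((PERMSIZE + TEMPSIZE + LOGBUFMB + OVERHEAD))
HUGEPAGES=$((TOTAL_MB / 2))
echo "vm.nr_hugepages=$HUGEPAGES"   # prints vm.nr_hugepages=175544
```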
As user root edit the /etc/sysctl.conf file
Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
vm.nr_hugepages=175544
Add/modify vm.hugetlb_shm_group = 600
This parameter is the group id of the TimesTen instance administrator. In the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it’s 600.
$ id
uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
As user root edit the /etc/security/limits.conf file
Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database. The unit for this value is KB. For example this would be 351088*1024=359514112KB
oracle hard memlock 359514112
oracle soft memlock 359514112
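The memlock values can be derived the same way (a sketch; 351088 MB is the example total from above):

```shell
# memlock is expressed in KB: total database size (MB) * 1024
TOTAL_MB=351088
MEMLOCK_KB=$((TOTAL_MB * 1024))
printf 'oracle hard memlock %s\noracle soft memlock %s\n' "$MEMLOCK_KB" "$MEMLOCK_KB"
```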
THIS IS VERY IMPORTANT: for the above changes to take effect you need to either shut down the BI software environment (including TimesTen) and reboot, or issue the following OS command to make the changes permanent.
$ sysctl -p
Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
Check that Hugepages has been set up correctly; look for HugePages_Total:
$ cat /proc/meminfo | grep Huge
Based on the example values above you would see the following:
HugePages_Total: 175544
HugePages_Free: 175544
836: Cannot create data store shared-memory segment, error 22
Hi,
I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
I am currently evaluating TimesTen for a global investment organisation. We currently have a large data warehouse, where we utilise summary views and query rewrite, but we have isolated some data that we would like to store in memory and then report on through a J2EE website.
We are evaluating TimesTen versus developing our own custom cache. Obviously we would like to go with a packaged solution, but we need to ensure there are no limits on maximum size. Looking through the documentation, it appears that the only limit on a 64-bit system is the actual physical memory on the box. That sounds good, but we want to prove it, since we would like to see how the application scales when we store about 30GB (the limit on our UAT environment is 32GB). The ultimate goal is to see if we can store about 50-60GB in memory.
Is this correct? Or are there any caveats in relation to this?
We have been able to get our Data Store to hold 8GB of data, but want to increase this. I am assuming that the following error message is due to us not changing /etc/system on the box:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32GB of memory and the 12 processors on the box?
It's quite a big deal for us to bounce the UAT Unix box, so I want to be sure that I have factored in all changes that would ensure the following:
* Existing Oracle Database instances are not adversely impacted
* We are able to create a Data Store which is able to fully utilise the physical memory on the box
* We don't need to change these settings for quite some time, and still be able to complete our evaluation
We are currently in discussion with our in-house Oracle team, and we need to complete this process before contacting Oracle directly, so help with the above request would speed this up.
The current /etc/system settings are below, and I have put the current machine's settings as comments at the end of each line.
Can you please provide the recommended settings to fully utilise the existing 32GB on the box?
Machine
## I have listed the minimum prerequisites for TimesTen and contrasted them with the machine's current settings:
SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
FJSV,SPARC64-V
System Configuration: Sun Microsystems sun4us
Memory size: 32768 Megabytes
12 processors
/etc/system
set rlim_fd_max = 1080 # Not set on the machine
set rlim_fd_cur=4096 # Not set on the machine
set rlim_fd_max=4096 # Not set on the machine
set semsys:seminfo_semmni = 20 # machine has 0x42, Decimal = 66
set semsys:seminfo_semmsl = 512 # machine has 0x81, Decimal = 129
set semsys:seminfo_semmns = 10240 # machine has 0x2101, Decimal = 8449
set semsys:seminfo_semmnu = 10240 # machine has 0x2101, Decimal = 8449
set shmsys:shminfo_shmseg=12 # machine has 1024
set shmsys:shminfo_shmmax = 0x20000000 # machine has 8,589,934,590. The hexadecimal translates to 536,870,912
$ /usr/sbin/sysdef | grep -i sem
sys/sparcv9/semsys
sys/semsys
* IPC Semaphores
66 semaphore identifiers (SEMMNI)
8449 semaphores in system (SEMMNS)
8449 undo structures in system (SEMMNU)
129 max semaphores per id (SEMMSL)
100 max operations per semop call (SEMOPM)
1024 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)

Hi,
I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
Your main problem here is that the value of shmsys:shminfo_shmmax in /etc/system is currently set to 8 Gb, thereby limiting the maximum size of a single shared memory segment (and hence TimesTen datastore) to 8 Gb. You need to increase this to a suitable value (maybe 32 Gb in your case). While you are doing that, it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
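As a sketch (the 32 GB figure is the suggestion above, not an official sizing), the hexadecimal form of the /etc/system entry can be computed like this:

```shell
# Print a candidate shmmax line for a 32 GB segment (0x800000000)
printf 'set shmsys:shminfo_shmmax = 0x%X\n' $((32 * 1024 * 1024 * 1024))
```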
You should also make sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly, but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
Regards, Chris
Could not start cache agent for the requested data store
Hi,
This is my first attempt at TimesTen. I am running TimesTen on the same Linux host (RHES 5.2) that is running Oracle 11g R2. The version of TimesTen is:
TimesTen Release 11.2.1.4.0
I am trying to create a simple cache.
The DSN entry for ttdemo1 in .odbc.ini is as follows:
[ttdemo1]
Driver=/home/oracle/TimesTen/timesten/lib/libtten.so
DataStore=/work/oracle/TimesTen_store/ttdemo1
PermSize=128
TempSize=128
UID=hr
OracleId=MYDB
DatabaseCharacterSet=WE8MSWIN1252
ConnectionCharacterSet=WE8MSWIN1252
Using ttIsql I connect:
Command> connect "dsn=ttdemo1;pwd=oracle;oraclepwd=oracle";
Connection successful: DSN=ttdemo1;UID=hr;DataStore=/work/oracle/TimesTen_store/ttdemo1;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=WE8MSWIN1252;DRIVER=/home/oracle/TimesTen/timesten/lib/libtten.so;OracleId=MYDB;PermSize=128;TempSize=128;TypeMode=0;OracleNetServiceName=MYDB;
(Default setting AutoCommit=1)
Command> call ttcacheuidpwdset('ttsys','oracle');
Command> call ttcachestart;
10024: Could not start cache agent for the requested data store. Could not initialize Oracle Environment Handle.
The command failed.
The following is shown in the tterrors.log:
15:41:21.82 Err : ORA: 9143: ora-9143--1252549744-xxagent03356: Datastore: TTDEMO1 OCIEnvCreate failed. Return code -1
15:41:21.82 Err : : 7140: oraagent says it has failed to start: Could not initialize Oracle Environment Handle.
15:41:22.36 Err : : 7140: TT14004: TimesTen daemon creation failed: Could not spawn oraagent for '/work/oracle/TimesTen_store/ttdemo1': Could not initialize Oracle Environment Handl
What are the reasons that the daemon cannot spawn another agent? FYI the environment variables are set as:
ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
ANT_HOME=/home/oracle/TimesTen/ttdemo1/3rdparty/ant
CLASSPATH=/home/oracle/TimesTen/ttdemo1/lib/ttjdbc5.jar:/home/oracle/TimesTen/ttdemo1/lib/orai18n.jar:/home/oracle/TimesTen/ttdemo1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/ttdemo1/3rdparty/jms1.1/lib/jms.jar:.
oracle@rhes5:/home/oracle/TimesTen/ttdemo1/info% echo $LD_LIBRARY_PATH
/home/oracle/TimesTen/ttdemo1/lib:/home/oracle/TimesTen/ttdemo1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib
Cheers

Sure thanks.
Here you go:
Daemon environment:
_=/bin/csh
DISABLE_HUGETLBFS=1
SYSTEM=TEST
INIT_FILE=/u01/app/oracle/product/10.1.0/db_1/dbs/init+ASM.ora
GEN_APPSDIR=/home/oracle/dba/bin
LD_LIBRARY_PATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib
HOME=/home/oracle
SPFILE_DIR=/u01/app/oracle/backup/+ASM/initfile_dir
TNS_ADMIN=/u01/app/oracle/product/11.2.0/db_1/network/admin
INITFILE_DIR=/u01/app/oracle/backup/+ASM/initfile_dir
HTMLDIR=/home/oracle/+ASM/dba/html
HOSTNAME=rhes5
TEMP=/oradata1/tmp
PWD=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/bin
HISTSIZE=1000
PATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/bin:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/oci:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc/xla:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/jdbc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc_drivermgr:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/proc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/ttclasses:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/ttclasses/xla:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1/sdk:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/ant/bin:/usr/kerberos/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/bin/X11:/usr/X11R6/bin:/usr/platform/SUNW,Ultra-2/sbin:/u01/app/oracle/product/11.2.0/db_1:/u01/app/oracle/product/11.2.0/db_1/bin:.
GEN_ADMINDIR=/home/oracle/dba/admin
CONTROLFILE_DIR=/u01/app/oracle/backup/+ASM/controlfile_dir
ETCDIR=/home/oracle/+ASM/dba/etc
GEN_ENVDIR=/home/oracle/dba/env
DATAFILE_DIR=/u01/app/oracle/backup/+ASM/datafile_dir
BACKUPDIR=/u01/app/oracle/backup/+ASM
RESTORE_ARCFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_arcfiles.txt
TMPDIR=/oradata1/tmp
CVS_RSH=ssh
ARCLOG_DIR=/u01/app/oracle/backup/+ASM/arclog_dir
REDOLOG_DIR=/u01/app/oracle/backup/+ASM/redolog_dir
INPUTRC=/etc/inputrc
LOGDIR=/home/oracle/+ASM/dba/log
DATAFILE_LIST=/u01/app/oracle/backup/+ASM/datafile_dir/datafile.list
LS_COLORS=no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:
PS1=rhes5:($ORACLE_SID)$
G_BROKEN_FILENAMES=1
SHELL=/bin/ksh
PASSFILE=/home/oracle/dba/env/.ora_accounts
LOGNAME=oracle
ORA_NLS10=/u01/app/oracle/product/11.2.0/db_1/nls/data
ORACLE_SID=mydb
APPSDIR=/home/oracle/+ASM/dba/bin
ORACLE_OWNER=oracle
RESTOREFILE_DIR=/u01/app/oracle/backup/+ASM/restorefile_dir
SQLPATH=/home/oracle/dba/bin
TRANDUMPDIR=/tran
RESTORE_SPFILE=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_spfile.txt
RESTORE_DATAFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_datafiles.txt
ENV=/home/oracle/.kshrc
SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
SSH_CONNECTION=50.140.197.215 62742 50.140.197.216 22
LESSOPEN=|/usr/bin/lesspipe.sh %s
TERM=xterm
GEN_ETCDIR=/home/oracle/dba/etc
SP_FILE=/u01/app/oracle/product/10.1.0/db_1/dbs/spfile+ASM.ora
ORACLE_BASE=/u01/app/oracle
ASTFEATURES=UNIVERSE - ucb
ADMINDIR=/home/oracle/+ASM/dba/admin
SSH_CLIENT=50.140.197.215 62742 22
TZ=GB
SUPPORT=oracle@linux
ARCHIVE_LOG_LIST=/u01/app/oracle/backup/+ASM/arclog_dir/archive_log.list
USER=oracle
RESTORE_TEMPFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_tempfiles.txt
MAIL=/var/spool/mail/oracle
EXCLUDE=/home/oracle/+ASM/dba/bin/exclude.lst
GEN_LOGDIR=/home/oracle/dba/log
SSH_TTY=/dev/pts/2
RESTORE_INITFILE=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_initfile.txt
HOSTTYPE=i386-linux
VENDOR=intel
OSTYPE=linux
MACHTYPE=i386
SHLVL=1
GROUP=dba
HOST=rhes5
REMOTEHOST=vista
EDITOR=vi
ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
ODBCINI=/home/oracle/.odbc.ini
TT=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/
SHLIB_PATH=/u01/app/oracle/product/11.2.0/db_1/lib:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1//lib
ANT_HOME=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/ant
CLASSPATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/ttjdbc5.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/orai18n.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/jms1.1/lib/jms.jar:.
TT_AWT_PLSQL=0
NLS_LANG=AMERICAN_AMERICA
NLS_COMP=ANSI
NLS_SORT=BINARY
NLS_LENGTH_SEMANTICS=BYTE
NLS_NCHAR_CONV_EXCP=FALSE
NLS_CALENDAR=GREGORIAN
NLS_TIME_FORMAT=hh24:mi:ss
NLS_DATE_FORMAT=syyyy-mm-dd hh24:mi:ss
NLS_TIMESTAMP_FORMAT=syyyy-mm-dd hh24:mi:ss.ff9
ORACLE_HOME=
DaemonCWD = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info
DaemonLog = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/tterrors.log
DaemonOptionsFile = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/ttendaemon.options
Platform = Linux/x86/32bit
SupportLog = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/ttmesg.log
Uptime = 136177 seconds
Backcompat = no
Group = 'dba'
Daemon pid 8111 port 53384 instance ttimdb1
End of report
Error while activating Data Store Object
Hi Guru's,
When I try to activate a data store object i get the error message :
The creation of the export DataSource failed
No authorization to logon as trusted system (Trusted RC=2).
No authorization to logon as trusted system (Trusted RC=2).
Error when creating the export DataSource and dependent Program ID 4SYPYCOPQ94IXEGA3739L803Z retrieved for DataStore object ZODS_PRA

Hi,
you are facing an issue with your source system 'myself'; check and repair it. Also check whether the communication user (normally ALEREMOTE) has all the permissions needed.
kind regards
Siggi
Creation of data packages due to large amount of datasets leads to problems
Hi Experts,
We have build our own generic extractor.
When data packages are created (due to the large number of datasets), different problems occur.
For example:
Datasets are doubled and appear twice: once in package one and a second time in package two. Since those datasets are not identical, information is lost while uploading them to an ODS or Cube.
What can I do? SAP will not help due to generic datasource.
Any suggestion?
BR,
Thorsten

Hi All,
Thanks a million for your help.
My conclusion from your answers are the following.
a) Since the ODS is standard, within the transformation no datasets are deleted, but aggregated.
b) Uploading a huge number of datasets is possible in two ways:
b1) with selection criteria in the InfoPackage and several uploads
b2) without selection criteria in the InfoPackage, and therefore an automatic split of datasets into data packages
c) both ways should have the same result within the ODS
Ok. Thanks for that.
So far I have only checked the data within the PSA. In the PSA the number of datasets is not equal for variants b1 and b2.
I guess this is normal technical behaviour of BI.
I am fine as long as the results in the ODS are the same for b1 and b2.
Have a nice day.
BR,
Thorsten
Want the data store values to be displayed in input field of form
Hi,
I wanted to know whether there is a possibility of displaying the data in an input field instead of an expression field.
In our model I have used a form with Material Type, Plant and Vendor connected to Data Service 1, which in turn gives the output in a chart, and I also have a second input form with a To (0CALMONTH) combo box which is connected to Data Service 2 and gives the data in a table.
What I want is to also use the input fields of the first form with Data Service 2, and get the data based on the inputs of form1 and form2, i.e. including the To field.
We can connect the port from form1 to DS2, but the problem is that we need to click submit on both form1 and form2, and the output does not reflect the inputs of form1 and form2 together, as each form gives its own output when we click its submit button.
I have followed help.sap for the data store procedure.
So I have used a data store where I will store the values of form1 and call it in an expression field which I will add and hide in form2 (hidden because the user should not see those input fields).
Formula used in data store for expression field.
IF(CONTAINS(STORE@matltype,@Material_Type),STORE@matltype,STORE@matltype &(IF(LEN(STORE@matltype)>0,'; ',''))&@Material_Type)
But what is happening is that the value gets concatenated whenever I change the values in the input field. I want the values to be replaced as soon as I change the input field of form1 (using the replace function does not work), and it would also be preferable to use an input field instead of an expression field.
I would also like to know whether there is any alternative solution for the above requirement instead of a data store.
Thank You
K.Srinivas

Hi,
I have Form1 connected to Data Service 1, which displays the data in a chart, and I have another Form2 connected to Data Service 2, which displays the data in a table.
Form1 has Material Type, Plant and Vendor, and Form2 has To (0CALMONTH). So now I also want the Form1 inputs for the table which gets the data from Data Service 2.
As I said earlier, connecting Form1 to the data service does not fetch the data correctly: if I click submit in form1 I get the data for those 3 inputs, and then I need to click the submit button in form2 after giving input, which shows only that form's data, so it does not fulfil the requirement.
So I wanted a solution for that. For that reason I have used the data store, following the procedure from help.sap as I said in my message above.
If a data store is the suggested approach, then I want to display the data in a text input field instead of an expression field, which should replace the previous values as soon as the values change in the data store.
I hope I have been clear; if not, I am happy to explain again.
Thank You
K.Srinivas