Limitation of Oracle TimesTen Database
Hi,
We want to deploy Oracle TimesTen 11.2 on a 64-bit RHEL platform.
The total physical RAM in the server is 32 GB.
There are 40 million rows in a single subscriber table (45 columns) in the Oracle database.
We want to configure a read-only local cache group in Oracle TimesTen for this subscriber table.
What should the size of the Oracle TimesTen database be to support this configuration?
Regards
Hitgon
Hi hitgon,
You can use the ttSize utility (http://docs.oracle.com/cd/E21901_01/doc/timesten.1122/e21643/util.htm#CHDEHFIE) to predict the size.
Example: ttSize -tbl tableName -rows 40000000 your_DSN
Additionally, you can use the ttComputeTabSizes built-in procedure and the tablesize command to get the actual table size:
Command> call ttComputeTabSizes('emp');
Command> tablesize emp;
Sizes of GENA.EMP:
INLINE_ALLOC_BYTES: 0
NUM_USED_ROWS: 0
NUM_FREE_ROWS: 0
AVG_ROW_LEN: Not computed
OUT_OF_LINE_BYTES: 0
METADATA_BYTES: 784
TOTAL_BYTES: 784
LAST_UPDATED: 2012-01-19 19:37:07.000000
1 table found.
Command>
Command> insert into emp values (1,sysdate,'gena', 'gena', 'gena', 1);
1 row inserted.
Command> call ttComputeTabSizes('emp');
Command> tablesize emp;
Sizes of GENA.EMP:
INLINE_ALLOC_BYTES: 52080
NUM_USED_ROWS: 1
NUM_FREE_ROWS: 255
AVG_ROW_LEN: 206
OUT_OF_LINE_BYTES: 0
METADATA_BYTES: 784
TOTAL_BYTES: 52864
LAST_UPDATED: 2012-01-19 19:37:54.000000
1 table found.
Command>
Regards,
Gennady
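As a rough cross-check of the ttSize prediction, the AVG_ROW_LEN reported by tablesize (206 bytes for this sample emp table) can be extrapolated to the full row count. A minimal sketch, where the 20% overhead factor is an illustrative assumption, not a TimesTen figure:

```python
# Back-of-envelope PermSize estimate from an observed average row length.
# Illustrative only -- ttSize is the authoritative sizing tool.
def estimate_perm_bytes(rows, avg_row_len, overhead=0.20):
    """Scale the average row length to the full row count, plus a fudge
    factor for page metadata, free rows and fragmentation."""
    return int(rows * avg_row_len * (1 + overhead))

bytes_needed = estimate_perm_bytes(40_000_000, 206)
print(f"~{bytes_needed / 2**30:.1f} GiB")  # roughly 9.2 GiB, before indexes
```

Indexes, temporary space and cache metadata come on top of this, which is why running ttSize against the real table definition is the safer route.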
Similar Messages
-
Oracle TimesTen In-Memory Database VS Oracle In-Memory Database Cache
Hi,
What is the difference between Oracle TimesTen In-Memory Database and Oracle In-Memory Database Cache?
For 32-bit on Windows OS I am not able to insert more than 500K rows with 150 columns (with combinations of CHAR, BINARY_DOUBLE, BINARY_FLOAT, TT_BIGINT, REAL, DECIMAL, NUMERIC etc.).
[TimesTen][TimesTen 11.2.2.2.0 ODBC Driver][TimesTen]TT0802: Database permanent space exhausted -- file "blk.c", lineno 3450, procedure "sbBlkAlloc"
I have set PermSize as 700 MB and TempSize as 100 MB.
What is the max size we can give for PermSize, TempSize and LogBufMB for 32-bit on Windows OS?
What is the max size we can give for PermSize, TempSize and LogBufMB for 64-bit on Windows OS?
What is the max configuration of TT for 32-bit that I can set for PermSize and TempSize?
Thanks!
They are the same product but they are licensed differently, and the license limits what functionality you can use.
TimesTen In-Memory Database is a product in its own right; it allows you to use TimesTen as a standalone database and also allows replication.
IMDB Cache is an Oracle DB Enterprise Edition option (i.e. it can only be licensed as an option to an Oracle DB EE license). This includes all the functionality of TImesTen In-Memory Database but adds in cache functionality (cache groups, cache grid etc.).
32-bit O/S are in general a poor platform to try and create an in-memory database of any significant size (32-bit O/S are very limited in memory addressing capability) and 32-bit Windows is the worst example. The hard-coded limit for total datastore size on 32-bit O/S is 2 GB but in reality you probably can't achieve that. On Windows the largest you can get is 1.1 GB and most often less than that. If you need something more than about 0.5 GB on Windows then you really need to use 64-bit Windows and 64-bit TimesTen. There is no hard-coded upper limit to database size on 64-bit TimesTen; the limit is the amount of free physical memory (not virtual memory) in the machine. I have easily created a 12 GB database on a Win64 machine with 16 GB RAM. On 64-bit Unix machines we have live databases of over 1 TB...
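The practical ceiling described here follows from simple address-space arithmetic; a quick sketch (the 2 GiB kernel/user split shown is the classic 32-bit Windows default, used as an assumption here):

```python
# Why a 32-bit process cannot host a large in-memory database.
addr_space = 2**32                  # 4 GiB of virtual addresses in total
kernel_reserved = 2**31             # default Windows split: upper 2 GiB for the kernel
user_space = addr_space - kernel_reserved
print(user_space // 2**20, "MiB left for code, stacks, heap AND the datastore")
# The datastore shared segment must fit into what remains after DLLs and
# heaps fragment it, hence the ~1.1 GB practical ceiling on 32-bit Windows.
```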
Chris -
Oracle TimesTen In-Memory Database Risk Matrix
Hi,
From the following web-site I can see two vulnerabilities listed against TimesTen --- CVE-2010-0873 and CVE-2010-0910
http://www.oracle.com/technetwork/topics/security/cpujul2010-155308.html
================================================================
Oracle TimesTen In-Memory Database Risk Matrix
CVE# Component Protocol Package and/or Privilege Required Remote Exploit without Auth.? CVSS VERSION 2.0 RISK (see Risk Matrix Definitions) Last Affected Patch set (per Supported Release) Notes
Base Score Access Vector Access Complexity Authentication Confidentiality Integrity Availability
CVE-2010-0873 Data Server TCP None Yes 10.0 Network Low None Complete Complete Complete 7.0.6.0 See Note 1
CVE-2010-0910 Data Server TCP None Yes 5.0 Network Low None None None Partial+ 7.0.6.0, 11.2.1.4.1 See Note 1
===========================================================================
Please let me know if I need to take any action on my current TimesTen deployment.
I'm using TimesTen releases 11.2.1.8.4 and 7.0.5.16.0 at our customer sites.
Request you to respond with your valuable comments.
Regards
Pratheej
Hi Pratheej,
These vulnerabilities were fixed in 11.2.1.6.1 and 7.0.6.2.0. As you are on 11.2.1.8.4 you are okay for 11.2.1 but the 7.0.5.16.0 release does contain the vulnerability. If you are concerned then you should upgrade those to 7.0.6.2.0 or later (check for the latest applicable 7.0 release in My Oracle Support).
Chris -
Timesten Database caching multiple oracle instances
Hi All,
I need to cache two Oracle users which are running on different IPs. Can TimesTen support this?
How do I cache another datastore if one user is already cached in TimesTen? Does this mean one TimesTen database can cache only one single Oracle database?
CJ>> Correct. One TimesTen database can cache data from one Oracle database. Note that a TimesTen instance (installation) supports multiple TimesTen databases.
Is there any other way to shrink PermSize?
CJ>> There is no other way to shrink PermSize. And you can only shrink it if the currently configured size is larger than you really need.
How can I deploy TimesTen on RAC?
CJ>> The same way as you would for a single instance database. A RAC database is just a single database. TimesTen fully supports RAC.
Is there any similar feature available like RAC ?
CJ>> TimesTen provides 'Cache Grid' where you can have multiple TimesTen caches which are caching the same Oracle database work together to act as a single distributed cache. -
Application crashes with timesten database over OCI connection
Application crashes with timesten database over OCI connection
#0 0x405e09f8 in kpudsany () from /home/oracle/TimesTen/ratingtt1121/ttoracle_home/instantclient_11_1/libclntsh.so.10.1
#1 0x405728d0 in OCIDescribeAny () from /home/oracle/TimesTen/ratingtt1121/ttoracle_home/instantclient_11_1/libclntsh.so.10.1
#2 0x43748a5a in GDBociDriver::StoredProcArgumentList () from /home/omni/library/libgfr_dboci.so
#3 0x4374e865 in GDBociDriver::StoredProcCallFormat () from /home/omni/library/libgfr_dboci.so
#4 0x4376ec9a in GDBociDriver::ExecuteStmtBind () from /home/omni/library/libgfr_dboci.so
#5 0x4376fc69 in GDBociDriver::Execute () from /home/omni/library/libgfr_dboci.so
#6 0x436ba240 in GDBstatement::Execute () from /home/omni/library/libgfr_dblayer.so
#7 0x43736106 in GDBociDriver::SetModule () from /home/omni/library/libgfr_dboci.so
#8 0x4366cb22 in GDBconnectionPool::GetConnection () from /home/omni/library/libgfr_dblayer.so
#9 0x4367a9db in GDBdriver::LoginConnectionGet () from /home/omni/library/libgfr_dblayer.so
#10 0x4373dbaa in GDBociDriver::Login () from /home/omni/library/libgfr_dboci.so
#11 0x4377380f in CreateOCIDriver () from /home/omni/library/libgfr_dboci.so
#12 0x4367bb2a in GDBdriverLoader::CreateDriver () from /home/omni/library/libgfr_dblayer.so
#13 0x436d423d in IGDBdriverMgr::GetDriverAndLogin () from /home/omni/library/libgfr_dblayer.so
#14 0x436d4da1 in IGDBdriverMgr::GetDriver () from /home/omni/library/libgfr_dblayer.so
#15 0x436a5071 in GDBserverDef::InitialLogin () from /home/omni/library/libgfr_dblayer.so
#16 0x433247cc in CcapAdbAccess::Init (this=0x9e85a88) at CcapAdbAccess.C:161
#17 0x43331c44 in CcapMain::Run (this=0x434eeb80, sdpId=2, processName=0x9dae29c "URE61_O2") at CcapMain.C:132
#18 0x080947d1 in ccap_thread (arg=0x0) at main.C:1588
#19 0x080e7bd2 in ThreadManager::WrapperRoutine (ipArg=0x9e84fc0) at ThreadManager.C:176
#20 0x0035f44a in start_thread () from /lib/libpthread.so.0
TimesTen has some limitations, and the documentation says:
Describing objects with OCIDescribeAny() is supported only by name. Describing PL/SQL objects is not supported.
/home/omni/library/libgfr_dblayer.so
Could you please tell us what might be the reason for this?
Are you able to provide exact details of the OCIDescribeAny call being made by the application (including parameter values)?
Chris -
2GB OR NOT 2GB - FILE LIMITS IN ORACLE
Product: ORACLE SERVER
Date written: 2002-04-11
2GB OR NOT 2GB - FILE LIMITS IN ORACLE
======================================
Introduction
~~~~~~~~~~~~
This article describes "2Gb" issues. It gives information on why 2Gb
is a magical number and outlines the issues you need to know about if
you are considering using Oracle with files larger than 2Gb in size.
It also
looks at some other file related limits and issues.
The article has a Unix bias as this is where most of the 2Gb issues
arise but there is information relevant to other (non-unix)
platforms.
Articles giving port specific limits are listed in the last section.
Topics covered include:
Why is 2Gb a Special Number ?
Why use 2Gb+ Datafiles ?
Export and 2Gb
SQL*Loader and 2Gb
Oracle and other 2Gb issues
Port Specific Information on "Large Files"
Why is 2Gb a Special Number ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Many CPUs and system call interfaces (APIs) in use today use a word
size of 32 bits. This word size imposes limits on many operations.
In many cases the standard APIs for file operations use a 32-bit signed
word to represent both file size and current position within a file (byte
displacement). A 'signed' 32-bit word uses the topmost bit as a sign
indicator, leaving only 31 bits to represent the actual value (positive or
negative). In hexadecimal the largest positive number that can be
represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal.
This is ONE less than 2Gb.
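This boundary is easy to verify directly; a small sketch:

```python
# The largest value a signed 32-bit word can hold, and where 2 GB falls.
INT32_MAX = (1 << 31) - 1     # top bit is the sign, leaving 31 value bits
print(hex(INT32_MAX))         # 0x7fffffff
print(INT32_MAX)              # 2147483647
print(2 * 2**30 - INT32_MAX)  # 2 GB exceeds the maximum by exactly 1
```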
Files of 2Gb or more are generally known as 'large files'. As one might
expect problems can start to surface once you try to use the number
2147483648 or higher in a 32bit environment. To overcome this problem
recent versions of operating systems have defined new system calls which
typically use 64-bit addressing for file sizes and offsets. Recent Oracle
releases make use of these new interfaces but there are a number of issues
one should be aware of before deciding to use 'large files'.
What does this mean when using Oracle ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 32bit issue affects Oracle in a number of ways. In order to use large
files you need to have:
1. An operating system that supports 2Gb+ files or raw devices
2. An operating system which has an API to support I/O on 2Gb+ files
3. A version of Oracle which uses this API
Today most platforms support large files and have 64bit APIs for such
files.
Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
but the situation is very dependent on platform, operating system version
and the Oracle version. In some cases 'large file' support is present by
default, while in other cases a special patch may be required.
At the time of writing there are some tools within Oracle which have not
been updated to use the new APIs, most notably tools like EXPORT and
SQL*LOADER, but again the exact situation is platform and version specific.
Why use 2Gb+ Datafiles ?
~~~~~~~~~~~~~~~~~~~~~~~~
In this section we will try to summarise the advantages and disadvantages
of using "large" files / devices for Oracle datafiles:
Advantages of files larger than 2Gb:
On most platforms Oracle7 supports up to 1022 datafiles.
With files < 2Gb this limits the database size to less than 2044Gb.
This is not an issue with Oracle8 which supports many more files.
In reality the maximum database size would be less than 2044Gb due
to maintaining separate data in separate tablespaces. Some of these
may be much less than 2Gb in size.
Fewer files to manage for smaller databases.
Fewer file handle resources required
Disadvantages of files larger than 2Gb:
The unit of recovery is larger. A 2Gb file may take between 15 minutes
and 1 hour to backup / restore depending on the backup media and
disk speeds. An 8Gb file may take 4 times as long.
Parallelism of backup / recovery operations may be impacted.
There may be platform specific limitations - Eg: Asynchronous IO
operations may be serialised above the 2Gb mark.
As handling of files above 2Gb may need patches, special configuration
etc., there is an increased risk involved as opposed to smaller files.
Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
Important points if using files >= 2Gb
Check with the OS Vendor to determine if large files are supported
and how to configure for them.
Check with the OS Vendor what the maximum file size actually is.
Check with Oracle support if any patches or limitations apply
on your platform , OS version and Oracle version.
Remember to check again if you are considering upgrading either
Oracle or the OS in case any patches are required in the release
you are moving to.
Make sure any operating system limits are set correctly to allow
access to large files for all users.
Make sure any backup scripts can also cope with large files.
Note that there is still a limit to the maximum file size you
can use for datafiles above 2Gb in size. The exact limit depends
on the DB_BLOCK_SIZE of the database and the platform. On most
platforms (Unix, NT, VMS) the limit on file size is around
4194302*DB_BLOCK_SIZE.
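Working the 4194302*DB_BLOCK_SIZE formula through for common Oracle block sizes gives the per-file ceilings; a quick calculation:

```python
# Maximum datafile size = maximum block count * DB_BLOCK_SIZE, per the note above.
MAX_BLOCKS = 4194302  # just under 2**22 addressable blocks per file
for block_size in (2048, 4096, 8192, 16384):
    limit_gb = MAX_BLOCKS * block_size / 2**30
    print(f"DB_BLOCK_SIZE={block_size:>5}: max file ~{limit_gb:.1f} GB")
```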
Important notes generally
Be careful when allowing files to automatically resize. It is
sensible to always limit the MAXSIZE for AUTOEXTEND files to less
than 2Gb if not using 'large files', and to a sensible limit
otherwise. Note that due to <Bug:568232> it is possible to specify
a value of MAXSIZE larger than Oracle can cope with, which may
result in internal errors after the resize occurs. (Errors
typically include ORA-600 [3292])
On many platforms Oracle datafiles have an additional header
block at the start of the file so creating a file of 2Gb actually
requires slightly more than 2Gb of disk space. On Unix platforms
the additional header for datafiles is usually DB_BLOCK_SIZE bytes
but may be larger when creating datafiles on raw devices.
2Gb related Oracle Errors:
These are a few of the errors which may occur when a 2Gb limit
is present. They are not in any particular order.
ORA-01119 Error in creating datafile xxxx
ORA-27044 unable to write header block of file
SVR4 Error: 22: Invalid argument
ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
ORA-27070 skgfdisp: async read/write failed
ORA-02237 invalid file size
KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
file limit exceed.
Unix error 27, EFBIG
Export and 2Gb
~~~~~~~~~~~~~~
2Gb Export File Size
~~~~~~~~~~~~~~~~~~~~
At the time of writing most versions of export use the default file
open API when creating an export file. This means that on many platforms
it is impossible to export a file of 2Gb or larger to a file system file.
There are several options available to overcome 2Gb file limits with
export such as:
- It is generally possible to write an export > 2Gb to a raw device.
Obviously the raw device has to be large enough to fit the entire
export into it.
- By exporting to a named pipe (on Unix) one can compress, zip or
split up the output.
See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
- One can export to tape (on most platforms)
See "Exporting to tape on Unix systems" <Note:30428.1>
(This article also describes in detail how to export to
a unix pipe, remote shell etc..)
Other 2Gb Export Issues
~~~~~~~~~~~~~~~~~~~~~~~
Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
with EXPORT on many releases of Oracle such that if you export a large table
and specify COMPRESS=Y then it is possible for the NEXT storage clause
of the statement in the EXPORT file to contain a size above 2Gb. This
will cause import to fail even if IGNORE=Y is specified at import time.
This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
An export will typically report errors like this when it hits a 2Gb
limit:
. . exporting table BIGEXPORT
EXP-00015: error on row 10660 of table BIGEXPORT,
column MYCOL, datatype 96
EXP-00002: error in writing to export file
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully
There is a secondary issue reported in <Bug:185855> which indicates that
a full database export generates a CREATE TABLESPACE command with the
file size specified in BYTES. If the filesize is above 2Gb this may
cause an ORA-2237 error when attempting to create the file on IMPORT.
This issue can be worked around by creating the tablespace prior to
importing, specifying the file size in 'M' instead of in bytes.
<Bug:490837> indicates a similar problem.
Export to Tape
~~~~~~~~~~~~~~
The VOLSIZE parameter for export is limited to values less than 4Gb.
On some platforms it may be only 2Gb.
This is corrected in Oracle 8i. <Bug:490190> describes this problem.
SQL*Loader and 2Gb
~~~~~~~~~~~~~~~~~~
Typically SQL*Loader will error when it attempts to open an input
file larger than 2Gb with an error of the form:
SQL*Loader-500: Unable to open file (bigfile.dat)
SVR4 Error: 79: Value too large for defined data type
The examples in <Note:30528.1> can be modified for use with SQL*Loader
for large input data files.
Oracle 8.0.6 provides large file support for discard and log files in
SQL*Loader but the maximum input data file size still varies between
platforms. See <Bug:948460> for details of the input file limit.
<Bug:749600> covers the maximum discard file size.
Oracle and other 2Gb issues
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section lists miscellaneous 2Gb issues:
- From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
- DBV (the database file verification program) may not be able to scan
datafiles larger than 2Gb reporting "DBV-100".
This is reported in <Bug:710888>
- "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
specified in 'M' or 'K' to create files larger than 2Gb otherwise the
error "ORA-02237: invalid file size" is reported. This is documented
in <Bug:185855>.
- Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
reports
ORA-2187: invalid quota specification.
This is documented in <Bug:425831>.
The workaround is to grant users UNLIMITED TABLESPACE privilege if they
need a quota above 2Gb.
- Tools which spool output may error if the spool file reaches 2Gb in size.
Eg: sqlplus spool output.
- Certain 'core' functions in Oracle tools do not support large files -
See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
Even with this fix there may still be large file restrictions as not
all code uses these 'core' functions.
Note though that <Bug:749600> covers CORE functions - some areas of code
may still have problems.
Eg: CORE is not used for SQL*Loader input file I/O
- The UTL_FILE package uses the 'core' functions mentioned above and so is
limited by 2Gb restrictions in Oracle releases which do not contain this fix.
<Package:UTL_FILE> is a PL/SQL package which allows file IO from within
PL/SQL.
Port Specific Information on "Large Files"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Below are references to information on large file support for specific
platforms. Although every effort is made to keep the information in
these articles up-to-date it is still advisable to carefully test any
operation which reads or writes from / to large files:
Platform See
~~~~~~~~ ~~~
AIX (RS6000 / SP) <Note:60888.1>
HP <Note:62407.1>
Digital Unix <Note:62426.1>
Sequent PTX <Note:62415.1>
Sun Solaris <Note:62409.1>
Windows NT Maximum 4Gb files on FAT
Theoretical 16Tb on NTFS
** See <Note:67421.1> before using large files
on NT with Oracle8
*2 There is a problem with DBVERIFY on 8.1.6
See <Bug:1372172>
I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
Step 1
Write a simple Java program like the one listed:
import java.io.File;
public class fileCheckUtl {
    public static int fileExists(String FileName) {
        File x = new File(FileName);
        if (x.exists())
            return 1;
        else
            return 0;
    }
    public static void main (String args[]) {
        fileCheckUtl f = new fileCheckUtl();
        int i;
        i = f.fileExists(args[0]);
        System.out.println(i);
    }
}
Step 2 Load this into the Oracle data using LoadJava
loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
The output should be something like this:
creating : source fileCheckUtl
loading : source fileCheckUtl
creating : fileCheckUtl
resolving: source fileCheckUtl
Step 3 - Create a PL/SQL wrapper for the Java Class:
CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
LANGUAGE JAVA
NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
Step 4 Test it:
SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
2 /
FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
1 -
What are the APIs and OS Supported by Oracle TimesTen
1.) What are all the APIs supported by Oracle TimesTen?
Is the list below correct, and other than these are there any other APIs that Oracle TimesTen supports?
JDBC,
ODBC,
OLAP,
ADO.net,
C++...............
2.) What platforms are supported?
Is the list below correct, and other than these are there any other operating systems that Oracle TimesTen supports?
Linux x86-32 and x86-64:
Oracle Linux 4 and 5
Red Hat Enterprise Linux 4 and 5
SUSE Enterprise Server 10 and 11
MontaVista Linux CGE 5.0 and 6.0
Asianux 3.0
Microsoft Windows x86-32 and x86-64:
Windows XP, Windows Vista, Windows Server 2003, Windows Server 2003 Release 2, Windows Server 2008, Windows 7
Solaris SPARC 64-bit:
Oracle Solaris 10
Solaris x86-64:
Oracle Solaris 10
IBM AIX 64-bit:
AIX 6.1 and 7.1
Solaris SPARC 32-bit (client only):
Oracle Solaris 10
IBM AIX 32-bit (client only):
AIX 6.1 and 7.1
3.) What is the latest Version in Oracle TimesTen?
4.) Maximum number of rows in a table: 2 power 28 = 268,435,456 for 32-bit / (2 power 31) - 1 = 2,147,483,647 for 64-bit
If the row count exceeds the specified value, what will happen? Do we need to have multiple tables?
Say TableA reaches 268,435,456 rows and a few more rows are added; can the new values be kept in a new table TableB and so on..... or how?
Thanks
Dear 933663,
1. What are all the APIs supported by Oracle TimesTen?
JDBC
ODBC
ADO.net
OCI
PRO*C
+
PL/SQL
SQL
2. What are the Platform supports?
TimesTen 11.2.2.2.0 supports - Windows (32-bit, 64-bit), Linux x86 (32-bit, 64-bit), Solaris Sparc (64-bit), Solaris x86 (64-bit), IBM AIX Power (64-bit) (http://www.oracle.com/technetwork/products/timesten/downloads/index.html)
The detailed information I could find only in 11.2.1 documentation (http://docs.oracle.com/cd/E18283_01/timesten.112/e13063/install.htm):
Microsoft Windows 2000, Windows XP, Windows Vista and Windows Server 2003 and 2008 for Intel IA-32 and EM64T and AMD64 CPUs.
Asianux 2.0 and 3.0 for Intel IA-32 and EM64T and AMD64 CPUs.
SuSE LINUX Enterprise Server 10 for Intel IA-32 and EM64T and AMD64 CPUs.
SuSE LINUX Enterprise Server 10 for Itanium2 CPUs
Solaris 9 and 10 for UltraSparc CPUs
Solaris 10 for AMD64 CPUs
Red Hat Enterprise Linux 4 and 5 for Intel Itanium2 CPUs.
Red Hat Enterprise Linux 4 and 5 for Intel IA-32 and EM64T and AMD64 CPUs.
Oracle Enterprise Linux 4 and 5 for Intel IA-32 and EM64T and AMD64 CPUs.
MontaVista Linux Carrier Grade Edition Release 4.0 and 5.0 for Intel IA-32, EM64T and AMD64 CPUs.
HP-UX 11i v2 and 11iv3 for PA-RISC
HP-UX 11i v2 and 11iv3 for Itanium2
AIX 5L 5.3 and 6.1 for POWER CPUs
3.) What is the latest Version in Oracle TimesTen?
11.2.2.2.0 (http://www.oracle.com/technetwork/products/timesten/downloads/index.html)
4) Maximum number of rows in a table: 2 power 28 = 268,435,456 for 32-bit / (2 power 31) - 1 = 2,147,483,647 for 64-bit
Actually, I couldn't find any information about row limits for TimesTen tables and I've never faced this problem.
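For reference, the powers of two quoted in the question work out as follows (note that 2 to the power 28 is 268,435,456):

```python
# Row-limit arithmetic from the question.
rows_32bit = 2**28        # 268,435,456
rows_64bit = 2**31 - 1    # 2,147,483,647
print(f"{rows_32bit:,}")  # prints 268,435,456
print(f"{rows_64bit:,}")  # prints 2,147,483,647
```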
Best regards,
Gennady -
Best way to load initial TimesTen database
I have a customer that wants to use TimesTen as a pure in-memory database. This IMDB has about 65 tables, some having upwards of 6 million rows. What is the best way to load this data? There is no cache-connect option being used. I am thinking insert is the only option here. Are there any other options?
thanks
You can also use the TimesTen ttbulkcp command line utility; this tool is similar to SQL*Loader except it handles both import and export of data.
For example, the following command loads the rows listed in file foo.dump into a table called foo in database mydb, placing any error messages into the file foo.err.
ttbulkcp -i -e foo.err dsn=mydb foo foo.dump
For more information on the ttbulkcp utility you can refer to the Oracle TimesTen API Reference Guide. -
Oracle 10g - Database does not respond to the application users
Hi all,
I am using an Oracle 10g database with 75 user connections: 50 local and 20 remote users using ADSL and dial-up modems. The operating system on the server is
Windows 2003.
All of a sudden, users making entries get disconnected with the error
ORA-12516: TNS:listener could not find available handler with matching protocol stack
In other cases users' PCs hang when using the application and the connection
to Oracle fails again. They cannot log in to the application at that time; the error is
ORA-04031: unable to allocate %s bytes of shared memory ("%s","%s","%s","%s")
Twice a day the users trying to login cannot access the database server.
The front-end application is in VB using an exe file. When the users cannot log on
to the database, I manually stop the Oracle services (including DB Console) and
then stop the listener.
This practice is going on since last two months.
Server Scenario
After Installing Oracle Standard Edition 10g.
The SYSTEM tablespace utilization is 98.5%, which is 465 MB out of a total of 470 MB.
There is only one rollback segment, named SYSTEM.
When we install 9i, 4 rollback segments named RO1, RO2, RO3 and RO4 are created
by default.
Do I need to add a datafile to the SYSTEM tablespace, as its current usage is 99.15% and autoextend is ticked on with an auto extent of 10 MB? I tried adding another datafile during weekend downtime. When I resumed on Saturday morning I found it had not added the datafile which I had put in on Thursday evening as system02.dbf.
Do I need to add a new datafile to the SYSTEM tablespace ?
Do I need to create new rollback segments apart from SYSTEM rollback segment ?
My current schema size is 15360 MB with its usage at 9.32% (1432.25 MB).
Please reply !!!
Hi Paul
My Server configuration is
Compaq ML 370
3.4 Ghz Processor
3.5 GB RAM
SQL> show sga
Total System Global Area 171966464 bytes
Fixed Size 787988 bytes
Variable Size 145488364 bytes
Database Buffers 25165824 bytes
Redo Buffers 524288 bytes
SQL> select * from v$sgastat;
POOL NAME BYTES
fixed_sga 787988
buffer_cache 25165824
log_buffer 524288
shared pool subheap 55600
shared pool KQR L SO 218160
shared pool KQR M PO 1329244
shared pool KQR M SO 605364
shared pool KQR S PO 164156
shared pool KQR S SO 6144
shared pool KTI-UNDO 1235304
shared pool sql area 5368208
shared pool KGLS heap 1597560
shared pool joxs heap 6004
shared pool row cache 3707272
shared pool parameters 17520
shared pool repository 19396
shared pool ASH buffers 4194304
shared pool free memory 16586072
shared pool PL/SQL DIANA 679456
shared pool KSPD key heap 4220
shared pool PL/SQL MPCODE 1167992
shared pool library cache 11368964
shared pool miscellaneous 25942980
shared pool pl/sql source 88
shared pool PLS non-lib hp 29816
shared pool XDB Schema Cac 3594144
shared pool alert threshol 3460
shared pool joxlod exec hp 355820
shared pool table definiti 5880
shared pool temporary tabl 4932
shared pool trigger defini 12848
shared pool trigger inform 1892
shared pool type object de 20256
shared pool private strands 1198080
shared pool event statistics per sess 4384640
shared pool fixed allocation callback 304
large pool free memory 8388608
java pool joxs heap 233856
java pool free memory 44743296
java pool joxlod exec hp 5354496
40 rows selected.
SQL> select segment_name, owner, status
2 from dba_rollback_segs;
SEGMENT_NAME OWNER STATUS
SYSTEM SYS ONLINE
_SYSSMU1$ PUBLIC ONLINE
_SYSSMU2$ PUBLIC ONLINE
_SYSSMU3$ PUBLIC ONLINE
_SYSSMU4$ PUBLIC ONLINE
_SYSSMU5$ PUBLIC ONLINE
_SYSSMU6$ PUBLIC ONLINE
_SYSSMU7$ PUBLIC ONLINE
_SYSSMU8$ PUBLIC ONLINE
_SYSSMU9$ PUBLIC ONLINE
_SYSSMU10$ PUBLIC ONLINE
SEGMENT_NAME OWNER STATUS
_SYSSMU11$ PUBLIC ONLINE
_SYSSMU12$ PUBLIC ONLINE
_SYSSMU13$ PUBLIC ONLINE
_SYSSMU14$ PUBLIC ONLINE
_SYSSMU15$ PUBLIC OFFLINE
_SYSSMU16$ PUBLIC OFFLINE
_SYSSMU17$ PUBLIC OFFLINE
_SYSSMU18$ PUBLIC OFFLINE
_SYSSMU19$ PUBLIC OFFLINE
_SYSSMU20$ PUBLIC OFFLINE
_SYSSMU21$ PUBLIC OFFLINE
SEGMENT_NAME OWNER STATUS
_SYSSMU22$ PUBLIC OFFLINE
_SYSSMU23$ PUBLIC OFFLINE
_SYSSMU24$ PUBLIC OFFLINE
_SYSSMU25$ PUBLIC OFFLINE
_SYSSMU26$ PUBLIC OFFLINE
27 rows selected.
Currently AUTOEXTEND is ticked on for the SYSTEM tablespace and its size is showing as 99.16% (466.06 MB) used, which is very near its full capacity of 470 MB.
Currently 75 users - 50 local users and 20 remote users - are connected to the server. Is there any limit on the number of connections the server should have?
The server has Windows 2003 Standard Edition. Does Windows 2003 have anything to do
with the number of users getting connected? -
Limitations of Oracle 10G XE as against Oracle 10G Standard Edition
Can you please give the information regarding....
1. Capabilities of Oracle 10G XE (Express Edition)
2. Limitations of Oracle 10G XE as against Oracle 10G Standard Edition
Have a look at the License Information:
http://download-uk.oracle.com/docs/cd/B25329_01/doc/license.102/b25456/toc.htm#BABJBGGA
There's also a feature matrix in pdf-format:
http://www.oracle.com/technology/products/database/oracle10g/pdf/twp_general_10gdb_product_family.pdf
C.
Message was edited by:
cd -
Can we install oracle 10g database on Windows Vista Home Edition?
Can we install oracle 10g database on Windows Vista Home Edition?
You didn't specify which Oracle 10g edition.
In the case of SE/SE One/EE with a paid license, support has a major impact on production environments, since you wouldn't want to run on an unsupported platform and face the risk of being left without technical support from Oracle in case of an emerging issue.
For the XE edition this is not a certified combination, since the only certified editions are the Business, Enterprise and Ultimate editions, but you can try it; you have nothing to lose. Supported/certified vs. unsupported makes no difference in this case, since XE is not supported either way (no patchsets, no bug fixes, no technical support from Metalink); it is just AS IS free software.
On the license agreement terms displayed when you download the XE software it states:
"No Technical Support
Our technical support organization will not provide technical support, phone support, or updates to you for the programs licensed under this agreement."
and
"THE PROGRAMS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. WE FURTHER DISCLAIM ALL WARRANTIES, EXPRESS AND IMPLIED, INCLUDING WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE."
In case someone would be able to win a trial against oracle due to damage from the use of this software, the amount paid at maximum is 1,000.00 USD, not enough to even pay the lawyer's lunches.
For the XE users the primary source of 'kind of support' is the oracle forums, either the installation forum, the general database or the specialized XE forum.
Of course if this is for your own personal use for learning and testing and there are no third parties you could damage by doing this and if you don't have a paid license for this installation you may try it, but don't expect the Enterprise Manager to behave as defined by the manual, most of it won't work since the Home Edition doesn't have the privileges required for EM to work.
If this is your personal computer and it came from the factory with Home Edition installed, and you want something that works, you can try installing a virtual machine and running Oracle 10g on a supported guest OS.
~ Madrid -
Four GB of user data Limitation in Oracle 10g XE
Hi Mates!
There is one limitation in oracle XE which is:
It stores up to four GB of user data.
I am planning to migrate my database from Oracle 9i to Oracle 10g XE. My queries are below:
1. Is it possible to migrate from Oracle 9i to Oracle 10g XE?
2. I am confused regarding the four GB user-data limitation; I don't understand much about it. Presently, if I prepare a dump (export) file in Oracle 9i, its size is around 250 MB. If I switch to Oracle 10g XE, how long will I be able to use my database?
Rgds,
Luther

Hi Mates!
There is one limitation in oracle XE which is:
It stores up to four GB of user data.
I am planning to migrate my database from Oracle 9i
to Oracle 10g XE. My queries are below:
1. Is it possible to migrate from Oracle 9i to Oracle
10g XE?

This is a simple test case: install XE, export from 9i, and import into XE. I did it before and it worked for me, but that does not mean it will work for you; testing is the only way to be sure.
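The export/import path described above can be sketched as a short script. This is a hedged sketch, not the poster's exact procedure: the connect strings (`orcl9i`, `xe`), the `scott/tiger` schema, and the file names are placeholders. It defaults to a dry run, since the classic `exp`/`imp` utilities need an installed Oracle client to actually execute.

```shell
#!/bin/sh
# Sketch of a 9i -> XE move via the classic export/import utilities.
# All names below (DSNs, scott/tiger, file names) are placeholders.
DRY_RUN=1

# Helper: print the command instead of running it when DRY_RUN=1.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. On the 9i side: take a schema-level export.
run exp scott/tiger@orcl9i FILE=scott.dmp OWNER=scott LOG=exp_scott.log

# 2. Copy scott.dmp to the XE machine, then import it there.
run imp scott/tiger@xe FILE=scott.dmp FROMUSER=scott TOUSER=scott LOG=imp_scott.log
```

Set `DRY_RUN=0` only after substituting real credentials and verifying the XE instance has enough free space under its 4 GB limit for the imported schema.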
There are lots of limitations of XE mentioned in the Licensing Information Guide.
http://www.oracle.com/pls/xe102/homepage?remark=tahiti
2. I am confused regarding Four GB of user data
Limitation. I don't understand nothing much about
this. Presently If I prepare a dump (export) file in
oracle 9i the size becomes around 250MB. If I switch
to oracle 10g XE, how long will I be able to use my
database?

My XE is ~3.3 GB total (datafile + tempfile size) and APEX applications are working without problems. But the limit counts all system + user data as a total, so 250 MB of user data most probably won't be a problem.
Rgds,
Luther -
Not able to connect to oracle 11g database source through OBI Server
Hi,
I am having an issue regarding the database connectivity.
I am unable to connect to the Oracle 11g database through the OBI server. However, it's connecting through SQL*Plus with the same username/password@hoststring. It's giving the error as:
[nQSError: 17001]Oracle Error code:1017,message:ORA-01017: invalid username/password; logon denied at OCI call OCILogon
[nQSError: 17014]Could not connect to Oracle database.
Please help me out in resolving the issue.

If it's working in one place and not the other, then it's obviously not an OBIEE problem.
Are you typing it in right??
Do you have multiple tnsnames.ora files pointing to different databases? Maybe sqlplus is picking up one and Toad & OBIEE the other, so the username/password is wrong for that database.
There's a limited amount that can be said to help, as the error is 100% clear about what the problem is :) It's just a matter of working out whether you're connecting to the database you think you are connecting to, and then whether the username & PW you're entering are correct for that DB.
BTW OBIEE 10.1.3.4 is supported with 11g, and we're using it without problem here. -
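One way to chase the multiple-tnsnames.ora theory is to check which file each tool would resolve, and what aliases it defines. A minimal sketch, assuming the usual client precedence (TNS_ADMIN first, then $ORACLE_HOME/network/admin); the `/opt/oracle` fallback and the `ORCL11G` sample entry below are made up for illustration:

```shell
#!/bin/sh
# Show which tnsnames.ora a client in this environment would read.
# Precedence assumed: TNS_ADMIN, else $ORACLE_HOME/network/admin.
TNS_DIR="${TNS_ADMIN:-${ORACLE_HOME:-/opt/oracle}/network/admin}"
echo "tnsnames.ora would be read from: $TNS_DIR/tnsnames.ora"

# List the connect aliases defined in a tnsnames.ora: entries start in
# column 0 and are followed by '='. Demonstrated on a made-up sample file.
sample=$(mktemp)
cat > "$sample" <<'EOF'
ORCL11G =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
EOF
grep -E '^[A-Za-z0-9_.]+[[:space:]]*=' "$sample"
rm -f "$sample"
```

Running this in the shell that launches each tool (OBIEE's server, sqlplus, Toad) shows quickly whether they are resolving the same alias from the same file.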
Error on "Oracle Beehive Database Configuration - Beehive Schemas"
hi
I got the error "Errors with activating subs ORA-31425: subscription does not exist -31425" during the "Oracle Beehive Database Configuration - Beehive Schemas" stage.
the log file shows this:
2008-11-06_22-52-37 info sql command took 5 seconds
2008-11-06_22-52-37 info db logfilename dblog.2008-11-06_22-52-37
2008-11-06_22-52-37 info oracle home C:\oracle\product\1.4.1.0.0\beehive_2
2008-11-06_22-52-37 info Connecting BEE_code/*****@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=paty.xxxx)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=bee)))
2008-11-06_22-52-37 info Going to run SQL command @C:\oracle\product\1.4.1.0.0\beehive_2\beehive\db\search\src\sqlcode\plsql\ss_cdc_sub_activate.sql
2008-11-06_22-52-37 info DEFINE BDB=C:\oracle\product\1.4.1.0.0\beehive_2\beehive\db\
2008-11-06_22-52-37 info Output from SQL process:
Connected.
Errors with activating subs ORA-31425: subscription does not exist -31425
PL/SQL procedure successfully completed.
2008-11-06_22-52-37 info sql command took 0 seconds
2008-11-06_22-52-37 error schema install failed
Tool "Oracle Beehive Database Configuration - Beehive Schemas" returned Failed status.
=============================================================
I am using Oracle 11g as the database server.
No other warnings or errors are visible.
I tried the install on both Linux and Windows, and the error is the same.
Any advice?
Thanks

Hello,
Have you applied all the recommended patches on top of your 11.1.0.6?
Please check the value of the processes parameter and make sure it is high enough (>500).
For the CDC issue, please check the following Metalink note as well and apply its recommendations: Note 737948.1
Also, have a look at your limits.conf file to be sure it matches the kernel recommendations.
Are you installing Beehive and the database on the same machine?
Thanks
Fred -
Need to get into Oracle 8i database and don't have credentials
I need to get into an old Oracle 8i database running on a Windows 2000 server. I have very limited experience with Oracle, and no one who worked with this system is still employed by the company. I'm in the admin & Oracle DBA groups and have tried using
sqlplus <username>/<password> as sysdba
but I get a TNS: protocol adapter error. The Oracle services are all running. If I could get in and export the data to a SQL Server machine, I'd be OK. Can anybody assist?
Thank you!

You must be using Mickeysoft's Remote Desktop to connect to the server.
You need to start a cmd window and issue
set local=<servicename in tnsnames.ora>
sqlplus "/ as sysdba"
and this should work.
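The advice above works because a local ("bequeath") connection without a TNS alias needs the instance identified in the environment first; the protocol adapter error typically means it isn't. A small pre-flight sketch of that check, where `ORCL` is a placeholder instance name, not one from this thread (on Windows the equivalent would be `set LOCAL=...` or `set ORACLE_SID=...` in the same cmd window):

```shell
#!/bin/sh
# Pre-flight check before "sqlplus / as sysdba": is a local instance
# identified? LOCAL (Windows-style) takes precedence here; ORACLE_SID
# is the Unix equivalent. "ORCL" is a placeholder fallback.
SID="${LOCAL:-${ORACLE_SID:-}}"
if [ -z "$SID" ]; then
  echo "Neither LOCAL nor ORACLE_SID is set; sqlplus / as sysdba will fail" >&2
  SID="ORCL"   # placeholder: substitute your real instance name
fi
echo "Would connect to local instance: $SID"
```

If the variable is set and the Windows service for that instance is started, `sqlplus "/ as sysdba"` should then get past the protocol adapter error.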
Sybrand Bakker
Senior Oracle DBA