Oracle 12c in-memory database option
I did not go to OOW 2013, but watched Larry Ellison's video on the Oracle 12c in-memory option. It seems like amazing technology; hard to believe everything can be done behind the scenes once one sets some parameters in init.ora. I already have 12.1 installed on OEL 6.4 on my VMware, and want to play with it, but do not see any download links on technet.oracle.com.
It was just announced, so wait for some time; and I believe it's not going to take "just a few parameters" only.
Aman....
Similar Messages
-
Oracle TimesTen In-Memory Database VS Oracle In-Memory Database Cache
Hi,
What is the difference between Oracle TimesTen In-Memory Database and Oracle In-Memory Database Cache?
On 32-bit Windows I am not able to insert more than 500k rows with 150 columns (with combinations of CHAR, BINARY_DOUBLE, BINARY_FLOAT, TT_BIGINT, REAL, DECIMAL, NUMERIC, etc.).
[TimesTen][TimesTen 11.2.2.2.0 ODBC Driver][TimesTen]TT0802: Database permanent space exhausted -- file "blk.c", lineno 3450, procedure "sbBlkAlloc"
I have set PermSize to 700 MB and TempSize to 100 MB.
What is the max size we can give for PermSize, TempSize and LogBufMB on 32-bit Windows?
What is the max size we can give for PermSize, TempSize and LogBufMB on 64-bit Windows?
What is the maximum TimesTen configuration on 32-bit for PermSize and TempSize?
Thanks!
They are the same product, but they are licensed differently and the license limits what functionality you can use.
TimesTen In-Memory Database is a product in its own right that allows you to use TimesTen as a standalone database; it also allows replication.
IMDB Cache is an Oracle DB Enterprise Edition option (i.e. it can only be licensed as an option to an Oracle DB EE license). It includes all the functionality of TimesTen In-Memory Database but adds cache functionality (cache groups, cache grid etc.).
32-bit O/S are in general a poor platform on which to create an in-memory database of any significant size (32-bit O/S are very limited in memory addressing capability) and 32-bit Windows is the worst example. The hard-coded limit for total datastore size on a 32-bit O/S is 2 GB, but in reality you probably can't achieve that. On Windows the largest you can get is 1.1 GB, and most often less than that. If you need more than about 0.5 GB on Windows then you really need to use 64-bit Windows and 64-bit TimesTen. There is no hard-coded upper limit to database size on 64-bit TimesTen; the limit is the amount of free physical memory (not virtual memory) in the machine. I have easily created a 12 GB database on a Win64 machine with 16 GB RAM. On 64-bit Unix machines we have live databases of over 1 TB...
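For reference, the sizing attributes discussed above (PermSize, TempSize, LogBufMB) are set per DSN in sys.odbc.ini. A minimal sketch, with a hypothetical DSN name and installation paths that will differ on your system:

```
[sampledb]
Driver=/opt/TimesTen/tt1122/lib/libtten.so
DataStore=/var/TimesTen/sampledb/sampledb
DatabaseCharacterSet=AL32UTF8
PermSize=700
TempSize=100
LogBufMB=64
```

Sizes are in MB; on 64-bit TimesTen the practical ceiling for PermSize is the free physical memory of the machine, as described above.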
Chris -
Oracle TimesTen In-Memory Database Risk Matrix
Hi,
From the following web-site I can see two vulnerabilities listed against TimesTen --- CVE-2010-0873 and CVE-2010-0910
http://www.oracle.com/technetwork/topics/security/cpujul2010-155308.html
================================================================
Oracle TimesTen In-Memory Database Risk Matrix
CVE-2010-0873: Data Server, TCP, no package/privilege required, remotely exploitable without authentication. CVSS 2.0 base score 10.0 (Access Vector: Network, Access Complexity: Low, Authentication: None; Confidentiality/Integrity/Availability: Complete/Complete/Complete). Last affected patch set: 7.0.6.0. See Note 1.
CVE-2010-0910: Data Server, TCP, no package/privilege required, remotely exploitable without authentication. CVSS 2.0 base score 5.0 (Access Vector: Network, Access Complexity: Low, Authentication: None; Confidentiality/Integrity/Availability: None/None/Partial+). Last affected patch sets: 7.0.6.0, 11.2.1.4.1. See Note 1.
===========================================================================
Please let me know if I need to take any action on my current TimesTen deployment.
I'm using TimesTen Releases 11.2.1.8.4 and 7.0.5.16.0 at our customer sites.
Request you to respond with your valuable comments.
Regards
Pratheej
Hi Pratheej,
These vulnerabilities were fixed in 11.2.1.6.1 and 7.0.6.2.0. As you are on 11.2.1.8.4 you are okay for 11.2.1 but the 7.0.5.16.0 release does contain the vulnerability. If you are concerned then you should upgrade those to 7.0.6.2.0 or later (check for the latest applicable 7.0 release in My Oracle Support).
Chris -
What features says Oracle 12c, a cloud database?
Hi all,
I'm new to the cloud computing world and would like to know what features 12c brings as a cloud-ready database. Is it multitenant, and/or are there more things to consider?
thanks
Pradeepa
There are many features that were put into the 12c release specifically to make it a first-class citizen for cloud deployments. Please take a look at the following links:
- Oracle White Paper:
http://www.oracle.com/technetwork/database/plug-into-cloud-wp-12c-1896100.pdf
- independent 4-page review:
http://www.infoworld.com/article/2611000/database/oracle-database-12c-review--finally--a-true-cloud-database.html
Hope this is helpful. -
Oracle 12c disable EM Database Express
It seems less obvious, now that DB Console (Database Express) is stored within the database, how to disable it. We already have EM 12c Cloud; is there any benefit to having Database Express also run standalone? If not, is there a recommended way to disable it altogether?
ngilbert wrote:
It seems less obvious, now that DB Console (Database Express) is stored within the database, how to disable it. We already have EM 12c Cloud; is there any benefit to having Database Express also run standalone? If not, is there a recommended way to disable it altogether?
You can disable all access to DB Express by setting its HTTP/HTTPS ports to zero. For example,
exec dbms_xdb_config.sethttpport(0)
exec dbms_xdb_config.sethttpsport(0) -
Oracle In-Memory Database Cache
Hi,
I was reading about Oracle In-Memory Database Cache and I am wondering whether this option is available on 10g; from what I read it is only on 11g, and it is an extra option that you have to pay for.
Any more info, will be great.
thanks
From here:
The In-Memory Database Cache option of Oracle Database Enterprise Edition is based on Oracle TimesTen In-Memory Database. TimesTen is also available for 10g. -
Oracle 12c vs. TimesTen
With the Oracle 12c In-Memory capability, is there still a reason to use TimesTen? Is Oracle going to try to phase out TimesTen? I would be interested in hearing thoughts.
I believe 12cR1 has already been released...see link here --> Oracle Database Software Downloads | Oracle Technology Network | Oracle
It would be too bad if Oracle put TimesTen on the shelf, so to speak.
Victor -
Hi,
I've a couple of tables that have a million records or more, occupying some 300MB of space. I export these tables with no data (using the ROWS=N option). When I try to import these tables from the .DMP file, the tables get created with 300MB of space or more, as specified in the .DMP file. Is there any way I could ignore the storage clause in the .DMP file? Please let me know at the earliest.
Thanks in advance.
Regards,
Jai.
Oracle TimesTen In-Memory Database is a memory-optimized relational database that empowers applications with the instant responsiveness and very high throughput required by today's real-time enterprises and industries such as telecom, capital markets, and defense. Deployed in the application tier as a cache or embedded database, Oracle TimesTen In-Memory Database operates on data stores that fit entirely in physical memory using standard SQL interfaces.
http://www.oracle.com/technology/products/timesten/index.html -
With regard to this thread: Few interesting facts about database under UCM 10g what Oracle database options can be effectively used under UCM? The comprehensive overview of the options can be obtained here: http://www.oracle.com/us/products/database/options/index.html
Real Application Clusters* - this option can be used to increase the database performance and availability. It is fully transparent to applications.
Partitioning* - this option can affect performance, enable hierarchical storage management (using cheaper hardware to store large amounts of data) and help with disaster recovery (backup/restore). I believe that if documents are stored in the database, this option is a must. Even if a project does not use HSM, partitioning of large tables such as FILESTORAGE will enable: a) faster backups - once a partition is "closed", it will not change - therefore, future backups can work only with "open" partitions and unpartitioned data; b) faster restores - large tables can be only partially restored - e.g. the last few months - and the system can be running whilst the remaining data is restored. Watch out for partitioning of metadata tables, though (DOCMETA, REVISIONS, DOCUMENTS)! At the least, there are no clear criteria for how these tables should be partitioned - and various checks and validations may actually require those tables to be fully restored before you can perform such basic operations as check-in.
Advanced Security* and Database Vault* - these options may increase security when content is stored in the database (no one, not even administrators, might be able to reach the content unless authorized). The only drawback is that even if content is stored in the database, in the initial phase it is stored in the filesystem (vault) too, and the minimum retention period is 1 day.
I will also mention two options that might look appetizing, but UCM probably does not benefit from them too much:
Advanced Compression* - compresses data in the database. This, and the Hybrid Columnar Compression used in Exadata, can do real magic when working with structured data (just read a report from Turkcell, who compressed 600 TB to 50 TB - a factor of 12). For unstructured data, such as PDF or JPEG, the effect might be very small, though. Still, if you have a chance, give it a try.
Active Data Guard* - Data Guard is a technology for disaster recovery. Advantage of Active Data Guard is that it allows using of the secondary location for read only operations, rather than leaving it idle (stand-by); this means, you might decrease sizing of both locations. With UCM, also do not forget about CONTENT TRACKER (which might require a "write" operation even for otherwise read only ones, such as DOC_INFO, GET_SEARCH_RESULTS, or retrieving a content item), but db gurus know how to handle even that. Unfortunately, Active Data Guard cannot be used with UCM at the moment, because not all the data is stored in the database and the secondary location might not be fully synchronized.
In my opinion, other options are not so relevant for a UCM solution.
Compression and Deduplication of SecureFiles LOBs, which is part of the Advanced Compression option, can potentially deliver huge space savings and performance benefits. If the content is primarily Office documents, XML documents, or character-based (email?), then it will likely compress very well. Also, if the same file is stored multiple times, deduplication will cause the Oracle database to store only one copy, rather than storing the same document multiple times. There's more info on Advanced Compression here: http://www.oracle.com/us/products/database/options/advanced-compression/index.html
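To illustrate, SecureFiles compression and deduplication are declared in the LOB storage clause at table creation (or via ALTER TABLE ... MODIFY LOB). A sketch with made-up table and column names, assuming the Advanced Compression option is licensed:

```sql
CREATE TABLE content_store (
  doc_id   NUMBER PRIMARY KEY,
  doc_body BLOB
)
LOB (doc_body) STORE AS SECUREFILE (
  COMPRESS MEDIUM   -- LOW | MEDIUM | HIGH
  DEDUPLICATE       -- identical LOBs are stored only once
);
```

Compression level is a trade-off between CPU and space; MEDIUM is the default when COMPRESS is specified without a level.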
-
"In-Memory Database Cache" option for Oracle 10g Enterprise Edition
Hi,
In one of our applications, we are using TimesTen 5.1.24 and Oracle 9i
databases (platform: Solaris 9).
TimesTen holds application information which needs to be accessed quickly
and Oracle 9i is a master application database.
Now we are looking at an option of migrating from Oracle 9i to Oracle 10g
database. While exploring about Oracle 10g features, came to know about
"In-Memory Database Cache" option for Oracle Enterprise Edition. This made
me to think about using Oracle 10g Enterprise Edition with "In-Memory
Database Cache" option for our application.
Following are the advantages that I could visualize by adopting the
above-mentioned approach:
1. Data reconciliation between Oracle and TimesTen is not required (i.e.
data can be maintained only in Oracle tables and for caching "In-Memory
Database Cache" can be used)
2. Data maintenance is easy and gives one view access to data
I have following queries regarding the above-mentioned solution:
1. What is the difference between "TimesTen In-Memory Database" and
"In-Memory Database Cache" in terms of features and licensing model?
2. Is "In-Memory Database Cache" option integrated with Oracle 10g
installable or a separate installable (i.e. TimesTen installable with only
cache feature)?
3. Is "In-Memory Database Cache" option same as that of "TimesTen Cache
Connect to Oracle" option in TimesTen In-Memory Database?
4. After integrating "In-Memory Database Cache" option with Oracle 10g, data
access will happen only through Oracle sqlplus or OCI calls. Am I right here
in making this statement?
5. Is it possible to cache the result set of a join query in "In-Memory
Database Cache"?
In "Options and Packs" chapter in Oracle documentation
(http://download.oracle.com/docs/cd/B19306_01/license.102/b14199/options.htm
#CIHJJBGA), I encountered the following statement:
"For the purposes of licensing Oracle In-Memory Database Cache, only the
processors on which the TimesTen In-Memory Database component of the
In-Memory Database Cache software is installed and/or running are counted
for the purpose of determining the number of licenses required."
We have servers with the following configuration. Is there a way to get the
count of processors on which the Cache software could be installed and/or
running? Please assist.
Production box with 12 core 2 duo processors (24 cores)
Pre-production box with 8 core 2 duo processors (16 cores)
Development and test box with 2 single chip processors
Development and test box with 4 single chip processors
Development and test box with 6 single chip processors
Thanks & Regards,
Vijay
Hi Vijay,
regarding your questions:
1. What is the difference between "TimesTen In-Memory Database" and
"In-Memory Database Cache" in terms of features and licensing model?
==> The product has just been renamed and better integrated with the Oracle database - TimesTen == In-Memory Database Cache
2. Is "In-Memory Database Cache" option integrated with Oracle 10g
installable or a separate installable (i.e. TimesTen installable with only
cache feature)?
==> Separate installation
3. Is "In-Memory Database Cache" option same as that of "TimesTen Cache
Connect to Oracle" option in TimesTen In-Memory Database?
==> Please have a look here: http://www.oracle.com/technology/products/timesten/quickstart/cc_qs_index.html
This explains the differences.
4. After integrating "In-Memory Database Cache" option with Oracle 10g, data
access will happen only through Oracle sqlplus or OCI calls. Am I right here
in making this statement?
==> Please see above mentioned papers
5. Is it possible to cache the result set of a join query in "In-Memory
Database Cache"?
==> Again ... ;-)
Kind regards
Mike -
Pre-loading Oracle text in memory with Oracle 12c
There is a white paper from Roger Ford that explains how to load the Oracle index in memory : http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
In our application, on Oracle 12c, we are indexing a big XML field (which is stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the $I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory then performance can fall sharply).
But after migrating to Oracle 12c, I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size) and, by applying the technique from the white paper, I can pin the DR$ I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory. Not sure if it is a bug or not.
What I found as work-around is to build the index with the following storage options:
ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
so that the token_info column will be stored in a secure file. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure that reads the LOB so that it is loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
Because of the SDATA section, there is a new DR table (the $S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT also, but putting it in the keep cache is not as important as the token_info column of the DR$ I table. A final note: doing SEPARATE_OFFSETS = 'YES' was very bad in my case: the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
Here is an example of how to reproduce the problem of the size increasing when doing ctx_ddl.optimize_index:
1. create the table
drop table test;
CREATE TABLE test
(ID NUMBER(9,0) NOT NULL ENABLE,
XML_DATA XMLTYPE)
XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
2. insert a few records
insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
3. create the text index
drop index i_test;
exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
begin
CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP',
section_name => 'SData_02',
tag => 'SData_02',
datatype => 'varchar2');
end;
/
exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
exec ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
exec ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
exec ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
create index I_TEST
on TEST (XML_DATA)
indextype is ctxsys.context
parameters('
section group "TEST_SGP"
storage "TEST_STO"
') parallel 2;
4. check the index size
select ctx_report.index_size('I_TEST') from dual;
it says :
TOTALS FOR INDEX TEST.I_TEST
TOTAL BLOCKS ALLOCATED: 104
TOTAL BLOCKS USED: 72
TOTAL BYTES ALLOCATED: 851,968 (832.00 KB)
TOTAL BYTES USED: 589,824 (576.00 KB)
4. optimize the index
exec ctx_ddl.optimize_index('I_TEST','REBUILD');
and now recompute the size, it says
TOTALS FOR INDEX TEST.I_TEST
TOTAL BLOCKS ALLOCATED: 1112
TOTAL BLOCKS USED: 1080
TOTAL BYTES ALLOCATED: 9,109,504 (8.69 MB)
TOTAL BYTES USED: 8,847,360 (8.44 MB)
which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
5. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a secure file and the size will stay relatively small. Then you can load this column in the cache using a procedure similar to
alter table DR$I_TEST$I storage (buffer_pool keep);
alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
rem: now we must read the LOB so that it is loaded into the keep buffer pool; use the procedure below
create or replace procedure loadTokenInfo is
type c_type is ref cursor;
c2 c_type;
s varchar2(2000);
b blob;
buff raw(100); -- dbms_lob.read on a BLOB needs a RAW buffer
siz number;
off number;
cntr number;
begin
s := 'select token_info from DR$i_test$I';
open c2 for s;
loop
fetch c2 into b;
exit when c2%notfound;
siz := 10;
off := 1;
cntr := 0;
if dbms_lob.getlength(b) > 0 then
begin
loop
dbms_lob.read(b, siz, off, buff);
cntr := cntr + 1;
off := off + 4096;
end loop;
exception when no_data_found then
if cntr > 0 then
dbms_output.put_line('4K chunks fetched: '||cntr);
end if;
end;
end if;
end loop;
close c2;
end;
/
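To run the sketch above from SQL*Plus (assuming the procedure compiled in the schema that owns the text index):

```sql
set serveroutput on
-- touches every TOKEN_INFO LOB so its blocks are pulled into the KEEP pool
exec loadTokenInfo
```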
Rgds, Pierre
I have been working a lot on that issue recently; I can give some more info.
First I totally agree with you, I don't like to use the keep_pool and I would love to avoid it. On the other hand, we have a specific use case : 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer facing application that uses the text index to search the database : performance is critical for them.
What kind of performance do you have with your application ?
In my case, I have learned the hard way that having the index in memory (the DR$I table in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do: MongoDB explicitly says that the index must fit in memory, and Elasticsearch runs in JVMs whose heaps are also in memory. And effectively, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */
TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID
FROM DR$idxname$I
WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype
ORDER BY TOKEN_TEXT, TOKEN_TYPE, TOKEN_FIRST
which is continuously done.
I think that the algorithm used by Oracle to keep blocks in cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" feature, the In-Memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as not a bug, but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
And for that, the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because the trick of R. Ford no longer works.
What worked:
First set the keep_pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard block size of 16K.
Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
I ended up doing the following; event 10949 avoids the direct path reads issue.
alter session set events '10949 trace name context forever, level 1';
alter table DR#idxname0001$I cache;
alter table DR#idxname0002$I cache;
alter table DR#idxname0003$I cache;
SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT), SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT), SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT), SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
SELECT /*+ INDEX(ITAB) CACHE(ITAB) */ SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
SELECT /*+ INDEX(ITAB) CACHE(ITAB) */ SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
SELECT /*+ INDEX(ITAB) CACHE(ITAB) */ SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
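A rough way to check that the pre-load actually stuck is to count cached blocks per object in V$BH; the object name pattern below is illustrative and would match the partitioned $I tables used above:

```sql
SELECT o.object_name, COUNT(*) AS cached_blocks
FROM   v$bh b
JOIN   dba_objects o ON o.data_object_id = b.objd
WHERE  o.object_name LIKE 'DR#IDXNAME%$I'
GROUP  BY o.object_name;
```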
It worked. With great relief I expected to take some time off, but there was one last surprise. The command
exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
gave the following:
ERROR at line 1:
ORA-20000: Oracle Text error:
DRG-50857: oracle error in drftoptrebxch
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.CTX_DDL", line 1141
ORA-06512: at line 1
This is very much exactly described in Metalink note 1645634.1, but for the case of a non-partitioned index. The work-around given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a work-around, but I did not find it on Metalink.
Other points of attention with text index creation (stuff that surprised me at first!):
- if you use the dbms_pclxutil package, then the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
- this, in combination with the fact that on a RAC you won't see any activity on the box, can be very frightening: Oracle can choose to start the workers on the other node.
I now understand much better how text indexing works; I think it is a great technology which can scale via partitioning. But as always, the design of the application is crucial; most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
Regards, Pierre -
Not Able to Create database objects(Tables, etc) in Oracle 12c
Hello Sir,
Recently, I installed Oracle 12c on my PC, and I am able to connect with the ANONYMOUS user and connection name ORCL.
But I am not able to create any objects in the database, such as tables; it just shows an error message like: you don't have sufficient privileges.
Could you please help with this? How do I start working with the Oracle 12c database? I have worked on Oracle 11g with the SCOTT user and connection name ORCL, and it was working fine. But the SCOTT user is not present in 12c. Is there any other user in 12c with default tables, like the EMP table in the SCOTT schema in 11g?
Please suggest, what to do?
Thanks in advance!!
Hi Nishant,
Thanks for the reply.
I have done all the steps you mentioned above, but I am not able to create the HR user. Please check the errors below and guide me on this.
SQL*Plus: Release 12.1.0.1.0 Production on Sat Oct 5 23:46:38 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Enter user-name: anonymous
Enter password:
Last Successful login time: Sat Oct 05 2013 23:46:58 +05:30
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> connect sys as sysdba;
Enter password:
Connected.
SQL> CREATE USER hr IDENTIFIED BY Password#123
2 DEFAULT TABLESPACE hr_users
3 TEMPORARY TABLESPACE hr_temp
4 QUOTA 5000k ON hr_users
5 QUOTA unlimited ON hr_temp
6 PROFILE enduser ;
CREATE USER hr IDENTIFIED BY Password#123
ERROR at line 1:
ORA-65096: invalid common user or role name
SQL> SELECT NAME, CDB FROM V$DATABASE;
NAME CDB
ORCL YES
SQL> SHO CON_ID CON_NAME
CON_ID
1
CON_NAME
CDB$ROOT
SQL> SET LINE 150
SQL> SELECT NAME, OPEN_MODE, OPEN_TIME FROM V$PDBS;
NAME OPEN_MODE OPEN_TIME
PDB$SEED READ ONLY 04-OCT-13 08.57.50.461 PM
PDBORCL MOUNTED
SQL> CONN HR/HR@PDBORCL
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified
SQL> SHO CON_ID CON_NAME
SP2-0640: Not connected
SP2-0641: "SHOW CONTAINER" requires connection to server
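For reference, the ORA-65096 above comes from creating a local-style user while connected to the root container (SHO CON_NAME shows CDB$ROOT), and the failed CONN HR/HR@PDBORCL is expected while V$PDBS shows the PDB only MOUNTED. A sketch of the usual fix (the password is a placeholder; the tablespace and profile clauses from the original attempt are omitted for brevity):

```sql
-- open the pluggable database first (it is only MOUNTED above)
ALTER PLUGGABLE DATABASE pdborcl OPEN;

-- create the user inside the PDB rather than in CDB$ROOT
ALTER SESSION SET CONTAINER = pdborcl;
CREATE USER hr IDENTIFIED BY "Password#123";
GRANT CREATE SESSION, CREATE TABLE TO hr;

-- alternatively, a common user created in the root needs the C## prefix:
-- CREATE USER c##hr IDENTIFIED BY "Password#123" CONTAINER = ALL;
```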
Thanks in advance!!
Regards,
Dharmendra Verma -
Using In-Memory Database Cache option need help
Hi,
I need some help:
I am using Oracle 10g Server Release 2
For Clientele activilty I am using Oracle Client where the Application resides.
For Better performance I want to use the In-Memory Database Cache option Times-Ten Database.
Is it possible to do so where there is Oracle Database Server Relaease 2 and in the Client there is Times-Ten In-Memory Database Cache?
Any help will be needful for me.
In-Memory Cache is a server-side cache. I cannot see any value in putting it on the client side, though given the license cost per CPU core I am sure the entire Oracle sales force would gladly disagree with me.
-
Connecting to oracle 12c multitenant database
My question: using the ColdFusion 11 Enterprise datasource GUI, how do you connect to an Oracle 12c multitenant database when you need to log into a pluggable database beneath the container database?
What's the connection string, or is it even possible to connect? Earlier versions of Oracle didn't use the multitenant architecture; you just made a user/password, granted connect privileges, and connected.
Using Oracle Linux 6.5 64bit.
This is assuming I can connect to the oracle CDB (container database) and PDB (pluggable database) from the shell using sqlplus.
<!------------------------ I can tsnping the pluggable database name pdb1 ---------------------------------------------->
[oracle@localhost ~]$ tnsping pdb1
TNS Ping Utility for Linux: Version 12.1.0.1.0 - Production on 27-JUN-2014 12:13:12
Copyright (c) 1997, 2013, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.1.0/dbhome_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = pdb1.localhost.localdomain)))
OK (0 msec)
<!------------------------------ I can connect to pdb1 using sqlplus shell ---------------------------------------------------->
[oracle@localhost ~]$ sqlplus user1/password20@pdb1
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jun 27 12:21:06 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Last Successful login time: Fri Jun 27 2014 09:45:05 -07:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
Having the same issue, can't get the connection string right with the PDB?
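For what it's worth, a PDB is reached through its service name, so a JDBC-style datasource (which is what ColdFusion uses underneath) should point at the service rather than a SID. A sketch reusing the host and service name from the tnsping output above; adjust host, port and service for your environment:

```
jdbc:oracle:thin:@//localhost.localdomain:1521/pdb1.localhost.localdomain
```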
Anyone out there? Kurrttt, did you ever get this figured out? -
What are MD_ tables in Oracle 12c database? What is their purpose?
I just set up an Oracle 12c database with the purpose of migrating an existing Oracle 11.2.0.2 database. Our application developers are a bit confused when it comes to the schemas suddenly having MD_ tables they did not create. Any help is appreciated.
Did you do this using sql developer and the migration workbench to create a repository?
See if anything in this article rings a bell:
http://oraexplorer.com/2008/06/oracle-sql-developer-migration-workbench/#sthash.gNFtpafS.dpbs
Next, you will need to create a repository. A database account with CREATE SESSION, RESOURCE, and CREATE VIEW must be created first. Then log in to SQL Developer as that account. From the tool, create a repository via Migration menu > Repository Management > Create Repository. This process creates a bunch of MD* and MIGR* tables and packages.
I ask because you said you 'set up an Oracle 12c database' but then implied your developers are accessing it.
Most people experimenting with 12c create a multitenant database which has a CDB and one PDB that contains the sample schemas.
You typically would NOT allow developers access to the CDB; that is for admin purposes only. So, hopefully, if you developers access anything it is ONLY the sample PDB or a PDB that you have created from the seed.
Make sure you and your developers RTFM about the new multitenant architecture, or you will all get horribly confused when you try to do simple things like creating users or issuing grants. All of that works VERY differently in 12c.
See chapters 17 and 18 of the Database Concepts doc
http://docs.oracle.com/cd/E16655_01/server.121/e17633/cdbovrvw.htm
Pay particular attention to the discussion of 'common' and 'local' users. A hidden 'gotcha' is that the PDBs will NOT be started/opened by default when you do a 'startup' of the database. If you create common users, those users will NOT be created in PDBs that are not open; so there is the potential to have to perform a lot of manual maintenance if you need to add those users to PDBs that weren't open at the time you added the users.
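To make the common/local distinction concrete, a minimal sketch (user names, passwords and the PDB name are made up for the example):

```sql
-- common user: created from CDB$ROOT, must carry the C## prefix,
-- and is created in every container that is open at the time
CREATE USER c##dbadmin IDENTIFIED BY "StrongPwd#1" CONTAINER = ALL;

-- local user: created inside one PDB, no prefix required
ALTER SESSION SET CONTAINER = mypdb;
CREATE USER appuser IDENTIFIED BY "StrongPwd#2";
GRANT CREATE SESSION TO appuser;
```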