Data automatically become null in Oracle 11g DB
We set up an Oracle 11g database for our application. Yesterday I noticed that in the table Contract_Owner the effective date for the owner FQYX1 was 30-Oct-12, but today it has suddenly become NULL. I'm not sure how it changed from 30-Oct-12 to NULL.
Any ideas/thoughts will be greatly appreciated.
Edited by: 959598 on Apr 18, 2013 2:03 AM
Oracle wouldn't change a value to a NULL unless it is told to do so.
It could have been a user, a developer, a power user / super user.
It could have been application code.
It could have been a trigger.
It could have been a mistakenly-written update.
It could have been a scheduled job or a one-off job.
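To catch the culprit the next time it happens, a DML trigger can record session details whenever the column is nulled out. A minimal sketch, assuming the table and column names from the question; the audit table and the OWNER_ID column are illustrative, not from the thread:

```sql
-- Illustrative audit table; adjust columns to your needs.
CREATE TABLE contract_owner_audit
( owner_id    VARCHAR2(30),
  old_date    DATE,
  changed_by  VARCHAR2(30),
  os_user     VARCHAR2(30),
  module      VARCHAR2(64),
  changed_at  TIMESTAMP );

-- Fires only when EFFECTIVE_DATE goes from a value to NULL,
-- and records who/what did it via SYS_CONTEXT('USERENV', ...).
CREATE OR REPLACE TRIGGER trg_contract_owner_audit
BEFORE UPDATE OF effective_date ON contract_owner
FOR EACH ROW
WHEN (NEW.effective_date IS NULL AND OLD.effective_date IS NOT NULL)
BEGIN
  INSERT INTO contract_owner_audit
    (owner_id, old_date, changed_by, os_user, module, changed_at)
  VALUES
    (:OLD.owner_id,
     :OLD.effective_date,
     SYS_CONTEXT('USERENV','SESSION_USER'),
     SYS_CONTEXT('USERENV','OS_USER'),
     SYS_CONTEXT('USERENV','MODULE'),
     SYSTIMESTAMP);
END;
/
```

Once the audit row appears, the SESSION_USER/OS_USER/MODULE values point at the user, application, or job responsible, and the trigger can then be dropped.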
Hemant K Chitale
Similar Messages
-
Automatic table partitioning in Oracle 11g
Hi All,
I need to implement automatic table partitioning in Oracle 11g, but the partitioning interval should be daily (one partition per day).
I was able to do this for monthly and yearly intervals, but not daily.
create table part
(a date) PARTITION BY RANGE (a)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
(partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
);
Table created
create table part
(a date) PARTITION BY RANGE (a)
INTERVAL (NUMTOYMINTERVAL(1,'YEAR'))
(partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
);
Table created
But if I use DD or DAY instead of YEAR or MONTH it fails. Please suggest how to perform this on a daily basis.
SQL>
1 create table part
2 (a date)PARTITION BY RANGE (a)
3 INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
4 (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
5* )
SQL> /
INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
ERROR at line 3:
ORA-14752: Interval expression is not a constant of the correct type
SQL> create table part
(a date)PARTITION BY RANGE (a)
INTERVAL (NUMTOYMINTERVAL(1,'DD'))
(partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
); 2 3 4 5
INTERVAL (NUMTOYMINTERVAL(1,'DD'))
ERROR at line 3:
ORA-14752: Interval expression is not a constant of the correct type
Please suggest how to resolve this ORA-14752 error when using DAY or DD or HH24.
-Yasser
Yes, for different partitions for different months:
interval (numtoyminterval(1,'MONTH'))
store in (TS1,TS2,TS3)
This code will store data in partitions in tablespaces TS1, TS2, and TS3 in a round-robin manner.
For day-wise partitions, yes, you can use:
INTERVAL (NUMTODSINTERVAL(1,'day')) or
INTERVAL (NUMTODSINTERVAL(2,'day')) or
INTERVAL (NUMTODSINTERVAL(3,'day')) or
INTERVAL (NUMTODSINTERVAL(4,'day')) or
INTERVAL (NUMTODSINTERVAL(5,'day')) or
INTERVAL (NUMTODSINTERVAL(n,'day')) -
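Putting the answer together: NUMTOYMINTERVAL only accepts 'YEAR' and 'MONTH' (which is why 'DAY' and 'DD' raised ORA-14752), while NUMTODSINTERVAL accepts 'DAY', 'HOUR', 'MINUTE', and 'SECOND'. A sketch of the daily version of the original table:

```sql
-- Daily interval partitioning: NUMTODSINTERVAL (day-to-second),
-- not NUMTOYMINTERVAL (year-to-month), must be used for 'DAY'.
CREATE TABLE part
( a DATE )
PARTITION BY RANGE (a)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
( PARTITION p1 VALUES LESS THAN (TO_DATE('01-NOV-2007','DD-MON-YYYY')) );
```

Each day's data beyond the initial partition boundary then gets its own system-created partition automatically.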
Oracle SQL Developer 1.2 – Automatic - is compatible with Oracle 11g?
Is Oracle SQL Developer 1.2 compatible with Oracle 11g?
Thanks in advance!
-Babu
So I take it the question is:
Is 1.2 Sql Developer compatible with 11g DB?
The short answer is yes. We are constantly adding new functionality and better support for the newer databases. For example, to use the Real-Time SQL Monitoring that became available in 11g, you will have to be on at least SQL Developer 2.1. Is there some reason you don't want to upgrade to the latest SQL Developer? If not the current EA version, then at least the current production version?
Thanks,
Syme -
How to insert data from *.dmp file to oracle 11g using Oracle SQL Develope
hi
I backed up my database using PL/SQL Developer and made a *.dmp file.
How do I import data from the *.dmp file into Oracle 11g using Oracle SQL Developer 2.1.1.64?
And how do I make a *.dmp file from SQL*Plus?
Thanks in advance
PL/SQL Developer has a configuration window where you choose the executable used for the import/export.
Find it and its Oracle home version; it may be exp or expdp, and the home version is the version of the client where the exp executable lives.
Then use the same version of imp or impdp to execute the import; you do not need to use Oracle SQL Developer 2.1.1.64. If you want to use it, you must have the same version in the Oracle home that SQL Developer's exp/imp uses. -
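For the SQL*Plus part of the question: dump files are not created from inside SQL*Plus at all; exp/expdp are OS-level utilities run from the shell. A hedged sketch with illustrative connection details and schema name, not taken from the thread:

```shell
# Schema-level export run from the OS shell (not SQL*Plus).
# Username, password, TNS alias, and file names are illustrative.
exp scott/tiger@orcl file=scott.dmp log=scott_exp.log owner=scott

# Matching import of that dump into the same (or another) database:
imp scott/tiger@orcl file=scott.dmp log=scott_imp.log fromuser=scott touser=scott
```

The same pattern applies with expdp/impdp, except that Data Pump additionally requires a database DIRECTORY object for the dump file location.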
On making call to Oracle procedures from Java, Value becomes null on oracle
We are using some user-defined Oracle data types in our Java/J2EE application,
and some of them are Oracle collections (e.g. VARRAY).
We are making calls to procedures/functions from Java; some parameters
of user-defined data types are declared in the
procedures/functions, and from Java the values are properly set on these
user-defined data type parameters and sent to the procedures.
We are not getting any exception on the Java side or the Oracle side,
but the values arrive blank/null on the Oracle procedure side for the
parameters of user-defined data types.
However, when we do a count on the collection of the user-defined data
type, it properly gives the size of the collection (VARRAY).
When we try to read the values from the collection (VARRAY), it gives
blank/null values and there is no exception.
Please let me know if you have any suggestions on this.
user7671994 wrote:
When we try to read the values from the collection (VARRAY), it gives blank/null values and there is no exception.
If you are talking about VARCHAR2 attributes of the objects, then you should add orai18n.jar to the classpath. -
Import Data over network link in oracle 11g
We want to take an export of the OND schema in the production database and
import it into the OND schema in the UAT database over a network
link by using Data Pump, in Oracle 11g. Kindly share the steps.
Scenario:
Directly importing the TEST01 schema in the production database (oraodrmu) into the test database oraodrmt, over
a network, using a database link and Data Pump in Oracle 11g.
Note: When you perform an import over a database link, the import source is a database, not a dump file set, and the data is imported to the connected database instance.
Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.
=================================================================
STEP-1 (IN PRODUCTION DATABASE - oraodrmu)
=================================================================
[root@szoddb01]>su - oraodrmu
Enter user-name: /as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> grant resource to test01;
Grant succeeded.
SQL> grant imp_full_database to test01;
Grant succeeded.
SQL> select owner,object_type,status,count(*) from dba_objects where owner='TEST01' group by owner,object_type,status;
OWNER OBJECT_TYPE STATUS COUNT(*)
TEST01 PROCEDURE VALID 2
TEST01 TABLE VALID 419
TEST01 SEQUENCE VALID 3
TEST01 FUNCTION VALID 8
TEST01 TRIGGER VALID 3
TEST01 INDEX VALID 545
TEST01 LOB VALID 18
7 rows selected.
SQL>
SQL> set pages 999
SQL> col "size MB" format 999,999,999
SQL> col "Objects" format 999,999,999
SQL> select obj.owner "Owner"
2 , obj_cnt "Objects"
3 , decode(seg_size, NULL, 0, seg_size) "size MB"
4 from (select owner, count(*) obj_cnt from dba_objects group by owner) obj
5 , (select owner, ceil(sum(bytes)/1024/1024) seg_size
6 from dba_segments group by owner) seg
7 where obj.owner = seg.owner(+)
8 order by 3 desc ,2 desc, 1
9 /
Owner Objects size MB
OND 8,097 284,011
SYS 9,601 1,912
TEST01 998 1,164
3 rows selected.
SQL> exit
=================================================================
STEP-2 (IN TEST DATABASE - oraodrmt)
=================================================================
[root@szoddb01]>su - oraodrmt
[oraodrmt@szoddb01]>sqlplus
SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 3 18:40:16 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Enter user-name: /as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select name,open_mode from v$database;
NAME OPEN_MODE
ODRMT READ WRITE
SQL> create tablespace test_test datafile '/trn_u04/oradata/odrmt/test01.dbf' size 2048m;
Tablespace created.
SQL> create user test01 identified by test123 default tablespace test_test;
User created.
SQL> grant resource, create session to test01;
Grant succeeded.
SQL> grant EXP_FULL_DATABASE to test01;
Grant succeeded.
SQL> grant imp_FULL_DATABASE to test01;
Grant succeeded.
Note: ODRMU is the DNS host name. We can test the connection with: [oraodrmt@szoddb01]>sqlplus test01/test01@odrmu
SQL> create directory test_network_dump as '/dbdump/test_exp';
Directory created.
SQL> grant read,write on directory test_network_dump to test01;
Grant succeeded.
SQL> conn test01/test123
Connected.
SQL> create DATABASE LINK remote_test CONNECT TO test01 identified by test01 USING 'ODRMU';
Database link created.
For testing the database link we can try the below sql:
SQL> select count(*) from OA_APVARIABLENAME@remote_test;
COUNT(*)
59
SQL> exit
[oraodrmt@szoddb01]>impdp test01/test123 network_link=remote_test directory=test_network_dump remap_schema=test01:test01 logfile=impdp__networklink_grms.log;
[oraodrmt@szoddb01]>
Import: Release 11.2.0.2.0 - Production on Mon Dec 3 19:42:47 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TEST01"."SYS_IMPORT_SCHEMA_01": test01/******** network_link=remote_test directory=test_network_dump remap_schema=test01:test01 logfile=impdp_grms_networklink.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 318.5 MB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TEST01" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "TEST01"."SY_TASK_HISTORY" 779914 rows
. . imported "TEST01"."JCR_JNL_JOURNAL" 603 rows
. . imported "TEST01"."GX_GROUP_SHELL" 1229 rows
Job "TEST01"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at 19:45:19
[oraodrmt@szoddb01]>sqlplus
SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 3 19:46:04 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Enter user-name: /as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select owner,object_type,status,count(*) from dba_objects where owner='TEST01' group by owner,object_type,status;
OWNER OBJECT_TYPE STATUS COUNT(*)
TEST01 PROCEDURE VALID 2
TEST01 TABLE VALID 419
TEST01 SEQUENCE VALID 3
TEST01 FUNCTION VALID 8
TEST01 TRIGGER VALID 3
TEST01 INDEX VALID 545
TEST01 LOB VALID 18
TEST01 DATABASE LINK VALID 1
8 rows selected.
SQL>
SQL> set pages 999
SQL> col "size MB" format 999,999,999
SQL> col "Objects" format 999,999,999
SQL> select obj.owner "Owner"
2 , obj_cnt "Objects"
3 , decode(seg_size, NULL, 0, seg_size) "size MB"
4 from (select owner, count(*) obj_cnt from dba_objects group by owner) obj
5 , (select owner, ceil(sum(bytes)/1024/1024) seg_size
6 from dba_segments group by owner) seg
7 where obj.owner = seg.owner(+)
8 order by 3 desc ,2 desc, 1
9 /
Owner Objects size MB
OND 8,065 247,529
SYS 9,554 6,507
TEST01 999 1,164
13 rows selected.
=================================================================
STEP-3 FOR REMOVING THE DATABASE LINK
=================================================================
[oraodrmt@szoddb01]>sqlplus
SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 3 19:16:01 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Enter user-name: /as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> drop database link remote_test;
Database link dropped. -
Can't install Data Guard using DBCA in Oracle 11g Release 2
I have installed Oracle Database 11g Release 2 successfully. I installed Label Security using DBCA; now, when I am installing Database Vault using DBCA, it gives the message "ORA-01017: invalid username/password; logon denied" and exits back to DBCA.
What step am I missing? Please suggest.
Regards and thx
Hi,
I had the same issue too. I installed it on my desktop (server class option). Everything else, including EM, is working fine. However, I got around the issue to an extent by manually running the catalog scripts for Database Vault. They are located under $ORACLE_HOME/rdbms/admin; you have to run the script catmac.sql. There is a catch, though: you need to go through the contents of the script and execute the other scripts manually by supplying usernames and passwords (yes, it sucks!!), and I don't find any help on Metalink for this issue.
Currently I am trying to create the realms but I am denied permission due to lack of "OPERATOR TARGET" privileges.
If someone can lead me to the correct place where I can look for what is missing, it would be great.
Thank you
Kumar Ramalingam -
Problem with Export User's data from database oracle 11g
I want to export all of a user's data and its tables from an Oracle 11g database. I am using the exp command, but it exports only the tables that contain data; I have some tables without rows, and those tables are not exported.
Can someone help me? This is the zero-extent (segment-less) table problem.
exp is de-supported in 11g. Use expdp instead to export tables without any rows.
Srini -
Oracle 11g decode issue with null
Hi,
we want to migrate from Oracle 10g to Oracle 11g and have an issue with decode.
The database has the following character set settings:
NLS_CHARACTERSET = AL32UTF8 in Oracle 11g and UTF8 in Oracle 10g
NLS_NCHAR_CHARACTERSET = AL16UTF16
If I try a select with decode which has null as first result argument I will get a wrong value.
select decode(id, null, null, name) from tab1;
("name" is a NVARCHAR2 field. Table tab1 has only one entry and "id" is not null.)
This select returns a value with characters which are splitted by 0 bytes.
In Oracle 10g the value without 0 bytes is delivered.
If I suround the decode with dump I get following results:
select dump(decode(id, null, null, name), 1016) from tab1;
Oracle 10g: Typ=1 Len=6 CharacterSet=AL32UTF8: 4d,61,72,74,69,6e
Oracle 11g: Typ=1 Len=12 CharacterSet=US7ASCII: 0,4d,0,61,0,72,0,74,0,69,0,6e
NLS_LANG has no effect on the character set of 'null' in Oracle 11g.
Non null literals work:
select dump(decode(id, null, 'T', name), 1016) from tab1;
Oracle 10g: Typ=1 Len=6 CharacterSet=UTF8: 4d,61,72,74,69,6e
Oracle 11g: Typ=1 Len=6 CharacterSet=AL32UTF8: 4d,61,72,74,69,6e
select dump(decode(id, null, N'T', name), 1016) from tab1;
Oracle 10g: Typ=1 Len=12 CharacterSet=AL16UTF16: 0,4d,0,61,0,72,0,74,0,69,0,6e
Oracle 11g: Typ=1 Len=12 CharacterSet=AL16UTF16: 0,4d,0,61,0,72,0,74,0,69,0,6e
Here the scripts for creating the table and the entry:
create table tab1 (
id NUMBER(3),
name NVARCHAR2(10)
);
insert into tab1 (id, name) values (1, N'Martin');
commit;
Is it possible to change the character set?
Could you please help me?
Regards
Martin
"This doesn't have the problem." Looks like this doesn't solve the problem either (of returning a value whose characters are split by 0 bytes):
SQL> select * from v$version where rownum = 1
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
1 row selected.
SQL> select dump(decode(id, null, null, name), 1016) from tab1
union all
select dump(case id when null then null else name end, 1016) cs from tab1
DUMP(DECODE(ID,NULL,NULL,NAME),1016)
Typ=1 Len=12 CharacterSet=US7ASCII: 0,4d,0,61,0,72,0,74,0,69,0,6e
Typ=1 Len=12 CharacterSet=AL16UTF16: 0,4d,0,61,0,72,0,74,0,69,0,6e
2 rows selected.
You need to explicitly convert the third parameter to char:
SQL> select dump(decode(id, null, to_char(null), name), 1016) from tab1
DUMP(DECODE(ID,NULL,TO_CHAR(NULL),NAME),1016)
Typ=1 Len=6 CharacterSet=WE8MSWIN1252: 4d,61,72,74,69,6e
1 row selected. -
How to do integration between Oracle 11g and SAP Data services
HI All,
I want to load data from an Oracle 11g database into some other databases. We have installed the Oracle 11g server on one machine and the Data Services server on another machine. We installed the Oracle 11g client on the Data Services server machine. I created a datastore for Oracle, and when I executed the job I got the following error.
How do I resolve this issue? Do I need to do any configuration between the two servers (Oracle 11g and Data Services), and do I need to create an ODBC entry for Oracle on the Data Services machine?
If anyone knows the solution, please help me out ASAP.
Thanks,
Ramana
Hi,
we installed the Oracle client "win64_11gR2_client" version on the DS server,
but I need the steps after installing the Oracle client tool, meaning the integration between those two servers (Oracle and the DS server): what variables to create on the DS server, the paths for those variables, and how to create an ODBC entry for Oracle on the DS server.
Thanks,
Ramana -
Oracle 11g R1 silent install, skip final input
Hello all,
I am attempting to automate the install of the Oracle 11g R1 software. I have the silent install working correctly. Upon completion it says:
"The installation of Oracle Database 11g was successful.
Please check '/u01/app/oracle/oraInventory/logs/silentInstall<date>.log' for more details."
It then sits and waits for the user to hit Enter or any key to exit. Given that I am trying to do all of this and more via a shell script, I cannot provide that input manually. Is there a way to force the installation to exit at this point (via a parameter, some sort of shell command, or however) and continue with the next step in the shell script I have written?
Your help is appreciated, thank you.
I found that there is a "-nowait" parameter that does exactly what I want, but there is no Linux counterpart. I have also tried -noconsole, thinking that if no console is allocated it wouldn't prompt, but it still does. Any help would be appreciated.
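One generic workaround, since no Linux -nowait equivalent is mentioned here: feed the installer an empty line on stdin from the calling script so the final "press Enter" prompt is satisfied automatically. A hedged sketch; the staging path and response file name are illustrative:

```shell
# Pipe an empty line into the installer so the trailing
# "press Enter to exit" prompt does not block the script.
# Paths and the response file are illustrative, not from the thread.
echo "" | ./runInstaller -silent -responseFile /u01/stage/db.rsp

# Once the installer process returns, the script simply continues:
echo "Install finished, continuing with next step..."
```

This sidesteps the prompt without relying on any undocumented installer switch.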
-
The danger of memory target in Oracle 11g - request for discussion.
Hello, everyone.
This is not a question, but kind of request for discussion.
I believe that many of you heard something about automatic memory management in Oracle 11g.
The concept is that Oracle manages the target size of SGA and PGA. Yes, believe it or not, all we have to do is just to tell Oracle how much memory it can use.
But I have a big concern on this. The optimizer takes the PGA size into consideration when calculating the cost of sort-related operations.
So what would happen when Oracle dynamically changes the target size of PGA? Following is a simple demonstration of my concern.
UKJA@ukja116> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
-- Configuration
*.memory_target=350m
*.memory_max_target=350m
create table t1(c1 int, c2 char(100));
create table t2(c1 int, c2 char(100));
insert into t1 select level, level from dual connect by level <= 10000;
insert into t2 select level, level from dual connect by level <= 10000;
-- First 10053 trace
alter session set events '10053 trace name context forever, level 1';
select /*+ use_hash(t1 t2) */ count(*)
from t1, t2
where t1.c1 = t2.c1 and t1.c2 = t2.c2
alter session set events '10053 trace name context off';
-- Do aggressive hard parse to make Oracle dynamically change the size of memory segments.
declare
pat1 varchar2(1000);
pat2 varchar2(1000);
va number;
vc sys_refcursor;
vs varchar2(1000);
begin
select ksppstvl into pat1
from sys.xm$ksppi i, sys.xm$ksppcv v -- views for x$ table
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
for idx in 1 .. 10000000 loop
execute immediate 'select count(*) from t1 where rownum = ' || (idx+1)
into va;
if mod(idx, 1000) = 0 then
sys.dbms_system.ksdwrt(2, idx || 'th execution');
select ksppstvl into pat2
from sys.xm$ksppi i, sys.xm$ksppcv v -- views for x$ table
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
if pat1 <> pat2 then
sys.dbms_system.ksdwrt(2, 'yep, I got it!');
exit;
end if;
end if;
end loop;
end;
-- As to alert log file,
25000th execution
26000th execution
27000th execution
28000th execution
29000th execution
30000th execution
yep, I got it! <-- the pga target changed with 30000th hard parse
-- Second 10053 trace for same query
alter session set events '10053 trace name context forever, level 1';
select /*+ use_hash(t1 t2) */ count(*)
from t1, t2
where t1.c1 = t2.c1 and t1.c2 = t2.c2
alter session set events '10053 trace name context off';
With the above test case, I found that:
1. Oracle invalidates the query when internal pga aggregate size changes, which is quite natural.
2. With changed pga aggregate size, Oracle recalculates the cost. These are excerpts from the both of the 10053 trace files.
-- First 10053 trace file
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
_smm_max_size = 11468 KB
_smm_px_max_size = 28672 KB
optimizer_use_sql_plan_baselines = false
optimizer_use_invisible_indexes = true
-- Second 10053 trace file
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
_smm_max_size = 13107 KB
_smm_px_max_size = 32768 KB
optimizer_use_sql_plan_baselines = false
optimizer_use_invisible_indexes = true
Bug Fix Control Environment
The 10053 trace file clearly says that Oracle recalculates the cost of the query when the internal PGA aggregate target size changes. So there is a great danger of unexpected plan changes while Oracle dynamically controls the memory segments.
I believe that this is a designed behavior, but the negative side effect is not negligible.
I just like to hear your opinions on this behavior.
Do you think that this is acceptable? Or is this another great feature that nobody wants to use like automatic tuning advisor?
================================
Dion Cho - Oracle Performance Storyteller
http://dioncho.wordpress.com (english)
http://ukja.tistory.com (korean)
================================
I made a slight modification to my test case to have a mixed workload of hard parses and logical reads.
*.memory_target=200m
*.memory_max_target=200m
create table t3(c1 int, c2 char(1000));
insert into t3 select level, level from dual connect by level <= 50000;
declare
pat1 varchar2(1000);
pat2 varchar2(1000);
va number;
begin
select ksppstvl into pat1
from sys.xm$ksppi i, sys.xm$ksppcv v
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
for idx in 1 .. 1000000 loop
-- try many patterns here!
execute immediate 'select count(*) from t3 where 10 = mod('||idx||',10)+1' into va;
if mod(idx, 100) = 0 then
sys.dbms_system.ksdwrt(2, idx || 'th execution');
for p in (select ksppinm, ksppstvl
from sys.xm$ksppi i, sys.xm$ksppcv v
where i.indx = v.indx
and i.ksppinm in ('__shared_pool_size', '__db_cache_size', '__pga_aggregate_target')) loop
sys.dbms_system.ksdwrt(2, p.ksppinm || ' = ' || p.ksppstvl);
end loop;
select ksppstvl into pat2
from sys.xm$ksppi i, sys.xm$ksppcv v
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
if pat1 <> pat2 then
sys.dbms_system.ksdwrt(2, 'yep, I got it! pat1=' || pat1 ||', pat2='||pat2);
exit;
end if;
end if;
end loop;
end;
/
This test case showed an expected and reasonable result, like the following:
100th execution
__shared_pool_size = 92274688
__db_cache_size = 16777216
__pga_aggregate_target = 83886080
200th execution
__shared_pool_size = 92274688
__db_cache_size = 16777216
__pga_aggregate_target = 83886080
300th execution
__shared_pool_size = 88080384
__db_cache_size = 20971520
__pga_aggregate_target = 83886080
400th execution
__shared_pool_size = 92274688
__db_cache_size = 16777216
__pga_aggregate_target = 83886080
500th execution
__shared_pool_size = 88080384
__db_cache_size = 20971520
__pga_aggregate_target = 83886080
1100th execution
__shared_pool_size = 92274688
__db_cache_size = 20971520
__pga_aggregate_target = 83886080
1200th execution
__shared_pool_size = 92274688
__db_cache_size = 37748736
__pga_aggregate_target = 58720256
yep, I got it! pat1=83886080, pat2=58720256
Oracle kept being bounced between shared pool and buffer cache sizes, and at about the 1200th execution Oracle suddenly stole some memory from the PGA target area to increase the db cache size.
(I'm still in the dark about this automatic memory target management in 11g. More research needed!)
I think that this is very clear and natural behavior. I just want to point out that this would result in unwanted catastrophe under special cases, especially with some logic holes and bugs.
================================
Dion Cho - Oracle Performance Storyteller
http://dioncho.wordpress.com (english)
http://ukja.tistory.com (korean)
================================ -
Export and Import issue in Oracle 11g
Hi All,
I exported data using the exp command in Oracle 11g after setting deferred_segment_creation to FALSE.
Even then, my zero-record tables do not appear in the export.
What could be the reason, or what else do I have to do to make it work?
Thanks in advance
Mohsin J
Tables that were created before you changed deferred_segment_creation to FALSE and do not have any rows would still be segment-less tables --- i.e. not exported.
You need to issue an "ALTER TABLE tablename MOVE" for each of such tables (after having set deferred_segment_creation to FALSE) to force a rebuild of these tables with segments before the export.
Thus, the sequence is
1. set deferred_segment_creation to FALSE
2. identify all tables with 0 rows
3. issue an ALTER TABLE tablename MOVE for each of such tables from step 2
4. Re-run your export
Alternatively, use Data Pump export (expdp) instead of conventional export (exp); Data Pump can export segment-less tables.
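Steps 2 and 3 can be combined: in 11gR2, DBA_TABLES has a SEGMENT_CREATED column, so the ALTER statements can be generated in one query. A sketch, with OWNER_NAME as a placeholder schema name:

```sql
-- Generate the ALTER TABLE ... MOVE statements for every table in the
-- schema that has no segment yet (i.e. would be skipped by exp).
-- Replace OWNER_NAME with the schema being exported.
SELECT 'ALTER TABLE ' || owner || '.' || table_name || ' MOVE;' AS ddl
FROM   dba_tables
WHERE  owner = 'OWNER_NAME'
AND    segment_created = 'NO';
```

Spool the output, run the generated statements, then re-run the export; remember that moved tables will need their indexes rebuilt.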
Hemant K Chitale -
Oracle 11g automatic archival // maintaining historic data
Hi Group,
We are planning to get Oracle 11g.
I want to know if Oracle 11g has the capability to do automatic archival?
My requirement is,
A few of the tables in our application will have a lot of records, and after some days/months this may degrade performance because of the table/db size; also, the old data will not be of much interest (but it cannot be purged/deleted for some years). Hence we want to archive the old data (with trigger conditions specified by the user) and be able to bring the data back programmatically whenever a user queries data belonging to a particular (archived) period.
Is there a feature available in Oracle to just specify the data to be archived and bring it back by giving a date/time input?
Hi,
First, this looks like a job for Oracle partitioning.
What is your motive for archiving? Improved performance? Cheaper media?
"I want to know if Oracle 11g has the capability to do automatic archival?" Sure, but you would have to script it yourself. Many shops do time-based rollover archiving with partitioned tables:
http://www.dba-oracle.com/t_partitioned_tablespace_archiving.htm
"it might slog the performance cos of the table/db size" No, not if you are properly indexed. However, table partitioning can speed up queries against large tables with "partition aware" SQL:
http://www.dba-oracle.com/t_partition_sql_index_ss_hint.htm
"want to archive the old data (trigger conditions specified by the user)" I would not use triggers; schedule a job instead:
http://www.dba-oracle.com/t_archiving_data_in_file_structures.htm
"needed to bring data back" Sorry, back from where? You are archiving to disk, right? -
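A minimal sketch of the interval-partitioning approach suggested above, with illustrative table, column, and tablespace names (none of them from the thread). Queries that filter on the date column prune automatically to the relevant partitions, and old partitions can be moved to cheaper storage or dropped once the retention period expires:

```sql
-- Monthly interval-partitioned history table (names illustrative).
CREATE TABLE app_history
( id         NUMBER,
  payload    VARCHAR2(200),
  created_dt DATE )
PARTITION BY RANGE (created_dt)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
( PARTITION p_initial VALUES LESS THAN (TO_DATE('01-JAN-2012','DD-MON-YYYY')) );

-- "Archiving" an old partition then becomes a partition-level operation,
-- e.g. moving it to a cheaper archive tablespace:
ALTER TABLE app_history MOVE PARTITION p_initial TABLESPACE arch_ts;
```

The scheduled job mentioned above would simply iterate over partitions older than the cutoff date and issue such MOVE (or DROP PARTITION) statements.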
External tables in Oracle 11g database is not loading null value records
We have upgraded our DB from Oracle 9i to 11g...
It was noticed that data loads into external tables in 9i rejected records with null columns; however, after upgrading to 11g, it allows records with null values.
Is there a way to restrict loading records that have certain columns that are null?
Can you please confirm whether this is the expected behaviour in Oracle 11g?
Thanks.
Data isn't really loaded into an external table. Rather, the external table lets you query an external data source as if it were a regular database table. To not see the rows with NULL values, simply filter those rows out in your SQL statement:
SELECT * FROM my_external_table WHERE colX IS NOT NULL;
HTH,
Brian