Expdp impdp - redo generation?
Friends...
Oracle 11gr2 SE
OS: Linux
I tried searching the documentation but couldn't find an answer.
I'm dropping some tables, but before that I'm exporting them as a backup; the DB is in archivelog mode.
Before dropping the tables, I'm trying to find out whether the expdp job generates a lot of redo, since the file system only has 20 GB of free space.
Questions:
1. If I'm performing an expdp of tables totalling about 80 GB (table size only, indexes not included), will expdp generate a similar amount of redo, or a lot of redo?
2. Does the impdp job also generate a lot of redo, or only during index creation?
thanks,
Mike
Export generates almost NO redo. It is just dumping data into files in binary format; the only redo is the small amount needed to maintain the Data Pump master table, and that is very, very minimal. If I had to guess, no more than 1 MB.
Import does generate redo. Import runs INSERT statements behind the scenes, and that generates a lot of redo. You can use SQLFILE to extract the index definitions and then run them manually with NOLOGGING, which will reduce some of the redo. Something like:
impdp user/password exclude=INDEX -- imports everything except the indexes
impdp user/password include=INDEX sqlfile=index.sql -- writes the CREATE INDEX statements to index.sql instead of running them
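A minimal sketch of the NOLOGGING step described above. The file names are illustrative; in practice index.sql would come from the `impdp ... include=INDEX sqlfile=index.sql` run, and here a fragment is simulated so the edit can be shown:

```shell
# Simulated fragment of a Data Pump-generated index.sql (for the demo only):
cat > index.sql <<'EOF'
CREATE INDEX "SCOTT"."EMP_IX" ON "SCOTT"."EMP" ("EMPNO")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
  TABLESPACE "USERS" ;
EOF

# Rewrite LOGGING to NOLOGGING so the index builds generate minimal redo;
# the edited file can then be run in SQL*Plus after the data import.
sed 's/ LOGGING/ NOLOGGING/g' index.sql > index_nolog.sql
```

After the import completes you would run index_nolog.sql in SQL*Plus, and optionally ALTER the indexes back to LOGGING afterwards.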
Similar Messages
-
EXP/IMP of a table with a LOB column: export and import using expdp/impdp
We have one table with a LOB column; the LOB size is approx 550 GB.
As far as we know, LOB space cannot be reused, so we have already raised an SR on that.
We have come to the conclusion that we need to take a backup of this table, truncate it, and then import the data back.
We need help on the below points:
1) We are taking the backup with expdp using the parallel parameter = 4. Will this backup complete successfully? Are there any other parameters we should set in expdp while taking the backup?
2) Once the truncate is done, will the import complete successfully?
Do we need to increase SGA, PGA, undo tablespace size, or undo retention to complete the import successfully? This is a production-critical database.
current SGA 2GB
PGA 398MB
undo retention 1800
undo tbs 6GB
Please can anyone give suggestions on performing this activity without errors, and also suggest the parameters to use during expdp/impdp.
Thanks in advance.
Hi,
From my experience, be prepared for a long outage to do this. expdp is pretty quick at getting LOBs out but very slow at getting them back in again; a lot of the speed optimizations that make Data Pump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first. Can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in....
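If you do go the expdp route, a parameter file along these lines is a reasonable starting point. All names and sizes here are examples, not from the original post; note that PARALLEL mainly helps the non-LOB segments, since a conventional LOB segment is still unloaded largely serially:

```shell
# Sketch of an expdp parameter file for a large LOB table (names are examples):
cat > exp_lob.par <<'EOF'
directory=DATA_PUMP_DIR
tables=OWNER.LOB_TABLE
dumpfile=lob_tab_%U.dmp
logfile=exp_lob_tab.log
parallel=4
filesize=20G
EOF
# Then run: expdp system parfile=exp_lob.par
```

Using %U with FILESIZE keeps individual dump files manageable and lets parallel workers write to separate files.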
You might want to consider DBMS_REDEFINITION instead?
Here you precreate a temporary table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over, giving you minimal downtime. I think this should work fine with LOBs at 10g, but you should do some research and confirm. You'll need a lot of extra tablespace (temporarily) for this approach though.
Regards,
Harry -
Log file format in expdp/impdp
Hi all,
I need to set the log file format for the expdp/impdp utilities. I use this format for my dump files - filename=<name>%U.dmp - which generates unique names for the dump files. How can I generate unique names for the log files? It would be better if the dump file names and log file names were the same.
Regards,
rustam_tj
Hi Srini, thanks for the advice.
I read the doc you suggested. The only thing I found there is:
Log files and SQL files overwrite previously existing files.
So I can't keep previous log files?
My OS is HP-UX (11.3) and database version is 10.2.0.4
Regards,
rustam -
Expdp/impdp :: Constraints in Parent child relationship
Hi ,
I have one table, parent1, and tables child1, child2 and child3 have foreign keys referencing parent1.
Now I want to do some deletion on parent1. But since the number of records in parent1 is very high, we are going with expdp/impdp with the QUERY option.
I have taken a query-level expdp of parent1. Then I dropped parent1 with the CASCADE CONSTRAINTS option, and all the foreign keys on child1, 2 and 3 that reference parent1 were automatically dropped.
Now if I run impdp for the query-level dump file, will these foreign key constraints get created automatically on child1, 2 and 3, or do I need to re-create them manually?
Regards,
Anu
Hi,
The FKs will not be in the dumpfile; see the example code below, where I generate a sqlfile following pretty much the process you would have done. This is because the FK is part of the DDL for the child table, not the parent.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
OPS$ORACLE@EMZA3>create table a (col1 number);
Table created.
OPS$ORACLE@EMZA3>alter table a add primary key (col1);
Table altered.
OPS$ORACLE@EMZA3>create table b (col1 number);
Table created.
OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
Table altered.
OPS$ORACLE@EMZA3>
EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04": /******** include=TABLE:"='A'"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "OPS$ORACLE"."A" 0 KB 0 rows
Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
/oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01": /******** sqlfile=a.sql
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
-- CONNECT OPS$ORACLE
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: SCHEMA_EXPORT/TABLE/TABLE
CREATE TABLE "OPS$ORACLE"."A"
( "COL1" NUMBER
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ;
-- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ENABLE;
-- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
DECLARE I_N VARCHAR2(60);
I_O VARCHAR2(60);
NV VARCHAR2(1);
c DBMS_METADATA.T_VAR_COLL;
df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
BEGIN
DELETE FROM "SYS"."IMPDP_STATS";
c(1) := 'COL1';
DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
DELETE FROM "SYS"."IMPDP_STATS";
END;
/
Regards,
Harry
http://dbaharrison.blogspot.com/ -
Question about redo generation
select * from v$version;
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
"CORE 11.2.0.1.0 Production"
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Setup for test
create table parent_1 (id number(12) NOT NULL);
alter table parent_1 add constraint parent_1_pk primary key (id);
create table parent_2 (id number(12) NOT NULL);
alter table parent_2 add constraint parent_2_pk primary key (id);
create table child_table (ref_id number(12) NOT NULL,ref_id2 number(12) NOT NULL, created_at timestamp(6));
alter table child_table add constraint child_table_pk primary key (ref_id, ref_id2);
alter table child_table add constraint child_table_fk1 foreign key (ref_id) references parent_1(id);
alter table child_table add constraint child_table_fk2 foreign key (ref_id2) references parent_2(id);
insert into parent_1 select rownum from all_objects;
insert into parent_2 values (1);
insert into parent_2 values (2);
insert into child_table (select id, 1, systimestamp from parent_1);
insert into child_table (select id, 2, systimestamp from parent_1);
commit;
Code version 1:
declare
type t_ids is table of NUMBER(12);
v_ids t_ids;
start_redo NUMBER;
end_redo NUMBER;
cursor c_data is SELECT id FROM parent_1;
begin
select value into start_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
open c_data;
LOOP
FETCH c_data
BULK COLLECT INTO v_ids LIMIT 1000;
exit;
end loop;
CLOSE c_data;
for pos in v_ids.first..v_ids.last LOOP
BEGIN
insert into child_table values (v_ids(pos), 2, systimestamp);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
update child_table set created_at = systimestamp where ref_id = v_ids(pos) and ref_id2 = 2;
END;
END LOOP;
select value into end_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
dbms_output.put_line('Created redo : ' || (end_redo-start_redo));
end;
/
Version 2:
declare
type t_ids is table of NUMBER(12);
v_ids t_ids;
start_redo NUMBER;
end_redo NUMBER;
cursor c_data is SELECT id FROM parent_1;
ex_dml_errors EXCEPTION;
PRAGMA EXCEPTION_INIT(ex_dml_errors, -24381);
pos NUMBER;
l_error_count NUMBER;
begin
select value into start_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
open c_data;
LOOP
FETCH c_data
BULK COLLECT INTO v_ids LIMIT 1000;
exit;
end loop;
CLOSE c_data;
BEGIN
FORALL i IN v_ids.first .. v_ids.last SAVE EXCEPTIONS
insert into child_table values (v_ids(i), 2, systimestamp);
EXCEPTION
WHEN ex_dml_errors THEN
l_error_count := SQL%BULK_EXCEPTIONS.count;
FOR i IN 1 .. l_error_count LOOP
pos := SQL%BULK_EXCEPTIONS(i).error_index;
update child_table set created_at = systimestamp where ref_id = v_ids(pos) and ref_id2 = 2;
END LOOP;
END;
select value into end_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
dbms_output.put_line('Created redo : ' || (end_redo-start_redo));
end;
/
Version 1 output:
Created redo : 682644
Version 2 output:
Created redo : 7499364
Why is version 2 generating significantly more redo?
Both pieces of code erroneously replace set-based SQL with procedural code, ignoring the power of an RDBMS to process sets; they are examples of slow-by-slow programming.
Both pieces of code are undesirable, so the difference in redo generation doesn't matter.
Sybrand Bakker
Senior Oracle DBA -
System generated Index names different on target database after expdp/impdp
After performing expdp/impdp to move data from one database (A) to another (B), the system-generated index names are different on the target database, which caused a major issue with GoldenGate. Could anyone provide any tricks on how to perform the expdp/impdp so that the same system-generated index names appear on both source and target?
Thanks in advance.
JL
While I do not agree with Sb's choice of wording, his solution is correct. I suggest you drop and recreate the objects using explicit naming; then you will get the same names on the target database after import for constraints, indexes, and FKs.
A detailed description of the problem this caused with GoldenGate would be interesting. The full Oracle and GoldenGate versions in use might also be important to the solution, if one exists other than explicit naming.
HTH -- Mark D Powell --
Edited by: Mark D Powell on May 30, 2012 12:26 PM -
High REDO Generation for enqueue and dequeue
Hi,
We have found high redo generation during enqueue and dequeue, which is in turn affecting our database performance.
Please find a sample test result below :
Create the Type:-
CREATE OR REPLACE
type src_message_type_new_1 as object(
no varchar(10),
title varchar2(30),
text varchar2(2000));
/
Create the Queue and Queue Table:-
CREATE OR REPLACE procedure create_src_queue
as
begin
DBMS_AQADM.CREATE_QUEUE_TABLE
(queue_table => 'src_queue_tbl_1',
queue_payload_type => 'src_message_type_new_1',
--multiple_consumers => TRUE,
compatible=>10.1,
storage_clause=>'TABLESPACE EDW_OBJ_AUTO_9',
comment => 'General message queue table created on ' ||
TO_CHAR(SYSDATE,'MON-DD-YYYY HH24:MI:SS'));
commit;
DBMS_AQADM.CREATE_QUEUE
(queue_name => 'src_queue_1',
queue_table => 'src_queue_tbl_1',
comment => 'Test Queue Number 1');
commit;
dbms_aqadm.start_queue
('src_queue_1');
commit;
end;
Redo Log Size:-
select
n.name, t.value
from
v$mystat t join
v$statname n
on
t.statistic# = n.statistic#
where
n.name = 'redo size'
Output:-
595184
Enqueue Message into the Queue Table:-
CREATE OR REPLACE PROCEDURE enque_msg_ab
as
queue_options DBMS_AQ.ENQUEUE_OPTIONS_T;
message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
message_id raw(16);
my_message dev_hub.src_message_type_new_1;
begin
my_message:=src_message_type_new_1(
'1',
'This is a sample message',
'This message has been posted on');
DBMS_AQ.ENQUEUE(
queue_name=>'dev_hub.src_queue_1',
enqueue_options=>queue_options,
message_properties=>message_properties,
payload=>my_message,
msgid =>message_id);
commit;
end;
Redo Log Size:-
select
n.name, t.value
from
v$mystat t join
v$statname n
on
t.statistic# = n.statistic#
where
n.name = 'redo size'
Output:-
596740
Can anyone tell us the reason for this high redo generation and how it can be controlled?
Regards,
Koushik
Please find my answers below:
What full version of Oracle?
- 10.1.0.5
How large is the average message?
- Only some bytes; at most 1-2 KB and not more than that.
What kind of performance problem is 300G of redo causing? How? Have you ran a statspack report? What did it show?
- Actually we are facing a performance issue from an overall perspective for our daily batch processing, which is now causing a delay in the batch SLA. So we produced an AWR report for our database, and from there we found that total redo generation is around 400 GB, among which 300 GB was generated by the enqueue-dequeue process.
What other activity is taking place on this instance? That is, is all this redo really being generated as the result of the AQ activity or is some of it the result of the messages being processed? How are the messages created?
- Normal batch processing every day. The batch process also generates redo, but the amount is low compared to the enqueue-dequeue process.
Have you looked at providing a separate physical disk stripe for the online redo logs and for the archive log location from the database data file physical disk and IO channels?
- No, as we are not the production DBA so we don't have the direct access to production database.
What kind of file system and disk are you using?
- I am not sure about it. I will try to confirm with the production DBA. Is there any other way to find out whether it is on a filesystem or a raw device?
Could you please provide any help on this topic?
Regards,
Koushik -
Excessive Redo Generation After Upgrading on Oracle 10gR2
We had our production database hosted on Oracle 9.2.0. Few months back we have migrated it to Oracle 10.2.0.4.0.
After the migration I noticed that redo generation has become very, very high. Earlier, the number of log files generated during production hours was around 20 per day, whereas after the migration it has become around 200 files per day. I ran a Statspack report on this database, and it shows that log file switch waits have become very high. The timed_statistics parameter has also been set to FALSE. The workload on the database is the same before and after the upgrade, the queries running in the sessions are the same, and all parameters and memory structures are unchanged. The Statspack report also says that db block changes and disk writes have become very high. I used export/import for upgrading the database. Please provide a solution for this problem.
Thanks in advance for all your favours....
Hi;
Please check below notes which could be helpful for your issue:
Diagnosing excessive redo generation [ID 199298.1]
Excessive Archives / Redo Logs Generation Troubleshooting [ID 832504.1]
Troubleshooting High Redo Generation Issues [ID 782935.1]
How to Disable (Temporary) Generation of Archive Redo Log Files [ID 177218.1]
Regards,
Helios -
Reducing REDO generation from a current data refresh process
Hello,
I need to resolve an issue where a schema is maintained with one delete followed by tons of bulk inserts. The problem is that the vast majority of the deleted rows are reinserted as-is. This process deletes and reinserts about 1 175 000 rows of data!
The delete clause is:
- delete from table where term >= '200705';
The data before '200705' is very stable and doesn't need to be refreshed.
The table is 9 709 797 rows big.
Here is an excerpt of cardinalities for each term code:
TERM NB_REGS
200001 117130
200005 23584
200009 123167
200101 115640
200105 24640
200109 121908
200201 117516
200205 24477
200209 125655
200301 120222
200305 26678
200309 129541
200401 123875
200405 27283
200409 131232
200501 124926
200505 27155
200509 130725
200601 122820
200605 27902
200609 129807
200701 121121
200705 27699
200709 129691
200801 120937
200805 29062
200809 130251
200901 122753
200905 27745
200909 135598
201001 127810
201005 29986
201009 142268
201101 133285
201105 18075
This kind of operation is generating a LOT of redo: on average 25 GB per day.
What are the best options available to us to reduce redo generation without changing the current process too much?
- make the tables NOLOGGING? (with mandatory use of the APPEND hint?)
- use a global temporary table for staging and merge against the real table?
- use partitions and truncate the reloaded one? But this does not reduce the redo generated by the subsequent inserts...?
This does not have to be transactional.
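The global-temporary-table option from the list above could be sketched as follows. Table and column names (regs, stage_regs, term, reg_id, status) are purely illustrative, not from the original post; the idea is that a MERGE touches only the rows that actually changed, instead of deleting and reinserting 1.1 million mostly unchanged rows:

```shell
# Sketch of a GTT + MERGE refresh, written to a script for SQL*Plus:
cat > merge_refresh.sql <<'EOF'
-- Stage the refresh rows in a GTT, then MERGE so unchanged rows are
-- neither deleted nor reinserted (where most of the redo comes from):
CREATE GLOBAL TEMPORARY TABLE stage_regs ON COMMIT PRESERVE ROWS
  AS SELECT * FROM regs WHERE 1 = 0;
-- load stage_regs from the datamart feed here, then:
MERGE INTO regs t
USING stage_regs s ON (t.term = s.term AND t.reg_id = s.reg_id)
WHEN MATCHED THEN UPDATE SET t.status = s.status
WHEN NOT MATCHED THEN INSERT (term, reg_id, status)
  VALUES (s.term, s.reg_id, s.status);
EOF
```

Note that a GTT itself generates undo (and therefore some redo for the undo), but far less than a full delete/reinsert cycle of stable rows.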
We use 10gR2 on Windows 64 bits.
Thanks
Bruno
Yes, you got it, these are terms (Summer of 2007, beginning in May).
Is the perverse effect of truncating and then inserting in direct path mode that it pushes the high water mark up day after day while leaving unused space in the truncated partitions? Maybe we should not REUSE STORAGE on truncation...
This data can be recovered easily from the datamart that pushes it, which means we can use NOLOGGING and direct path mode without any permanent loss of data.
Should I have one partition for each term, or having only one for the stable terms and one for the refreshed terms? -
Hi,
We have a problem with redo generation. For the last few days, redo generation has been higher than normal, with no changes at the application level. I don't know where to start. I tried comparing AWR reports, but that didn't get me anywhere.
1. Is it possible to find out how much redo is generated for a DML statement, segment by segment (table segment, index segment), when it is executed?
For example: the table M_MARCH has 19 columns and 6 indexes. Another table, M_REPORT, has 59 columns and 5 indexes. The query combines both tables.
We need to find out whether the indexes are really needed or not.
2. Is there any other way to reduce redo generation?
Br,
Rajesh
High redo generation can be of two types:
1. During a specific duration of the day.
2. Sudden increase in the archive logs observed.
In both cases, the first thing to check is whether any modifications have been made either at the database level (modified parameters, maintenance operations performed, ...) or at the application level (deployment of a new application, modification of the code, increase in the number of users, ...).
To know the exact reason for the high redo, we need information about the redo activity and the details of the load. Following information need to be collected for the duration of high redo generation.
1] To know the trend of log switches below queries can be used.
SQL> alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
Session altered.
SQL> select trunc(first_time, 'HH'), count(*)
  2  from v$loghist
  3  group by trunc(first_time, 'HH')
  4  order by trunc(first_time, 'HH');
TRUNC(FIRST_TIME,'HH   COUNT(*)
-------------------- ----------
25-MAY-2008 20:00:00          1
26-MAY-2008 12:00:00          1
26-MAY-2008 13:00:00          1
27-MAY-2008 15:00:00          2
28-MAY-2008 12:00:00          1   <- Indicates 1 log switch from 12PM to 1PM.
28-MAY-2008 18:00:00          1
29-MAY-2008 11:00:00         39
29-MAY-2008 12:00:00        135
29-MAY-2008 13:00:00        126
29-MAY-2008 14:00:00        135   <- Indicates 135 log switches from 2-3 PM.
29-MAY-2008 15:00:00        112
We can also get the information about the log switches from alert log (by looking at the messages 'Thread 1 advanced to log sequence' and counting them for the duration), AWR report.
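The alert-log counting mentioned above can be sketched as a small shell pipeline. The sample file below stands in for a real alert_<SID>.log (in text alert logs the timestamp line precedes each "advanced to log sequence" message); the file name and timestamps are illustrative:

```shell
# Simulated alert log fragment (for the demo only):
cat > alert_sample.log <<'EOF'
Thu May 29 11:02:11 2008
Thread 1 advanced to log sequence 101 (LGWR switch)
Thu May 29 11:17:45 2008
Thread 1 advanced to log sequence 102 (LGWR switch)
Thu May 29 12:03:02 2008
Thread 1 advanced to log sequence 103 (LGWR switch)
EOF

# Keep the timestamp line before each switch message, bucket by day + hour,
# and count switches per bucket:
grep -B1 'advanced to log sequence' alert_sample.log \
  | awk '/^[A-Z][a-z][a-z] [A-Z][a-z][a-z]/ { split($4, t, ":"); print $1, $2, $3, t[1] ":00" }' \
  | sort | uniq -c | tee switches_per_hour.txt
```

Hours with an unusually high count point at the window to investigate with AWR/Statspack or LogMiner.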
2] If you are on 10g or a higher version and have a license for AWR, then collect an AWR report for the problematic time; otherwise go for a Statspack report.
a) AWR Report
-- Create an AWR snapshot when you are able to reproduce the issue:
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
-- After 30 minutes, create a new snapshot:
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
-- Now run $ORACLE_HOME/rdbms/admin/awrrpt.sql
b) Statspack Report
SQL> connect perfstat/<Password>
SQL> execute statspack.snap;
-- After 30 minutes
SQL> execute statspack.snap;
SQL> @?/rdbms/admin/spreport
In the AWR/Statspack report look out for queries with highest gets/execution. You can check in the "load profile" section for "Redo size" and compare it with non-problematic duration.
3] We need to mine the archive logs generated during the time frame of high redo generation.
-- Use the DBMS_LOGMNR.ADD_LOGFILE procedure to create the list of logs to be analyzed:
SQL> execute DBMS_LOGMNR.ADD_LOGFILE('<filename>', options => dbms_logmnr.new);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE('<file_name>', options => dbms_logmnr.addfile);
-- Start LogMiner:
SQL> execute DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> select operation, seg_owner, seg_name, count(*)
     from v$logmnr_contents
     group by seg_owner, seg_name, operation;
Please refer to below article if there is any problem in using logminer.
Note 62508.1 - The LogMiner Utility
We cannot get the redo size using LogMiner; we can only get the user, operation, and schema responsible for the high redo.
4] Run the query below to find the sessions generating high redo at any specific time.
col program for a10
col username for a10
select to_char(sysdate,'hh24:mi'), username, program, a.sid, a.serial#, b.name, c.value
from v$session a, v$statname b, v$sesstat c
where b.STATISTIC# = c.STATISTIC#
and c.sid = a.sid
and b.name like 'redo%'
order by value;
This will give us all the statistics related to redo. We are most interested in "redo size" (the total amount of redo generated, in bytes).
This will give us the SID of the problematic session.
In the query output, look for the statistics with the highest values; those will give a fair idea of the problem.
Expdp+Impdp: Does the user have to have DBA privilege?
Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) his own schema?
Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) other (=not his own) schemas ?
If he is not allowed: Which GRANT is necessary to be able to perform such expdp/impdp operations?
Peter
Edited by: user559463 on Feb 28, 2010 7:49 AM
Hello,
Is a "normal" user (=without DBA privilege) allowed to export and import (with the new expdp/impdp) his own schema?
Yes, a user can always export his own objects.
Is a "normal" user (=without DBA privilege) allowed to export and import (with the new expdp/impdp) other (=not his own) schemas?
Yes, if this user has the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles.
So, you can create a User and GRANT it EXP_FULL_DATABASE and IMP_FULL_DATABASE Roles and, being connected
to this User, you could export/import any Object from / to any Schemas.
On databases, on which there're a lot of export/import operations, I always create a special User with these Roles.
NB: In DataPump you should GRANT also READ, WRITE Privileges on the DIRECTORY (if you use "dump") to the User.
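The setup described above (a dedicated export/import user with the two roles plus directory grants) could be sketched as a SQL*Plus script; the user name, password, and directory name are examples only:

```shell
# Sketch: create a dedicated Data Pump admin user (names are examples):
cat > create_dp_user.sql <<'EOF'
CREATE USER dp_admin IDENTIFIED BY secret;
GRANT CREATE SESSION, EXP_FULL_DATABASE, IMP_FULL_DATABASE TO dp_admin;
GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO dp_admin;
EOF
# Then run: sqlplus sys/password as sysdba @create_dp_user.sql
```

Connected as dp_admin, expdp/impdp can then export or import any schema's objects.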
Also, be accurate in your choice of words: as previously posted, DBA is a role, not a privilege, which is something different.
Hope this helps.
Best regards,
Jean-Valentin -
I am using Oracle 9i and Windows XP Professional. I enabled autotrace for the SCOTT user. Now when I issue SELECT, INSERT, UPDATE, or DELETE it shows me statistics, but when I issued ALTER TABLE y ADD (XYZ NUMBER(1)); it didn't give me any statistics. Does that mean:
1. DDLs do not generate REDO?
2. What should I do to get statistics for every command?
Thanks & Regards
Hello Sir,
1. alter session set sql_trace=true;
2. DMLs are not showing me information about redo generation.
3. set autotrace traceonly statistics; now it is showing statistics,
but I wish to get information about the redo generated by DDLs. How? I mean, how much redo is generated by a DDL?
Thanks
I issued set autotrace traceonly statistics; -
Hi,
I am having a problem with the redo generation rate. The redo logfiles in my DB are 400 MB each, and a log switch happens approximately every 10 minutes, which I think is a very short interval given the size of my redo logfiles. I have even checked the supplemental logging (LogMiner) settings but did not find any problem with them.
SQL> select SUPPLEMENTAL_LOG_DATA_MIN,SUPPLEMENTAL_LOG_DATA_PK,SUPPLEMENTAL_LOG_DATA_UI from v$database;
SUP SUP SUP
NO NO NO
Please can anyone tell me what is wrong with the redo generation rate?
First of all, it simply means your system is doing lots of work (generating 400 MB of redo per 10 minutes); if you have work to do, you need to do it. Besides, 400 MB per 10 minutes is not that big for a busy system. So killing sessions may not be a good idea; your system just has that much work to do. If you have millions of rows to load, you just have to load them.
Secondly, you may query v$sesstat for the statistic called "redo size" periodically (i.e. every 10 minutes) to get some idea of when the peaks happen during the day. And use SQL_TRACE to get the SQL statements into a tkprof trace report, then find out whether you can optimize the SQL to generate less redo, or use alternative approaches. Some common options to minimize redo:
insert /*+ append */ hint
create table as (select ...)
table nologging
if updating millions of rows, can it be done by creating a new table instead?
is it possible to use the temporary table feature (some applications use permanent tables where they could use temporary tables)?
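The first three options in the list above could be sketched as SQL, delivered here as a script for SQL*Plus. Table names are examples; and note the caveat that NOLOGGING direct-path loads skip data redo only when the database is not in FORCE LOGGING mode, and the loaded data is then not recoverable from the archived redo:

```shell
# Sketch of low-redo load patterns (table names are examples):
cat > low_redo.sql <<'EOF'
-- Direct-path insert into a NOLOGGING table:
ALTER TABLE big_table NOLOGGING;
INSERT /*+ APPEND */ INTO big_table SELECT * FROM staging_table;
COMMIT;
-- Or rebuild via CTAS:
CREATE TABLE big_table_new NOLOGGING AS SELECT * FROM staging_table;
EOF
```

After a NOLOGGING load you should take a fresh backup of the affected datafiles, since the loaded blocks cannot be recovered from redo.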
Anyway, you have to know what your database is doing while generating tons of redo. Until you find out what SQLs are generating the large redo, you can not solve the problem at system level by killing sessions or so.
Regards,
Jianhui -
Hi,
My question is about redo generation when using the APPEND hint. I have a database which is in FORCE LOGGING mode for a standby database. If I use the APPEND hint, will it generate any redo? I wonder whether the standby DB will be the same as the primary after using the APPEND hint?
thanks.
Hi,
thanks for answer.
the sentence says
"if the database is in ARCHIVELOG and FORCE LOGGING mode, then direct-path SQL generates data redo for both LOGGING and NOLOGGING tables." This is my case.
I opened the archived log with DBMS_LOGMNR but could not find any redo. So I wonder: will the standby DB not be in sync with the primary?
thanks. -
Hello,
I would like to use expdp and impdp.
As I installed XE11 on Linux, I unlocked the HR account:
ALTER USER hr ACCOUNT UNLOCK IDENTIFIED BY hr;
and used expdp:
expdp hr/hr DUMPFILE=hrdump.dmp DIRECTORY=DATA_PUMP_DIR SCHEMAS=HR LOGFILE=hrdump.log
This quits with:
ORA-39006: internal error
ORA-39213: Metadata processing is not available
The alert_XE.log reported:
ORA-12012: error on auto execute of job "SYS"."BSLN_MAINTAIN_STATS_JOB"
ORA-06550: line 1, column 807:
PLS-00201: identifier 'DBSNMP.BSLN_INTERNAL' must be declared
I read some entries here and did:
sqlplus sys/******* as sysdba @?/rdbms/admin/catnsnmp.sql
sqlplus sys/******* as sysdba @?/rdbms/admin/catsnmp.sql
sqlplus sys/******* as sysdba @?/rdbms/admin/catdpb.sql
sqlplus sys/******* as sysdba @?/rdbms/admin/utlrp.sql
I restarted the database, but the result of expdp was the same:
ORA-39006: internal error
ORA-39213: Metadata processing is not available
What's wrong with that? What can I do?
Do I need "BSLN_MAINTAIN_STATS_JOB", or can it be set to FALSE?
I created the database today on 24.07, and the next run for "BSLN_MAINTAIN_STATS_JOB"
is on 29.07?
In the Windows version it works correctly, but not in the Linux version.
Best regards
Hello gentlemen,
back to the origin:
'Is expdp/impdp working on XE11'
The answer is simply yes.
After a few days I found out that:
- no stylesheet installation is required for this operation
- a simple installation is enough
And i did:
SHELL:
mkdir /u01/app > /dev/null 2>&1
mkdir /u01/app/oracle > /dev/null 2>&1
groupadd dba
useradd -g dba -d /u01/app/oracle oracle > /dev/null 2>&1
chown -R oracle:dba /u01/app/oracle
rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
/etc/init.d/./oracle-xe configure responseFile=xe.rsp
./sqlplus sys/********* as sysdba @/u01/app/oracle/product/11.2.0/xe/rdbms/admin/utlfile.sql
SQLPLUS:
ALTER USER hr IDENTIFIED BY hr ACCOUNT UNLOCK;
GRANT CONNECT, RESOURCE to hr;
GRANT read, write on DIRECTORY DATA_PUMP_DIR TO hr;
expdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_exp.log
impdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_imp.log
This was carried out on:
OEL5.8, OEL6.3, openSUSE 11.4
For explanation:
We had done the stylesheet installation for XE10 to get the expdp/impdp functionality.
Thanks for your assistance
Best regards
Achim
Edited by: oelk on 16.08.2012 10:20