Question about redo generation

select * from v$version;
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
"CORE     11.2.0.1.0     Production"
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - ProductionSetup for test
create table parent_1 (id number(12) NOT NULL);
alter table parent_1 add constraint parent_1_pk primary key (id);
create table parent_2 (id number(12) NOT NULL);
alter table parent_2 add constraint parent_2_pk primary key (id);
create table child_table (ref_id number(12) NOT NULL,ref_id2 number(12) NOT NULL, created_at timestamp(6));
alter table child_table add constraint child_table_pk primary key (ref_id, ref_id2);
alter table child_table add constraint child_table_fk1 foreign key (ref_id) references parent_1(id);
alter table child_table add constraint child_table_fk2 foreign key (ref_id2) references parent_2(id);
insert into parent_1 select rownum from all_objects;
insert into parent_2 values (1);
insert into parent_2 values (2);
insert into child_table (select id, 1, systimestamp from parent_1);
insert into child_table (select id, 2, systimestamp from parent_1);
commit;

Code version 1:
declare
   type t_ids is table of NUMBER(12);
   v_ids t_ids;
   start_redo NUMBER;
   end_redo NUMBER;
  cursor c_data is SELECT id FROM parent_1;
begin
   select value into start_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
   open c_data;
   LOOP
    FETCH c_data
    BULK COLLECT INTO v_ids LIMIT 1000;
    exit;
   end loop;
  CLOSE c_data;
    for pos in v_ids.first..v_ids.last LOOP
  BEGIN
    insert into child_table values (v_ids(pos), 2, systimestamp);
    EXCEPTION
      WHEN DUP_VAL_ON_INDEX THEN
        update child_table set created_at = systimestamp where ref_id = v_ids(pos) and ref_id2 = 2;
  END;
  END LOOP;
   select value into end_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
   dbms_output.put_line('Created redo : ' || (end_redo-start_redo));
end;
/

Version 2:
declare
   type t_ids is table of NUMBER(12);
   v_ids t_ids;
   start_redo NUMBER;
   end_redo NUMBER;
  cursor c_data is SELECT id FROM parent_1;
  ex_dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(ex_dml_errors, -24381);
  pos NUMBER;
  l_error_count NUMBER;
begin
   select value into start_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
   open c_data;
   LOOP
    FETCH c_data
    BULK COLLECT INTO v_ids LIMIT 1000;
    exit;
   end loop;
  CLOSE c_data;
  BEGIN
    FORALL i IN v_ids.first .. v_ids.last SAVE EXCEPTIONS
    insert into child_table values (v_ids(i), 2, systimestamp);
  EXCEPTION
    WHEN ex_dml_errors THEN
      l_error_count := SQL%BULK_EXCEPTIONS.count;
      FOR i IN 1 .. l_error_count LOOP
        pos := SQL%BULK_EXCEPTIONS(i).error_index;
        update child_table set created_at = systimestamp where ref_id = v_ids(pos) and ref_id2 = 2;
      END LOOP;
  END;
   select value into end_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
  dbms_output.put_line('Created redo : ' || (end_redo-start_redo));
end;
/

Version 1 output:
Created redo : 682644
Version 2 output:
Created redo : 7499364
Why is version 2 generating significantly more redo?

As both pieces of code erroneously replace non-procedural code with procedural code, ignoring the power of an RDBMS to process sets, they are examples of slow-by-slow programming.
Both pieces of code are undesirable, so the difference in redo generation doesn't matter.
Sybrand Bakker
Senior Oracle DBA
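For reference, a set-based rewrite along the lines Sybrand suggests could be a single MERGE against the same test tables. This is only a sketch; its redo would still need to be measured with the same v$mystat query used above.
merge into child_table c
using (select id from parent_1) p
on (c.ref_id = p.id and c.ref_id2 = 2)
when matched then
  update set c.created_at = systimestamp
when not matched then
  insert (ref_id, ref_id2, created_at)
  values (p.id, 2, systimestamp);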

Similar Messages

  • BC4J: Simple question about PK generation

    Hi
    I have a table which primary key is based in a sequence.
    So, I would like to make an BC4J app where the user could query by the PK column, but when inserting a new record the field must be disabled, and somehow the sequence should be queried to generate the PK before inserting the row.
    How can I achieve this using BC4J? I could not see any property to set...
    I think this is a very basic question, so is there some resource (like a FAQ) where I can find solutions to similar questions?
    Thanks!
    Luis Cabral

    luis
    one thing you could do is edit the entity object you created for that table, go to the Attribute Settings and select your primary key. where it says Type pull down that box and select DBSequence. then you can either create an on insert trigger that generates the primary key or (i think, haven't tried it) you can specify your sequence on the same screen as mentioned above.
    i know there is documentation out there, i just can't remember exactly where. maybe someone else can provide a link?
    here is a link to the "how-tos"
    http://otn.oracle.com/products/jdev/howtos/content.html
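    For illustration, the trigger approach mentioned above might look like this (the sequence, trigger, table and column names here are placeholders, not from the original question):
    create sequence my_table_seq;
    create or replace trigger my_table_bi
    before insert on my_table
    for each row
    begin
      -- only assign a key when the application did not supply one
      if :new.id is null then
        select my_table_seq.nextval into :new.id from dual;
      end if;
    end;
    /
    With the attribute type set to DBSequence, BC4J shows a temporary value on insert and refetches the actual key populated by the trigger after the row is posted.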

  • Append hint redo generation

    Hi,
    My question is about redo generation when using the APPEND hint. I have a database which is in FORCE LOGGING mode for a standby database. If I use the APPEND hint, will it generate any redo? I wonder, will the standby DB be the same as the primary after APPEND hint usage?
    thanks.

    Hi,
    thanks for answer.
    the sentence says
    "if the database is in ARCHIVELOG and FORCE LOGGING mode, then direct-path SQL generate data redo for both LOGGING and NOLOGGING tables." . This is my case.
    I have opened the archive log with dbms_logmnr but I could not find any redo. So I wonder, will the standby DB not be in sync with the primary?
    thanks.
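    One way to check this on your own system is to measure the session redo before and after a direct-path insert. A sketch only; target_tab and source_tab are placeholder names:
    select ms.value
      from v$mystat ms join v$statname sn on sn.statistic# = ms.statistic#
     where sn.name = 'redo size';
    insert /*+ append */ into target_tab select * from source_tab;
    commit;
    -- re-run the redo size query; the difference is the redo for the load.
    -- Per the documentation quoted above, under FORCE LOGGING data redo is
    -- generated even for NOLOGGING tables, so the standby should stay in sync.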

  • Questions about VK34 (create condition record with reference)

    Hi experts,
    When clicking the button "copy condition" via VK34, an error message pops up. It says the error is due to no definition of a select rule. I use the standard condition record PR00.
    Q1: How do I define a "select rule"?
    Q2: Is there any chance to download condition records when using VK13 or VK33?

    Hello Colleague.                                                                               
    This Unicode problem occurs mainly with pricing reports (recognized by report names that start with /1SDBF12L/RV14AK).                        
    To regenerate these, call transaction V/LE or use transaction SE38 to execute the report RV14ALLE.                                           
    Leave the selection screen empty and execute (F8).                     
    See SAP note 497850.                                              
    If you have further questions about the generation in the condition maintenance, I recommend to read Note 886771.                
    I hope it can help you.
    Regards
    Ruy Castro

  • Redo Generation in DDL

    I am using Oracle 9i and Windows XP Professional. I enabled autotrace for the SCOTT user. Now when I issue SELECT, INSERT, UPDATE, DELETE it shows me statistics, but when I issue ALTER TABLE y ADD (XYZ NUMBER(1)); it didn't give me any statistics. What does it mean:-
    1. DDLs do not generate REDO?
    2. What can I do to get the statistics of every command?
    Thanks & Regards

    Hello Sir,
    1. alter session set sql_trace=true;
    2. DMLs are not showing me information about redo generation
    3. set autotrace traceonly statistics; now it is showing statistics.
    But I wish to get the information regarding redo generated by DDLs. How? I mean, how much redo is generated by a DDL statement?
    Thanks
    I issued set autotrace traceonly statistics;
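    Autotrace only reports statistics for queries and DML, not DDL. One alternative (a sketch, run in the same session as the DDL) is to read the session's 'redo size' statistic before and after the statement:
    select ms.value
      from v$mystat ms join v$statname sn on sn.statistic# = ms.statistic#
     where sn.name = 'redo size';
    alter table y add (xyz number(1));
    -- re-run the query above; the difference in 'redo size' is the redo
    -- generated by the DDL (DDL does generate redo, mainly for the data
    -- dictionary changes it makes).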

  • HT5262 Hi, I have a question about iCloud. I have an ipod 4th generation and im getting a 5th generation. i have lots of games such as subway surfers and temple run 2. I worked and played really hard on earning stuff and have also made in app purchases. D

    Hi, I have a question about iCloud. I have an iPod 4th generation and I'm getting a 5th generation. I have lots of games such as Subway Surfers and Temple Run 2. I worked and played really hard on earning stuff and have also made in-app purchases. Does iCloud save that kind of stuff?

    See:
    iOS: Transferring information from your current iPhone, iPad, or iPod touch to a new device
    However, not all in-app purchases will transfer to another device. For which ones will not, see:
    iTunes Store: About In-App Purchases

  • Several Questions about Aperture Problems

    Having used Aperture for some time, and being a Mac user since 1985, I have a list of questions about Aperture that I need help with.
    1. Periodically operating the sliders will make an image turn black. Sometimes this is early in a session, sometimes late. Various workarounds will bring the image back, but once this starts, quitting seems the only option. Can anyone help me with why this happens and how to stop it?
    2. About 20% of the RAW files from my supported camera display the Unsupported Image Format error screen. These files operate perfectly in the manufacturers software and in other image management software that does not use the OS RAW libraries. Can someone help me with the cause of this and the solution (not a "workaround" but a way to make it stop happening).
    3. ALL of my RAW files from my supported camera, when I try to lift metadata, return the error message that there is no metadata to lift. But in fact, the metadata inspector displays metadata. How can I stop this from happening and experience normal metadata lifting?
    4. When I use the DNG format from my supported camera, a great many EXIF fields do not display, such as lens data. Can someone help me with DNG files, since these never generate the UIF error screen (cf. #2 above) as the manufacturer's RAW files do. I'm forced to use DNGs to have all my shots, but the EXIF data is not fully displayed.
    5. Today I opened Aperture and no previews would display. Aperture froze while updating thumbnails. I'd not done any non-routine edits or imported any unusual files or formats. Aperture then would not quit. Is it safe to attempt to restart Aperture?
    6. At times Aperture slows to the point of not working at all. Long pauses simply in trying to enlarge the selection circles for redeye removal, for example. What would cause Aperture to slow down without warning at any point in the workflow? How can I experience a more consistent operating speed from Aperture?
    7. How do other image management programs like Lightroom compare on these points? Is Aperture typical or should I seek a change in my workflow, improvement in my hardware, or some adjustment in my installation?
    Info: MacBook Pro, 4 GB RAM (apple), 320 GB drive, 45 GB free on drive; library of 3800 images. Fewer than 12 projects.
    Thanks for your assistance.

    On #3. It looks like you're absolutely right on this. I went back and checked on photos I'd edited and there was the altered metadata. +Many thanks for dispelling that concern!+ I love being a happy camper. Check that one off the list!
    On 1, I've followed the black-screen issues and pretty much all we know is that a workaround exists--usually selecting the crop box restores the picture, but a lot of times it blacks out again. Having used Apple products over 25 years, all of which was in my adult professional life, I haven't seen Apple willing to just let users tolerate an irritating "workaround." I think this is something that needs fixing.
    On 6--I don't understand how the rotational speed would produce erratic performance issues. I can go a month of reasonable performance, and then suddenly things bog down. Also, if that is the reason, this really ought to be part of the System Requirements, or at least, a recommendation. Maybe it is already--I should check to be sure. I confess this is one aspect I had not thought about.
    Thanks so much for thinking about these. I love my Apple products and have owned almost every generation of Mac since the "Fat Mac" (512K RAM! 800Kb Floppies!) and hate to stare at the screen and think I've been given a truly poor product--not in my DNA--but these things break my heart.
    Message was edited by: LawsonStone
    Message was edited by: LawsonStone

  • Question about the "iPod radio remote"

    I assume the "radio remote" is the same for every country and is under software control. I want to buy in Canada but use in UK and Canada.
    Has anyone got one and seen the RDS work? In the UK almost every FM channel has RDS. Do they have RDS in Canada (Toronto)? Can anyone confirm?
    And of course it says FM, so again I assume no AM; not that I need AM, it was just an interest in whether it could do it.
    What happens with headphones? You can plug a pair into the remote, but can you still plug into the bottom of the iPod, or does the remote connector get in the way?
    What is the story with "line out"? I use this all the time as I amp my headphones. So I assume you can't use line out? Or is the remote a line out also?
    And I assume the remote cable and your headphones act as the aerial; how is the quality? Is it any good indoors? Some little FM radios I have used drop to mono indoors. Does this Apple one drop to mono if the signal is weak?
    Someone that has one can perhaps answer
    Cheers
    Ray

    I can at least answer some of the questions.
    The RDS function on this radio/remote is poor in the UK. You have to set the radio function on the iPod to "Europe", and when you do this, RDS info is not displayed on almost all radio stations such as Virgin, XFM, Capital, GLR, etc. It does display on Radio 2, however. At least, this is my finding, so it's unpredictable at best. Setting the radio to "USA" on the iPod increases the likelihood of getting the RDS to display, but the downside is that you can then only tune the radio in increments of 2MHz, which is a bit awkward if you're looking for 89.1MHz!
    There are three settings for the radio which you choose via the iPod: USA/Europe/Japan. There is no setting for Canada, although I'm sure you will still be able to use it over there, as I can still use mine here in the UK even if it's set to USA.
    No, there is no AM radio.
    You can plug headphones into the remote or the headphone jack on the iPod, or both at the same time. I'm not sure I understand your question about the bottom of the iPod. The radio plugs into the iPod via the dock connector.
    There is no 'line out' on the iPod. If you mean the one on the dock, then no, you can't use it, as you cannot place the iPod into the dock whilst the radio/remote is connected; they use the same connector.
    Yes, the cable of the remote acts as the aerial. Reception is surprisingly good where I am, but it will of course depend on where you use it. I've used it indoors without any drop in signal or it dropping to mono.
    Lastly, you need to update the software on both the nano and the 5th generation for the radio to work (after you do this a radio function appears in the menu), and some users have found that doing this screws up the iPod. I have mine working on both the nano and the 5th gen without any problems.
    There is a review of the radio/remote here. I like mine and consider it a worthwhile accessory, but of course there will be those who disagree.
    iPod Radio/Remote Review.
    Thank you for the reply.
    Think you're right, USA or Europe will work in Canada.
    Yes, I mean the line out via the dock. I just wondered if the remote becomes a line out socket (I also use a pocket dock for line out for a headphone amp, as I use a pair of Sennheiser HD600 with my iPod), but of course that is silly because you have a volume control on the remote and line out is normally fixed.
    Based on your views I shall get one in March when in Toronto.
    Cheers
    Ray

  • High REDO Generation for enqueue and dequeue

    Hi,
    We have found high redo generation during enqueue and dequeue, which in turn is affecting our database performance.
    Please find a sample test result below :
    Create the Type:-
    CREATE OR REPLACE
    type src_message_type_new_1 as object(
      no    varchar2(10),
      title varchar2(30),
      text  varchar2(2000));
    /
    Create the Queue and Queue Table:-
    CREATE OR REPLACE procedure create_src_queue
    as
    begin
      DBMS_AQADM.CREATE_QUEUE_TABLE(
        queue_table        => 'src_queue_tbl_1',
        queue_payload_type => 'src_message_type_new_1',
        --multiple_consumers => TRUE,
        compatible         => '10.1',
        storage_clause     => 'TABLESPACE EDW_OBJ_AUTO_9',
        comment            => 'General message queue table created on ' ||
                              TO_CHAR(SYSDATE,'MON-DD-YYYY HH24:MI:SS'));
      commit;
      DBMS_AQADM.CREATE_QUEUE(
        queue_name  => 'src_queue_1',
        queue_table => 'src_queue_tbl_1',
        comment     => 'Test Queue Number 1');
      commit;
      DBMS_AQADM.START_QUEUE('src_queue_1');
      commit;
    end;
    /
    Redo Log Size:-
    select
    n.name, t.value
    from
    v$mystat t join
    v$statname n
    on
    t.statistic# = n.statistic#
    where
    n.name = 'redo size'
    Output:-
    595184
    Enqueue Message into the Queue Table:-
    CREATE OR REPLACE PROCEDURE enque_msg_ab
    as
    queue_options DBMS_AQ.ENQUEUE_OPTIONS_T;
    message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
    message_id raw(16);
    my_message dev_hub.src_message_type_new_1;
    begin
    my_message:=src_message_type_new_1(
    '1',
    'This is a sample message',
    'This message has been posted on');
    DBMS_AQ.ENQUEUE(
    queue_name=>'dev_hub.src_queue_1',
    enqueue_options=>queue_options,
    message_properties=>message_properties,
    payload=>my_message,
    msgid =>message_id);
    commit;
    end;
    Redo Log Size:-
    select
    n.name, t.value
    from
    v$mystat t join
    v$statname n
    on
    t.statistic# = n.statistic#
    where
    n.name = 'redo size'
    Output:-
    596740
    Can anyone tell us the reason for this high redo generation and how it can be controlled?
    Regards,
    Koushik

    Please find my answers below :
    What full version of Oracle?
    - 10.1.0.5
    How large is the average message?
    - only a few bytes; at most 1-2 KB, not more than this.
    What kind of performance problem is 300G of redo causing? How? Have you run a statspack report? What did it show?
    - Actually we are facing a performance issue, from an overall perspective, with our daily batch processing, which is now causing a delay in the batch SLA. So we have produced an AWR report for our database and from there we have found that total redo generation is around 400 GB, among which 300 GB has been generated by the enqueue-dequeue process.
    What other activity is taking place on this instance? That is, is all this redo really being generated as the result of the AQ activity or is some of it the result of the messages being processed? How are the messages created?
    - Normal batch processing every day. The batch process also generates redo, but the amount is low compared to the enqueue-dequeue process.
    Have you looked at providing a separate physical disk stripe for the online redo logs and for the archive log location from the database data file physical disk and IO channels?
    - No; as we are not the production DBAs, we don't have direct access to the production database.
    What kind of file system and disk are you using?
    - I am not sure about it. I will try to confirm it with the production DBA. Is there any other way to find out whether it is on a filesystem or a raw device?
    Can you please provide any help on this topic?
    Regards,
    Koushik
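    For completeness, a minimal dequeue counterpart to the enqueue procedure above might look like this (a sketch assuming the same queue and payload type as posted, not the original poster's code):
    CREATE OR REPLACE PROCEDURE deque_msg_ab
    as
      dequeue_options    DBMS_AQ.DEQUEUE_OPTIONS_T;
      message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
      message_id         raw(16);
      my_message         dev_hub.src_message_type_new_1;
    begin
      -- avoid blocking forever if the queue happens to be empty
      dequeue_options.wait := DBMS_AQ.NO_WAIT;
      DBMS_AQ.DEQUEUE(
        queue_name         => 'dev_hub.src_queue_1',
        dequeue_options    => dequeue_options,
        message_properties => message_properties,
        payload            => my_message,
        msgid              => message_id);
      commit;
    end;
    /
    Measuring 'redo size' from v$mystat before and after this call, the same way as for the enqueue, shows how much of the redo comes from the dequeue side; dequeuing modifies rows in the queue table, so it generates redo too.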

  • About Automatic generation of Primary key

    I have created one Z table. Its composite primary key is VKORG
    and Sales Representative Number. I have to assign the Sales Representative number
    through the system. How should it be done?
    Message was edited by:
            Nilesh Vakil

    Nilesh, you asked the same question in ABAP General.
    Please refer to the link:
    Re: About Automatic Generation of Primary key
    Regards
    Rusidar S
    Message was edited by:
            Rusidar Subramani

  • Reducing REDO generation from a current data refresh process

    Hello,
    I need to resolve an issue where a schema is maintained with one delete followed by tons of bulk inserts. The problem is that the vast majority of deleted rows are reinserted as-is. This process deletes and reinserts about 1 175 000 rows of data!
    The delete clause is:
    - delete from table where term >= '200705';
    The data before '200705' is very stable and doesn't need to be refreshed.
    The table is 9 709 797 rows big.
    Here is an excerpt of cardinalities for each term code:
    TERM      NB_REGS
    200001     117130
    200005      23584
    200009     123167
    200101     115640
    200105      24640
    200109     121908
    200201     117516
    200205      24477
    200209     125655
    200301     120222
    200305      26678
    200309     129541
    200401     123875
    200405      27283
    200409     131232
    200501     124926
    200505      27155
    200509     130725
    200601     122820
    200605      27902
    200609     129807
    200701     121121
    200705      27699
    200709     129691
    200801     120937
    200805      29062
    200809     130251
    200901     122753
    200905      27745
    200909     135598
    201001     127810
    201005      29986
    201009     142268
    201101     133285
    201105      18075
    This kind of operation is generating a LOT of redo logs: on average 25 GB per day.
    What are the best options available to us to reduce redo generation without changing the current process too much?
    - make the tables NOLOGGING? (with mandatory use of the APPEND hint?)
    - use a global temporary table for staging and merge against the real table?
    - use partitions and truncate the reloaded one? But that does not reduce the redo generated by the subsequent inserts...?
    This does not have to be transactional.
    We use 10gR2 on 64-bit Windows.
    Thanks
    Bruno

    Yes, you got it, these are terms (summer of 2007, beginning in May).
    Is the perverse effect of truncating and then inserting in direct path mode that it pushes the high water mark up day after day while leaving unused space in the truncated partitions? Maybe we should not REUSE STORAGE on truncation...
    This data can easily be recovered from the datamart that pushes it, which means we can use NOLOGGING and direct path mode without any «forever loss» of data.
    Should I have one partition for each term, or only one for the stable terms and one for the refreshed terms?
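    A minimal sketch of the partition-based refresh discussed above, assuming the table were range-partitioned by TERM into a stable and a refreshed partition. All object names here are placeholders, and FORCE LOGGING at the database level would override NOLOGGING:
    alter table registrations modify partition p_refresh nologging;
    alter table registrations truncate partition p_refresh drop storage;
    insert /*+ append */ into registrations
    select * from stage_registrations
     where term >= '200705';
    commit;
    -- direct-path inserts into a NOLOGGING segment skip most data redo;
    -- index maintenance and undo still generate some.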

  • Redo generation high

    Hi,
    We have a problem with redo generation. For the last few days, redo generation has been higher than normal. There have been no changes at the application level. I don't know where to start. I tried comparing AWR reports, but that did not get me anywhere.
    1. Is it possible to find how much redo a DML statement generates, segment by segment (table segment, index segment), when it is executed?
    For example: the table M_MARCH has 19 columns and 6 indexes. Another table, M_REPORT, has 59 columns and 5 indexes. The query combines both tables.
    We need to find out whether the indexes are really used or not.
    2. Is there any other way to reduce redo generation?
    Br,
    Rajesh

    High redo generation can be of two types:
    1. During a specific duration of the day.
    2. Sudden increase in the archive logs observed.
    In both cases, the first thing to check is whether any modifications were done either at the database level (modifying parameters, maintenance operations performed, ...) or at the application level (deployment of a new application, modification of the code, an increase in users, ...).
    To know the exact reason for the high redo, we need information about the redo activity and the details of the load. The following information needs to be collected for the duration of high redo generation.
    1] To know the trend of log switches, the query below can be used.
    SQL> alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
    Session altered.
    SQL> select trunc(first_time, 'HH'), count(*)
      2  from v$loghist
      3  group by trunc(first_time, 'HH')
      4  order by trunc(first_time, 'HH');
    TRUNC(FIRST_TIME,'HH   COUNT(*)
    -------------------- ----------
    25-MAY-2008 20:00:00          1
    26-MAY-2008 12:00:00          1
    26-MAY-2008 13:00:00          1
    27-MAY-2008 15:00:00          2
    28-MAY-2008 12:00:00          1 <- Indicates 1 log switch from 12PM to 1PM.
    28-MAY-2008 18:00:00          1
    29-MAY-2008 11:00:00         39
    29-MAY-2008 12:00:00        135
    29-MAY-2008 13:00:00        126
    29-MAY-2008 14:00:00        135 <- Indicates 135 log switches from 2-3 PM.
    29-MAY-2008 15:00:00        112
    We can also get information about the log switches from the alert log (by looking at the messages 'Thread 1 advanced to log sequence' and counting them for the duration) or from the AWR report.
    2] If you are on 10g or a higher version and have a license for AWR, then you can collect an AWR report for the problematic time; else go for a statspack report.
    a) AWR Report
    -- Create an AWR snapshot when you are able to reproduce the issue:
    SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    -- After 30 minutes, create a new snapshot:
    SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    -- Now run $ORACLE_HOME/rdbms/admin/awrrpt.sql
    b) Statspack Report
    SQL> connect perfstat/<Password>
    SQL> execute statspack.snap;
    -- After 30 minutes
    SQL> execute statspack.snap;
    SQL> @?/rdbms/admin/spreport
    In the AWR/Statspack report, look for the queries with the highest gets per execution. You can check the "Load Profile" section for "Redo size" and compare it with a non-problematic duration.
    3] We need to mine the archive logs generated during the time frame of high redo generation.
    -- Use the DBMS_LOGMNR.ADD_LOGFILE procedure to create the list of logs to be analyzed:
    SQL> execute DBMS_LOGMNR.ADD_LOGFILE('<filename>', options => dbms_logmnr.new);
    SQL> execute DBMS_LOGMNR.ADD_LOGFILE('<file_name>', options => dbms_logmnr.addfile);
    -- Start the logminer
    SQL> execute DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    SQL> select operation, seg_owner, seg_name, count(*)
           from v$logmnr_contents
          group by seg_owner, seg_name, operation;
    Please refer to below article if there is any problem in using logminer.
    Note 62508.1 - The LogMiner Utility
    We cannot get the redo size using LogMiner; we can only get the user, operation and schema responsible for the high redo.
    4] Run the query below to find the sessions generating high redo at any specific time.
    col program for a10
    col username for a10
    select to_char(sysdate,'hh24:mi'), username, program, a.sid, a.serial#, b.name, c.value
      from v$session a, v$statname b, v$sesstat c
     where b.statistic# = c.statistic#
       and c.sid = a.sid
       and b.name like 'redo%'
     order by value;
    This will give us all the statistics related to redo. We should be most interested in "redo size" (the total amount of redo generated, in bytes).
    This will give us the SID of the problematic session.
    In the query output above, look for the statistics with high values; those statistics will give a fair idea of the problem.
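    On the side question of whether the indexes on M_MARCH and M_REPORT are actually used (not covered above), one option is index usage monitoring. A sketch with a placeholder index name:
    alter index m_march_ix1 monitoring usage;
    -- ...run a representative workload...
    select index_name, table_name, used, start_monitoring
      from v$object_usage
     where index_name = 'M_MARCH_IX1';
    alter index m_march_ix1 nomonitoring usage;
    -- v$object_usage only shows indexes owned by the current user;
    -- unused indexes still have to be maintained and so generate redo on DML.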

  • Question about Skywire Documaker - where can it pull data from ?

    Does the Skywire Documaker product have an ability to pull data from a Relational Database into a form’s field? What other data sources are available to pull data, besides the extract data file (DAT), and xml file?

    In response to your question about where Oracle Documaker pulls data from:
    DB calls may be made as part of the document generation process to look up data. However, it is not designed to maintain an open DB connection, and this is also not a recommended best practice. Transall is shipped as part of the Documaker solution and is designed to connect to multiple sources (including a DB) and produce an input file for Documaker. Creating DB connections is considered an IT-related task and does not fit with positioning Documaker as a business-user tool.

  • Basic question about Flashback Database

    Hi,
    I have a very generic question about using Flashback Database.
    On my testing systems, for performance testing and simulation purposes, I want to create a guaranteed restore point so I can test impact on batch when code change releases are done, before deployment in production.  My confusion is with respect to redo logs, as summarized in the questions below:
    1. Is it possible to change redo log files, when a guaranteed restore point has been configured?
    2. If yes, will the Flashback to restore point, also change the size of the redo logs?
    I could not find anything in the docs about this....hence my question....
    Appreciate your time taken in responding to these questions....
    Regards.

    HI,
    donneskold wrote:
    Hi,
    I have a very generic question about using Flashback Database.
    On my testing systems, for performance testing and simulation purposes, I want to create a guaranteed restore point so I can test impact on batch when code change releases are done, before deployment in production.  My confusion is with respect to redo logs, as summarized in the questions below:
    1. Is it possible to change redo log files, when a guaranteed restore point has been configured?
    1) Yes it is possible.
    2. If yes, will the Flashback to restore point, also change the size of the redo logs?
    2) It will not change the size of the redo log files; a redo log file cannot be resized.
    I could not find anything in the docs about this....hence my question....
    Appreciate your time taken in responding to these questions....
    Regards.
    Thank you
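    For reference, the basic sequence for the kind of test described above could look like this (the restore point name is a placeholder):
    create restore point before_release guarantee flashback database;
    -- ...run the batch with the code change...
    shutdown immediate
    startup mount
    flashback database to restore point before_release;
    alter database open resetlogs;
    drop restore point before_release;
    -- online redo log changes made in the meantime (e.g. added or resized log
    -- groups) are not undone by the flashback; only the datafiles are rewound.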

  • Partition table vs redo generation

    Hi all,
    I'm working with an enterprise 10.2.0.5 Oracle database
    I have a huge partitioned table and I'm interested in generating subpartitions.
    My question is whether this process (creating these subpartitions) will generate much redo or undo.
    Thanks,
    dbajug

    If you are only adding new, empty, SubPartitions, the redo generation will be minimal. If and when you load data into these SubPartitions, you will notice undo and redo generation, depending on how the data is loaded.
    If you SPLIT a non-empty Partition, Oracle has to "move" rows into the newly created partition. This will generate undo and redo.
    What you should do is to run a few tests of your planned actions and monitor the volume of undo and redo generated.
    Hemant K Chitale
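    Following Hemant's suggestion to test, the redo cost of a given maintenance operation can be measured from the session statistics. A sketch; the table, partition names and split value are placeholders:
    select ms.value
      from v$mystat ms join v$statname sn on sn.statistic# = ms.statistic#
     where sn.name = 'redo size';
    alter table big_tab split partition p_2011
      at (date '2011-07-01')
      into (partition p_2011_h1, partition p_2011_h2);
    -- re-run the redo size query; the difference is the redo generated by the
    -- split. Repeat the test for adding an empty subpartition to compare.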
