Query help for archive log generation details

Hi All,
Do you have a query to show the archive log generation details for today?
Best regards,
Rafi.

Dear user13311731,
You may use the query below; I hope you will find it helpful:
-- Hourly log-switch counts per day for the current year, most recent 7 days first
SELECT * FROM (
SELECT * FROM (
SELECT   TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
       , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
    FROM V$LOG_HISTORY
    WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM < 8;
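If you also want the daily volume rather than just the switch counts, a variation along these lines should work too (a minimal sketch using the standard BLOCKS and BLOCK_SIZE columns of V$ARCHIVED_LOG; adjust the window to taste):
-- Daily archived-log volume in MB for the last 7 days
SELECT   TRUNC(first_time)                             AS log_day
       , COUNT(*)                                      AS logs
       , ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb
    FROM v$archived_log
   WHERE first_time > TRUNC(SYSDATE) - 7
GROUP BY TRUNC(first_time)
ORDER BY log_day DESC;
Hope that helps.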
Ogan

Similar Messages

  • Heavy archive log generation

    Hi,
    Database Version: Oracle 11.1.0.6
    Platform: Enterprise Linux 5
    Can someone please tell me the troubleshooting steps in a situation where there is a heavy inflow of archive log generation, I mean around 3 files of about 50MB every minute, eating away the space on disk.
    1) How to find out what activity is causing such heavy archive log generation? I can run the query below to find the currently running SQL statements and their status:
    select a.username, b.sql_text, a.status from v$session a inner join v$sqlarea b on a.sql_id = b.sql_id;
    But is there any other query or a better way to find out the current DB activity in this situation?
    I tried using DBMS_LOGMNR (Log Miner) but failed because (i) utl_file_dir is not set in the init parameter file (so mining the archive log file on production is presently ruled out, as I cannot take an outage)
    (ii) the ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS statement takes forever because of locks (so I cannot mine the generated archive log file on another machine either, due to the DBID mismatch).
    2) How to deal with this situation? I read here on the OTN discussion board that increasing the number of redo log groups or redo log members will help to manage this situation when there is a lot of DML activity on the application side, but I didn't understand how that is going to help in controlling the rigorous archive log generation.

    Hi,
    Other than logminer, which will tell you exactly what the redo is by definition, you can run something like the following:
    select value / ((sysdate - logon_time) * 1440) redo_per_minute,
           s.sid, serial#, logon_time, value
      from v$statname sn,
           v$sesstat ss,
           v$session s
     where s.sid = ss.sid
       and sn.statistic# = ss.statistic#
       and sn.name = 'redo size'
       and value > 0;
    Then trace the "high" sessions above and it should jump out at you. If not, then run Log Miner with something like:
    set serveroutput on size unlimited
    begin
      dbms_logmnr.add_logfile(logfilename => '&log_file_name');
      dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.no_rowid_in_stmt);
      FOR cur IN (SELECT *
                    FROM v$logmnr_contents) loop
        dbms_output.put_line(cur.sql_redo);
      end loop;
      dbms_logmnr.end_logmnr;  -- detach the session from Log Miner when done
    end;
    /
    Note you don't need utl_file_dir for Log Miner if you use the online catalog.
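    To dig into one of the "high" sessions, you can also just SQL-trace it for a while (a sketch; DBMS_MONITOR exists from 10g onward, and the SID/SERIAL# values here are placeholders):
    -- Enable SQL trace with wait events for one suspect session
    exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
    -- ...let it run for a few minutes, then:
    exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
    The trace file in user_dump_dest will show what the session is actually doing.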
    HTH,
    Steve

  • Archive log generation in every 7 minute interval

    One of the HP-UX 11.11 hosts runs two databases, uiivc and uiivc1. There is heavy archive log generation every 7 minutes in both databases. The redo log size is 100MB, configured with 2 members in each of three groups for these databases. The database version is 9.2.0.8. Can anyone help me find out how to monitor the redo log contents that are filling up so frequently, generating so much archived redo (and filling up the mount point)?
    Current settings are
    fast_start_mttr_target integer 300
    log_buffer integer 5242880
    Regards
    Manoj

    You can try to find the sessions which are generating lots of redo; check MetaLink doc ID 167492.1.
    1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.
    The query you can use is:
    SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
      FROM v$session s, v$sess_io i
     WHERE s.sid = i.sid
     ORDER BY 5 DESC, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
    2) Query V$TRANSACTION. This view contains information about the amount of
    undo blocks and undo records accessed by the transaction (as found in the
    USED_UBLK and USED_UREC columns).
    The query you can use is:
    SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
      FROM v$session s, v$transaction t
     WHERE s.taddr = t.addr
     ORDER BY 5 DESC, 6 DESC, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
    the session.
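    If you would rather have the delta computed for you than eyeball repeated runs, a small PL/SQL block can take two samples and print the differences (a minimal sketch; it assumes serveroutput is on and execute privilege on DBMS_LOCK for the sleep):
    -- Sample BLOCK_CHANGES twice, 60 seconds apart, and print the deltas
    DECLARE
      TYPE t_changes IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
      l_before t_changes;
    BEGIN
      FOR r IN (SELECT sid, block_changes FROM v$sess_io) LOOP
        l_before(r.sid) := r.block_changes;
      END LOOP;
      DBMS_LOCK.SLEEP(60);
      FOR r IN (SELECT s.sid, s.username, s.program, i.block_changes
                  FROM v$session s, v$sess_io i
                 WHERE s.sid = i.sid) LOOP
        IF l_before.EXISTS(r.sid) AND r.block_changes > l_before(r.sid) THEN
          DBMS_OUTPUT.PUT_LINE(r.sid || ' ' || r.username || ' ' || r.program ||
                               ': delta = ' || (r.block_changes - l_before(r.sid)));
        END IF;
      END LOOP;
    END;
    /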

  • "recover database until cancel" asks for archive log file that do not exist

    Hello,
    Oracle Release : Oracle 10.2.0.2.0
    Last week we performed a restore and then an Oracle recovery using the RECOVER DATABASE UNTIL CANCEL command (we didn't use a backup controlfile). It worked fine and we were able to restart the SAP instances. However, I still have questions about Oracle's behaviour with this command.
    First, we restored an online backup.
    We tried to restart the database, but got ORA-01113,ORA-01110 errors :
    sr3usr.data1 needed media recovery.
    Then we performed the recovery:
    According to the Oracle documentation, "recover database until cancel" proceeds by prompting you with the suggested filenames of archived redo log files.
    The problem is that it prompts for archive log files that do not exist.
    As you can see below, it asked for SMAarch1_10420_610186861.dbf, which was never created. I therefore cancelled the recovery manually and restarted the database. We never got the message "media recovery complete".
    ORA-279 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10417_61018686
    Fri Sep  7 14:09:45 2007
    ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf'
    Fri Sep  7 14:09:45 2007
    Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf
    ORA-279 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_61018686
    Fri Sep  7 14:10:03 2007
    ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf'
    Fri Sep  7 14:10:03 2007
    Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf
    ORA-279 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_61018686
    Fri Sep  7 14:10:13 2007
    ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf'
    Fri Sep  7 14:10:13 2007
    Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
    Errors with log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
    ORA-308 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_61018686
    Fri Sep  7 14:15:19 2007
    ALTER DATABASE RECOVER CANCEL
    Fri Sep  7 14:15:20 2007
    ORA-1013 signalled during: ALTER DATABASE RECOVER CANCEL ...
    Fri Sep  7 14:15:40 2007
    Shutting down instance: further logons disabled
    When restarting the database, we could see that a recovery of the online redo log was performed automatically. Is this the normal behaviour of a recovery using the "recover database until cancel" command?
    Started redo application at
    Thread 1: logseq 10416, block 482
    Fri Sep  7 14:24:55 2007
    Recovery of Online Redo Log: Thread 1 Group 4 Seq 10416 Reading mem 0
      Mem# 0 errs 0: /oracle/SMA/origlogB/log_g14m1.dbf
      Mem# 1 errs 0: /oracle/SMA/mirrlogB/log_g14m2.dbf
    Fri Sep  7 14:24:55 2007
    Completed redo application
    Fri Sep  7 14:24:55 2007
    Completed crash recovery at
    Thread 1: logseq 10416, block 525, scn 105140074
    0 data blocks read, 0 data blocks written, 43 redo blocks read
    Thank you very much for your help.
    Frod.

    Hi,
    Let me answer your query.
    =======================
    Your question: while performing the recovery, is it possible to locate which online redo log is needed, and then to apply the changes in those logs?
    1. When you have the current controlfile and need complete recovery (no data loss), do not go for UNTIL CANCEL recovery.
    2. Oracle will apply all the redo logs (including the current redo log) while the recovery process is on.
    3. During the recovery you need all the redo logs listed in the view V$RECOVERY_LOG, plus all unarchived and current redo logs. By querying V$RECOVERY_LOG you can find out which redo logs are required.
    4. If the required sequence is not in the archive destination and the recovery process asks for that sequence, you can query V$LOG to see whether the requested sequence is part of the online redo logs. If yes, you can supply the path of the online redo log to complete the recovery, as in the sketch below.
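    For example (a sketch; the views are standard, and the file name shown is just the one from this thread's log):
    -- Which archived log sequences does recovery still need?
    SELECT * FROM v$recovery_log;
    -- Is the requested sequence still in an online redo log group?
    SELECT group#, thread#, sequence#, status FROM v$log;
    -- Find that group's member file...
    SELECT group#, member FROM v$logfile;
    -- ...and supply it at the recovery prompt, e.g.
    -- Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    -- /oracle/SMA/origlogB/log_g14m1.dbf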
    Hope this information helps.
    Regards,
    Madhukar

  • How to calculate storage space for archive log files and database backups?

    Hi all,
    I have a 1.8-terabyte Oracle 9i database and need to plan how much additional disk space I will need for nightly backups and archivelog files. Is there a script or formula that can help me estimate how much disk space I will need to hold a day's worth of archived logs, as well as a nightly export dump file and a full hot RMAN backup on disk?
    Thanks!

    I'm not sure how to estimate the size of your backups, especially if you use incrementals. However, the space required for archive logs will be equal to the amount of REDO your DB generates. I would count the number of log switches per day with a query like the following:
    select trunc(first_time), count(*)
      from v$log_history
     group by trunc(first_time);
    I would then take the average and multiply this count by the size of your redo log files (assuming they are all the same size).
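    As a rough upper bound for the full backup itself, the total datafile size is a starting point (a sketch; an actual RMAN backup is usually smaller, since never-used blocks can be skipped):
    select round(sum(bytes)/1024/1024/1024, 1) total_gb
      from v$datafile;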

  • Archive Log Generation in EBusiness Suite

    Hi,
    I am responsible for an E-Business Suite 11.5.10.2 AIX production server. Until last week (for the past 1.5 years), there was excessive archive log generation (200 MB every 10 minutes); this has now dropped to 200 MB every 4.5 hours.
    I am unable to understand this behavior. The number of users remains the same and the usage is as usual.
    Is there a way I can check what has gone wrong? I could not see any errors in the alert.log either.
    Please suggest what can be done.
    (I have raised this issue in Metalink Forum also and awaiting a response)
    Thanks
    qA

    Log/archive log generation is directly related to the level of activity on the database, so it is almost certain that the level of activity has dropped significantly.
    If possible, can you run this query and post the result:
    select trunc(first_time), count(sequence#)
      from v$archived_log
     where to_char(first_time, 'MONYYYY') = 'SEP2007'
     group by trunc(first_time)
     order by 1;
    --Adams

  • Growth of Archive log generation

    Hi,
    In my case, the rate of archive log generation has increased, so I would like to know a query
    to find the rate of archive log generation per hour.
    Regards
    Syed

    Hi Syed;
    What are your DB version, EBS version, and OS?
    I use the query below for this:
    select to_char(first_time,'MM-DD') day, to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'99') "00",
    to_char(sum(decode(to_char(first_time,'hh24'),'01',1,0)),'99') "01",
    to_char(sum(decode(to_char(first_time,'hh24'),'02',1,0)),'99') "02",
    to_char(sum(decode(to_char(first_time,'hh24'),'03',1,0)),'99') "03",
    to_char(sum(decode(to_char(first_time,'hh24'),'04',1,0)),'99') "04",
    to_char(sum(decode(to_char(first_time,'hh24'),'05',1,0)),'99') "05",
    to_char(sum(decode(to_char(first_time,'hh24'),'06',1,0)),'99') "06",
    to_char(sum(decode(to_char(first_time,'hh24'),'07',1,0)),'99') "07",
    to_char(sum(decode(to_char(first_time,'hh24'),'08',1,0)),'99') "08",
    to_char(sum(decode(to_char(first_time,'hh24'),'09',1,0)),'99') "09",
    to_char(sum(decode(to_char(first_time,'hh24'),'10',1,0)),'99') "10",
    to_char(sum(decode(to_char(first_time,'hh24'),'11',1,0)),'99') "11",
    to_char(sum(decode(to_char(first_time,'hh24'),'12',1,0)),'99') "12",
    to_char(sum(decode(to_char(first_time,'hh24'),'13',1,0)),'99') "13",
    to_char(sum(decode(to_char(first_time,'hh24'),'14',1,0)),'99') "14",
    to_char(sum(decode(to_char(first_time,'hh24'),'15',1,0)),'99') "15",
    to_char(sum(decode(to_char(first_time,'hh24'),'16',1,0)),'99') "16",
    to_char(sum(decode(to_char(first_time,'hh24'),'17',1,0)),'99') "17",
    to_char(sum(decode(to_char(first_time,'hh24'),'18',1,0)),'99') "18",
    to_char(sum(decode(to_char(first_time,'hh24'),'19',1,0)),'99') "19",
    to_char(sum(decode(to_char(first_time,'hh24'),'20',1,0)),'99') "20",
    to_char(sum(decode(to_char(first_time,'hh24'),'21',1,0)),'99') "21",
    to_char(sum(decode(to_char(first_time,'hh24'),'22',1,0)),'99') "22",
    to_char(sum(decode(to_char(first_time,'hh24'),'23',1,0)),'99') "23"
    from v$log_history group by to_char(first_time,'MM-DD')
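    If you want the days to come out in order, append an ordering clause (a small addition; note MM-DD only sorts correctly within a single year):
    order by to_char(first_time,'MM-DD');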
    Regards
    Helios

  • Hourly archive log generation

    Hi,
    I am working on an Oracle 10g RAC database on HP-UX, in the standby environment.
    Instance name :
    R1
    R2
    R3
    For the above three instances, I need to find the hourly archive log generation on the standby site, in a layout like:
    Hours    1    2    3
    R1
    R2
    R3
    Total
    Please share the query.

    Set the parameter ARCHIVE_LAG_TARGET to the required value; it is a dynamic parameter and is specified in seconds.
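    That parameter only forces regular log switches, though. For the hourly count per instance itself, something like this should do (a sketch against the standard V$ARCHIVED_LOG view, where THREAD# maps to the instance):
    -- Archived logs per hour per RAC thread (instance) over the last day
    SELECT thread#,
           TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hr,
           COUNT(*)                               AS logs
      FROM v$archived_log
     WHERE first_time > SYSDATE - 1
     GROUP BY thread#, TO_CHAR(first_time, 'YYYY-MM-DD HH24')
     ORDER BY hr, thread#;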

  • Report for time log on detail for each employees in SAP ABAP-HR report

    Hi experts,
    Please help me: how do I create a report of time log-on details for each employee in SAP ABAP-HR?
    Thank you.

    Hi,
    For Time Management infotypes, if you want to read the data using a macro, you need to use the macro RP_READ_ALL_TIME_ITY.
    Example:
    DATA: BEGDA LIKE P2001-BEGDA, ENDDA LIKE P2001-ENDDA.
    INFOTYPES: 0000, 0001, 0002, ...
               2001 MODE N, 2002 MODE N, ...
    GET PERNR.
    BEGDA = '19900101'. ENDDA = '19900131'.
    RP_READ_ALL_TIME_ITY BEGDA ENDDA.
    IF PNP-SW-AUTH-SKIPPED-RECORD NE '0'.
      WRITE: / 'Authorization for time data missing'.
      WRITE: / 'for personnel number', PERNR-PERNR. REJECT.
    ENDIF.

  • Archive log generation in standby

    Dear all,
    DB: 11.1.0.7
    We are configuring a physical standby for our production system. We have the same file system and configuration on both servers: the primary archive destination is d:/arch, and the standby server also has d:/arch. Archive logs are properly shipped to the standby and the data is intact.
    The problem: archive log generation is fine in the primary archive destination, but no archive logs are getting generated in the standby archive location, even though archive logs are being applied to the standby database.
    Is this normal? Will archive logs not be generated on a standby?
    Please guide
    Kai

    No archive logs should be generated on the standby side; why do you think they should be? If you are talking about the parameter STANDBY_ARCHIVE_DEST: if you set this parameter, Oracle will copy each applied log to that directory, not create a new one.
    In 11g Oracle recommends not using this parameter. Instead, Oracle recommends setting LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_3 similar to this:
    ALTER SYSTEM SET log_archive_dest_1 = 'location="USE_DB_RECOVERY_FILE_DEST", valid_for=(ALL_LOGFILES,ALL_ROLES)';
    ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=<primary_tns> LGWR ASYNC db_unique_name=<prim_db_unique_name> valid_for=(online_logfile,primary_role)';
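    To confirm the standby really is receiving and applying logs, a quick check on the standby side (a sketch; SEQUENCE# and APPLIED are standard V$ARCHIVED_LOG columns):
    SELECT sequence#, first_time, applied
      FROM v$archived_log
     ORDER BY sequence#;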

  • Validation failed for archived log

    Hi,
    oracle database version 11.2.0.4
    OS centOS 6.5
    Recently I set up an RMAN backup script on a production database. As we are using Dbvisit for the standby database, we have a cron job which runs every 10 minutes; it generates an archive log and copies it to the standby side.
    Sometimes the backup failed because an expected archive log was not present at the location, so I put "crosscheck archivelog all" in the script, and now the backup runs fine. But analyzing the backup log I am getting
    "validation failed for archived log". The timestamps of the failed validations are from today and yesterday, even though the archives are present at the location and CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS is set.
    Guys, I am worried; I hope this isn't a big issue for me.
    Please suggest what is wrong.

    This forum is for Berkeley DB high availability. We do not have the expertise to help you with your Oracle database 11.2.0.4 issue. You'll need to submit this question to one of the Oracle database forums to get the help you are looking for.
    Paula Bingham

  • Overheads for Archive Logs

    Hi,
    Not sure where I should address this but I would appreciate any helpful feedback.
    I would like to find out what the overheads (in terms of size) are for managing and storing archive logs, for capacity-planning purposes. Is this information documented anywhere? How can I find out?
    Thanks in advance.
    Thanks,
    Tony

    Since this number depends on your database, it is impossible to answer in general. If you have a small database with few changes/deletes, you hardly need any space for archive logs. Oracle recommends that you size your online redo logs so that they switch roughly once per hour; each archived log is at most the size of the online redo log it came from.

  • How can I set destination for archived logs?

    I would like to know:
    How do I set the destination for archived logs?
    How do I identify the init.ora that is used by my database?
    With RMAN using compressed backupsets by default, and running
    backup database;
    what does it back up exactly?

    Another thing I am wondering: when I make a backup with RMAN (backup database),
    it saves the backups in the autobackup directory of the flash_recovery_area, but it seems that it only saves the data files and the control files. Isn't there a way to save archived log files, control files, and datafiles in a single backup?
    In fact, I would like to make a full backup of everything using RMAN on Sunday and an incremental backup on the other days of the week. How can I accomplish this with a retention of 7 days?
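    Something along these lines would cover both points (a sketch, not tested against your environment; BACKUP ... PLUS ARCHIVELOG bundles the archived logs into the same run, and CONTROLFILE AUTOBACKUP takes care of the controlfile and spfile):
    RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
    # Sunday:
    RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
    # Monday through Saturday:
    RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;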

  • How to control too much of archived log generation

    Hi,
    This is one of the interview questions I was asked. I replied to it, but I would just like to know the right answer:
    How can we control excessive archived log generation?
    Thanks,
    Thanks,

    796843 wrote:
    Hi,
    This is one of the interview questions I was asked. I replied to it, but I would just like to know the right answer:
    How can we control excessive archived log generation?
    Thanks,
    Do not do any DML, since only DML generates redo.
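    More seriously: you cannot turn redo off for normal OLTP work, but bulk loads can be made to generate far less redo by combining direct-path inserts with NOLOGGING objects (a sketch with hypothetical table names; note this sacrifices media recoverability for that object until the next backup):
    -- big_staging and source_data are illustrative names
    ALTER TABLE big_staging NOLOGGING;
    INSERT /*+ APPEND */ INTO big_staging
    SELECT * FROM source_data;
    COMMIT;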

  • Secondary destination for Archived logs

    Version: 10.2, 11.1, 11.2
    We occasionally get 'archiver error' on our production DBs due to our LOG_ARCHIVE_DEST_1 being full. How can I have a secondary location for archive logs in case my 'primary' location (LOG_ARCHIVE_DEST_1) becomes full?
    I gather that LOG_ARCHIVE_DEST_2 is reserved for shipping archive logs to a Data Guard standby DB, in which you specify the TNS entry of the standby using the SERVICE parameter.
    Can I specify LOG_ARCHIVE_DEST_3 as my secondary location in case LOG_ARCHIVE_DEST_1 becomes full? Is that what LOG_ARCHIVE_DEST_n is meant for? Although the documentation says you can have up to 10 locations, I am confused whether they are meant to store multiplexed copies of archive logs, which is not what I am looking for.

    >
    Hi again Tom,
    I have one more question:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_4 = 'LOCATION=/disk4/arch';
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_3 = 'LOCATION=/disk3/arch
        ALTERNATE=LOG_ARCHIVE_DEST_4';
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=ALTERNATE;
    SQL> SELECT dest_name, status, destination FROM v$archive_dest;
    DEST_NAME               STATUS    DESTINATION
    LOG_ARCHIVE_DEST_1      VALID     /disk1/arch
    LOG_ARCHIVE_DEST_2      VALID     +RECOVERY
    LOG_ARCHIVE_DEST_3      VALID     /disk3/arch
    LOG_ARCHIVE_DEST_4      ALTERNATE /disk4/arch
    My understanding is (and I'm not terribly sure at the minute - I don't have a test system to hand, and I haven't set up a backup/recovery strategy in a while; I just restore backups from time to time, normally every 4 weeks, to ensure that the database recovers as it should) that under the scheme above DEST_3 receives a copy of what goes to DEST_1, while DEST_4 will "step in" should DEST_3 fill up or fail, since it is declared as DEST_3's alternate.
    As to DEST_2, the "+" prefix indicates an ASM disk group (here one named RECOVERY), quite possibly serving as the Fast Recovery Area.
    I don't have a system at the moment - if you do, why not test and see? On a test system, fill up the file system for DEST_3 with rubbish and check what happens.
    All of the above is to be taken with a pinch of salt, as I am not certain, so CAVEAT EMPTOR.
    HTH,
    Paul...
