Data Warehouse Archive logging questions

Hi all,
I'd like some opinions/advice on archive logging and OWB 10.2 with a 10.2 database.
Do you use archive logging on your non-production OWB instances? I have a development system that only gets "on demand" backups, and the archive log destination fills up frequently. In this scenario, should I disable archive logging? I realize that this limits my recovery options to cold backups, but on a development environment that seems sufficient to me. Would I be breaking any OWB features by turning off archive logging?
For production instances, how large do you make your archive log destination (as a percentage of your total DW size, perhaps)?
How do you manage them? With Flash recovery areas? Manually? RMAN or other tools?
Thanks in Advance,
Mike

Usually, I don't set any DW tables to LOGGING mode. Since it's a data warehouse, I believe it's better to take cold backups. In some cases, ETL mappings can work like backup procedures themselves.
In OWB, select the object (table or index) you need to configure, right-click it, and select Configuration -> Performance Parameters -> Logging Mode -> NOLOGGING.
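If you prefer to set this outside OWB, the same effect can be had in plain SQL. A minimal sketch (the table, index and staging names here are made up); note that NOLOGGING only suppresses redo for direct-path operations, while conventional DML still generates redo:
SQL> alter table sales_fact nologging;
SQL> alter index sales_fact_pk nologging;
SQL> insert /*+ APPEND */ into sales_fact select * from stage_sales;
SQL> commit;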
Flash Recovery: I don't think it's going to help you, since most of your data manipulation is based on batch jobs.
RMAN: If you want to take hot backups, this is something that can really help you manage backup procedures.
Manually: Maybe... Why not?
I don't take hot backups of DW databases; I prefer cold backups. In a recovery scenario, you restore the cold backup and, if it's 3 days old, execute the ETL mappings for the last 3 days.
Regards,
Marcos

Similar Messages

  • Data warehouse Archive log

Are there any performance implications of archive logging in a DWH?
Is it recommended to enable archive logging in a DWH?

Generally speaking, it's still recommended to enable archive logging in a DWH environment. With NOARCHIVELOG mode, your only viable backup strategy is a cold backup, which requires a database shutdown.
Turning on archive logging does have some performance impact on massive loading. However, you can use NOLOGGING with direct-path/parallel loading to counter that; remember to take an immediate backup before/after such loads.
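For reference, switching a database into ARCHIVELOG mode is done from the mount state; a minimal sketch:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;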

  • Read data from Archive logs

Does anyone have any recommendations on how to read data from archive logs? When I use LogMiner, I am getting only bind variables for DML operations, but I need the actual data from the archive logs.
    Any thoughts
    Thanks
    -Prasad

LogMiner output is as close as possible to the command that was issued. Depending on the Oracle version you will be able to see DML, or DML and DDL. From 9i onward, Oracle can translate the dictionary DML back into the actual DDL command issued; in the first 8i release only the DML was visible.
    ~ Madrid
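To illustrate: a minimal LogMiner session might look like the sketch below (the archive log path is hypothetical). Supplying a dictionary, such as the online catalog, is what resolves the raw object numbers and bind-style values into readable statements:
SQL> execute dbms_logmnr.add_logfile('/u01/arch/arch_1_100.arc', dbms_logmnr.new);
SQL> execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
SQL> select sql_redo from v$logmnr_contents where operation = 'INSERT';
SQL> execute dbms_logmnr.end_logmnr;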

  • How to delete the data in archived log files

Hi,
How can I delete the entries in archived log files, and what is the disadvantage of deleting archived log entries?

There is no documented way to delete data stored inside archived log files: you can only remove the archived log files themselves if needed.
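If the goal is simply to reclaim space, RMAN can remove old archived log files cleanly; a sketch (adjust the retention window to your backup strategy):
RMAN> delete archivelog all completed before 'sysdate-7';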

  • RAC online and archive logs question

    Hello All,
I set up RAC database instances prod1 and prod2 (10.2.0.4). Datafiles and online logs are on ASM.
Do these results, queried from the two instances, look good? I am somewhat concerned about group 3, which has the same name for both of its members.
Also, the archived logs are going to ASM; is this good practice? I was reading an Oracle RMAN book and it mentioned archived logs going to local disk.
Is it possible to archive to local disk while the online logs are on ASM? Please advise. An early reply is appreciated. Thanks, San~
    PROD1 Instance
    SQL> select member from v$logfile;
    MEMBER
    +DATA/prod/onlinelog/group_2.264.706892209
    +FLASH/prod/onlinelog/group_2.259.706892211
    +DATA/prod/onlinelog/group_1.261.706892209
    +FLASH/prod/onlinelog/group_1.260.706892209
    +DATA/prod/onlinelog/group_3.258.706892235
    +FLASH/prod/onlinelog/group_3.258.706892235
    +DATA/prod/onlinelog/group_4.256.706892237
    +FLASH/prod/onlinelog/group_4.257.706892237
    8 rows selected.
    PROD2 Instance
    SQL> select member from v$logfile;
    MEMBER
    +DATA/prod/onlinelog/group_2.264.706892209
    +FLASH/prod/onlinelog/group_2.259.706892211
    +DATA/prod/onlinelog/group_1.261.706892209
    +FLASH/prod/onlinelog/group_1.260.706892209
    +DATA/prod/onlinelog/group_3.258.706892235
    +FLASH/prod/onlinelog/group_3.258.706892235
    +DATA/prod/onlinelog/group_4.256.706892237
    +FLASH/prod/onlinelog/group_4.257.706892237
    8 rows selected.
    ===
    SQL> archive log list
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence 3
    Next log sequence to archive 4
    Current log sequence 4
    ====
    Thanks
    San

    Hi San,
    sannidhi wrote:
    Also archived logs are going to the ASM, is this a good practice. I was reading Oracle RMAN book and it mentioned archived logs go to local disk.
    Is it possible to archive to local disk for online that are on ASM? Please advice. Early reply appreciated.. Thanks San~
It is recommended to store archived log files on ASM or on shared disk; check your archive log format, which is supposed to guarantee uniqueness across all instances.
Yes, technically it is possible to archive to local disk, but it is not recommended: if you lose the local disk there will be gaps in the archived log files, and it also increases the administration effort.
    Regards,
    Thota
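On the format point: a LOG_ARCHIVE_FORMAT that embeds the thread number keeps file names unique across RAC instances when archiving to a regular (non-OMF) destination. A sketch (the format string is illustrative; %t, %s and %r are all required):
SQL> alter system set log_archive_format='arch_%t_%s_%r.arc' scope=spfile sid='*';
Note that while the archive destination is USE_DB_RECOVERY_FILE_DEST, as in the output above, Oracle generates OMF names and this format is not used.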

Data Guard Archive log size

    Hi Experts,
I would like to know whether there are any views where we can see the size of the archive log files transferred to and applied on the physical standby database. I want to see how much space they take up in a day.
    Thanks
    Shaan
    Message was edited by:
    Shaan_dmp

    SQL> desc v$archived_log
    Name Null? Type
    RECID NUMBER
    STAMP NUMBER
    NAME VARCHAR2(257)
    DEST_ID NUMBER
    THREAD# NUMBER
    SEQUENCE# NUMBER
    RESETLOGS_CHANGE# NUMBER
    RESETLOGS_TIME DATE
    FIRST_CHANGE# NUMBER
    FIRST_TIME DATE
    NEXT_CHANGE# NUMBER
    NEXT_TIME DATE
    BLOCKS NUMBER
    BLOCK_SIZE NUMBER
    CREATOR VARCHAR2(7)
    REGISTRAR VARCHAR2(7)
    STANDBY_DEST VARCHAR2(3)
    ARCHIVED VARCHAR2(3)
    APPLIED VARCHAR2(3)
    DELETED VARCHAR2(3)
    STATUS VARCHAR2(1)
    COMPLETION_TIME DATE
    DICTIONARY_BEGIN VARCHAR2(3)
    DICTIONARY_END VARCHAR2(3)
    END_OF_REDO VARCHAR2(3)
    BACKUP_COUNT NUMBER
    ARCHIVAL_THREAD# NUMBER
    ACTIVATION# NUMBER
    Refer to blocks and block_size
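For example, a minimal sketch of a per-day volume query built on those two columns (run it on the standby, or filter on STANDBY_DEST/APPLIED as needed):
select trunc(completion_time) day, round(sum(blocks * block_size)/1024/1024) mb
from v$archived_log
group by trunc(completion_time)
order by 1;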
    Other than that, you can look this up in the documentation on v$archived_log.
    Why do you act as a spoiled 3 year old, who wants everything on a golden plate, and can't be bothered to do anything himself?
    Blocks and block_size: That is really obvious, isn't it?
    It is just an issue of using your brains!!!
    Sybrand Bakker
    Senior Oracle DBA

  • Data Guard archive log remove

    Hi,
I am using 9i Data Guard. I am trying to set up an automatic procedure to remove the archive logs on the standby site once they have been applied, but apart from manual remove/delete there is no option for an automatic procedure in the Oracle Data Guard settings.
Does anyone have a solution for it?
    Thanks

user3076922 wrote:
Hi
Standby database is configured with the broker and applying redo in real time; however, I want to change this to archive log apply mode without losing the broker configuration. Is it possible? If it is not possible to use the broker for archive log apply, can I remove the broker and use Data Guard to set up the standby with archive log apply?
Regards
Hi,
I think mseberg answered correctly: you can enable/disable log apply by changing the state of the standby database with DGMGRL, as mseberg wrote.
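For the record, the DGMGRL state change looks like the sketch below ('standby' stands for your standby database name; on 10g the state names are 'LOG-APPLY-OFF' and 'ONLINE' instead of APPLY-OFF/APPLY-ON):
DGMGRL> edit database 'standby' set state='APPLY-OFF';
DGMGRL> edit database 'standby' set state='APPLY-ON';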
Or you can stop recovery on the standby database with the following command from SQL*Plus:
SQL> alter database recover managed standby database cancel;
Regards
    Mahir M. Quluzade
    www.mahir-quluzade.com
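Back to the original question: 9i has no built-in way to do this, but from 10g onward RMAN can be told that applied logs are eligible for deletion; a sketch (on 10g the policy mainly governs automatic ageing out of the flash recovery area):
RMAN> configure archivelog deletion policy to applied on standby;
RMAN> delete noprompt archivelog all;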

  • Data Guard Archive Log Latency Between Primary and Physical Standby

How can I get the time it takes (latency) for the primary instance to get an archive log over to the physical standby instance and have it archived and applied? I have been looking at the V$ARCHIVED_LOG view on each side, but the COMPLETION_TIME column always shows a date "MM/DD/YY" with no timestamp. I thought the DATE datatype included both date and time. Any ideas on how I can get the latency info I'm looking for?
    Thanks
    Steve

the COLUMN "COMPLETION_TIME" always shows a date "MM/DD/YY" and no timestamp.
Did you try using TO_CHAR? e.g.
to_char(completion_time,'dd/mm/yyyy hh24:mi:ss')
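If you are on 10g or later, V$DATAGUARD_STATS on the standby reports the lag directly, which may be simpler than diffing COMPLETION_TIME between the sites; a sketch:
SQL> select name, value from v$dataguard_stats where name in ('transport lag', 'apply lag');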

  • Dataguard physical standby archive log question

    Hi all,
    I will try to keep this simple..
    I have a 4 node RAC primary shipping logs to a 2 node physical standby.
On the primary, when I run 'alter system archive log current' on an instance, I only see one log being applied on the standby (that is, by querying v$archived_log).
    If I run the following on the standby:
select thread#, sequence#, substr(name,43,70) "NAME", registrar, applied, status, first_time
from v$archived_log
where first_time in (select max(first_time) from v$archived_log group by thread#)
order by thread#;
    I get:
    THREAD# SEQUENCE# NAME REGISTR APPLIED S FIRST_TIME
    1 602 thread_1_seq_602.2603.721918617 RFS YES A 17-jun-2010 12:56:58
    2 314 thread_2_seq_314.2609.721918627 RFS NO A 17-jun-2010 12:56:59
    3 311 thread_3_seq_311.2604.721918621 RFS NO A 17-jun-2010 12:57:00
    4 319 thread_4_seq_319.2606.721918625 RFS NO A 17-jun-2010 12:57:00
Why do we only see the max(sequence#) having been applied, and not all of them?
This is the same no matter how many times I archive the current log files on any of the primary instances, and the standby does not have any gaps.
    Hope this is clear..
    any ideas?
    jd

    ok output from gv$archived_log on standby BEFORE 'alter system archive log current' on primary
    THREAD# SEQUENCE# NAME REGISTR APPLIED S FIRST_TIME
    1 679 thread_1_seq_679.1267.722001505 RFS NO A 18-jun-2010 11:58:22
    1 679 thread_1_seq_679.1267.722001505 RFS NO A 18-jun-2010 11:58:22
    2 390 thread_2_seq_390.1314.722001507 RFS NO A 18-jun-2010 11:58:23
    2 390 thread_2_seq_390.1314.722001507 RFS NO A 18-jun-2010 11:58:23
    3 386 thread_3_seq_386.1266.722001505 RFS YES A 18-jun-2010 11:58:22
    3 386 thread_3_seq_386.1266.722001505 RFS YES A 18-jun-2010 11:58:22
    4 393 thread_4_seq_393.1269.722001507 RFS NO A 18-jun-2010 11:58:23
    4 393 thread_4_seq_393.1269.722001507 RFS NO A 18-jun-2010 11:58:23
    Output from v$archived_log on standby AFTER 'alter system archive log current' on primary
    THREAD# SEQUENCE# NAME REGISTR APPLIED S FIRST_TIME
    1 680 thread_1_seq_680.1333.722004227 RFS NO A 18-jun-2010 11:58:29
    1 680 thread_1_seq_680.1333.722004227 RFS NO A 18-jun-2010 11:58:29
    2 391 thread_2_seq_391.1332.722004227 RFS NO A 18-jun-2010 11:58:30
    2 391 thread_2_seq_391.1332.722004227 RFS NO A 18-jun-2010 11:58:30
    3 387 thread_3_seq_387.1271.722004225 RFS NO A 18-jun-2010 11:58:28
    3 387 thread_3_seq_387.1271.722004225 RFS NO A 18-jun-2010 11:58:28
    4 394 thread_4_seq_394.1270.722004225 RFS YES A 18-jun-2010 11:58:29
    4 394 thread_4_seq_394.1270.722004225 RFS YES A 18-jun-2010 11:58:29
As a reminder, we have a 4-node RAC system shipping logs to a 2-node RAC standby. There are no gaps, but only one log is ever registered as applied.
Why is that? Why aren't all logs registered as being applied?

  • Archive log question

I have a db in archive log mode.
I found the hard disk always full with archive logs, and the db could not work then, so I decided to comment out these 3 parameters in init.ora:
# log_archive_start = true
# log_archive_dest_1 = "location=/u01/app/oracle/admin/dbname/arch"
# log_archive_format = arch_%t_%s.arc
Then I entered svrmgrl and ran:
shutdown abort
startup
The db opened, and I found the db continued to archive the redo logs.
When the redo logs were full, the db stopped working and the alert log reported that all redo logs need archiving.
Why?

A better solution would be to create a job at the OS level that runs automatically at set times to back up the archive logs to another location, then removes the backed-up logs from your drive. Depending on the available space, and the amount of log that your database creates, you can set the timing of this job to keep some space free all the time. On most of our servers, the timing ranges from every 2 hours to once a week.
When you stop archiving, you can only recover the database to the time of your last backup. Also, the backup needs to be done "cold", that is, with the database shut down. Worst case: you back up tonight, work all day tomorrow, and seconds before the backup finishes, your disk explodes. Can you afford to lose 24 hours of work?
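To answer the "why": ARCHIVELOG mode is an attribute of the database set with ALTER DATABASE, not an init.ora parameter, so commenting out the archive parameters only removes the destination; the database still refuses to reuse an online log until it has been archived. If you really do want to stop archiving (accepting the cold-backup-only recovery described above), the mode itself has to be switched off from the mount state; a sketch:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;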

  • Data Warehouse issues

Hi there.
I'll try to be as to the point as I can. I undertook an upgrade from SCOM 2012 SP1 to R2, which was successful. However, after a few days had passed, I noticed that the log file for the DW DB was growing rapidly, to the point where we hit a real issue. The drive was a comfortable 50GB, but in the end it had to be increased to just short of 500GB! Long story short, I wasn't allowed to bring in a consultant to help diagnose the issue, so I was consulting Google, TechNet articles, etc.
The end decision was to delete the Data Warehouse DB & log file and create them again from scratch. I've done this, using the following instructions (http://www.3packetsofcrisps.com/2014/04/reset-scom-2012-data-warehouse-database.html).
I had data going into the DW DB, or rather did up until midnight, so had around 3.5 hrs worth.
After checking the Operations Manager event log on one of the Management Servers, I can see numerous errors, including 2115, 4506 and occasionally 8000.
Event 4506 states: "Data was dropped due to too much outstanding data in rule "Rule Name" running for instance "instance name" with id "ID name" in management group "xxx"."
After consulting Google, I read that this might be expected because a lot of data is trying to be written to the DW DB; however, that doesn't explain why it was writing and then for some reason randomly stopped.
    A quick summary of our environment:
    2x Management Servers running Win 2012 Std x64
    1x Operations Database server running Win 2012 Std x64 & SQL Server 2012 Std x64
1x Data Warehouse DB server running Win 2012 Std x64 & SQL Server 2012 Std x64
    I've checked account permissions using http://technet.microsoft.com/en-gb/library/hh457003.aspx
    If you need any more information, please say, I've tried to keep the post as small as possible.
    Dave.

Hi,
Thanks for responding. It's been over a day so far; however, I've only just flushed the health service state & cache for the servers I'm monitoring. Would you suggest I do this for the Management Servers as well?
In regards to alerts in the SCOM console, I am getting active alerts; I've just checked and some were from 49 minutes ago.
I've also taken another look at the DW DB (via Windows Explorer), and I can see the last modified date has changed to 09/09/2014 20:48. So once again, something has happened, but it appears to be random.
This might need to be a separate post, but I've also got issues with my Reporting. If I go to the Reporting Server URL, I get the following message: "The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is ''. The expected version is '162'. (rsInvalidReportServerDatabase)"
    Any ideas?
    Dave

  • Goldengate extracting from Archive log

    Dear All,
We are conducting a replication of an Oracle database. Due to high load on the production server, the client is providing us with a near-DR database server for extraction. This database server uses Oracle Data Guard to stay in sync with production, so we will be using the archive logs shipped from production to the DR site to extract changed data. I have searched the net and read the manuals, but I am unable to understand how to configure OGG for this scenario. I understand that we can use the TRANLOGOPTIONS parameter to extract data from archive logs, but since the near-DR database is in mount state, how can we log in to the database with a userid/password in the Extract process? We also need SELECT ANY DICTIONARY privileges on the database, but the database is in mount state, so how do we work with that?
Awaiting eagerly for the reply.

    For a physical standby, you can configure GG to read the archived standby logs in archive log only (ALO) mode.
    ARCHIVEDLOGONLY causes Extract to read from the archived logs exclusively, without querying or validating the logs from system views such as v$log and v$archived_log. This parameter puts Extract into Archived Log Only mode (ALO). For more information, see the Oracle GoldenGate Oracle Installation and Setup Guide.
You can configure the Extract process to read exclusively from the archived logs. This is known as Archived Log Only (ALO) mode. In this mode, Extract only reads from archived logs that are stored in a specified location. ALO mode allows Oracle GoldenGate to use production logs that are shipped over to a secondary database (such as a standby) as the data source for Oracle GoldenGate. The online logs will not be used. Oracle GoldenGate will connect to the secondary database to get metadata and other required data as needed. As an alternative, ALO mode is supported on the production system.
    Extract automatically operates in ALO mode if it detects that the database is a physical standby.
    Lots of other info in the Oracle install guide.
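As a rough sketch (the names and paths are hypothetical), an ALO-mode Extract parameter file might look like this; ARCHIVEDLOGONLY forces archive-only reading, and ALTARCHIVELOGDEST points Extract at the location where the shipped logs land:
EXTRACT ealo
USERID ggadmin, PASSWORD *****
TRANLOGOPTIONS ARCHIVEDLOGONLY
TRANLOGOPTIONS ALTARCHIVELOGDEST /u01/app/oracle/arch_from_primary
EXTTRAIL ./dirdat/ea
TABLE scott.*;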

  • Tuning of Redo logs in data warehouses (dwh)

    Hi everybody,
    I'm looking for some guidance to configure redo logs in data warehouse environments.
Of course we are running in NOARCHIVELOG mode and use direct-path inserts (NOLOGGING) wherever possible.
Nevertheless, every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB of tables + indexes).
Actually I'm not sure whether there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    Here are the facts:
    - Oracle 10g, 32 GB RAM
    - 6 GB SGA, 20 GB PGA
    - 5 log groups each with 1 Gb log file
    - 4 MB Log buffer
    - every day ca 150 logswitches (with peaks: some logswitches after 10 seconds)
    - some sysstat metrics after one etl load:
    Select name, to_char(value, '9G999G999G999G999G999G999') from v$sysstat Where name like 'redo %';
    "NAME" "TO_CHAR(VALUE,'9G999G999G999G999G999G999')"
    "redo synch writes" " 300.636"
    "redo synch time" " 61.421"
    "redo blocks read for recovery"" 0"
    "redo entries" " 327.090.445"
    "redo size" " 159.588.263.420"
    "redo buffer allocation retries"" 95.901"
    "redo wastage" " 212.996.316"
    "redo writer latching time" " 1.101"
    "redo writes" " 807.594"
    "redo blocks written" " 321.102.116"
    "redo write time" " 183.010"
    "redo log space requests" " 10.903"
    "redo log space wait time" " 28.501"
    "redo log switch interrupts" " 0"
    "redo ordering marks" " 2.253.328"
    "redo subscn max counts" " 4.685.754"
So the questions:
Can anybody see tuning needs? Should the redo logs be enlarged, or should more groups be added? What about placing the redo logs on solid state disks?
    kind regards,
    Mirko

user5341252 wrote:
I'm looking for some guidance to configure redo logs in data warehouse environments. Of course we are running in noarchive log mode and use direct path inserts (nologging) wherever possible.
Why "of course"? What's your recovery strategy if you wreck the database?
Nevertheless every etl process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
This may be an indication that you need to do something to reduce index maintenance during data loading.
Actually im not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
For a quick check you might be better off running statspack (or AWR) snapshots across the start and end of the batch to get an idea of what work goes on and where the most time goes. (A better strategy would be to examine specific jobs in detail, though.)
"redo synch time" " 61.421"
"redo log space wait time" " 28.501"
Rough guideline: if the redo is slowing you down, then you've lost less than 15 minutes across the board to the log writer. Given the number of processes loading and the elapsed time to load, is this significant?
"redo buffer allocation retries" " 95.901"
This figure tells us how OFTEN we couldn't get space in the log buffer - but not how much time we lost as a result. We also need to see your 'log buffer space' wait time.
Does anybody can see tuning needs? Should the Redo logs be increased or incremented? What about placing redo logs on Solid state disks?
Based on the information you've given so far, I don't think anyone should be giving you concrete recommendations on what to do; only suggestions on where to look or what to tell us.
    Regards
    Jonathan Lewis

  • Data guard real time apply vs archived log apply on physical standby

    Dear DBA's,
Last week I configured DR; now the physical standby database is in archive apply mode.
I want to confirm whether it is better to apply the archived logs or to change it to real-time apply.
Please give me your suggestions.
    Thanks and Regards
    Raja...

One question: are you using ARCH transport to move the redo, or have you configured standby redo logs and LGWR transport (either async or synchronous)? If you are using the archiver to transport the logs, then you cannot use real-time apply.
If you are using the log writer to transport the redo, then real-time apply reduces the recovery time required if you need to fail over, as there should be less redo left to apply to bring the standby up to date. Which mode you use to transport redo will depend on what is acceptable in terms of data loss and the impact on performance.
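If standby redo logs are in place, real-time apply is started on the standby with the following (a sketch; 10g syntax):
SQL> alter database recover managed standby database using current logfile disconnect;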

  • BO Data Services - Is it possible to archive/log a raw input XML?

    Hi all!
    I am fairly new to Business Objects Data Services (BODS).
    Scenario:
I have a BODS job that receives an XML file (from an external client), then breaks it down and transforms the data to my client's needs.
Question:
Is there any way for BODS to archive/log the original XML file received? I tried to fiddle around with the Trace options, but I have not been able to find a way to see the original XML file in the log files.
Reason:
For troubleshooting purposes, I'd like to be able to see the original XML file received in order to investigate any problems like missing data, bad data, etc. Unfortunately we do not have access to the external client's logs.
If there is no way to do this, then I will most likely have BODS dump the data into a temporary table and build a script to re-construct the XML in case I want to resend the request.
    Thanks!
    Anthony

    Hi,
There is a wiki that explains how to do it; check this: Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
I do this a much easier way: I write a .bat file to do the archiving and call that .bat file from DS.
    Arun
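As a sketch of that approach (all paths and file names are hypothetical): a one-line batch file, say archive_xml.bat, copies the inbound file to an archive folder:
copy /Y "D:\bods\inbound\%1" "D:\bods\archive\"
A DS script step can then call it with the built-in exec() function, e.g. exec('cmd', '/C D:\bods\scripts\archive_xml.bat order_123.xml', 8);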
