Exadata redo

Hi, I need advice here.
I created databases on our X3 full rack RAC environment. DB version is 11.2.0.4.
I ran exachk, and it shows this warning:
The database machine supports extremely high redo generation rates, which may lead you to consider increasing the log file sizes. Following initial deployment, the online redo log files should be 4GB each.
My current redo log size is 1GB; our old non-Exadata RAC environment used 500MB redo logs.
This environment holds a lot of data and carries heavy OLTP plus query activity, so it is a mixed workload.
How can I be sure the redo log size is optimal, and where should I place the redo logs? On what kind of disks: flash disks or regular disks?
Thanks in advance.

Hello user569151,
The key question here is: what is your current redo generation rate? How frequently are you switching your 1GB log files at peak?
As for placement, I generally suggest putting them on rotating disk, especially with the advent of smart flash logging.
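If it helps, a quick way to see your switch rate (just a sketch; adjust the window to cover your peak period) is to count log switches per hour from v$log_history:

-- Log switches per hour for each thread over the last 7 days;
-- with 1GB logs, switches per hour roughly equals GB of redo per hour
select thread#,
       to_char(first_time, 'YYYY-MM-DD HH24') as hour,
       count(*) as log_switches
from   v$log_history
where  first_time > sysdate - 7
group  by thread#, to_char(first_time, 'YYYY-MM-DD HH24')
order  by 2, 1;

A common rule of thumb is to size the logs so that switches happen no more often than every 15-20 minutes at peak, which is what the 4GB exachk recommendation is aiming at.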
Marc

Similar Messages

  • Filling of redo log members

    Hello!
Could you please explain: is the filling of redo log members asynchronous or synchronous?
    Thanks and regards,
    Pavel

    Pavel wrote:
    Hello!
    Could you please explain: is the filling of redo log members asynchronous or synchronous?
    Yes, the filling of redo log members can be either asynchronous or synchronous, depending on the situation.
    For example, on Exadata redo can be written to both SSD and hard disk; whichever write completes first satisfies the commit, so one write is effectively synchronous and the other isn't.
    With Data Guard, you can ship redo either way.
    If you look at the various internal mechanisms for private redo strands, you can have several situations.
    You can also have asynchronous commits (http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:60447282988010), as PL/SQL has done since forever (http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1415454871121#20061201507022).
    See also http://www.pythian.com/news/1098/tuning-log-file-sync-event-waits/
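    As a small illustration of the asynchronous commit option mentioned above (a sketch only; the table name is made up, and commit_write is the 10gR2/11g parameter):

    -- Make commits in this session asynchronous and batched:
    -- the session does not wait for LGWR to flush the redo
    alter session set commit_write = 'BATCH,NOWAIT';
    insert into demo_tab values (1);    -- demo_tab is a hypothetical table
    commit;
    -- Or request it for a single commit only:
    commit write batch nowait;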

  • Pin redo logs into smart flash cache (Exadata Flash PCI F20 cache)

    Need to know if we can pin redo logs into the smart flash cache.
    For example, to pin a table we use:
    alter table dhaval storage (cell_flash_cache keep)
    Can we similarly pin redo logs into the flash cache?
    If not, what is the alternative for putting redo logs into flash cache?

    At Oracle OpenWorld the Exadata Smart Flash Log feature was announced. Smart Flash Log requires Exadata Storage Server software 11.2.2.4.0 or later and Database 11.2.0.2 Bundle Patch 11 or later. The feature allows a modest amount of flash to be used as a secondary destination for redo writes: redo is written to both flash and disk, and the call returns to the database as soon as the first of the two writes completes. By doing so it improves user transaction response time and increases overall database throughput for I/O-intensive workloads.
    Regards,
    Greg Rahn
    http://structureddata.org

  • Cell multiblock physical read on exadata system

    A delete is taking forever and the session is waiting on 'cell multiblock physical read' on our Exadata system.
    delete from ept.prc_rules_ref where end_dt < ( select min(cycle_dt) from ( select distinct cycle_dt from ept.prc order by 1 desc ) cd where rownum < 4 ) ;
    The select part, on the other hand, runs in less than a few seconds.
    This is the explain plan of the delete:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Pstart| Pstop |
    | 0 | DELETE STATEMENT | | 1454 | 43620 | 95432 (3)| | |
    | 1 | DELETE | PRC_RULES_REF | | | | | |
    | 2 | TABLE ACCESS STORAGE FULL | PRC_RULES_REF | 1454 | 43620 | 273 (1)| | |
    | 3 | SORT AGGREGATE | | 1 | 9 | | | |
    | 4 | COUNT STOPKEY | | | | | | |
    | 5 | VIEW | | 3 | 27 | 95159 (3)| | |
    | 6 | SORT UNIQUE STOPKEY | | 3 | 24 | 94120 (2)| | |
    | 7 | PARTITION RANGE ALL | | 19M| 152M| 21356 (1)|1048575| 1 |
    | 8 | INDEX STORAGE FAST FULL SCAN| PRC_2_IE | 19M| 152M| 21356 (1)|1048575| 1 |
    When I check the explain plan of the select part, this is what we get:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 9 | 95159 (3)| | |
    | 1 | SORT AGGREGATE | | 1 | 9 | | | |
    | 2 | COUNT STOPKEY | | | | | | |
    | 3 | PARTITION RANGE ALL | | 3 | 27 | 95159 (3)|1048575| 1 |
    | 4 | VIEW | | 3 | 27 | 95159 (3)| | |
    | 5 | SORT UNIQUE STOPKEY | | 3 | 24 | 94120 (2)| | |
    | 6 | INDEX STORAGE FAST FULL SCAN| PRC_2_IE | 19M| 152M| 21356 (1)|1048575| 1 |
    Sql Statement
    delete from ept.prc_rules_ref where end_dt < ( select min(cycle_
    dt) from ( select distinct cycle_dt from ept.prc order by 1 desc
    ) cd where rownum  < 4 )
    Event Wait Information
       SID 564 is waiting on event  : cell multiblock physical read
       P1 Text                      : cellhash#
       P1 Value                     : 398250101
       P2 Text                      : diskhash#
       P2 Value                     : 1099358214
       P3 Text                      : bytes
       P3 Value                     : 729088
    Any pointers on why it is not going for a smart scan? The table is not huge either; it has only 27,000+ records.

    Taking Exadata out of the picture, deletes carry a lot more overhead than selects (an extra level of locking, index maintenance, buffer cache operations, redo/undo generation, etc.). From the delete's explain plan, it is going for a full scan (TABLE ACCESS STORAGE FULL), which is good and expected with Exadata. Note, though, that a smart scan requires a direct path read and shows up as the 'cell smart table scan' wait; the 'cell multiblock physical read' wait you are seeing is a buffered multiblock read, which is why no offload is happening for this statement.
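    If you want to verify whether any offload is happening for that session, a rough check (a sketch; statistic names as of 11.2, and SID 564 is just the session from the wait output above) is:

    -- Bytes eligible for offload vs bytes actually returned by smart scan
    select sn.name, st.value
    from   v$sesstat st
    join   v$statname sn on sn.statistic# = st.statistic#
    where  st.sid = 564
    and    sn.name in ('cell physical IO bytes eligible for predicate offload',
                       'cell physical IO interconnect bytes returned by smart scan');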

  • IBM Guardium with Exadata !

    hi,
    I have a question:
    Can IBM Guardium monitor an Exadata database completely?
    Is anything unsupported by Oracle when an Exadata database is monitored by IBM Guardium?
    Thanks.

    File change monitoring on Exadata would work much the same way as on any other Oracle 11gR2 database running RAC and ASM. My question would be: what type of files are you looking to monitor for changes? In the Exadata context:
    - Oracle datafiles, redo logs, control files, and parameter files are stored in ASM
    - Archivelogs, flashback logs, etc are stored in the FRA, which is typically ASM as well
    - Database software files, audit trail, and message logs are stored on local Linux ext3 filesystems
    - Storage server software and logs are stored on the storage servers. Oracle does not permit third-party monitoring agents, such as Guardium, to be installed on storage servers. This is akin to most SAN vendors, who do not allow third-party monitoring tools to run on their SAN controllers either.
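    If it helps, you can confirm from the database itself where each file class lives (an illustrative query; ASM-resident files will show +DISKGROUP paths):

    select 'datafile' as file_type, name from v$datafile
    union all
    select 'online redo log', member from v$logfile
    union all
    select 'controlfile', name from v$controlfile;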
    Hope this helps!
    Marc

  • Exadata IOPS for write intensive system

    Hi
    An Exadata half rack is advertised at 25,000 disk IOPS.
    Is the advertised 25,000 IOPS figure for writes?
    Does this mean I can do 25,000 writes per second?
    For a high-end write-intensive system, what typical write rates have you folks encountered? So far I have seen fewer than 7,000 IOPS in my whole career, and I am interested to know what the write IOPS is on your systems.
    If the write IOPS requirement is very high, say 22,000 IOPS, should I provision the flash disks as grid disks?

    Hello hrishy,
    Your post brings up an important point: not all inserts are created equal. In a worst-case scenario, a single-row insert could require redo writes, undo writes, and datafile block writes, and everything is multiplied by two or three with ASM redundancy. But if you can do large, parallel direct-path inserts from, say, DBFS-hosted external tables, your throughput can approach the maximum data load rate of 12TB per hour on a full rack.
    Ideally you'll be able to gather metrics based on either an existing system or a performance test environment. You could even do this in a non-Exadata environment (with the exception of hybrid columnar compression testing): just do your data loads, and measure the IOPS volume and write throughput on disk per volume of rows inserted.
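    One rough way to get that baseline (just a sketch; sample these cumulative counters twice and divide the deltas by the elapsed seconds to get average write IOPS) is:

    -- Cumulative write request counts since instance startup
    select name, value
    from   v$sysstat
    where  name in ('physical write total IO requests', 'redo writes');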
    I don't generally recommend using the flash memory for permanent data storage, if only because, even with normal ASM redundancy, you're cutting usable space in half, and the only way to expand flash capacity is to buy more storage servers.
    Marc

  • ExaData and Stand By

    Hi,
    Please advise on the following scenario:
    We have an Exadata X2-2 half rack.
    I want to use Exadata Hybrid Columnar Compression in order to improve performance.
    At the DR site we have a Linux Red Hat x86-64 11gR2 single instance.
    What needs to be done on the non-Exadata instance at the DR site to allow it to apply the archived redo generated for data compressed on the Exadata machine?
    Is it possible?
    Thanks

    Standby databases can apply redo generated for changes to EHCC objects. However, when the standby is opened, those EHCC-compressed segments cannot be queried. To enable queries against those segments, you'll have to migrate them to a non-EHCC format (OLTP compression, standard compression, or no compression). This requires space and time that depend on the size of the object and its compression ratio. The decompression can be done on a non-Exadata platform.
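    As a sketch of that decompression step after opening the standby (the table and index names are made up; the MOVE needs enough free space for the uncompressed copy):

    -- Rebuild an EHCC segment into a format the non-Exadata standby can query
    alter table sales_hist move nocompress;        -- or: move compress for oltp
    alter index sales_hist_pk rebuild;             -- indexes go UNUSABLE after a MOVE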
    For more details, see the whitepaper at http://www.oracle.com/technetwork/database/features/availability/maa-wp-dr-dbm-130065.pdf

  • How do I manually archive 1 redo log at a time?

    The database is configured in archive mode, but automatic archiving is turned off.
    For both Oracle 9.0.1 and 9.2.0 on Windows, when I try to manually archive a single redo log, the database
    archives as many logs as it can, up to the log just before the current log.
    For example:
    SQL> select * from v$log order by sequence#;
    GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
    1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
    2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
    3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
    4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
    5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
    6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
    6 rows selected.
    SQL> alter system archive log next;
    System altered.
    SQL> select * from v$log order by sequence#;
    GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
    1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
    2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
    3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
    4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
    5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
    6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
    See - instead of only one log being archived, four of them were. Oracle behaves the same way if I use the "sequence" option:
    SQL> select * from v$log order by sequence#;
    GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
    1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
    2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
    3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
    4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
    5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
    6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
    6 rows selected.
    SQL> alter system archive log next;
    System altered.
    SQL> select * from v$log order by sequence#;
    GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
    1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
    2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
    3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
    4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
    5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
    6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
    Is there some default system configuration property telling Oracle to archive as many logs as it can?
    Thanks,
    DGR

    Thanks Yoann (and Syed Jaffar Jaffar Hussain too),
    but I don't have a problem finding the group to archive or executing the alter system archive log command.
    My problem is that Oracle doesn't work as I expect it to.
    This comes from the Oracle 9.2 online doc:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_23a.htm#2053642
    "Specify SEQUENCE to manually archive the online redo log file group identified by the log sequence number integer in the specified thread."
    This implies that Oracle will only archive the log group identified by the log sequence number I specify in the alter system archive log sequence statement. However, Oracle is archiving almost all of the log groups (see my first post for an example).
    This appears to be a bug, unless there is some other system parameter that is configured (by default) to allow Oracle to archive as many log groups as possible.
    As to the reason why - it is an application requirement. The Oracle db must be in archive mode, automatic archiving must be disabled and the application must control online redo log archiving.
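    For reference, the documented variants for a single group or sequence (numbers taken from the v$log listing above; note that, as this thread shows, Oracle may still archive earlier eligible logs as well) are:

    alter system archive log group 4;        -- archive online redo log group 4
    alter system archive log sequence 17;    -- archive the log with sequence# 17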
    DGR

  • Select from .. as of - using archived redo logs - 10g

    Hi,
    I was under the impression I could issue a "Select from .. as of" statement back in time if I have the archived redo logs.
    I've been searching for a while and can't find an answer.
    My undo_management=AUTO, database is 10.2.0.1, the retention is the default of 900 seconds as I've never changed it.
    I want to query a table as of 24 hours ago, so I have all the archived redo logs from the last 48 hours in the correct directory.
    When I issue the following query:
    select * from supplier_codes AS OF TIMESTAMP
    TO_TIMESTAMP('2009-08-11 10:01:00', 'YYYY-MM-DD HH24:MI:SS')
    I get a 'snapshot too old' ORA-01555 error. I guess that is because my retention is only 900 seconds, but I thought the database would query the archived redo logs - or have I got that totally wrong?
    My undo tablespace is set to AUTOEXTEND ON and MAXSIZE UNLIMITED so there should be no space issues
    Any help would be greatly appreciated!
    Thanks
    Robert

    Flashback query is served from undo data, not from archived redo logs, so if you want to go back 24 hours the undo for those changes still has to be available.
    See e.g. the Application Developer's Guide - Fundamentals, chapter on Flashback features: http://www.oracle.com/pls/db102/ranked?word=flashback&remark=federated_search
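    If the goal is to flash back 24 hours, a sketch of the relevant undo settings (the tablespace name is the common default and just an example) is:

    -- Ask Oracle to keep at least 24 hours of undo, and guarantee it
    -- (guaranteed retention can cause DML to fail if the undo tablespace fills)
    alter system set undo_retention = 86400;
    alter tablespace undotbs1 retention guarantee;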

  • Problem with rds on exadata

    Hi,
    I have a problem with RDS on Exadata. After upgrading to the UEK kernel and patching the cells to 11.2.3.3.0, I get the following errors:
    Aug 18 16:17:36 exadb01 kernel: RDS/tcp: send to 192.168.10.5 returned -32, disconnecting and reconnecting
    Aug 18 16:17:36 exadb01 kernel: RDS/tcp: send to 192.168.10.5 returned -32, disconnecting and reconnecting
    Aug 18 16:17:36 exadb01 kernel: RDS/IB: connection <192.168.10.9,192.168.10.5,4> dropped
    Aug 18 16:17:36 exadb01 kernel: RDS/IB: connection <192.168.10.9,192.168.10.5,0> dropped
    Aug 18 16:17:36 exadb01 kernel: RDS/IB: connection <192.168.10.9,192.168.10.5,0> dropped
    Aug 18 16:17:36 exadb01 kernel: RDS/IB: connection <192.168.10.9,192.168.10.5,4> dropped
    Aug 18 16:17:36 exadb01 kernel: RDS/IB: connection <192.168.10.9,192.168.10.5,0> dropped
    Aug 18 16:17:36 exadb01 kernel: RDS/IB: connection <192.168.10.9,192.168.10.5,0> dropped
    Aug 18 16:17:36 exadb01 kernel: RDS/IB: connection <192.168.10.9,192.168.10.5,4> dropped
    Aug 18 16:17:36 exadb01 kernel: RDS/IB: connection <192.168.10.9,192.168.10.5,0> dropped
    Aug 18 16:17:36 exadb01 kernel: BUG: scheduling while atomic: kworker/u:0/5/0x10000200
    Aug 18 16:17:36 exadb01 kernel: Modules linked in: rds_rdma rds_tcp rds hidp rfkill lockd sunrpc ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr video sbs sbshc hed acpi_memhotplug acpi_ipmi ipmi_msghandler lp snd_seq_dummy snd_seq_oss snd_seq_midi_event snd_seq snd_seq_device serio_raw snd_pcm_oss snd_mixer_oss snd_intel8x0 snd_ac97_codec ac97_bus snd_pcm snd_timer drm snd parport_pc parport soundcore snd_page_alloc e1000 pata_acpi i2c_piix4 pcspkr i2c_core ata_generic dm_snapshot dm_zero dm_mirror dm_region_hash dm_log dm_mod ata_piix sd_mod crc_t10dif be2iscsi iscsi_boot_sysfs bnx2i cnic uio ipv6 cxgb3i libcxgbi cxgb3 mdio sg sr_mod cdrom iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi mpt2sas scsi_transport_sas raid_class ahci libahci ext3 jbd mbcache [last unloaded: rds]
    Aug 18 16:17:36 exadb01 kernel: Pid: 5, comm: kworker/u:0 Not tainted 2.6.39-400.128.17.el5uek #1
    Aug 18 16:17:36 exadb01 kernel: Call Trace:
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8105dc16>] __schedule_bug+0x66/0x70
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81504a15>] __schedule+0x645/0x6d0
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8148cb58>] ? tcp_write_xmit+0x238/0x2e0
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffffa02afee8>] ? e1000_tx_map+0x2b8/0x450 [e1000]
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8150704e>] ? common_interrupt+0xe/0x13
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8106a1ca>] __cond_resched+0x2a/0x40
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81504b2f>] _cond_resched+0x2f/0x40
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8147e505>] do_tcp_setsockopt+0x215/0x700
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8143b410>] ? dev_hard_start_xmit+0x200/0x530
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81506b3e>] ? _raw_spin_lock+0xe/0x20
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81457524>] ? sch_direct_xmit+0x84/0x1d0
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff812595d9>] ? put_dec+0x59/0x60
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81259918>] ? number+0x338/0x370
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81475496>] ? ip_finish_output+0x146/0x320
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81041009>] ? default_spin_lock_flags+0x9/0x10
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81506b84>] ? _raw_spin_lock_irqsave+0x34/0x50
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8106fdc8>] ? console_unlock+0xd8/0x110
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8147ea4a>] tcp_setsockopt+0x2a/0x40
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff814276f4>] sock_common_setsockopt+0x14/0x20
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffffa01e949e>] rds_tcp_cork+0x4e/0x60 [rds_tcp]
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffffa01e94ce>] rds_tcp_xmit_prepare+0x1e/0x20 [rds_tcp]
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffffa0616ec4>] rds_send_xmit+0x94/0x730 [rds]
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffffa06146b3>] ? rds_message_alloc+0x23/0x90 [rds]
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81041009>] ? default_spin_lock_flags+0x9/0x10
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffffa061762c>] rds_send_hb+0xcc/0x120 [rds]
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81041009>] ? default_spin_lock_flags+0x9/0x10
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffffa0613179>] rds_conn_probe_lanes+0x69/0x80 [rds]
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffffa06132c8>] rds_conn_drop+0x138/0x1f0 [rds]
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81428180>] ? lock_sock_nested+0x60/0x60
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffffa01e87ac>] rds_tcp_state_change+0x7c/0x90 [rds_tcp]
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8147c131>] tcp_done+0x51/0x80
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff8148404e>] tcp_reset+0x3e/0x70
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff81484258>] tcp_validate_incoming+0x1d8/0x220
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff814906ac>] ? tcp_v4_inbound_md5_hash+0x6c/0x1b0
    Aug 18 16:17:36 exadb01 kernel:  [<ffffffff814890a7>] tcp_rcv_state_process+0x47/0x570
    Can you help me?
    Thank you,
    Regards,

    Hello 976232,
    This type of issue is definitely Oracle Support territory, so I'd check in with them. A few clarifying questions you can expect to be asked:
    - What is the impact of this error message on your system?
    - How often does it happen?
    - Can you trigger it on demand?
    Marc

  • Airport Utility no longer sees Airport Extreme and I think I need to redo my entire network - can someone help?

    I just purchased this new Airport Extreme after my older Airport went down.  This Airport Extreme at times is super fast, but it gives me more headaches than the older version because it seems to drop its signal too often.  Last week I was able to see my Airport Extreme in my Airport Utility and was able to configure it - now Airport Utility can't find any Airport Extreme, although, it shows that I have an internet connection.  I was hoping that someone who knew how to do a nice easy network set-up could help me start from scratch so this is set up right.  As of now, our home uses the wi-fi for 2 Wii consoles, a wi-fi printer, cell phones, and an internet connection for my MacBook Pro - not too many connections, but still my connection drops all the time, all day long for no particular reason whatsoever.  I want to be able to have one good strong connection for my computer and the rest of the connections could be secondary.  I would like if the secondary items used a password, but not the same password as the main connection.  It seems to me like I'm not asking a lot, but I have no idea how to set this up because now I can't even find my Airport Extreme to configure it.
    So, if anybody has some advice on the best way to set this up and how I would be able to start that process without being able to even detect my Airport Extreme, I would sure appreciate any effort to help me out on this.

    Is your profile up to date?? Lion is not so commonly used now.. but was a lot more reliable than Yosemite.
    What modem do you have? Actual make and model is a big help.
    What is the main router? Is the modem supplied by ISP? Is it a gateway device?
    If you use the airport as bridge or router, I also need to know how it is connected into the system. Mostly people have it via ethernet to the main modem or router.. please confirm.
    One very important point.. the Wii need WEP security at least for older models.. You must NOT change security on the Extreme.. It must be set to WPA2 Personal (or nothing if you live at least 2km from other humans).
    So, if anybody has some advice on the best way to set this up and how I would be able to start that process without being able to even detect my Airport Extreme
    reset to factory and you will get control back.
    The best way to test is full factory reset.
    Factory reset universal
    Power off the AE.. ie pull the power cord or power off at the wall.. wait 10sec.. hold in the reset button.. be gentle.. power on again still holding in reset.. and keep holding it in for another 10sec. You may need some help as it is hard to both hold in reset and apply power. It will show success by rapidly blinking the front led. Release the reset.. and wait a couple of min for the AE to reset and come back with factory settings. If the front LED doesn’t blink rapidly you missed it and simply try again. The reset is fairly fragile in these.. press it so you feel it just click and no more.. I have seen people bend the lever or even break it. I use a toothpick as tool.
    Then redo the setup from the computer or iOS device with latest utility.
    1. Use very short names.. NOT APPLE RECOMMENDED names. No spaces and pure alphanumerics.
    eg AEgen6 and AEwifi for basestation and wireless respectively.
    Even better if the issue is more wireless use AE24ghz and AE5ghz for each band respectively.. with fixed channels as this also seems to help stop the nonsense.
    2. Use all passwords that also comply but can be a bit longer. ie 8-20 characters mixed case and numbers.. no non-alphanumerics.
    3. Ensure the AE always takes the same IP address.. this is not a problem if you are using AE as main router but is a problem for when it is used in bridge.. you have to use dhcp reservation from main router or use static IP on the AE setup.
    4. Check your share name on the computer is not changing.. make sure it also complies with the above.. short no spaces and pure alphanumeric..
    5. Make sure IPv6 is set to link-local only in the computer. For example wireless open the network preferences, wireless and advanced / TCP/IP.. and fix the IPv6. to link-local only.
    6. If the AE is bridged and still having dropouts use a LAN port to connect to the main router instead of WAN.. we are finding the AE is problematic on the WAN port.

  • Error: ORA-16778: redo transport error for one or more databases

    Hi all
    I have two database servers (primary database and physical standby) in a test environment (before going to production).
    Before the Data Guard broker configuration, the DG setup was running fine: redo was being applied and archived on the physical standby.
    But while enabling the configuration I got "Warning: ORA-16607: one or more databases have failed". listener.ora and tnsnames.ora are updated with global_name_DGMGRL.
    Please help me resolve this issue. Thanks in advance.
    [oracle@PRIM ~]$ dgmgrl
    DGMGRL for Linux: Version 10.2.0.1.0 - Production
    Copyright (c) 2000, 2005, Oracle. All rights reserved.
    Welcome to DGMGRL, type "help" for information.
    DGMGRL> connect sys
    Password:
    Connected.
    DGMGRL> show configuration
    Configuration
    Name: test
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    prim - Primary database
    stan - Physical standby database
    Current status for "test":
    Warning: ORA-16607: one or more databases have failed
    DGMGRL> show database
    show database
    ^
    Syntax error before or at "end-of-line"
    DGMGRL> remove configuration
    Warning: ORA-16620: one or more databases could not be contacted for a delete operation
    Removed configuration
    DGMGRL> exit
    [oracle@PRIM ~]$ connect sys/sys@prim as sysdba
    bash: connect: command not found
    [oracle@PRIM ~]$ lsnrctl stop
    LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-OCT-2006 19:52:30
    Copyright (c) 1991, 2005, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    The command completed successfully
    [oracle@PRIM ~]$ lsnrctl start
    LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-OCT-2006 19:52:48
    Copyright (c) 1991, 2005, Oracle. All rights reserved.
    Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
    TNSLSNR for Linux: Version 10.2.0.1.0 - Production
    System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
    Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=PRIM)(PORT=1521)))
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
    Start Date 08-OCT-2006 19:52:48
    Uptime 0 days 0 hr. 0 min. 0 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
    Listener Log File /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=PRIM)(PORT=1521)))
    Services Summary...
    Service "PLSExtProc" has 1 instance(s).
    Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "PRIM_DGMGRLL" has 1 instance(s).
    Instance "PRIM", status UNKNOWN, has 1 handler(s) for this service...
    The command completed successfully
    [oracle@PRIM ~]$ lsnrctl stop
    LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-OCT-2006 19:54:46
    Copyright (c) 1991, 2005, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    The command completed successfully
    [oracle@PRIM ~]$ lsnrctl start
    LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-OCT-2006 19:54:59
    Copyright (c) 1991, 2005, Oracle. All rights reserved.
    Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
    TNSLSNR for Linux: Version 10.2.0.1.0 - Production
    System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
    Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=PRIM)(PORT=1521)))
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    [oracle@PRIM ~]$ dgmgrl
    DGMGRL for Linux: Version 10.2.0.1.0 - Production
    Copyright (c) 2000, 2005, Oracle. All rights reserved.
    Welcome to DGMGRL, type "help" for information.
    DGMGRL> connect /
    Connected.
    DGMGRL> create configuration test as
    primary database is PRIM
    connect identifier is PRIM
    ;Configuration "test" created with primary database "prim"
    DGMGRL> add database STAN as
    connect identifier is STAN
    maintained as physical;Database "stan" added
    DGMGRL> show configuration
    Configuration
    Name: test
    Enabled: NO
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    prim - Primary database
    stan - Physical standby database
    Current status for "test":
    DISABLED
    DGMGRL> enable configuration
    Enabled.
    DGMGRL> show configuration
    Configuration
    Name: test
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    prim - Primary database
    stan - Physical standby database
    Current status for "test":
    Warning: ORA-16607: one or more databases have failed
    DGMGRL> show database verbose prim
    Database
    Name: prim
    Role: PRIMARY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    PRIM
    Properties:
    InitialConnectIdentifier = 'prim'
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '1'
    ReopenSecs = '300'
    NetTimeout = '180'
    LogShipping = 'ON'
    PreferredApplyInstance = ''
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '30'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = '/u01/app/oracle/oradata/STAN, /u01/app/oracle/oradata/PRIM'
    LogFileNameConvert = '/u01/app/oracle/oradata/STAN, /u01/app/oracle/oradata/PRIM'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName = 'PRIM'
    SidName = 'PRIM'
    LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=PRIM)(PORT=1521))'
    StandbyArchiveLocation = '/u01/app/oracle/flash_recovery_area/PRIM/archivelog/'
    AlternateLocation = ''
    LogArchiveTrace = '0'
    LogArchiveFormat = '%t_%s_%r.arc'
    LatestLog = '(monitor)'
    TopWaitEvents = '(monitor)'
    Current status for "prim":
    Error: ORA-16778: redo transport error for one or more databases
    DGMGRL> show database verbose stan
    Database
    Name: stan
    Role: PHYSICAL STANDBY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    STAN
    Properties:
    InitialConnectIdentifier = 'stan'
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '1'
    ReopenSecs = '300'
    NetTimeout = '180'
    LogShipping = 'ON'
    PreferredApplyInstance = ''
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '30'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = '/u01/app/oracle/oradata/PRIM, /u01/app/oracle/oradata/STAN'
    LogFileNameConvert = '/u01/app/oracle/oradata/PRIM, /u01/app/oracle/oradata/STAN'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName = 'STAND'
    SidName = 'STAN'
    LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=STAND)(PORT=1521))'
    StandbyArchiveLocation = '/u01/app/oracle/flash_recovery_area/STAN/archivelog/'
    AlternateLocation = ''
    LogArchiveTrace = '0'
    LogArchiveFormat = '%t_%s_%r.arc'
    LatestLog = '(monitor)'
    TopWaitEvents = '(monitor)'
    Current status for "stan":
    Error: ORA-12545: Connect failed because target host or object does not exist
    DGMGRL>

    This:
    Current status for "stan":
    Error: ORA-12545: Connect failed because target host or object does not exist
    says that your network setup is not correct. You need to resolve that first.
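    To see what the primary thinks of its redo transport to the standby, a quick check (assuming the standby is archive destination 2, which is typical) is:

    -- On the primary: status and last error for the standby destination
    select dest_id, status, error
    from   v$archive_dest
    where  dest_id = 2;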
    As for Broker setup steps, how about the documentation or our Data Guard 11g Handbook?
    It's three DGMGRL commands, so I am not sure what 'steps' you need.
    Larry

  • How to disable write to redo log file in oracle7.3.4

    In Oracle 8, you can mark a table NOLOGGING so that certain operations on it are not logged in the redo log, e.g.: alter table tablename nologging;
    How do I do this in Oracle 7.3.4?
    thanks.

    user652965 wrote:
    Thanks very much for your help, guys. I appreciate it. Unfortunately none of these commands worked for me. I kept getting an error on clearing logs saying that the redo log is needed to perform recovery, so it can't be cleared. So I ended up restoring from an earlier snapshot of my DB volume. The database is now open.
    Thanks again for your input.
    And now, as a follow-up: at a minimum you should make sure that all redo log groups have at least three members. Then, if you lose a single redo log file, all you have to do is shut down the DB and copy one of the good members (of the same group as the lost member) over the lost member.
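    A sketch of multiplexing an existing group that way (the path is only an example):

    alter database add logfile member
      '/u02/oradata/orcl/redo01b.log' to group 1;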
    And as an additional follow-up, if you value your data you will run in archivelog mode and take regular backups of the database and archivelogs. If you fail to do this you are saying that your data is not worth saving.

  • Is There a Way to Run a Redo log for a Single Tablespace?

    I'm still fairly new to Oracle. I've been reading up on the architecture and I am getting the hang of it. Actually, I have 2 questions.
    1) My first question is..."Is there a way to run the redo log file...but to specify something so that it only applies to a single tablespace and its related files?"
    So, in a situation where, for some reason, only a single dbf file has become corrupted, I only have to worry about replaying the log for those transactions that affect the tablespace associated with that file.
    2) Also, I would like to know if there is a query I can run from iSQLPlus that would allow me to view the datafiles that are associated with a tablespace.
    Thanks

    1) My first question is..."Is there a way to run the
    redo log file...but to specify something so that it
    only applies to a single tablespace and it's related
    files?"
    No, you can't specify a redo log file to record the transaction entries for a particular tablespace only.
    In case a file gets corrupted, you need to restore it and apply all the archive logs since the last backup, plus the online redo logs, to bring the DB back to a consistent state.
    2) Also, I would like to know if there is a query I can run from iSQLPlus that would allow me to view the datafiles that are associated with a tablespace.
    select file_name, tablespace_name from dba_data_files
    The above will give you the datafiles that each tablespace is made of.
    In your case you have created the tablespace with one datafile.
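    For a single tablespace, a sketch of the same query (the tablespace name is an example) would be:

    select file_name, bytes/1024/1024 as size_mb
    from   dba_data_files
    where  tablespace_name = 'USERS'
    order  by file_name;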
    Message was edited by:
    Maran.E

  • Can one instance have multiple redo threads ?

    I was reading the Oracle Data Guard 10g guide and it says:
    Determine the appropriate number of standby redo log file groups.
    Minimally, the configuration should have one more standby redo log file group than the number of online redo log file groups on the primary database. However, the recommended number of standby redo log file groups is dependent on the number of threads on the primary database. Use the following equation to determine an appropriate number of standby redo log file groups:
    (maximum number of logfiles for each thread + 1) * maximum number of threads
    Using this equation reduces the likelihood that the primary instance's log writer (LGWR) process will be blocked because a standby redo log file cannot be allocated on the standby database. For example, if the primary database has 2 log files for each thread and 2 threads, then 6 standby redo log file groups are needed on the standby database
    while Oracle's definition of the redo log states the following:
    Redo Threads
    When speaking in the context of multiple database instances, the redo log for each database instance is also referred to as a redo thread. In typical configurations, only one database instance accesses an Oracle Database, so only one thread is present. In an Oracle Real Application Clusters environment, however, two or more instances concurrently access a single database and each instance has its own thread of redo.
    This is confusing. In a typical environment where only one instance accesses a database, can we have more than one redo thread?

    You can create multiple redo threads, but they are of no use in a non-RAC (non-parallel-server) configuration.
    The following is the generic formula; for a single instance (non-RAC) use maximum number of threads = 1:
    (maximum number of logfiles for each thread + 1) * maximum number of threads
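    Applied to the documentation example above (2 threads, 2 online log groups per thread), that formula gives (2 + 1) * 2 = 6 standby redo log groups. A sketch of creating them on the standby (group numbers and size are examples; size them the same as the online logs) is:

    alter database add standby logfile thread 1 group 11 size 100m;
    alter database add standby logfile thread 1 group 12 size 100m;
    alter database add standby logfile thread 1 group 13 size 100m;
    -- repeat for thread 2 (groups 14, 15, 16)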
    Message was edited by:
    Reega
