A Data Guard Question
Dear experts,
This time I have a question regarding Data Guard.
Database: Oracle 10g Release 2 (10.2.0.3)
OS: IBM - AIX 5.3 - ML-5
Data Guard: Physical Standby
We have multiple Data Guard configurations in place, and all of them are configured in MAXIMUM PERFORMANCE mode.
Currently, we have a separate mount point for archive logs (say /dbarch) on both primary and standby servers.
Once log is archived on primary, it is shipped to standby server and applied.
I think we are wasting space by allocating /dbarch on the standby server; instead, we could share the primary's /dbarch with the standby using NFS.
I remember reading a document about this. I tried searching the Oracle documentation, Google, and Metalink for it, but failed :((
Any help in this regard would be much appreciated.
Thanks in advance.
Regards
From a DR perspective, this sounds like a recipe for losing data.
If your primary site has a disaster and there are logs that have not been applied to the standby, you will never be able to apply them, as they will have been lost in the crash.
The point of having the standby is to eliminate a single point of failure - and this mechanism reintroduces one!
jason.
http://jarneil.wordpress.com
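As a footnote to jason's point, you can see exactly which redo would be stranded by querying the standby. This is only a sketch; the view and columns are standard, but adapt it to your own environment.

```
-- On the standby: archived logs received but not yet applied.
-- With a shared NFS /dbarch, any log that never shipped before
-- the disaster is gone for good.
SQL> SELECT sequence#, first_time, applied
  2  FROM v$archived_log
  3  ORDER BY sequence#;
```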
Similar Messages
-
Hello all,
I am a newbie to data guard.
1) Unless using the real time apply feature, standby redo logs must be archived before the data can be applied to the standby database. Am I correct in my understanding?
2) Can we keep standby database in higher version(11i) and primary database in 10g?
3) Do I need to leave the LOG_ARCHIVE_DEST_2 parameter blank in the standby database init parameters?
4) Can we face any problem if we have different LOG_ARCHIVE_FORMAT on the primary and standby site?
Waiting for your valuable reply.
With regards
994269 wrote:
Hello all,
I am a newbie to data guard.
Welcome to Data Guard!
1) Unless using the real time apply feature, standby redo logs must be archived before the data can be applied to the standby database. Am I correct in my understanding?
I don't understand your question.
2) Can we keep standby database in higher version(11i) and primary database in 10g?
No. Both versions should be the same.
3) Do I need to leave the LOG_ARCHIVE_DEST_2 parameter blank in the standby database init parameters?
Yes, unless you need to send archived logs from the standby database to another database. If you have a cascaded setup, then you would need to set the parameter as required.
4) Can we face any problem if we have different LOG_ARCHIVE_FORMAT on the primary and standby site?
Yes. Keep the default log format, or at least make it the same on both sites.
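To illustrate the answers to 3) and 4), a typical primary-side configuration might look like the sketch below. The service name "stby" and the format string are examples only, not taken from this thread.

```
-- On the PRIMARY: ship redo to the standby service
SQL> ALTER SYSTEM SET log_archive_dest_2 =
  2  'SERVICE=stby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';

-- Keep LOG_ARCHIVE_FORMAT identical on both sites (static parameter)
SQL> ALTER SYSTEM SET log_archive_format = '%t_%s_%r.arc' SCOPE=SPFILE;
```

On the standby itself, LOG_ARCHIVE_DEST_2 can stay unset unless that standby must cascade redo onward.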
Waiting for your valuable reply.
With regards -
Hi All,
I'm wondering whether Oracle Active Data Guard can enable "multiple" physical standby databases to be opened for read-only access while Redo Apply is active.
From the documents I have read, it seems that only "one" physical standby database can be enabled.
Say I have one production database and two or more physical standby DBs (RAC or stand-alone) in different countries:
can both physical standbys have Active Data Guard enabled for read-only access?
Any idea? Thanks in advance.
Best Regards,
hau
Dear klnghau,
If your project is on hold now, then you should have time to read some more!
http://www.oracle.com/us/products/database/options/active-data-guard/index.htm
Please also read the following thread;
More than One Physical Standby Database
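For what it's worth, the per-standby steps are the same however many standbys you have; each physical standby is opened read-only with Redo Apply running roughly as follows (a sketch, assuming 11g and an Active Data Guard licence for every standby opened this way):

```
-- On EACH physical standby you want readable:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
-- Restarting apply while open read-only is what makes it "active"
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  2  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```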
Regards.
Ogan -
I am thinking about creating a logical 10gR2 standby database from my OLTP 10gR2 database to be used as a reporting database. In the OLTP database, there are triggers that perform auditing on certain tables. Should I disable these auditing triggers in the standby database so the audit records are not duplicated? I'm assuming that the audit records created in the OLTP database will be part of the redo applied on the standby database.
Any thoughts?
thanks.
Hi
Since triggers are based on events, and events occur at the primary, when triggers are fired at the primary the logical standby will automatically reflect the changes (logically). For example, say you have created a trigger that fires each time a row is updated in a certain table. Since you will not be updating ANY row in the standby (you simply CANNOT), there will be only one audit entry in both databases.
I have never experimented with that but I believe this is how it works.
Maybe Justin/Jaffar or other gurus will be able to shed some light on it.
Rgds
Adnan -
Data Guard Logical Standby DB Questions
Hi,
I am creating an Oracle 9i logical standby database on AIX 5, on the same server (for
testing only).
In the Data Guard manual (PDF, page 86) it is said that:
Step 4 Create a backup copy of the control file for the standby database. On the
primary database, create a backup copy of the control file for the standby database:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO
2> '/disk1/oracle/oradata/payroll/standby/payroll3.ctl';
My Questions regarding this are
1. Is it missing the "BACKUP STANDBY CONTROLFILE" keyword? If not, what is the use of
this control file? Because, as per this manual, section 4.2.2, steps 1 and 2, I
have stopped the primary database and copied the datafiles, logfiles, and controlfiles
as specified, all in a consistent state.
2. On the primary database I am mirroring 2 controlfiles at two different locations.
On the standby database I am going to keep the same policy. Then, on the standby database, do I have to copy this controlfile to two locations and rename the datafiles, or do I have
to use the controlfiles which I backed up along with the datafiles?
3. Suppose my primary database is "infod" and my standby database is "standby".
Then what should be the value of the LOG_ARCHIVE_FORMAT parameter on the standby database? On
the primary DB I have defined LOG_ARCHIVE_FORMAT=infod_log%s_%t.arc. Do I have to keep it the same or change it? If I change the value on the standby DB, do I
need other parameters to be set?
regards & thanks
pjp
Q/A 1) It is correct; you don't need the STANDBY keyword. I'm not sure about the reason why, but we have created a logical standby running in production using the Oracle doc.
Q/A 2) You can specify any location and name for your controlfiles, as they are instance-specific and do not depend on the primary database.
Q/A 3) You can set any format for your archived logs on the standby side. It does not affect the primary DB or the DG configuration.
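As an illustration of Q/A 3, the archive log format is set locally in each instance's init.ora; the standby value below is an example, not a requirement:

```
# primary init.ora
LOG_ARCHIVE_FORMAT = infod_log%s_%t.arc

# standby init.ora -- may differ without affecting the configuration
LOG_ARCHIVE_FORMAT = standby_log%s_%t.arc
```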
Regards,
http://askyogesh.com -
Question related to Physical Data Guard (Oracle 10gR2)
Hi,
I have a question regarding Physical Data Guard in a RAC environment (Oracle 10g Release 2).
Say we have a 4-node RAC in production, and DG is also configured for RAC, but the number of nodes differs between production and DG. Which node in DG will apply the logs from production:
1) If there is 2 node RAC setup in DG?
2) If there is 4 node RAC setup in DG?
3) If there is 5/6/7/... node RAC setup in DG?
Probably, this is a very simple and basic question but your expertise would be of great help.
Regards
Hi - Only one instance performs the recovery, but more than one standby node can be an archive log destination for the primary instances.
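In other words (a sketch): all standby instances can receive redo, but managed recovery is started on exactly one of them:

```
-- Run on ONE standby instance only; the other instances stay
-- mounted and can still receive archived logs via RFS
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```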
-
Question on db_unique_name in init.ora for Data Guard
I need to set up only one physical standby on a different box (at a different location) for the primary db in production.
OS: Sun Sparc Solaris 10
Oracle: 10.2.0.3
Can I use the same db_unique_name in init.ora for both primary and standby DBs?
What are the minimal parameters required by Data Guard I have to specify in the init.ora in my case?
Could anyone please post an example of init.ora for both primary and standby DBs?
Thanks very much in advance.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i63561
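Note that DB_UNIQUE_NAME must differ between the two databases (DB_NAME stays the same). A minimal sketch, with "prod"/"prodsb" as placeholder names rather than values from your environment:

```
# primary init.ora
db_name                 = prod
db_unique_name          = prod
log_archive_config      = 'DG_CONFIG=(prod,prodsb)'
log_archive_dest_2      = 'SERVICE=prodsb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prodsb'
fal_server              = prodsb
standby_file_management = AUTO

# standby init.ora
db_name                 = prod
db_unique_name          = prodsb
log_archive_config      = 'DG_CONFIG=(prod,prodsb)'
fal_server              = prod
standby_file_management = AUTO
```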
-
Data Guard Administration Question.... (10gR2)
After considerable trial and error, I have a running logical standby between 2 10gR2 databases.
1) During the install of the primary database, I didn't comply fully with the OFA standard (I was slightly off on the placement of my database devices). During the Data Guard configuration, the option of "converting to OFA" was selected (per a Metalink article that I read regarding a problem with choosing to keep the filenames of the primary the same). Of course, now I have an issue creating a tablespace on the primary when keeping the non-OFA directory structure. When it attempts to do the same on the standby, I get an error that it cannot create the datafile. Makes sense, but what should I do in the future? Create the non-OFA directory structure on the standby (assuming it would then create the file)? Isn't there a filename conversion parameter that handles this as well?
2) I got myself into a pinch this afternoon, partly due to #1. I am importing a file from another instance into the primary to begin testing reports on the secondary. Prior to the import I created a tablespace (which is what got me to problem #1), proceeded to create the owner of the schema that's going to be imported, then performed the import. Now the apply process is erroring and going offline every few seconds as it works its way through the "cannot create table" errors that the import is running into on the secondary. How do I handle a large batch of transactions like this? Ultimately I would like to get back to square one: no user and no imported data in the primary, and the apply process online.
Thanks:
Chris
So what I finally did was turn DG offline, create the tablespace on the secondary, then the user, and then turn apply back online. The import proceeded fairly smoothly. Problem resolved.
However, I still need some insight into exactly how the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters work. I have LOG_FILE_NAME_CONVERT set up (correctly, I think) but I get a warning message in DG that says the configuration is inconsistent with the actual setup.
Here's the way things are setup:
I have 3 redo logs:
primary (non-ofa):
/opt/oracle10/product/oradata/ICCORE10G2/redo01.log
... redo02.log
... redo03.log
secondary (ofa):
/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/redo01.log
... redo02.log
... redo03.log
LOG_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')
Is the above parameter set correctly?
DB_FILE_NAME_CONVERT is unset as of now, but the directory structure above follows the same pattern. I assume the parameter needs to be set just like LOG_FILE_NAME_CONVERT above.
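If it helps: both convert parameters take the same (primary-path, standby-path) pair and are static, so a sketch based on the paths above would be:

```
-- Static parameters: set in the spfile, then restart the standby
SQL> ALTER SYSTEM SET db_file_name_convert =
  2  '/opt/oracle10/product/oradata/ICCORE10G2/',
  3  '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET log_file_name_convert =
  2  '/opt/oracle10/product/oradata/ICCORE10G2/',
  3  '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/' SCOPE=SPFILE;
```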
Thanks -
10g Data Guard Install Questions... Solaris 9
Firstly, I've done several Failsafe installs on Wintel platforms, but I'm having a tough time getting started installing Data Guard on Solaris. According to the manual:
Oracle® Data Guard Broker
10g Release 1 (10.1)
Part Number B10822-01
"The Oracle Data Guard graphical user interface (GUI), which you can use to manage broker configurations, is installed with the Oracle Enterprise Manager software."
I don't see any link or other access to Data Guard via the 10g Enterprise Manager. Is there something that I missed during install that will allow me to access the Data Guard GUI?
I'm stuck
http://download-east.oracle.com/docs/cd/B14117_01/server.101/b10822/concepts.htm#sthref14
rajeysh wrote:
Refer to the links below; hope this will help you.
http://blogs.oracle.com/AlejandroVargas/gems/DataGuardBrokerandobserverst.pdf
http://oracleinstance.blogspot.com/2010/01/configuration-of-10g-data-guard-broker.html
http://gjilevski.wordpress.com/2010/03/06/configuring-10g-data-guard-broker-and-observer-for-failover-and-switchover/
Good luck.
SQL> show parameter broker
NAME TYPE VALUE
dg_broker_config_file1 string /u03/KMC/db/tech_st/10.2.0/dbs
/dr1KMC_PROD.dat
dg_broker_config_file2 string /u03/KMC/db/tech_st/10.2.0/dbs
/dr2KMC_PROD.dat
dg_broker_start boolean FALSE
SQL>
So I need only:
ALTER SYSTEM SET DG_BROKER_START=true SCOPE=BOTH;
to be able to act in dgmgrl.
please confirm me ...... -
In Data Guard, when I check the view log (Object > View Log) it returns
"Data Guard Remote Process Startup Fail"?
Regards
Hi there,
What's your question? You also need to provide the version and the type of the standby DB.
Data Guard adding new data files to a tablespace.
In the past, if you were manually updating an Oracle physical standby database, there were issues with adding a data file to a tablespace. It was suggested that the data file should be created small and then the small physical file copied to the standby database. Once the small data file was in place, it would be resized on the primary database, and then the replication would change the size on the standby.
My question is, does Data Guard take care of this automatically for a physical standby? I can't find any specific reference on how it handles a new datafile.
Never mind, I found the answer.
STANDBY_FILE_MANAGEMENT=auto
Set on the standby database will create the datafiles. -
Data Guard Failover after primary site network failure or disconnect.
Hello Experts:
I'll try to be clear and specific with my issue:
Environment:
Two nodes with NO shared storage (I don't have an Observer running).
Veritas Cluster Server (VCS) with the Data Guard agent. (I don't use the broker; the Data Guard agent "takes care" of the switchover and failover.)
Two single instance databases, one per node. NO RAC.
What I'm being able to perform with no issues:
Manual switch(over) of the primary database by running VCS command "hagrp -switch oraDG_group -to standby_node"
Automatic fail(over) when primary node is rebooted with "reboot" or "init"
Automatic fail(over) when primary node is shut down with "shutdown".
What I'm NOT being able to perform:
If I manually unplug the network cables from the primary site (the whole network, not only the link between the primary and standby nodes, so it is like the server being unplugged from its power source).
Same situation happens if I manually disconnect the server from the power.
This is the alert logs I have:
This is the portion of the alert log at Standby site when Real Time Replication is working fine:
Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
At this moment, node1 (the primary) is completely disconnected from the network. SEE at the end how the database (the standby, which should be converted to PRIMARY) does not get all the archived logs from the primary due to the abnormal network disconnect:
Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
Media Recovery Complete (primary_db)
Terminal Recovery: successful completion
Forcing ARSCN to IRSCN for TR 0:15922544
Mon Dec 23 17:13:22 2013
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
Attempt to set limbo arscn 0:15922544 irscn 0:15922544
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
Resetting standby activation ID 2071848820 (0x7b7de774)
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Mon Dec 23 17:13:33 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Terminal Recovery: applying standby redo logs.
Terminal Recovery: thread 1 seq# 7 redo required
Terminal Recovery:
Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
Attempt to do a Terminal Recovery (primary_db)
Media Recovery Start: Managed Standby Recovery (primary_db)
started logmerger process
Mon Dec 23 17:13:33 2013
Managed Standby Recovery not using Real Time Apply
Media Recovery failed with error 16157
Recovery Slave PR00 previously exited with exception 283
ORA-283 signalled during: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH...
Mon Dec 23 17:13:34 2013
Shutting down instance (immediate)
Shutting down instance: further logons disabled
Stopping background process MMNL
Stopping background process MMON
License high water mark = 38
All dispatchers and shared servers shutdown
ALTER DATABASE CLOSE NORMAL
ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
ALTER DATABASE DISMOUNT
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:38 2013
Mon Dec 23 17:13:38 2013
Mon Dec 23 17:13:38 2013
ARCH shutting down
ARCH shutting down
ARCH shutting down
ARC0: Relinquishing active heartbeat ARCH role
ARC2: Archival stopped
ARC0: Archival stopped
ARC1: Archival stopped
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:40 2013
Stopping background process VKTM
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:43 2013
Instance shutdown complete
Mon Dec 23 17:13:44 2013
Adjusting the default value of parameter parallel_max_servers
from 1280 to 470 due to the value of parameter processes (500)
Starting ORACLE instance (normal)
************************ Large Pages Information *******************
Per process system memlock (soft) limit = 64 KB
Total Shared Global Region in Large Pages = 0 KB (0%)
Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB
RECOMMENDATION:
Total System Global Area size is 3762 MB. For optimal performance,
prior to the next instance restart:
1. Increase the number of unused large pages by
at least 1881 (page size 2048 KB, total size 3762 MB) system wide to
get 100% of the System Global Area allocated with large pages
2. Large pages are automatically locked into physical memory.
Increase the per process memlock (soft) limit to at least 3770 MB to lock
100% System Global Area's large pages into physical memory
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Initial number of CPU is 32
Number of processor cores in the system is 16
Number of processor sockets in the system is 2
CELL communication is configured to use 0 interface(s):
CELL IP affinity details:
NUMA status: NUMA system w/ 2 process groups
cellaffinity.ora status: cannot find affinity map at '/etc/oracle/cell/network-config/cellaffinity.ora' (see trace file for details)
CELL communication will use 1 IP group(s):
Grp 0:
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
IMODE=BR
ILAT =88
LICENSE_MAX_USERS = 0
SYS auditing is disabled
NUMA system with 2 nodes detected
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
ORACLE_HOME = /u01/oracle/product/11.2.0.4
System name: Linux
Node name: node2.localdomain
Release: 2.6.32-131.0.15.el6.x86_64
Version: #1 SMP Tue May 10 15:42:40 EDT 2011
Machine: x86_64
Using parameter settings in server-side spfile /u01/oracle/product/11.2.0.4/dbs/spfileprimary_db.ora
System parameters with non-default values:
processes = 500
sga_target = 3760M
control_files = "/u02/oracle/orafiles/primary_db/control01.ctl"
control_files = "/u01/oracle/fast_recovery_area/primary_db/control02.ctl"
db_file_name_convert = "standby_db"
db_file_name_convert = "primary_db"
log_file_name_convert = "standby_db"
log_file_name_convert = "primary_db"
control_file_record_keep_time= 40
db_block_size = 8192
compatible = "11.2.0.4.0"
log_archive_dest_1 = "location=/u02/oracle/archivelogs/primary_db"
log_archive_dest_2 = "SERVICE=primary_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primary_db"
log_archive_dest_state_2 = "ENABLE"
log_archive_min_succeed_dest= 1
fal_server = "primary_db"
log_archive_trace = 0
log_archive_config = "DG_CONFIG=(primary_db,standby_db)"
log_archive_format = "%t_%s_%r.dbf"
log_archive_max_processes= 3
db_recovery_file_dest = "/u02/oracle/fast_recovery_area"
db_recovery_file_dest_size= 30G
standby_file_management = "AUTO"
db_flashback_retention_target= 1440
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=primary_dbXDB)"
job_queue_processes = 0
audit_file_dest = "/u01/oracle/admin/primary_db/adump"
audit_trail = "DB"
db_name = "primary_db"
db_unique_name = "standby_db"
open_cursors = 300
pga_aggregate_target = 1250M
dg_broker_start = FALSE
diagnostic_dest = "/u01/oracle"
Mon Dec 23 17:13:45 2013
PMON started with pid=2, OS id=29108
Mon Dec 23 17:13:45 2013
PSP0 started with pid=3, OS id=29110
Mon Dec 23 17:13:46 2013
VKTM started with pid=4, OS id=29125 at elevated priority
VKTM running at (1)millisec precision with DBRM quantum (100)ms
Mon Dec 23 17:13:46 2013
GEN0 started with pid=5, OS id=29129
Mon Dec 23 17:13:46 2013
DIAG started with pid=6, OS id=29131
Mon Dec 23 17:13:46 2013
DBRM started with pid=7, OS id=29133
Mon Dec 23 17:13:46 2013
DIA0 started with pid=8, OS id=29135
Mon Dec 23 17:13:46 2013
MMAN started with pid=9, OS id=29137
Mon Dec 23 17:13:46 2013
DBW0 started with pid=10, OS id=29139
Mon Dec 23 17:13:46 2013
DBW1 started with pid=11, OS id=29141
Mon Dec 23 17:13:46 2013
DBW2 started with pid=12, OS id=29143
Mon Dec 23 17:13:46 2013
DBW3 started with pid=13, OS id=29145
Mon Dec 23 17:13:46 2013
LGWR started with pid=14, OS id=29147
Mon Dec 23 17:13:46 2013
CKPT started with pid=15, OS id=29149
Mon Dec 23 17:13:46 2013
SMON started with pid=16, OS id=29151
Mon Dec 23 17:13:46 2013
RECO started with pid=17, OS id=29153
Mon Dec 23 17:13:46 2013
MMON started with pid=18, OS id=29155
Mon Dec 23 17:13:46 2013
MMNL started with pid=19, OS id=29157
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = /u01/oracle
Mon Dec 23 17:13:46 2013
ALTER DATABASE MOUNT
ARCH: STARTING ARCH PROCESSES
Mon Dec 23 17:13:50 2013
ARC0 started with pid=23, OS id=29210
ARC0: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Successful mount of redo thread 1, with mount id 2071851082
Mon Dec 23 17:13:51 2013
ARC1 started with pid=24, OS id=29212
Allocated 15937344 bytes in shared pool for flashback generation buffer
Mon Dec 23 17:13:51 2013
ARC2 started with pid=25, OS id=29214
Starting background process RVWR
ARC1: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
Mon Dec 23 17:13:51 2013
RVWR started with pid=26, OS id=29216
Physical Standby Database mounted.
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
Mon Dec 23 17:13:51 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT FROM SESSION
Attempt to start background Managed Standby Recovery process (primary_db)
Mon Dec 23 17:13:51 2013
MRP0 started with pid=27, OS id=29219
MRP0: Background Managed Standby Recovery process started (primary_db)
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC2: Becoming the heartbeat ARCH
ARC2: Becoming the active heartbeat ARCH
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
At this moment, I've lost service, and I have to wait until the primary server comes up again to receive the missing log.
This is the rest of the log:
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:52
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:55
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
started logmerger process
Mon Dec 23 17:13:56 2013
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery terminated with error 16157
Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_29230.trc:
ORA-16157: media recovery not allowed following successful FINISH recovery
Managed Standby Recovery not using Real Time Apply
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT FROM SESSION
Recovery Slave PR00 previously exited with exception 16157
MRP0: Background Media Recovery process shutdown (primary_db)
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:58
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Mon Dec 23 17:14:01 2013
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:14:01
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Error 12543 received logging on to the standby
FAL[client, ARC0]: Error 12543 connecting to primary_db for fetching gap sequence
Archiver process freed from errors. No longer stopped
Mon Dec 23 17:15:07 2013
Using STANDBY_ARCHIVE_DEST parameter default value as /u02/oracle/archivelogs/primary_db
Mon Dec 23 17:19:51 2013
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
Mon Dec 23 17:26:18 2013
RFS[1]: Assigned to RFS process 31456
RFS[1]: No connections allowed during/after terminal recovery.
Mon Dec 23 17:26:47 2013
flashback database to scn 15921680
ORA-16157 signalled during: flashback database to scn 15921680...
Mon Dec 23 17:27:05 2013
alter database recover managed standby database using current logfile disconnect
Attempt to start background Managed Standby Recovery process (primary_db)
Mon Dec 23 17:27:05 2013
MRP0 started with pid=28, OS id=31481
MRP0: Background Managed Standby Recovery process started (primary_db)
started logmerger process
Mon Dec 23 17:27:10 2013
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery terminated with error 16157
Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_31486.trc:
ORA-16157: media recovery not allowed following successful FINISH recovery
Managed Standby Recovery not using Real Time Apply
Completed: alter database recover managed standby database using current logfile disconnect
Recovery Slave PR00 previously exited with exception 16157
MRP0: Background Media Recovery process shutdown (primary_db)
Mon Dec 23 17:27:18 2013
RFS[2]: Assigned to RFS process 31492
RFS[2]: No connections allowed during/after terminal recovery.
Mon Dec 23 17:28:18 2013
RFS[3]: Assigned to RFS process 31614
RFS[3]: No connections allowed during/after terminal recovery.
Do you have any advice?
Thanks!
Alex.
Hello;
What's not clear to me in your question at this point:
What I'm NOT being able to perform:
If I manually unplug the network cables from the primary site (all the network, not only the link between primary and standby node so, it's like a server unplug from the energy source).
Same situation happens if I manually disconnect the server from the power.
This is the alert logs I have:
Are you trying a failover to the Standby?
Please advise.
Is it possible your "valid_for clause" is set incorrectly?
Would also review this:
ORA-16014 and ORA-00312 Messages in Alert.log of Physical Standby
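On the ORA-16014 ("no available destinations") point: a common cause is a local archive destination that is only valid for the primary role, so after FINISH recovery the standby has nowhere to archive its own logs. A hedged sketch of what to check, reusing the destination path from the alert log above:

```
-- Which destinations are usable in the CURRENT role?
SQL> SELECT dest_id, destination, valid_now
  2  FROM v$archive_dest WHERE status = 'VALID';

-- A local destination valid in ALL roles avoids ORA-16014 after failover
SQL> ALTER SYSTEM SET log_archive_dest_1 =
  2  'LOCATION=/u02/oracle/archivelogs/primary_db VALID_FOR=(ALL_LOGFILES,ALL_ROLES)';
```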
Best Regards
mseberg -
How do I find data loss in Data Guard?
We are using redo transport in ASYNC mode; the following is our setting.
SERVICE=xxx_sb max_failure=100 reopen=600 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=xxx_sb
When I query V$MANAGED_STANDBY for DELAY_MINS, it is always zero, meaning there is no delay in copying a log. I have 2 questions:
1. How can I communicate to the business that in the worst case we will lose X minutes of data? It's an OLTP system, where the transactions are less than 2 minutes. Also, during the night there are some batch jobs where the transactions are 60 minutes long.
2. Most of the time during peak hours there is a log switch happening every 10-15 minutes, but during off-peak hours it may not happen for a long period of time. Is it advisable to set ARCHIVE_LAG_TARGET to 10 minutes, as I'm not using the archiver but the log writer for the standby?
Any explanation or pointer to documentation would be appreciated.
Thanks,
Production databases running with a fully configured Data Guard setup don't have any data loss, because the failover operation ensures zero data loss if Data Guard is configured in maximum protection mode or maximum availability mode at failover time.
http://www.dbazone.com/docs/oracle_10gDataGuard_overview.pdf
The above pdf is oracle white paper which too confirmed it.
LGWR SYNC AFFIRM in Oracle Data Guard is used for zero data loss. How does one ensure zero data loss? Well, the redo block generated at the primary has to reach the standby across the network (that's where the SYNC part comes in - i.e. it is a synchronous network call), and then the block has to be written on disk on the standby (that's where the AFFIRM part comes in) - typically on a standby redo log.
Can you have LGWR SYNC NOAFFIRM? Yes sure. Then you will have synchronous network transport, but the only thing you are guaranteed is that the block has reached the remote standby's memory. It has not been written on to disk yet. So not really a zero data loss solution (e.g. what if the standby instance crashes before the disk I/O).
To sum up -> LGWR SYNC AFFIRM means primary transaction commits wait for network I/O + disk I/O acknowledgements. LGWR SYNC NOAFFIRM means primary transaction commits wait for network I/O only.
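As a concrete illustration, the protection behaviour is driven by the transport attributes on the remote destination. A hedged sketch (the destination number and service name `stby` are examples, not from this thread):

```sql
-- Zero data loss transport: synchronous network send + wait for standby disk I/O
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';

-- Performance-oriented transport: asynchronous send, no wait for standby disk I/O
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby LGWR ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';
```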
Source:http://www.dbasupport.com/forums/showthread.php?t=54467
HTH
Girish Sharma -
Clarification on Data Guard (Physical Standby db)
Hi guys,
I have been trying to set up Data Guard with a physical standby database for the past few weeks and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
However I need clarification on the setup and whether or not it is working as expected.
My environment is Windows 32bit (Windows 2003)
Oracle 10.2.0.2 (Client/Server)
2 Physical machines
Here is what I have done.
Machine 1
1. Create a primary database using standard DBCA, hence the Oracle service(oradgp) and password file are also created along with the listener service.
2. Modify the pfile to include the following:-
oradgp.__db_cache_size=436207616
oradgp.__java_pool_size=4194304
oradgp.__large_pool_size=4194304
oradgp.__shared_pool_size=159383552
oradgp.__streams_pool_size=0
*.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
*.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
*.compatible='10.2.0.3.0'
*.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
*.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='oradgp'
*.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=21474836480
*.fal_client='oradgp'
*.fal_server='oradgs'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
*.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
*.log_archive_format='ARC%S_%R.%T'
*.log_archive_max_processes=30
*.nls_territory='IRELAND'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
*.service_names=oradgp
The locations on the harddisk are all available and archived redo are created (e:\archlogs)
3. I then add the necessary (4) standby logs on primary.
4. To replicate the db on machine 2 (standby db), I did an RMAN backup as:-
RMAN> run
{allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
backup database plus archivelog delete input;
}
5. I then copied over the stby~.bak files created on machine1 to machine2 into the same directory (M:\DGBackup), since I maintained exactly the same directory structure between the 2 machines.
6. Then created a standby controlfile. (At this time the db was in open/write mode).
7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
Machine2
8. I created an Oracle service called the same as primary (oradgp).
9. Created a listener also.
9. Set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the sid one.
10. I then copied over the pfile from the primary to standby and created an spfile with this one.
It looks like this:-
oradgp.__db_cache_size=436207616
oradgp.__java_pool_size=4194304
oradgp.__large_pool_size=4194304
oradgp.__shared_pool_size=159383552
oradgp.__streams_pool_size=0
*.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
*.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
*.compatible='10.2.0.3.0'
*.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
*.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='oradgp'
*.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=21474836480
*.fal_client='oradgs'
*.fal_server='oradgp'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
*.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
*.log_archive_format='ARC%S_%R.%T'
*.log_archive_max_processes=30
*.nls_territory='IRELAND'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
*.service_names=oradgs
log_file_name_convert='junk','junk'
11. Use RMAN to restore the db as:-
RMAN> startup mount;
RMAN> restore database;
Then RMAN created the datafiles.
12. I then added the same number (4) of standby redo logs to machine2.
13. Also added a tempfile. Although the temp tablespace was created as part of the RMAN restore, the actual file (temp01.dbf) didn't get created, so I created the tempfile manually.
14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
It seems to have started the redo apply as I've checked the alert log and noticed that the sequence# was all "YES" for applied.
****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
So copied over the REDO logs from the primary machine and placed them in the same directory structure of the standby.
########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
I wanted to enable realtime apply so, I cancelled the recover by :-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
and issued:-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
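One way to confirm this state from SQL (a sketch; dest_id 2 is assumed to be the standby destination, as in the pfile above):

```sql
-- On the primary: what recovery mode does the standby destination report?
SELECT dest_id, recovery_mode
FROM   v$archive_dest_status
WHERE  dest_id = 2;
-- MANAGED REAL TIME APPLY is expected once USING CURRENT LOGFILE is in effect

-- On the standby: what is the MRP process doing?
SELECT process, status, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
```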
Also performed a log switch on primary and it got transported to the standby and was applied (YES).
Also ensured that there are no gaps via some queries where no rows were returned.
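For reference, the usual gap checks look like this (a sketch, run on the standby):

```sql
-- Any unresolved gap shows up here; no rows returned means no gap
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;

-- Cross-check: highest sequence applied so far
SELECT MAX(sequence#) AS max_applied
FROM   v$archived_log
WHERE  applied = 'YES';
```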
15. I now wanted to perform a switchover, hence issued:-
Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
All the archivers stopped as expected.
16. Now on machine2:
Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
17. On machine1:
Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
Primary_Now_Standby_SQL>STARTUP MOUNT;
Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
17. On machine2:
Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
However, here are my questions for clarifications:-
Q1. There is a question about ONLINE REDO LOGS within "#" characters.
Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
MRP0 APPLYING_LOG 1 47 452 1024000
but :
SQL> select max(sequence#) from v$archived_log;
46
Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
42 NO
43 YES
44 YES
45 YES
46 YES
What could be the possible reasons why sequence# 42 didn't get applied but the others did?
After reading several documents I am confused at this stage, because I have read that you can set up standby databases using 'standby' logs; but is there another method without using standby logs?
Q5. The log switch isn't happening automatically on the primary database, where I could see the whole process happening on its own: generation of a new logfile, it being transported to the standby, and then being applied on the standby.
Could this be due to inactivity on the primary database, as I am not doing anything on it?
Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
Thank you very much in advance.
Regards,
Bharath
Edited by: Bharath3 on Jan 22, 2010 2:13 AM

Parameters:
Missing on the Primary:
DB_UNIQUE_NAME=oradgp
LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
Missing on the Standby:
DB_UNIQUE_NAME=oradgs
LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
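Both can be added via ALTER SYSTEM (a sketch, using the names from this thread; note DB_UNIQUE_NAME is not dynamic, hence SCOPE=SPFILE and a restart, while LOG_ARCHIVE_CONFIG takes effect immediately):

```sql
-- On the primary (oradgp)
ALTER SYSTEM SET db_unique_name = 'oradgp' SCOPE=SPFILE;   -- requires a restart
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(oradgp,oradgs)';

-- On the standby (oradgs)
ALTER SYSTEM SET db_unique_name = 'oradgs' SCOPE=SPFILE;   -- requires a restart
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(oradgp,oradgs)';
```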
You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
Your questions (Q1 answered above):
You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
Up to you. Not a requirement.
You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
42 was probably a gap. Select the FAL column as well and it will probably say 'YES'. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' tells you that every sequence before that number has to have been applied.
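On the standby, that check looks like this (a sketch):

```sql
-- Run on the standby: FAL = 'YES' marks logs that were fetched to resolve a gap
SELECT sequence#, applied, fal
FROM   v$archived_log
ORDER  BY sequence#;
```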
You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
Yes. If you do not have standby redo log files on the standby then we write directly to an archived log, which means potentially large data loss at failover and no real time apply. That was the old 9i ARCH method. Don't do that. Always have standby redo logs (SRLs).
You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
Could this be due to inactivity on the primary database as I am not doing anything on it?
Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you can do the math on how long that would take :^)
You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.
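If a predictable switch interval still matters (as in the ARCHIVE_LAG_TARGET discussion above), the parameter is set in seconds, e.g. (a sketch):

```sql
-- Force a log switch at least every 10 minutes of wall-clock time
ALTER SYSTEM SET archive_lag_target = 600;
```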
Need suggestion on Active Data Guard or Logical Standby
Hi All,
Need a suggestion of on below scenario.
We have a production database (Oracle version 11g R2) and are planning to have a logical standby or a physical standby (Active Data Guard). Our intended usage of the standby database is below.
1) Planning to run online reports (100+) 24x7, so we might create additional indexes, materialized views, etc.
2) Daily data feed (around 300+ data files) to the data warehouse. Every night, jobs will be scheduled to extract data and send it to the warehouse. We might need additional tables for the jobs' usage.
Please suggest which one is good.
Regards,
vara.
Hello,
Active Data Guard is a feature available from 11gR1 onwards.
If you choose Active Data Guard, you have a couple of good options. You can use it for high availability of your production database, as the standby is an image copy of production. As you note, in 11g you have the additional advantage that you can open the standby in read-only mode while MRP is still active, so you can redirect users to the standby to perform SELECT operations for reporting purposes, and thereby reduce the load on production.
You can also perform a switchover for a role change, or perform a failover if your primary is completely lost. You can convert the physical standby to a logical standby database, and you can configure FSFO (fast-start failover).
You have plenty of options with Active Data Guard.
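For illustration, the read-only-with-apply cycle looks like this on an 11g standby (a sketch; Active Data Guard is a separately licensed option):

```sql
-- Stop redo apply, open read only, then restart apply with the database open
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
-- Reporting sessions can now query the standby while redo is applied in real time
```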
Refer http://www.orafaq.com/node/957
Consider closing the thread if answered, to keep the forum clean.
Edited by: CKPT on Mar 18, 2012 8:14 PM