Data Guard: standby archivelog files not the same as the primary archivelog
From reading the Data Guard documentation, one gets the impression that the standby's archived redo logs are simply a mirror of those on the primary: that the shipped and applied archived redo logs from the primary are what end up in the standby archive directories.
However, I have noticed that the files are not the same (their md5sums differ), which is concerning because, using Oracle's archive log naming recommendation, the files have identical names.
Does this mean that the standby's archived redo logs cannot be used as a source of redo if one had to restore using a backup of the primary database?
i.e. Can you recover using an RMAN backup of the primary and redo logs from the standby?
Hi MSK, I agree, that's exactly what I thought. I'm confused, though, as to why the logs do not match. Perhaps it is just metadata in the log that has changed, not the redo data.
Anyone have any knowledge on why the archivelogs are different on the standby to those on the primary?
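One plausible explanation is that the standby's archived logs are written locally by the RFS process from the shipped redo, rather than copied byte-for-byte, so file headers can legitimately differ while the redo content matches. A rough way to test the "only metadata differs" hypothesis is to compare checksums while skipping an initial header block. Note the 512-byte header size and the helper names below are assumptions for illustration, not documented Oracle facts:

```python
import hashlib

def body_digest(path, header_bytes=512):
    """md5 of a file's contents, skipping an assumed fixed-size header block."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        f.seek(header_bytes)               # skip the (assumed) header
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def bodies_match(primary_log, standby_log, header_bytes=512):
    """True if the two copies are identical once the header block is ignored."""
    return (body_digest(primary_log, header_bytes)
            == body_digest(standby_log, header_bytes))
```

If the bodies match while whole-file md5sums differ, only the header metadata changed in transit; if the bodies differ too, the files really do carry different content.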
Similar Messages
-
I would like to plott data from a text file in the same way as a media player does from a video file. I’m not sure how to create the pointer slide function. The vi could look something like the attached jpg.
Please, can some one help me?
Martin
Attachments:
Plotting from a text file like a media player example.jpg 61 KB

Hi Martin,
I am not really sure what you want.
I think you want to display only a part of the values you read from XYZ.
So what you can do:
Write all the values into an array.
The size of the array is the maximum value of the slide bar.
Now you can select a part of the array (e.g. values from 100 to 200) and display that part with a graph.
The other option is to use the history function of the graphs.
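The "array plus slider" idea above can be sketched in plain code terms (the function name and window size are illustrative, not anything LabVIEW-specific): the slider's maximum equals the array length, and its current value picks which window of values the graph displays.

```python
def visible_window(values, slider_pos, window=100):
    """Return the slice of values the graph should display for a slider
    whose maximum equals len(values); the window is clamped to the data."""
    start = max(0, min(slider_pos, len(values) - window))
    return values[start:start + window]
```

Dragging the slider then just re-evaluates this slice and redraws the graph with it.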
regards
timo -
I'd copied some pictures from a PC to a folder in Finder in my new iMac. The 'Date Created' in iMac is not the same as the 'Date Created' in the PC. Why? Is there a way to fix that?
View Menu -> Sort Photos is a good way to start.
Some organising possibilities in iPhoto:
I use Events simply as big buckets of Photos: Spring 08, July - Nov 06 are typical Events in my Library. I use keywords and Smart Albums extensively. I title the pics broadly.
I keyword on a
Who
What
Where basis (the When is in the photo's Exif metadata). I also rate the pics on a 1 - 5 star basis.
Using this system I can find pretty much any pic in my 50k library in a couple of seconds.
So, for example, I have a batch of pics titled 'Seattle 08' and a typical keywording might include: John, Anne, Landscape, mountain, trees, snow. With a rating included it's very easy to find the best pics we took at Mount Rainier.
File -> New Smart Album
set it to 'All"
title contains Seattle
keyword is mountain
keyword is snow
rating is 5 stars
Or, want a chronological album of John from birth to today?
New Smart Album
Keyword is John
Set the View options to Sort By Date Ascending
Want only the best pics?
add Rating is greater than 4 stars
The best thing about this system is that it's dynamic. If I add 50 more pics of John to the Library tomorrow, as I keyword and rate them they are added to the Smart Album.
In the end, organisation is about finding the pics. The point is to make locating that pic or batch of pics fast. This system works for me. -
Error: ORA-16532: Data Guard broker configuration does not exist
Hi there folks. Hope everyone is having a nice weekend.
Anyway, we have a 10.2.0.4 RAC primary and a 10.2.0.4 physical standby. We recently did a switchover and the broker files automatically got created in the ORACLE_HOME/dbs location of the primary. Now I need to move these files to the common ASM disk group. For this, I followed the steps from this doc:
How To Move Dataguard Broker Configuration File On ASM Filesystem (Doc ID 839794.1)
The only exception in my case is that I have to do this on the primary and not a standby, so I am disabling and enabling the primary (and not the standby as mentioned in the steps below).
To rename the broker configuration files in STANDBY to FRA/MYSTD/broker1.dat and FRA/MYSTD/broker2.dat, Follow the below steps
1. Disable the standby database from within the broker configuration
DGMGRL> disable database MYSTD;
2. Stop the broker on the standby
SQL> alter system set dg_broker_start = FALSE;
3. Set the dg_broker_config_file1 & 2 parameters on the standby to the appropriate location required.
SQL> alter system set dg_broker_config_file1 = '+FRA/MYSTD/broker1.dat';
SQL> alter system set dg_broker_config_file2 = '+FRA/MYSTD/broker2.dat';
4. Restart the broker on the standby
SQL> alter system set dg_broker_start = TRUE;
5. From the primary, enable the standby
DGMGRL> enable database MYSTD;
6. Broker configuration files will be created in the new ASM location.
I did so but when I try to enable the Primary back I get this:
Error: ORA-16532: Data Guard broker configuration does not exist
Configuration details cannot be determined by DGMGRL
From this link (Errors setting up DataGuard Broker), it would seem that I would need to recreate the configuration. Is that correct? If yes, then how come Metalink is missing this info about recreating the configuration? Or is it that that scenario wouldn't be applicable in my case?
Thanks for your help.

Yes, I can confirm from the gv$spparameter view that the changes are effective for all 3 instances. From the alert log, the alter system didn't throw up any errors. I didn't restart the instances though, since I don't have the approvals yet. But I don't think that's required.
-
Remove Data Guard Broker configuration file
Hi,
We have a Data Guard configuration running on Oracle 10g RAC (2 nodes) with ASM. After a failover, we rebuild the old primary as a physical standby. But every time we enable the Data Guard broker, it creates problems. So the best solution for us seems to be to completely remove the Data Guard broker configuration and recreate it. Here are our steps:
On both primary and standby
1) alter system set DG_BROKER_START=FALSE
2) shutdown database
3) use asmcmd command to remove the broker directory under +DATA/dbname/DG_BROKER
4) start the database
5) alter system set DG_BROKER_START=TRUE
6) go through the many steps to recreate data guard broker configuration
I am just curious whether this is the right way and whether there is any simple way?
Thanks in advance.

Hi, I saw the same issue when doing switchover testing in my lab environment. The prerequisite is that the primary and standby roles have been switched and the log can be applied without Data Guard broker.
Here is the step I resolved the issue
1)on both primary and standby database
SQL> alter system set dg_broker_start=false;
on primary DB:
SQL>alter system set dg_broker_config_file1='?/dbs/dr1afterswichoverpry.dat';
SQL>alter system set dg_broker_config_file2='?/dbs/dr2afterswichoverpry.dat';
on standby DB:
SQL>alter system set dg_broker_config_file1='?/dbs/dr1afterswichoverstby.dat';
SQL>alter system set dg_broker_config_file2='?/dbs/dr2afterswichoverstby.dat';
2) enable dg_broker_start on both primary and standby db
SQL> alter system set dg_broker_start=true;
3) on the primary database, create the configuration
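Step 3 is left without its commands; a minimal DGMGRL sketch of recreating the configuration might look like the following. The configuration name, database names, and connect identifiers here are placeholders, not values from this thread:

```
DGMGRL> CONNECT sys@primarydb
DGMGRL> CREATE CONFIGURATION 'dg_config' AS
          PRIMARY DATABASE IS 'primarydb'
          CONNECT IDENTIFIER IS primarydb;
DGMGRL> ADD DATABASE 'standbydb' AS
          CONNECT IDENTIFIER IS standbydb
          MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;
```

SHOW CONFIGURATION at the end should report SUCCESS once both databases are enabled and log transport is healthy.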
Hope this can help you!
email: [email protected] -
Data Guard standby restart in recover mode
Hi,
I inherited a Data Guard database and found that the secondary DB had been out of sync with the primary. I performed SCN recovery on the secondary and now they are syncing fine. I figured I had to manually put the standby in recover mode after opening it in read-only mode on the one I worked on, which is our qual environment, whereas it is not like that in prod. Can anyone advise what I need to do to have this standby in recovery mode even after the DB goes down for maintenance and comes back up?
Thanks for your input,
fakord

Hi,
If you are using the Data Guard broker, it will automatically start the apply, as it controls the apply. If not, a shell script that starts the standby database followed by this line can help.
alter database recover managed standby database using current logfile disconnect from session;
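The suggestion above (a startup script followed by the recover command) might be wrapped like this; the SID and environment setup are placeholders, and this assumes the script runs as the oracle OS user with OS authentication:

```sh
#!/bin/sh
# Hypothetical standby startup wrapper - SID and environment are placeholders.
export ORACLE_SID=stby
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH

sqlplus -s / as sysdba <<'EOF'
startup mount;
alter database recover managed standby database using current logfile disconnect from session;
exit;
EOF
```

Hooking this into the host's startup sequence (or a dbstart-style wrapper) means the standby resumes managed recovery automatically after maintenance restarts.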
HTH,
Pradeep -
Data Guard Standby Lost A Drive Or Two
So a data guard standby is somewhat disgruntled because the SAN imploded and the standby lost a couple mount points. It wasn't running ASM so no self-repairing can occur. Since this is the first time I've had a serious issue with a standby and since much of what I do is restricted (RMAN complains because it's a standby and read-only)... I'm just curious if someone could point me towards a comprehensive "what to do when your STANDBY has issues" book, document, or link. They are surprisingly scarce. Most comments online simply say (paraphrased) "scorch the earth, salt the fields, and rebuild fresh."
Thank you

Hello;
Depends what you lost.
1. You can just rebuild.
2. You might be able to use RMAN roll forward.
http://www.oracle-ckpt.com/?s=standby+scn&op.x=30&op.y=17
http://emrebaransel.blogspot.com/2008/12/recover-dataguard-from-lost-archivelog.html
3. If you are on Oracle 11 you can use RMAN active duplicate to rebuild.
http://www.visi.com/~mseberg/standby_creation_from_active_database_using_rman.html
4. Other
Recover Lost Datafile On Standby
http://hemendraoracleblogs.blogspot.com/2012/11/recover-lost-datafile-from-standby.html
http://allthingsoracle.com/data-guard-physical-standby-database-best-practices-part-ii/
http://stackoverflow.com/questions/8135393/how-do-you-re-duplicate-a-broken-physical-standby-database
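Option 2 above (the RMAN roll forward) looks roughly like this in outline. The SCN and paths are placeholders; on 10g-era systems the backup pieces have to be copied to the standby host by hand before cataloguing:

```
-- On the standby: find the SCN to roll forward from
SQL> SELECT CURRENT_SCN FROM V$DATABASE;

-- On the primary: incremental backup from that SCN
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/fwd_%U';

-- On the standby: catalog the copied pieces and apply them without redo
RMAN> CATALOG START WITH '/tmp/fwd';
RMAN> RECOVER DATABASE NOREDO;
```

Whether this is viable depends on what exactly was lost; if the controlfile or too many datafiles are gone, a full rebuild (option 1 or 3) is usually simpler.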
Best Regards
mseberg -
Error while uploading data from a flat file to the hierarchy
Hi guys,
after I upload data from a flat file to the hierarchy, I get an error message "Please select a valid info object". I am loading data using the PSA and have activated all external chars, but I still get the problem. Some help on this please.
regards
Sri

There is no relation between the InfoObject name in the flat file and the InfoObject name on the BW side.
Please check the objects in BW, their lengths and types, and check whether your flat file has the same types there.
Now check the sequence of the objects in the transfer rules and activate them.
There you go. -
In search results, Summary sometimes shows the top of file, not the text I searched for
I'm searching lots of html files, it's a web site that's not imported into Sharepoint. Search is returning the correct files, but about half the time the Summary contains the text at the top of the html file, not the text surrounding the text I searched
for. It's a big problem, users have to open the file then find the search text to see if it's useful.
If I search for 'potato' here's an example Summary which is ok:
All pages in issue FTDA-1983-0409:
00010002000300040005000600070008000900100011001200130014001500160017001800190020
… today's dieticians, the potato … most of us enjoy a
potato cooked …
Here's a Summary just showing text from the top of the file, 'potato' is present, but lower in the file:
0001000200030004000500060007000800090010001100120013001400150016001700180019002000210022002300240025002600270028
… issue: date: January 19, 1980, page …
Could anyone help me get it to consistently show the text around the search text please?

You can't attach a screenshot to the first post that starts a thread, but you can do that in subsequent replies.
Does this only happen on one specific website?
Can you post a link to a publicly accessible page (i.e. no authentication or signing on required)?
It is possible that there is a problem with that web page and that there is a clear:both CSS rule missing that causes Firefox to start the next line at the horizontal position where the previous line ended.
If you have made changes to Advanced font settings like increasing the minimum/default font size then try the default minimum setting "none" and the default font size 16 in case the current setting is causing problems.
*Firefox > Preferences > Content : Fonts & Colors > Advanced > Minimum Font Size (none)
Make sure that you allow pages to choose their own fonts.
*Firefox > Preferences > Content : Fonts & Colors > Advanced: [X] "Allow pages to choose their own fonts, instead of my selections above" -
Is there a way to see previous data usage if not the primary account holder?
I am not the primary account and therefore cannot look at bills to see previous usage. Is there anywhere on the website I could find this? I know in the past, I've been able to get the last 3 months (either from website or phone app, I don't remember which), but now Verizon has gone backwards and made it harder to get that information. Perhaps they don't want us to easily find how much is being used so they can bill us for extra overage charges, especially now that data is where their big bucks come from???
I believe only the primary can. It has always been that way from what I remember.
-
Select statement from a join file deletes the primary records with MS Expl.
Hello,
In reality:
A select statement from a join file deletes the primary records with MS Explorer 6; with Firefox it does not. This happens with a normal data provider or with a normal "select..." statement.
This is very strange. I need a solution or workaround a.s.a.p.
Thanks, Franco.
Message was edited by:
fbiaggi

Please see the following excerpt from the online documentation.
http://docs.oracle.com/cd/E11882_01/server.112/e22490/ldr_modes.htm#SUTIL1332
Enabled Constraints
During a direct path load, the constraints that remain enabled are as follows:
NOT NULL
UNIQUE
PRIMARY KEY (unique-constraints on not-null columns)
NOT NULL constraints are checked at column array build time. Any row that violates the NOT NULL constraint is rejected.
Even though UNIQUE constraints remain enabled during direct path loads, any rows that violate those constraints are loaded anyway (this is different than in conventional path in which such rows would be rejected). When indexes are rebuilt at the end of the direct path load, UNIQUE constraints are verified and if a violation is detected, then the index will be left in an Index Unusable state. See "Indexes Left in an Unusable State". -
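The direct-path behaviour described in the excerpt can be reproduced with a small SQL*Loader run; the table, file, and column names below are made up for illustration:

```
-- hypothetical control file: load_emp.ctl
LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename)
```

Running it with `sqlldr userid=... control=load_emp.ctl direct=true` will load rows that violate a UNIQUE constraint anyway and leave the unique index UNUSABLE afterwards, whereas the same load with `direct=false` (conventional path) would reject those rows.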
If I have two emails for my iMessage how can I see or receive my msj or the email that is not the primary email
It's basically meant for iMessage and FaceTime. I might be wrong, but this can only be done between two Apple products that allow this function. You won't receive an email if I send you an iMessage at a secondary email you use for iMessages; you would only receive a text message. Now, if someone using a PC sent an email to the secondary email you use, you will receive an email, not a text message. The secondary email is basically used so that, if you do not have a data plan and you're connected to wifi, you can still send iMessages using those emails you provided.
-
How do I send a text from Verizon Messages when I'm not the primary phone on the account
I have no service at work in the building. I would like to send and receive texts via Verizon Messages, but I am not the primary on the account. How do I fix this?
mvener11,
We certainly want you to have access to Verizon Messages while at work. Each line of service is eligible for its own My Verizon account and Verizon Messages. There is no need for you to be the account owner to register for My Verizon. That being said, since you are not the Account owner, you will be limited on the account information you can see, but you will still have full access to Verizon Messages for your line of service. Click here for more info http://vz.to/1IPhplk
SandyS_VZW
Follow us on Twitter @VZWSupport -
Data Guard configuration-Archivelogs not being transferred
Hi Gurus,
I have configured Data Guard on Linux with Oracle 10g, although I am new to this concept. My tnsping works fine on both sides. I have issued the alter database recover managed standby database using current logfile disconnect command on the standby site. But I am not receiving the archive logs at the standby site. I have attached both my pfiles below for your reference:
Primary database name: Chennai
Secondary database name: Mumbai
PRIMARY PFILE:
db_block_size=8192
db_file_multiblock_read_count=16
open_cursors=300
db_domain=""
background_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/bdump
core_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/cdump
user_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/udump
db_create_file_dest=/u01/app/oracle/product/10.2.0/db_1/oradata
db_recovery_file_dest=/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area
db_recovery_file_dest_size=2147483648
job_queue_processes=10
compatible=10.2.0.1.0
processes=150
sga_target=285212672
audit_file_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/adump
remote_login_passwordfile=EXCLUSIVE
dispatchers="(PROTOCOL=TCP) (SERVICE=chennaiXDB)"
pga_aggregate_target=94371840
undo_management=AUTO
undo_tablespace=UNDOTBS1
control_files=("/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/controlfile/o1_mf_82gl1b43_.ctl", "/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/controlfile/o1_mf_82gl1bny_.ctl")
DB_NAME=chennai
DB_UNIQUE_NAME=chennai
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chennai,mumbai)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/arch/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chennai'
LOG_ARCHIVE_DEST_2=
'SERVICE=MUMBAI LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=mumbai'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=mumbai
FAL_CLIENT=chennai
DB_FILE_NAME_CONVERT=(/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/,/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/)
LOG_FILE_NAME_CONVERT='/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/'
STANDBY_FILE_MANAGEMENT=AUTO
SECONDARY PFILE:
db_block_size=8192
db_file_multiblock_read_count=16
open_cursors=300
db_domain=""
db_name=chennai
background_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/bdump
core_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/cdump
user_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/udump
db_recovery_file_dest=/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/
db_create_file_dest=/home/oracle/oracle/product/10.2.0/db_1/oradata/
db_recovery_file_dest_size=2147483648
job_queue_processes=10
compatible=10.2.0.1.0
processes=150
sga_target=285212672
audit_file_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/adump
remote_login_passwordfile=EXCLUSIVE
dispatchers="(PROTOCOL=TCP) (SERVICE=mumbaiXDB)"
pga_aggregate_target=94371840
undo_management=AUTO
undo_tablespace=UNDOTBS1
control_files="/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/controlfile/standby01.ctl","/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/controlfile/standby02.ctl"
DB_UNIQUE_NAME=mumbai
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chennai,mumbai)'
LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=mumbai'
LOG_ARCHIVE_DEST_2='SERVICE=chennai LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chennai'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
FAL_SERVER=chennai
FAL_CLIENT=mumbai
DB_FILE_NAME_CONVERT=(/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/,/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/)
LOG_FILE_NAME_CONVERT='/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/'
STANDBY_FILE_MANAGEMENT=AUTO
Any help would be greatly appreciated. Kindly help me, someone, please.
-Vimal.

Thanks Balazs, Mseberg, CKPT for all your replies...
CKPT, I just did what you said. Below are the primary output and the standby output...
PRIMARY_
SQL> set feedback off
SQL> set trimspool on
SQL> set line 500
SQL> set pagesize 50
SQL> column name for a30
SQL> column display_value for a30
SQL> column ID format 99
SQL> column "SRLs" format 99
SQL> column active format 99
SQL> col type format a4
SQL> column ID format 99
SQL> column "SRLs" format 99
SQL> column active format 99
SQL> col type format a4
SQL> col PROTECTION_MODE for a20
SQL> col RECOVERY_MODE for a20
SQL> col db_mode for a15
SQL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','log_archive_max_processes') order by name;
NAME DISPLAY_VALUE
db_file_name_convert /home/oracle/oracle/product/10
.2.0/db_1/oradata/MUMBAI/dataf
ile/, /u01/app/oracle/product/
10.2.0/db_1/oradata/CHENNAI/da
tafile/
db_name chennai
db_unique_name chennai
dg_broker_config_file1 /u01/app/oracle/product/10.2.0
/db_1/dbs/dr1chennai.dat
dg_broker_config_file2 /u01/app/oracle/product/10.2.0
/db_1/dbs/dr2chennai.dat
dg_broker_start FALSE
fal_client chennai
fal_server mumbai
local_listener
log_archive_config DG_CONFIG=(chennai,mumbai)
log_archive_dest_2 SERVICE=MUMBAI LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,P
RIMARY_ROLE)
DB_UNIQUE_NAME=mumbai
log_archive_dest_state_2 ENABLE
log_archive_max_processes 30
log_file_name_convert /home/oracle/oracle/product/10
.2.0/db_1/oradata/MUMBAI/onlin
elog/, /u01/app/oracle/product
/10.2.0/db_1/oradata/CHENNAI/o
nlinelog/, /home/oracle/oracle
/product/10.2.0/db_1/flash_rec
overy_area/MUMBAI/onlinelog/,
/u01/app/oracle/product/10.2.0
/db_1/flash_recovery_area/CHEN
NAI/onlinelog/
remote_login_passwordfile EXCLUSIVE
standby_archive_dest ?/dbs/arch
standby_file_management AUTO
SQL> col name for a10
SQL> col DATABASE_ROLE for a10
SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE,switchover_status from v$database;
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE SWITCHOVER_STATUS
CHENNAI chennai MAXIMUM PERFORMANCE PRIMARY READ WRITE NOT ALLOWED
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
THREAD# MAX(SEQUENCE#)
1 210
SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
2 FROM
3 (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
4 (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
5 WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;
Thread Last Sequence Received Last Sequence Applied Difference
1 210 210 0
SQL> col severity for a15
SQL> col message for a70
SQL> col timestamp for a20
SQL> select severity,error_code,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') "timestamp" , message from v$dataguard_status where dest_id=2;
SEVERITY ERROR_CODE timestamp MESSAGE
Error 16191 15-AUG-2012 12:46:02 LGWR: Error 16191 creating archivelog file 'MUMBAI'
Error 16191 15-AUG-2012 12:46:02 FAL[server, ARC1]: Error 16191 creating remote archivelog file 'MUMBAI
Error 16191 15-AUG-2012 12:51:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 12:56:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:01:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:06:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:11:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:16:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:21:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:26:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:31:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:36:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:41:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:47:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:52:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 13:57:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 14:02:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
SEVERITY ERROR_CODE timestamp MESSAGE
16191.
Error 16191 15-AUG-2012 14:07:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 14:12:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 14:17:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 14:22:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 14:27:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 14:32:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 14:37:03 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 18:21:40 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
Error 16191 15-AUG-2012 18:26:41 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
16191.
SQL> select ds.dest_id id
2 , ad.status
3 , ds.database_mode db_mode
4 , ad.archiver type
5 , ds.recovery_mode
6 , ds.protection_mode
7 , ds.standby_logfile_count "SRLs"
8 , ds.standby_logfile_active active
9 , ds.archived_seq#
10 from v$archive_dest_status ds
11 , v$archive_dest ad
12 where ds.dest_id = ad.dest_id
13 and ad.status != 'INACTIVE'
14 order by
15 ds.dest_id;
ID STATUS DB_MODE TYPE RECOVERY_MODE PROTECTION_MODE SRLs ACTIVE ARCHIVED_SEQ#
1 VALID OPEN ARCH IDLE MAXIMUM PERFORMANCE 0 0 210
2 ERROR UNKNOWN LGWR UNKNOWN MAXIMUM PERFORMANCE 0 0 0
SQL> column FILE_TYPE format a20
SQL> col name format a60
SQL> select name
2 , floor(space_limit / 1024 / 1024) "Size MB"
3 , ceil(space_used / 1024 / 1024) "Used MB"
4 from v$recovery_file_dest
5 order by name;
NAME Size MB Used MB
/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area 2048 896
SQL> spool offspool u01/app/oracle/vimal.log
SP2-0768: Illegal SPOOL command
Usage: SPOOL { <file> | OFF | OUT }
where <file> is file_name[.ext] [CRE[ATE]|REP[LACE]|APP[END]]
SQL> spool /u01/app/oracle/vimal.log
Standby output_
SQL> set feedback off
SQL> set trimspool on
SQL> set line 500
SQL> set pagesize 50
SQL> set linesize 200
SQL> column name for a30
SQL> column display_value for a30
SQL> col value for a10
SQL> col PROTECTION_MODE for a15
SQL> col DATABASE_Role for a15
SQL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','log_archive_max_processes') order by name;
NAME DISPLAY_VALUE
db_file_name_convert /u01/app/oracle/product/10.2.0
/db_1/oradata/CHENNAI/datafile
/, /home/oracle/oracle/product
/10.2.0/db_1/oradata/MUMBAI/da
tafile/
db_name chennai
db_unique_name mumbai
dg_broker_config_file1 /home/oracle/oracle/product/10
.2.0/db_1/dbs/dr1mumbai.dat
dg_broker_config_file2 /home/oracle/oracle/product/10
.2.0/db_1/dbs/dr2mumbai.dat
dg_broker_start FALSE
fal_client mumbai
fal_server chennai
local_listener
log_archive_config DG_CONFIG=(chennai,mumbai)
log_archive_dest_2 SERVICE=chennai LGWR ASYNC VAL
ID_FOR=(ONLINE_LOGFILES,PRIMAR
Y_ROLE) DB_UNIQUE_NAME=chennai
log_archive_dest_state_2 ENABLE
log_archive_max_processes 2
log_file_name_convert /u01/app/oracle/product/10.2.0
/db_1/oradata/CHENNAI/onlinelo
g/, /home/oracle/oracle/produc
t/10.2.0/db_1/oradata/MUMBAI/o
nlinelog/, /u01/app/oracle/pro
duct/10.2.0/db_1/flash_recover
y_area/CHENNAI/onlinelog/, /ho
me/oracle/oracle/product/10.2.
0/db_1/flash_recovery_area/MUM
BAI/onlinelog/
remote_login_passwordfile EXCLUSIVE
standby_archive_dest ?/dbs/arch
standby_file_management AUTO
SQL> col name for a10
SQL> col DATABASE_ROLE for a10
SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE from v$database;
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE
CHENNAI mumbai MAXIMUM PERFORM PHYSICAL S MOUNTED
ANCE TANDBY
SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
SQL> select process, status,thread#,sequence# from v$managed_standby;
PROCESS STATUS THREAD# SEQUENCE#
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
MRP0 WAIT_FOR_LOG 1 152
SQL> col name for a30
SQL> select * from v$dataguard_stats;
NAME VALUE UNIT TIME_COMPUTED
apply finish time day(2) to second(1) interval
apply lag day(2) to second(0) interval
estimated startup time 10 second
standby has been open N
transport lag day(2) to second(0) interval
SQL> select * from v$archive_gap;
SQL> col name format a60
SQL> select name
2 , floor(space_limit / 1024 / 1024) "Size MB"
3 , ceil(space_used / 1024 / 1024) "Used MB"
4 from v$recovery_file_dest
5 order by name;
NAME Size MB Used MB
/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/ 2048 150
SQL> spool off
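For what it's worth, the repeated error above, ORA-16191, is documented as "Primary log shipping client not logged on standby", which usually points at a password file problem: the SYS password files on the primary and standby must match. A sketch of recreating them follows; the paths, SIDs, and password are placeholders:

```
# On each host, in $ORACLE_HOME/dbs (the file name must be orapw<SID>)
$ orapwd file=$ORACLE_HOME/dbs/orapwchennai password=SamePassword entries=5
$ orapwd file=$ORACLE_HOME/dbs/orapwmumbai password=SamePassword entries=5

# Then, on the primary, bounce the remote destination to retry:
SQL> alter system set log_archive_dest_state_2=DEFER;
SQL> alter system set log_archive_dest_state_2=ENABLE;
```

If the heartbeat errors stop and v$archive_dest_status shows dest 2 as VALID, the gap (sequences 152-210 in the output above) should then be resolved by FAL.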
-Vimal. -
Total of line items in line item file not the same as in statement file
Hi,
Can anyone give an idea about this error?
I analysed the multicash file for the totals, and the line items total ties out with the statement total; the formats of the file also seem to be fine.
Not sure why the error persists?
Raj

You should give the exact message, with the message number.
If it is FB 777, "DTAUS: Number of line items not equal to control total; see long text", you should see the long text. I think it contains sufficient information:
Text
DTAUS: Number of line items not equal to control total; see long text
Diagnosis
data records of record type 'C' were handed over in the file you
imported. There must be data records according to the control total
from the fourth field of record type 'E'.
Processing was therefore terminated.
The DTAUS file was not imported into the bank data clipboard. No
postings were generated.
Procedure
Inform your credit institution about the error which has occurred and
let them give you a correct DTAUS file.
No actions are necessary in the SAP System.
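The long text boils down to: count the 'C' data records and compare with the control total carried in the fourth field of the 'E' record. As a rough illustration only (real DTAUS is a fixed-width bank format; the line-oriented, ';'-separated layout below is a simplifying assumption):

```python
def check_control_total(lines):
    """Compare the number of 'C' data records against the control total
    in the fourth field of the 'E' record (fields assumed ';'-separated)."""
    c_count = sum(1 for line in lines if line.startswith("C"))
    e_records = [line for line in lines if line.startswith("E")]
    if not e_records:
        raise ValueError("no 'E' record found")
    control_total = int(e_records[0].split(";")[3])
    return c_count == control_total
```

When this kind of check fails, the fix is as the long text says: the file itself is inconsistent, so the bank has to supply a corrected DTAUS file; nothing needs changing in the SAP system.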