Error during implementing Data Guard

I am implementing Data Guard on two different PCs by following this URL:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/prod/ha/dataguard/physstby/physstdby.htm
In the last step, when I execute the RMAN script, it gives errors; I need your help in resolving this.
RMAN> run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'tmdb','tbdb'
set db_unique_name='tbdb'
set db_file_name_convert='/tmdb/','/tbdb/'
set log_file_name_convert='/tmdb/','/tbdb/'
set control_files='/u01/app/oracle/oradata/tbdb/tbdb1.ctl'
set log_archive_max_processes='5'
set fal_client='tbdb'
set fal_server='tmdb'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(tmdb,tbdb)'
set log_archive_dest_2='service=tmdb ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=tmdb'
}
Output of the script:
using target database control file instead of recovery catalog
allocated channel: prmy1
channel prmy1: SID=16 device type=DISK
allocated channel: prmy2
channel prmy2: SID=21 device type=DISK
allocated channel: prmy3
channel prmy3: SID=152 device type=DISK
allocated channel: prmy4
channel prmy4: SID=24 device type=DISK
allocated channel: stby
channel stby: SID=135 device type=DISK
Starting Duplicate Db at 19-APR-11
contents of Memory Script:
backup as copy reuse
targetfile '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtmdb' auxiliary format
'/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtbdb' targetfile
'/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletmdb.ora' auxiliary format
'/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletbdb.ora' ;
sql clone "alter system set spfile= ''/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletbdb.ora''";
executing Memory Script
Starting backup at 19-APR-11
RMAN-03009: failure of backup command on prmy1 channel at 04/19/2011 14:46:28
ORA-17627: ORA-12577: Message 12577 not found; product=RDBMS; facility=ORA
continuing other job steps, job failed will not be re-run
released channel: prmy1
released channel: prmy2
released channel: prmy3
released channel: prmy4
released channel: stby
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 04/19/2011 14:46:29
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of backup command on prmy2 channel at 04/19/2011 14:46:28
ORA-17627: ORA-12577: Message 12577 not found; product=RDBMS; facility=ORA

https://supporthtml.oracle.com/ep/faces/secure/km/BugDisplay.jspx?id=8406972&bugProductSource=Oracle&h=Y
WORKAROUND:
Use an RMAN duplicate based on a database backup instead of the active database (a minimal sketch follows after these workarounds).
https://supporthtml.oracle.com/ep/faces/secure/km/BugDisplay.jspx?id=10339515&bugProductSource=Oracle&h=Y
WORKAROUND:
Make sure that the raw partition on the Standby is the same size as the one on the Primary.
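For the first workaround, here is a minimal sketch of a backup-based duplicate, assuming a current full backup of tmdb (including a control file/spfile autobackup) has been taken and is readable from the standby host, and that the auxiliary instance tbdb is started NOMOUNT; the connect strings and the reduced SET list below are assumptions carried over from the script above, not exact tutorial steps:
RMAN> connect target sys@tmdb
RMAN> connect auxiliary sys@tbdb
RMAN> run {
allocate channel prmy1 type disk;
allocate auxiliary channel stby type disk;
# No FROM ACTIVE DATABASE clause: RMAN restores from existing backups
# instead of copying files over the network (the step failing with ORA-17627 above).
duplicate target database for standby
spfile
set db_unique_name='tbdb'
set db_file_name_convert='/tmdb/','/tbdb/'
set log_file_name_convert='/tmdb/','/tbdb/'
nofilenamecheck;
}
The remaining SET parameters from the original script (control_files, fal_client, fal_server, log_archive_config, log_archive_dest_2 and so on) can be kept as they were; only the FROM ACTIVE DATABASE clause is dropped.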

Similar Messages

  • Error: ORA-16532: Data Guard broker configuration does not exist

    Hi there folks. Hope everyone is having a nice weekend.
    Anyway, we have a 10.2.0.4 RAC primary and a 10.2.0.4 physical standby. We recently did a switchover and the broker files automatically got created in the ORACLE_HOME/dbs location of the primary. Now we need to move these files to the common ASM disk group. For this, I followed the steps from this doc:
    How To Move Dataguard Broker Configuration File On ASM Filesystem (Doc ID 839794.1)
    The only exception in my case is that I have to do this on the Primary and not the standby, so I am disabling and enabling the Primary (and not the standby as mentioned in the steps below).
    To rename the broker configuration files on the STANDBY to +FRA/MYSTD/broker1.dat and +FRA/MYSTD/broker2.dat, follow the steps below:
    1. Disable the standby database from within the broker configuration
    DGMGRL> disable database MYSTD;
    2. Stop the broker on the standby
    SQL> alter system set dg_broker_start = FALSE;
    3. Set the dg_broker_config_file1 & 2 parameters on the standby to the appropriate location required.
    SQL> alter system set dg_broker_config_file1 = '+FRA/MYSTD/broker1.dat';
    SQL> alter system set dg_broker_config_file2 = '+FRA/MYSTD/broker2.dat';
    4. Restart the broker on the standby
    SQL> alter system set dg_broker_start = TRUE;
    5. From the primary, enable the standby
    DGMGRL> enable database MYSTD;
    6. Broker configuration files will be created in the new ASM location.
    I did so, but when I try to enable the Primary again I get this:
    Error: ORA-16532: Data Guard broker configuration does not exist
    Configuration details cannot be determined by DGMGRL
    From this link (Errors setting up DataGuard Broker), it would seem that I would need to recreate the configuration... Is that correct? If yes, then how come Metalink is missing this info about recreating the configuration... or is it that that scenario wouldn't be applicable in my case? (See the DGMGRL sketch at the end of this thread.)
    Thanks for your help.

    Yes, I can confirm from the gv$spparameter view that the changes are effective for all 3 instances. From the alert log, the alter system didn't throw up any errors. I didn't restart the instances though, since I don't have the approvals yet. But I don't think that's required.
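    If the configuration really does have to be recreated, here is a minimal DGMGRL sketch using the names from this thread, where MYSTD is the standby from the post, and the configuration name 'dgconf' and the primary name/connect identifier MYPRIM are placeholders I am assuming, not values from the thread:
    DGMGRL> connect sys@MYPRIM
    DGMGRL> remove configuration;
    DGMGRL> create configuration 'dgconf' as primary database is 'MYPRIM' connect identifier is MYPRIM;
    DGMGRL> add database 'MYSTD' as connect identifier is MYSTD maintained as physical;
    DGMGRL> enable configuration;
    Since dg_broker_config_file1/2 now point at the new +FRA location, the broker should write the freshly created configuration files there. This is only a sketch of the generic recreate path, not a confirmation that recreation is actually required in your case.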

  • Incorrect XML disply for ALERTCATEGORY error during deserialization data loss

    Hi,
    I am getting the error below while executing the first scenario in the Quality environment; I have not defined or transported any ALERTS.
    Incorrect XML disply for ALERTCATEGORY error during deserialization data loss occured when converting 011136c23c9430359b57c0ef8949c630
    This error is stopping my interface from executing... has anyone experienced this issue?
    Please put some light on this.
    many thanks,
    Raj

    Have you checked SAP Note #1394710?

  • ERPI Error during Export data to GL for period 2012

    Hi, experts
    When I write back data from Planning to EBS, I encounter the following error during "Export data to GL" for period 2012. I have confirmed that the Planning data has been uploaded to ERPI.
    Do you have any hints on this error for me? Thanks a lot.
    Log:
    ,COALESCE(ppa.YEARTARGET, pp.YEARTARGET) YEARTARGET
    ,CASE
    WHEN (INSTR(UPPER(wld.TEMP_COLUMN_NAME),'AMOUNT',1) = 1) THEN
    CAST(SUBSTR(wld.TEMP_COLUMN_NAME,7,LENGTH(wld.TEMP_COLUMN_NAME)) AS NUMERIC(15,0))
    ELSE 0
    END ENTITY_NAME_ORDER
    FROM (
    AIF_WRITEBACK_LOAD_DTLS wld
    LEFT OUTER JOIN TPOVPERIODADAPTOR_FLAT_V ppa
    ON ppa.INTSYSTEMKEY = 'JC2PLN'
    AND ppa.PERIODTARGET = wld.DIMENSION_NAME
    AND ppa.YEARTARGET = 'FY12'
    ) LEFT OUTER JOIN TPOVPERIOD_FLAT_V pp
    ON pp.PERIODTARGET = wld.DIMENSION_NAME
    AND pp.YEARTARGET = 'FY12'
    WHERE wld.LOADID = 176
    AND wld.COLUMN_TYPE = 'DATA'
    ) query
    GROUP BY PERIODTARGET
    ,YEARTARGET
    ,ENTITY_NAME_ORDER
    ) q
    ,TPOVPERIOD p
    ,AIF_GL_PERIODS_STG prd
    WHERE p.PERIODKEY = q.PERIODKEY
    AND NOT EXISTS (
    SELECT 1
    FROM AIF_PROCESS_DETAILS pd
    WHERE pd.PROCESS_ID = 176
    AND pd.ENTITY_TYPE = 'PROCESS_WB_EXP'
    AND pd.ENTITY_ID = prd.YEAR
    AND prd.SOURCE_SYSTEM_ID = 3
    AND prd.SETID = '0'
    AND prd.CALENDAR_ID = '10000'
    AND prd.PERIOD_TYPE = 'Month'
    AND prd.START_DATE > p.PRIORPERIODKEY
    AND prd.START_DATE <= p.PERIODKEY
    AND prd.ADJUSTMENT_PERIOD_FLAG = 'N'
    ORDER BY prd.YEAR
    2012-10-12 11:05:10,078 INFO [AIF]: COMM Writeback Period Processing - Insert Periods into Process Details - END
    2012-10-12 11:05:10,516 INFO [AIF]: COMM End Process Detail - Update Process Detail - START
    2012-10-12 11:05:10,594 DEBUG [AIF]:
    UPDATE AIF_PROCESS_DETAILS
    SET STATUS = 'FAILED'
    ,RECORDS_PROCESSED = CASE
    WHEN RECORDS_PROCESSED IS NULL THEN 0
    ELSE RECORDS_PROCESSED
    END + 0
    ,EXECUTION_END_TIME = CURRENT_TIMESTAMP
    ,LAST_UPDATED_BY = CASE
    WHEN ('native://DN=cn=911,ou=People,dc=css,dc=hyperion,dc=com?USER' IS NULL) THEN LAST_UPDATED_BY
    ELSE 'native://DN=cn=911,ou=People,dc=css,dc=hyperion,dc=com?USER'
    END
    ,LAST_UPDATE_DATE = CURRENT_TIMESTAMP
    WHERE PROCESS_ID = 176
    AND ENTITY_TYPE = 'PROCESS_WB_EXP'
    AND ENTITY_NAME = '2012'
    2012-10-12 11:05:10,594 INFO [AIF]: COMM End Process Detail - Update Process Detail - END
    2012-10-12 11:05:10,719 INFO [AIF]: ERPI Process End, Process ID: 176

    I do not have any hints on this.
    I would need to see the ODI Operator when the process is run.

  • Invalid data status error during the data extraction

    Hi,
    While extracting capacity data from the SNP Capacity view to BW, I get the "invalid data status" error and the data extraction fails.
    When I debugged the bad requests of the ODS object, I found that for a certain product (which has both positive and negative input and output quantities) co-product manufacturing orders were created, but this product was not marked as a co-product, and functionally it is fine.
    How can I rectify the data extraction problem? Can you advise?
    Thanks,
    Dhanush

    Sir,
    In my company, some production orders have the status "errors in cost calculation", i.e. "CSER". How do we deal with these kinds of errors?

  • Data upload error during uploading data in catalogs ( CCM 2.0 )

    Hi ,
    I am having a problem uploading data into catalogs in CCM 2.0. I think there is some problem with my file. When I click "Upload Schema" it only uploads the schema, not the data. But if I remove the schema from the file, it gives me an error.
    Could anyone kindly send me a sample file for data upload in CCM 2.0, or the steps to upload data correctly?
    Thanks & Regards,
    Kamal

    Hi,
    I don't understand very well; the standard characteristic for images is /CCM/PICTURE.
    It's a complex characteristic composed of
    /CCM/DESCRIPTION
    /CCM/URL
    /CCM/MIME_TYPE (for example image/jpeg).
    If your problems still exist, you can send me your file.
    Regards

  • ERROR DURING LOADING DATA FROM PSA TO DSO IN "PRUEFMBKT"

    Respected all
    I am trying to load data from an R/3 field (PRUEFMBKT, CHAR, length 40) to an InfoObject. The data is coming into the PSA correctly, but when I load it into the DSO or InfoCube, the request turns red with the following error message:
    Value '010/12.05-112.15/hold on not operate' (hex.
    '3031302F31322E30352D3131322E31352F686F6C64206F6E20') of characteristic
    PRU_LSIL contains invalid characters
    Message no. BRAIN060
    Diagnosis
        Only the following standard characters are valid in characteristic
        values by default:
        !"%&''()*,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ.+
        Furthermore, characteristic values that only consist of the character #
        or that begin with ! are not valid.
        You are trying to load the invalid characteristic value 1. (hexidecimal
        representation 3031302F31322E30352D3131322E31352F686F6C64206F6E20).
    Procedure for System Administration
        Try to change the invalid characters to valid characters.
        If you want values that contain invalid characters to be admitted into
        the system, make the appropriate setting in BW Customizing. Refer to the
        documentation describing the requirements for special characters and the
        possible consequences.
        For more information on the problems involved with valid and invalid
        characters, click here.
    Kindly give me a solution for how to deal with this problem.
    thanks
    abhay

    Hi Abhay,
    Please check whether the InfoObject corresponding to field PRU_LSIL in the BW system has the lowercase checkbox enabled in the InfoObject properties, as the value '010/12.05-112.15/hold on not operate' coming from R/3 contains lowercase letters.
    Hope this helps.
    Regards,
    Umesh.

  • Error during registering Data Template in Oracle BI Publisher

    Hi,
    While creating a report in BI Publisher, I am choosing the type "Data Template".
    Below is the error:
    Failed to save data.
    Error occured when creating xml data.
    TypeError
    Object Required
    Thanks,
    Abdul.

    Hi,
    What about the other types, do they work for you?
    After selecting the report, are you not able to see the format below?
    <dataTemplate>
    <dataQuery>
    </dataQuery>
    </dataTemplate>
    Thanks,
    Ananth

  • Error during DG data distribution

    Hi SAP DG Experts,
    I am configuring and testing DG data distribution from a central EHS system to multiple ERP systems.
    I am able to send the DG IDoc and the ERP system is receiving it.
    But the DG data creation is failing, and when I check DGp7 I get the error message "data records were not saved as no selection date was available".
    Please let me know if you have come across this issue?
    Thanks
    PS

    Dear Pugazendhi
    To distribute DG master data successfully via ALE you need to:
    a.) distribute the phrases
    b.) do a proper set-up of the receiver system
    Check e.g.
    Distribution (ALE) of Exceptions to Dangerous Goods Regulations - Dangerous Goods Management (EHS-DGP) - SAP Library
    This topic is asked about very rarely.
    C.B.

  • Error during extraction data from APO system

    HI Gurus,
    We are extracting data from the APO system at regular intervals. As part of a business requirement, we enhanced the existing data source in the APO dev system, replicated it in the BW dev system, and activated the transfer structure and transfer rules.
    When we checked, it was working fine in the dev environment.
    Now we have moved the same to the Quality system, and when we try to extract data it throws the error "error in data selection in source system".
    We checked in RSA3 in APO and found that the extract program still refers to the old extract structure and gives the dump "Extract structure unknown".
    Could anybody throw some light on this issue?
    Thanks in advance.
    Guru

    Hi,
    Have you collected all the objects in R/3, like the table/view, DataSource and extract structure?
    And
    Has replication been done on the BI side for the same?
    Rgds
    SVU
    Edited by: svu123 on Sep 24, 2009 7:50 AM

  • Error during master data delta load

    Hi All,
    I am loading delta master data to an info object 0DPM_DCAS through a flexible update.
    Received an error "101 data records in table /BI0/XDPM_DCAS marked for deletion...
    158 duplicate record found. 126 recordings used in table /BI0/XDPM_DCAS"  at the Update level.
    The status is red. There is no additional info in the Error Message button.
    Also, subsequent deltas can't be triggered and it says "Repeat cant be requested".
    Please suggest an explanation and a way out.

    Hello VishNo,
    How are you?
    This error is because of duplicate records existing in the InfoObject. It also seems to be a problem in the master data time-independent navigational attributes table. Check the data and also the Monitor screen Details tab entries and come back again.
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • Error during populating data in combobox

    Hi,
    I am getting an ArrayCollection through a RemoteObject call
    and I am trying to populate it into the combobox. But I am
    getting the following error popping up in the browser.
    Error: Unknown Property: '-1'.
    at mx.collections::ListCollectionView/
    http://www.adobe.com/2006/actionscript/flash/proxy::getProperty()
    at SearchReports/getReportCodeAndNameData()
    at SearchReports/populateDocumentList()
    at SearchReports/__ReportsDelegate5_result()
    at
    flash.events::EventDispatcher/flash.events:EventDispatcher::dispatchEventFunction()
    at flash.events::EventDispatcher/dispatchEvent()
    at mx.rpc::AbstractService/dispatchEvent()
    at mx.rpc.remoting.mxml::RemoteObject/dispatchEvent()
    at mx.rpc::AbstractOperation/
    http://www.adobe.com/2006/flex/mx/internal::dispatchRpcEvent()
    at mx.rpc::AbstractInvoker/
    http://www.adobe.com/2006/flex/mx/internal::resultHandler()
    at mx.rpc::Responder/result()
    at mx.rpc::AsyncRequest/acknowledge()
    at
    ::NetConnectionMessageResponder/NetConnectionChannel.as$37:NetConnectionMessageResponder::resultHandler()
    at mx.messaging::MessageResponder/result()
    Can anybody let me know what could be the reason?
    Regards,
    -Sameer

    Fixed the problem. Please ignore.
    -Sameer

  • Error during loading of transaction data: An RFC call-up triggered the ...

    Dear Sirs,
    During loading of master data from a 4.6C SAP system to BI 7.0 SP9 I get the following error during processing (data packet):
    "An RFC call-up triggered the exception (ID: RSAR NO: 503)"
    The strange thing with this error is that it emerged today, after successfully loading master data and a subset of transaction data yesterday from the same source system.
    Has anyone else come across this error, and can you tell us where we should look for a possible solution (or a better error explanation)?
    The RFC connection works fine when tested in SM59, and was functioning yesterday. No relevant dumps in either system.

    Hi folks,
    I am also interested in this error.
    Thanks.

  • Data Guard Administration Question.... (10gR2)

    After considerable trial and error, I have a running logical standby between two 10gR2 databases.
    1) During the install of the primary database, I didn't comply fully with the OFA standard (I was slightly off on the placement of my database devices). During the Data Guard configuration, the option of "converting to OFA" was selected (per a Metalink article that I read regarding a problem with choosing to keep the filenames of the primary the same). Of course, now I have an issue creating a tablespace on the Primary when keeping the non-OFA directory structure. When it attempts to do the same on the Standby, I'm getting the error that it cannot create the datafile. Makes sense, but what should I do in the future? Create the non-OFA directory structure on the Standby (assuming it would then create the file)? Isn't there a filename conversion parameter that handles this as well?
    2) I got myself into a pinch this afternoon, partly due to #1. I am importing a file from another instance onto the Primary to begin testing reports on the Secondary. Prior to the import I created a tablespace (which is what got me to problem #1), proceeded to create the owner of the schema that's going to be imported, then performed the import. Now the apply process is erroring and going offline every few seconds as it works its way through the "cannot create table" errors that the import is running into on the Secondary. How do I handle a large batch of transactions like this? Ultimately I would like to get back to square one: no user and no imported data in the Primary, and the apply process online.
    Thanks:
    Chris

    So what I finally did was turn DG offline, create the tablespace on the secondary, then the user, and then turn apply back online. The import proceeded fairly smoothly. Problem resolved.
    However, I still need some insight into exactly how the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters work. I have LOG_FILE_NAME_CONVERT set up (correctly, I think) but I get a warning message in DG that says the configuration is inconsistent with the actual setup.
    Here's the way things are setup:
    I have 3 redo logs:
    primary (non-ofa):
    /opt/oracle10/product/oradata/ICCORE10G2/redo01.log
    ... redo02.log
    ... redo03.log
    secondary (ofa):
    /opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/redo01.log
    ... redo02.log
    ... redo03.log
    LOG_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')
    Is the above parameter set correctly?
    DB_FILE_NAME_CONVERT is unset as of now, but the directory structure above is the same. I assume that parameter needs to be set just like LOG_FILE_NAME_CONVERT above (see the sketch at the end of this thread).
    Thanks
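    On the two open questions: the LOG_FILE_NAME_CONVERT value above looks like the right ('primary path','standby path') pair form, and DB_FILE_NAME_CONVERT takes the same form. Here is a minimal SQL*Plus sketch on the standby, assuming an spfile is in use and that the datafile directories follow the same pattern as the redo log directories quoted above; both parameters are static, so SCOPE=SPFILE plus a restart of the standby instance is required:
    SQL> alter system set db_file_name_convert='/opt/oracle10/product/oradata/ICCORE10G2/','/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/' scope=spfile;
    SQL> alter system set log_file_name_convert='/opt/oracle10/product/oradata/ICCORE10G2/','/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/' scope=spfile;
    SQL> shutdown immediate
    SQL> startup
    If the databases are managed by the Data Guard broker (which the "configuration is inconsistent" warning suggests), the same values are usually set through the broker's DbFileNameConvert and LogFileNameConvert database properties instead, so the broker configuration and the spfile stay in sync. One caveat: on a logical standby, SQL Apply does not (as far as I know) honor DB_FILE_NAME_CONVERT for DDL such as CREATE TABLESPACE, which would be consistent with the datafile creation error in #1, so pre-creating the directory or tablespace on the standby (as you ended up doing) is the usual way around it.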

  • Database issues after starting data guard

    Hi. We run OEM12c in Linux. All is working well and we monitor several targets and DBs.
    In our QA DB server (Sun Solaris 11 running Oracle 10g), we started to test a Data Guard configuration for several instances we are running there. Our standby server is called STDBY. All works fine, but now we have a problem managing some aspects of Data Guard with OEM12c.
    The first problem is that the instances appear as SID_<IP address of server>, as opposed to DB_UNIQUE_NAME.DB_DOMAIN. Curiously enough, the standby copy appears correctly (SID_sbyp.db_domain)... This is very puzzling...
    The second problem is that when we try to access the Data Guard performance pane, it fails (on both primary and secondary DBs), showing:
    Data Guard Internal Error : See the OMS log for details.
    No clue where to look for this problem.
    All other functions, TOP, performance home, etc... look fine.

    Hi,
    Regarding "Data Guard Internal Error : See the OMS log for details",
    follow the steps below.
    On the Data Guard page, run the 'Verify Configuration' option twice. The first execution will show output like:
    Initializing
    Connected to instance test.oracle.com:mydb
    Starting alert log monitor...
    Updating Data Guard link on database homepage...
    WARNING: Broker name (mytest) and target name (mydb) do not match.
    WARNING: The broker name will be renamed to match the target name.
    Skipping verification of fast-start failover static services check.
    Data Protection Settings:
    Protection mode : Maximum Performance
    Redo Transport Mode settings:
    pnjpcep1: ASYNC
    cnjpcep1: ASYNC
    Checking standby redo log files.....not checked due to broker name mismatch. Run verify again.
    Checking Data Guard status
    mydb : Normal
    my11g : Normal
    The second execution does not show this warning any more, i.e. it got fixed during the first execution. Now it is possible to access the Data Guard Performance page without errors and you can see the statistics.
    Ref
    Cloud Control: "Data Guard Internal Error" raised on Data Guard Performance Page (Doc ID 1484028.1)
    Regards,
    Rahul
