Database consistency issue

Hi.
SCCM 2012 English, RTM
For an unknown reason, the data replication link broke between the CAS and the primary site.
Site replication status shows Active both ways, but data replication is mixed: parent-to-child global data failed, child-to-parent site data failed, and child-to-parent global data is Active. Replication Link Analyzer performs several checks and then reports:
- Database consistency issues detected for site XXX (the parent). Skipping rule - it lists several replication groups that have failed. I tried to replicate those (the last step in the article below), but nothing changed.
http://blogs.msdn.com/b/minfangl/archive/2012/05/16/tips-for-troubleshooting-sc-2012-configuration-manager-data-replication-service-drs.aspx
Any more ideas?

I have now read more about DRS in SCCM 2012 and have reached a state where 2 of the 3 replications are Active.
Mainly by putting the relevant .pub files into RCM.box. But global data replication from the CAS to the primary is still failed. No idea...
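For reference, a couple of generic SQL Server Service Broker checks that can be run against the site database on both the CAS and the primary. These are standard DMVs, nothing ConfigMgr-specific; CM_XXX is the site database name used above.

-- Is Service Broker enabled on the site database?
SELECT name, is_broker_enabled
FROM sys.databases
WHERE name = 'CM_XXX';

-- Run inside the CM_XXX database: are messages stuck in the transmission queue, and why?
SELECT TOP (50) to_service_name, transmission_status, enqueue_time
FROM sys.transmission_queue
ORDER BY enqueue_time;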
 More related articles:  
http://blogs.technet.com/b/sudheesn/archive/2012/10/21/drs-initialization-in-configuration-manager-2012.aspx
http://anoopcnair.com/2012/06/20/sccm-configmgr-2012-site-to-site-replication-sql-data-replication-service-replication-configuration-management-sql-service-broker-replication-groups-and-manual-sync/
And one more thing - I do not know whether it is related or not:
Severity: Error
Type: Milestone
Site code: XXX
Date/Time: 12/14/2012 10:34:29 AM
System: server.domain.local
Component: SMS_POLICY_PROVIDER
Message ID: 620
Description: Microsoft SQL Server reported SQL message 547, severity 16: [23000][547][Microsoft][SQL Server Native Client 10.0][SQL Server] The INSERT statement conflicted with the FOREIGN KEY constraint "ResPolicyMap_PolicyAssignment_FK". The conflict occurred in database "CM_XXX", table "dbo.PolicyAssignment", column 'PADBID'. Please refer to your Configuration Manager documentation, SQL Server documentation, or the Microsoft Knowledge Base for further troubleshooting information.
Is it jamming replication? And how do I fix the database inconsistency detected by Replication Link Analyzer?
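For the 547 error above, a read-only sketch of how the conflict could be inspected. The table and column names come straight from the error message; whether ResPolicyMap exposes other useful columns is an assumption.

-- Look at the constraint that is firing
SELECT name, is_disabled, is_not_trusted
FROM sys.foreign_keys
WHERE name = 'ResPolicyMap_PolicyAssignment_FK';

-- Any existing ResPolicyMap rows pointing at a policy assignment that no longer exists?
SELECT rpm.PADBID
FROM dbo.ResPolicyMap AS rpm
LEFT JOIN dbo.PolicyAssignment AS pa ON pa.PADBID = rpm.PADBID
WHERE pa.PADBID IS NULL;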

Similar Messages

  • Logical Standby Data Consistency issues

    Hi all,
    We have been running a logical standby instance for about three weeks now. Both our primary and logical are 11g (11.1.0.7) databases running on Sun Solaris.
    We have off-loaded our Discoverer reporting to the logical standby.
    About three days ago, we started getting the following error message (initially for three tables, but from this morning on a whole lot more)
    ORA-26787: The row with key (<column>) = (<value>) does not exist in table <schema>.<table>
    This error implies that we have data consistency issues between our primary and logical standby databases, but we find that hard to believe,
    because the "data guard" status is set to "standby", which implies that the schemas being replicated by Data Guard are not available for user modification.
    any assistance in this regard would be greatly appreciated.
    thanks
    Mel

    It is a bug: Bug 10302680. Apply the corresponding Patch 10302680 to your standby database.
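    If the patch cannot go on right away, a commonly used stop-gap is to re-instantiate the affected table on the logical standby over a database link to the primary. A minimal sketch; the schema, table, and database link names are placeholders:

    -- run on the logical standby
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    -- re-copy and re-register one affected table from the primary
    EXEC DBMS_LOGSTDBY.INSTANTIATE_TABLE('APP_SCHEMA', 'AFFECTED_TABLE', 'PRIMARY_DBLINK');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;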

  • Urgent help needed; Database shutdown issues.

    Hi all,
    I am trying to shut down my SAP database and am facing the issues below. Can someone please suggest how I can resolve this issue and restart the database?
    SQL> shutdown immediate
    ORA-24324: service handle not initialized
    ORA-24323: value not allowed
    ORA-01089: immediate shutdown in progress - no operations are permitted
    SQL> shutdown abort
    ORA-01031: insufficient privileges
    Thanks and regards,
    Iqbal

    Hi,
    check SAP Note 700548 - FAQ: Oracle authorizations
    also check Note 834917 - Oracle Database 10g: New database role SAPCONN
    regards,
    kaushal
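    The ORA-01031 on shutdown usually just means the session is no longer connected with SYSDBA privileges. A minimal sketch of the usual approach, assuming the OS user belongs to the dba group (e.g. ora<sid> on an SAP system):

    -- reconnect with SYSDBA, then stop the hung shutdown and bring the database down cleanly
    CONNECT / AS SYSDBA
    SHUTDOWN ABORT
    STARTUP RESTRICT
    SHUTDOWN IMMEDIATE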

  • Database performance issue (8.1.7.0)

    Hi,
    We have a tablespace "payin" in our database (8.1.7.0).
    This tablespace is the main tablespace of our database; it is dictionary managed and heavily accessed by user SQL statements.
    We are now facing a database performance issue during peak time (i.e. at month end), when a number of users run many large reports.
    We have also increased the SGA sufficiently on the basis of RAM size.
    This tablespace is heavily accessed for the reports.
    Now my questions are:
    Is this performance issue because the tablespace is dictionary managed instead of locally managed? When I monitor the different sessions through OEM, the number of hard parses is high for the connected users, when it should actually be low.
    In Oracle 8.1.7.0, can we convert a dictionary-managed tablespace to a locally managed one? Will doing so resolve the problem to some extent, and will it reduce the overhead on the dictionary tables and on the shared memory?
    If yes, what is the procedure to convert the tablespace from dictionary to locally managed?
    With Regards

    If your end users are just running reports against this tablespace, I don't think that the tablespace management (LM/DM) matters here. You should be concerned more about the TEMP tablespace (for heavy sort operations) and your shared pool size (as you have seen hard parses go up).
    As already stated, get statspack running and also try tracing user sessions with wait events. Might give you more clues.
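    On the original conversion question: in 8.1.7 a dictionary-managed tablespace can be migrated in place. A sketch (take a backup first and test outside peak hours):

    -- convert the tablespace to locally managed in place
    EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('PAYIN');

    -- confirm the change
    SELECT tablespace_name, extent_management
    FROM dba_tablespaces
    WHERE tablespace_name = 'PAYIN';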

  • Check Database Consistency

    Dear All,
    I am trying to run Check Database Consistency through DB13 and I am getting the following error:
    Unable to start execution of step 1 (reason: The job step contains one or more tokens. For SQL Server 2005 Service Pack 1 or later, all job steps with tokens must be updated with a macro before the job can run.). The step failed.
    I am running SAP ECC 5 on SQL Server 2005 SP1.
    Please advise how to resolve this error.
    Thanks & Regards,
    Hiten Modi

    Ok, I don't know much about MS SQL. Really.
    But just looking for "step contains one or more token" in the SDN search brought up this note:
    [SAP Note 947702 DB13 - DBCC Checkdb job fails in SQL 2005 SP1|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bc_db/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d393437373032%7d]
    Why does apparently no one ever try this before posting a question?
    cheers,
    Lars
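    Until the DB13 job is repaired per that note, the same consistency check can be started by hand from a SQL Server query window. A sketch; replace PRD with your SAP database name:

    -- roughly what the DB13 "Check Database Consistency" job runs under the hood
    DBCC CHECKDB ('PRD') WITH NO_INFOMSGS, ALL_ERRORMSGS;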

  • Calendar Consistency Issues

    We've begun seeing some calendar consistency issues on iPad devices. We are using an Exchange 2007 "back end", most of the users are using Outlook in cached mode on their desktops, and many also have another mobile device of one type or another. The consistency issues are appointments appearing in Outlook, or being accepted in Outlook, and not showing up on the iPad. Every vendor we've spoken with, including Microsoft and RIM, has emphasized the client/device side and implementing "best practices" such as the following:
    -minimizing number of instances of mail clients running against mailbox
    -minimizing delegates
    -having only 1 person notified of appointment
    -avoiding forwarding appointments
    -deleting changed recurring appt's rather than changing them
    -not accepting appointments from mobile device
    -running Outlook in online as opposed to cached mode where possible
    I'm hoping someone out there has found a "silver bullet" that goes beyond the suggestions we've gotten which are listed above. The problem is our high profile users that have the devices don't want to be told it's their behavior that's causing the problem!!!
    I appreciate in advance any ideas.

    Same thing happening on my iMac.  Can't do much on the computer.

  • What does a package database consist of?

    What does a package database consist of? Can anyone give me a complete description?

    Hi,
    This is Prabhuram Devar,
    A package database consists of:
    /var/sadm/pkg: This is a directory containing one directory entry for every package installed on the system.
    /var/sadm/pkg/<packagename>/pkginfo
    /var/sadm/pkg/<packagename>/save/<patchid>/undo.Z for the backout packages.
    /var/sadm/pkg/<packagename>/save/pspool/<packagename>: A sparse package that is used for non-global zone install.
    /var/sadm/install/contents: This file has an entry for every file in the system that has been installed through a package. Entries are added or removed automatically using the binaries installf and removef.

  • Any place I can find Oracle database known issues

    Hi
    Is there any place where I can find the list of known issues for a specific Oracle Database release?
    Thanks

    Specifically, see MOS Docs
    555579.1 - 10.2.0.4 Patch Set - Availability and Known Issues
    738538.1 - 11.1.0.7 Patch Set - Availability and Known Issues
    HTH
    Srini

  • LMS Database consistency verification tool

    Hello all,
    I upgraded from LMS 3.1 to 3.2 yesterday. The upgrade went well, but DFM and Campus Manager didn't work afterwards.
    After a few searches on the forum, and the help of a great Cisco TAC engineer, we were able to solve the PIDM database inconsistencies that were preventing DFM links from showing up. And Campus Manager needed a kick in the butt to start again it seems.
    My PIDM table had many duplicate device ids, some registered to DFM 2.0 (on the previous server) and some registered to DFM 3.1 (current server). So it seems the previous upgrade (from 2.6 to 3.0 and then 3.1) had some bad entries.
    My question for the pros is this: is there a tool that can be run against the LMS databases to check for inconsistencies like these? Duplicate entries, wrong server names, application versions, and other nasty stuff?
    My next chore is to move LMS from Windows 2003 to Windows 2008, but before I do that, I'd like to make sure the DB is clean.
    Thanks!
    Alex.

    No, there are no tools to verify data consistency.  There is a tool to validate the file structure consistency of the database, but if the database is running, that is typically not an issue.
    The PIDM table is the source of quite a few problems, but they are generally easy to sort out.  If you have a good backup of your 2003 server data, I would say you can do the 2008 migration.
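    The duplicate-entry side of it can at least be approximated with plain SQL against the database that holds PIDM. A rough sketch only; the PIDM table is mentioned above, but the column names here are guesses, so adjust them to the real schema:

    -- list device ids that appear more than once in PIDM
    SELECT device_id, COUNT(*) AS copies
    FROM PIDM
    GROUP BY device_id
    HAVING COUNT(*) > 1;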

  • HFM 11.1.2.1 Database Configuration issue - EPMCFG-10398

    Hi All,
    I was configuring EPM 11.1.2.1 on a Windows 2003 32-bit instance with Oracle 11.2.0.1+ as the database.
    Products covered were :
    1. Foundation Services
    2. FDM
    3. Reporting and Analysis
    4. Financial Reporting
    5. Web Analysis
    6. HFM
    7. Disclosure Management
    I have created separate SID's and grouped them as
    Foundation Services + Workspace = hypss
    FDM + DM + Calc Mger = hypepm
    RA + FR + DM = hyprna
    HFM = hyphfm
    I also configured them in the above order only. However, when I was configuring HFM I got the warning *"EPMCFG-10398: For successful completion of this task, the Oracle Database tnsnames.ora file must exist"*.
    Subsequently, the following HFM configuration tasks failed:
    1. Configure Database
    2. Register Application Servers/Clusters
    My tnsnames.ora file is as follows
    # tnsnames.ora Network Configuration File: C:\app\Administrator\product\11.2.0\dbhome_1\network\admin\tnsnames.ora
    # Generated by Oracle configuration tools.
    HYPSS =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = payous12)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = hypss)
        )
      )
    HYPHFM =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = payous12)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = hyphfm)
        )
      )
    HYPRNA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = payous12)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = hyprna)
        )
      )
    HYPEPM =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = payous12)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = hypepm)
        )
      )
    ORACLR_CONNECTION_DATA =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
        )
        (CONNECT_DATA =
          (SID = CLRExtProc)
          (PRESENTATION = RO)
        )
      )
    I changed the entry localhost to payous12 (my machine name) in tnsnames.ora. That was the only change I made in that file.
    Also, I have not yet installed Ethernet drivers on my machine, so there is no IP address either.
    All other configurations were successful. Can anyone guide me to a successful configuration of HFM?
    Thanks
    Payous

    Hi Bharat,
    To resolve this issue, go to the following folder on the drive where ODAC is installed:
    \app\Administrator\product\11.1.0\client_1\Network\Admin\Sample
    Then copy the tnsnames.ora file from the Sample folder to this location:
    \app\Administrator\product\11.1.0\client_1\Network\Admin
    Make the necessary changes to that file with respect to your HFM database.
    A sample tnsnames.ora will look as follows:
    HYPHFM =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = payous12)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = hyphfm)
        )
      )
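    Once the file is in place, a quick sanity check from SQL*Plus on the EPM server would be something like the following (the user name and password are placeholders):

    CONNECT hfm_dbuser/password@HYPHFM
    SELECT 1 FROM dual;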
    Payous

  • Need suggestion for a database connection issue in an EJB Application

    Hi Friends,
    I am facing a serious problem in my EJB application. Sometimes my application waits for an unknown reason while it connects to the database and executes a stored procedure that gets and assigns a unique number to a transaction.
    This stored procedure gets a unique number from the database and updates the database with a new number incremented from the previous one.
    During this waiting time, if another transaction hits my application, the application assigns the same unique number to both transactions.
    It does not happen always; it happens only rarely. What should I take care of in the code so this does not happen? Can I implement some kind of synchronization here?
    If I implement synchronization and the first transaction takes the lock and waits for some time, I think it will affect the subsequent transactions. Could you please advise on this issue?
    Thanks in Advance to All.

    Here is my datasources.xml :
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
    <beans>     
         <bean id="myDataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
              <property name="jndiName" value="jdbc/myDS"/>
              <property name="resourceRef" value="false"/>
         </bean>
    </beans>
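    On the original question: the usual way to guarantee that two concurrent transactions never receive the same number is to let the database hand it out atomically instead of doing a read-then-update inside the stored procedure. A minimal sketch assuming an Oracle database (the sequence name is made up):

    -- NEXTVAL is atomic, so concurrent transactions always get distinct values
    CREATE SEQUENCE txn_number_seq START WITH 1 INCREMENT BY 1 NOCACHE;

    -- inside the stored procedure, or straight from JDBC
    SELECT txn_number_seq.NEXTVAL FROM dual;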

  • Consistent issues with wifi in one particular location

    I consistently have a problem with connectivity in a coffee shop with wireless internet.  It is a free wifi spot and works about 80% of the time.  However, sometimes I simply cannot connect.  I will reset the PRAM, change my network settings (e.g. add a network location, change the DNS settings to 8.8.8.8, turn off the wifi, etc), and delete my system config folder.  Sometimes it comes back and other times it just hangs out.  At one point I found a page explaining how to delete certain files through Terminal, but I cannot locate that again.  The issues I face often happen if I have gone from one wireless setting (work, school, home) and then into the coffee shop without turning the computer fully off - going into sleep mode.  Any thoughts.
    System Information:
    Processor 2.4 GHZ Intel Core 2 Duo
    Memory 4 GB 667 MHZ DDR2 SDRAM
    Graphics NVIDIA GeForce 8600M GT 256 MB
    Software OS X 10.8.1 (Mountain Lion)

    I've run across several sites that use a captive portal to log in to their network, rather than WEP/WPA.
    These networks use a protocol that lets your computer discover the DNS service automatically, without manually typing the address. This is why you are not provided with a DNS server address for their network.
    If you specify a DNS address, your laptop cannot find the DNS of the wireless router.
    The simple fix is to add a new location and leave the DNS Servers field blank. That should allow your machine to find the router.
    Good luck.

  • DataBase Adapter Issue

    Hi all,
    When I try to query the table using the Database Adapter, I get the following error message:
    *"ORA-04030: out of process memory when trying to allocate 2024 bytes (kxs-heap-c, kghsstk)"*
    Experts, please post possible resolutions for this issue.
    Thanks in Advance,
    Karthik

    Hi-
    Are the database and WebLogic installed on the same machine?
    When you execute queries from a SQL worksheet, that is the only program consuming memory.
    But when you run them through the DB Adapter, both the adapter and the database consume memory, and the database may not be able to get the memory it needs to run the query.
    I am not quite sure of the issue; you can check Task Manager while executing the program and find out.
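    Since ORA-04030 is process (PGA) memory rather than SGA, a first check on the database side is how much PGA the instance is allowed versus what it actually uses. A read-only sketch:

    -- configured target
    SELECT name, value FROM v$parameter WHERE name = 'pga_aggregate_target';

    -- actual usage, in MB
    SELECT name, ROUND(value/1024/1024) AS mb
    FROM v$pgastat
    WHERE name IN ('total PGA allocated', 'maximum PGA allocated');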

  • Database Poller issues in Clustered env

    Hi,
    I am running Oracle BPEL 10.1.3.4 on top of WebLogic 9.x in a clustered environment. I created a simple database poller BPEL process (for a target table in an Oracle database) with the parameters below:
    Max Raise Size: 1
    Max Transaction Size: 2
    Polling Interval:30
    Distributed Polling is enabled
    All other BPEL engine and domain level parameters have default values.
    In my clustered environment, two instances of the database poller BPEL process are running on different nodes. Now I populated my table with 1000 records in one go. Given the parameters, I expect:
    - In each polling interval, at most 2 records will be processed by the database poller BPEL process running on a single node, and there will be 2 instances of this BPEL process because Max Raise Size is 1. I expect the same for the database poller BPEL process running on the other node. So overall 4 records should be processed in each polling interval and 4 instances of the BPEL process should be visible in the BPEL console. But in practice it is always 2.
    Am I missing something here? Won't the load balancer distribute records to the BPEL processes on both nodes equally?
    Also, I then raised Max Transaction Size to a higher value, e.g. 20. But in this case, after processing nearly one third of the records, the BPEL process stopped picking up any further records. Is there a known issue where the adapter stops picking up further database records if the transaction size limit is higher?
    thanks
    Ankit
    Edited by: AnkitAggarwal on 22-Feb-2010 03:36

    Hi Ankit,
    I have Oracle BPEL env11.1.1.2.0 clustered over 2 nodes.
    I am facing a similar issue. In my scenario I have Bpel (with DB poller) and its deployed on the cluster. And I am connected to DB via Multi data source (MDS-1) with only one datasource (DS-1) configured in it.
    So whenever I update the table that is being polled, in some cases I have TWO instances running and in some cases ONE instance. My requirement is to have only one instance running every time the DB poller is initiated.
    Kindly help me out.
    Thank You
    Best Regards
    Prasanth
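    For what it's worth, a rough illustration of what distributed polling relies on under the covers on an Oracle source (this is not the adapter's exact SQL, and the table and column names are placeholders): each node reserves rows with SKIP LOCKED, so the nodes should pick up different rows rather than the same ones.

    SELECT id, payload
    FROM poll_table
    WHERE status = 'NEW'
    FOR UPDATE SKIP LOCKED;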

  • Database connectivity issues since upgrading to 2011

    I have several reports that were created in CR 2008.  Since upgrading to 2011, none of them connect to the SQL server when I try to refresh the data.  The error message says invalid logon.  I can get the reports to refresh if I follow this process:
    1.     Close the report
    2.     Delete connections to the SQL server
    3.     Close CR
    4.     Open CR
    5.     Create a new blank report and connect to the same data source as the failed report
    6.     Open the report that was failing
    7.     Refresh the data
    Everything works at that point.  Once I close CR, next time I access the report I have to go through the entire process again.  Saving the reports after successfully establishing a connection does not fix the issue.  I have reset the data source location from the database menu as well as established new system and file DSNs.  Any help would be greatly appreciated.
    Thank you,
    Darren

    Hi Darren,
    What I meant about the version of CR 2011 is to click Help > About...; you'll see the version number there.
    Interesting. So if it fails in the ODBC Admin test, then of course it's going to fail in CR Designer; CR simply uses the connection info you specify in the DSN or in the file. If you can get the DSN to work without Kerberos, then adding that configuration to your File DSN should allow CR to work as well.
    A couple of possibilities: older versions of CR ignored the security configuration, or at least this one test, and that was fixed in CR 2011; or possibly the File DSN needs the security type enabled through a connection property. Check with MS for help on what to put in that line to enable it.
    Another possible solution: according to this MS KB article, they recommend that you use Kerberos...
    http://support.microsoft.com/kb/909801
    So the result is: set Kerberos on the server, or find out how MS supports non-Kerberos security on the client side and see if that works. But since MS recommends using Kerberos, discuss it with your DBA and recommend that everyone enable and use it.
    Thanks again
    Don
